The Center for Geometry and Physics is loosely organized into multiple research groups, each of which comprises a senior scholar who leads the group and several researchers whose areas of expertise and interest overlap synergistically. A brief description of each group's areas of focus, research goals, and members can be seen below.
Symplectic topology, Hamiltonian dynamics and mirror symmetry
Team Leader: Yong-Geun Oh
The current status of symplectic topology resembles that of classical topology in the middle of the twentieth century. Over time, a systematic algebraic language was developed to describe problems in classical topology. Similarly, a language for symplectic topology is emerging, but has yet to be fully developed. The development of this language is much more challenging both algebraically and analytically than in the case of classical topology. The relevant homological algebra of $A_\infty$ structures is harder to implement in the geometric situation due to the analytical complications present in the study of pseudo-holomorphic curves or "instantons" in physical terms. Homological mirror symmetry concerns a certain duality between categories of symplectic manifolds and complex algebraic varieties. The symplectic side of the story involves an $A_\infty$ category, called the Fukaya category, which is the categorified version of Lagrangian Floer homology theory. In the meantime, recent developments in the area of dynamical systems have revealed that the symplectic aspect of area preserving dynamics in two dimensions has the potential to further understanding of these systems in deep and important ways.
Research themes and research members
Symplectic and contact topology:
Yong-Geun Oh (symplectic topology, Hamiltonian dynamics and mirror symmetry)
Byunghee An (algebraic topology, geometric group theory)
Eunjeng Lee (Toric topology, Newton-Okounkov bodies, representation theory, and algebraic combinatorics)
Yat-Hin Suen (Complex geometry, Symplectic Geometry, SYZ Mirror Symmetry, Homological Mirror Symmetry, Mathematical Physics)
Minkyoung Song (Geometric topology and related topics in group theory)
Jongmyeong Kim (Homological mirror symmetry)
Taesu Kim (Homotopy theoretic aspects of symplectic geometry)
Dogancan Karabas (Symplectic topology, microlocal sheaves, mirror symmetry, low-dimensional topology)
Sangjin Lee (Lagrangian foliations, Symplectic mapping class group, Fukaya category)
Seungwon Kim (Topology and geometry)
Hongtaek Jung (Symplectic structures of Hitchin components and Anosov representations)
Hamiltonian dynamics and dynamical systems:
Jin Woo Jang (Partial differential equations, gas dynamics, fluid dynamics, harmonic analysis)
Mathematical theory of quantum fields and string theory
Team Leader: Calin Lazaroiu
The mathematical foundations of field and string theories remain poorly understood. As a consequence, many mathematical theories which are intimately related to the quantization of such systems are not yet subsumed into a unifying framework that could guide their further development. Two particular goals are developing a general theory of B-type Landau-Ginzburg models, as required by a deeper understanding of mirror symmetry, and developing a general global mathematical formulation of supergravity theories, which could afford a deeper understanding of the mathematics behind "supersymmetric geometry". Physical and mathematico-physical approaches to string theory and quantum field theory use homotopical methods in various ways, depending on the formalizations at play. Many basic tools used in algebraic geometry are refinements of constructions from algebraic topology. In order to ensure that these tools are employed within the proper framework, with attention to contemporary developments, to facilitate their use, and to help bridge the gaps between participants in different research programs, it is useful to have an active research group focused on the fundamentals of homotopy algebra itself.
Calin Lazaroiu (Landau-Ginzburg models and supergravity)
Chang-Yeon Chough (Algebraic geometry, homotopy theory, infinity category theory)
Damien Lejay (Mathematical physics, vertex algebras, factorisation algebras, higher toposes)
Anna Cepek (Manifold topology)
Yifan Li (Algebraic geometry, algebraic topology and mathematical physics)
Mathematical Physics:
Alexander Aleksandrov (Mathematical physics, random matrix models, integrable systems, enumerative geometry)
Arithmetic, birational and complex geometry of Fano varieties
Team Leader: Jihun Park
Fano varieties are algebraic varieties whose anticanonical classes are ample. They are classical and fundamental varieties that play many significant roles in contemporary geometry. Verified or expected geometric and algebraic properties of Fano varieties have attracted attention from many geometers and physicists. In spite of extensive study of Fano varieties for more than a century, numerous features of these varieties are still shrouded in a veil of mist. Contemporary geometry, however, requires a more comprehensive understanding of Fano varieties.
Jihun Park (Arithmetic, birational and complex geometry of Fano varieties)
Jun-Yong Park (Geometry & topology of 4-manifolds and the arithmetic of moduli of surfaces)
Yuto Yamamoto (Tropical geometry)
Kyeong-Dong Park (Geometry of complex Fano manifolds and geometric structures)
Sungmin Yoo (Complex geometry and geometric analysis)
Shinyoung Kim (Complex geometry)
Protecting the grid topology and user consumption patterns during state estimation in smart grids based on data obfuscation
Lakshminarayanan Nandakumar, Gamze Tillem, Zekeriya Erkin & Tamas Keviczky
Energy Informatics, volume 2, Article number 25 (2019)
Smart grids promise a more reliable, efficient, economically viable, and environment-friendly electricity infrastructure for the future. State estimation in smart grids plays a pivotal role in system monitoring, reliable operation, automation, and grid stabilization. However, the power consumption data collected from the users during state estimation can be privacy-sensitive. Furthermore, the topology of the grid can be exploited by malicious entities during state estimation to launch attacks without getting detected. Motivated by the essence of a secure state estimation process, we consider a weighted-least-squares estimation carried out batch-wise at repeated intervals, where the resource-constrained clients utilize a malicious cloud for computation services. We propose a secure masking protocol based on data obfuscation that is computationally efficient and successfully verifiable in the presence of a malicious adversary. Simulation results show that the state estimates calculated from the original and obfuscated dataset are exactly the same while demonstrating a high level of obscurity between the original and the obfuscated dataset both in time and frequency domain.
Smart grids are widely regarded as a key ingredient to reduce the effects of growing energy consumption and emission levels (Commission 2014b). By 2020, the European Union (EU) aims to replace 80% of the existing electricity meters in households with smart meters (Commission 2014b). Currently, there are close to 200 million smart meters, accounting for 72% of European consumers (Commission 2014b). This smart metering and smart grid rollout can reduce emissions in the EU by up to 9% and annual household energy consumption by similar amounts (Commission 2014b). Despite the environment-friendly and cost-cutting nature of the smart grid, deployment of smart meters at households actually raises serious data privacy and security concerns for the users. For example, with the advent of machine learning and data mining techniques, occupant activity patterns can be deduced from the power consumption measurement data (Molina-Markham et al. 2010; Lisovich et al. 2010; Kursawe et al. 2011; Zeifman and Roth 2011). Additionally, the configuration of the power network/grid topology can be used by attackers to launch stealth attacks (Liu et al. 2011). Thus, despite the apparent benefits, without convincing privacy and security guarantees, users are likely to be reluctant to take risks and might prefer conventional meters to smart meters.
State estimation in smart grids enables the utility providers and Energy Management Systems (EMS) to perform various control and planning tasks such as optimizing power flow, establishing network models, and bad measurement detection analysis. State estimation is a process of estimating the unmeasured quantities of the grid, such as the phase angles, from the measurement data. The operating range of the state variables determines the current status of the network, which enables the operator to perform any necessary action if required. The state of the system, the network topology, and the impedance parameters of the grid can be used to characterize the entire power system (Huang et al. 2012). Traditionally, centralized state estimation with the weighted-least-squares method yielded very accurate results (Rahman and Venayagamoorthy 2017). However, due to the increased complexity and scale of the grid, state estimation in a wide-area grid network now requires multiple smart meters from different localities to share data, some of which could be hosted by a third-party cloud infrastructure (Kim et al. 2011) due to coupling constraints, superior computational resources, greater flexibility, and cost-effectiveness.
The problem with the current cloud computation practice is that it operates mostly over plaintexts (Ren et al. 2012; Deng 2017); hence users reveal data and computation results to the commercial cloud (Ren et al. 2012). It becomes a huge problem when the user data contains sensitive information such as the power consumption patterns in smart meters. Moreover, there are strong financial incentives for the cloud service provider to return false results especially if the clients cannot verify or validate the results (Wang et al. 2011). For example, the cloud service provider could simply store the previously computed result and use it as the result for future computation problems to save computational costs. A recent breakthrough in fully homomorphic encryption (FHE) (Gentry and Boneh 2009) has shown that secure computation outsourcing is viable in theory. However, applying this mechanism to compute arbitrary operations and functions on encrypted data is still far from practice due to its high complexity and overhead (Wang et al. 2011). This problem leads researchers to alternative mechanisms for the design of efficient and verifiable secure cloud computation schemes.
Existing work and our contributions
Numerous privacy challenges related to smart grids are pointed out in the literature in different contexts. Amongst them, the most popular and widely studied is the privacy-preserving billing and data aggregation problem in smart grids (Molina-Markham et al. 2010; Kursawe et al. 2011; Erkin 2015; Ge et al. 2018; Knirsch et al. 2017; Emura 2017; Danezis et al. 2013). Our main objective is different from these works since we focus on the privacy concerns of state estimation in smart grids. Existing literature on the smart grid state estimation problem focuses either on the problem of protecting the grid topology (Liu et al. 2011; Rahman and Venayagamoorthy 2017; Deng et al. 2017) or on preserving the power consumption data of the users separately (Kim et al. 2011; Beussink et al. 2014; Tonyali et al. 2016). In Liu et al. (2011), the authors present a new class of attacks called false data injection attacks (FDI) against state estimation in smart grids and show that an attacker can exploit the configuration of a power network to successfully introduce arbitrary errors into the state variables while bypassing existing techniques for bad measurement detection. The authors in Deng et al. (2017) propose a design for a least-budget defense strategy to protect the power system from such FDI attacks. The authors in Rahman and Venayagamoorthy (2017) extend this problem to non-linear state estimation and examine the possibilities of FDI attacks in an AC power network. To preserve the privacy of the user's daily activities, (Kim et al. 2011) exploits the kernel of the electric grid configuration matrix. In Beussink et al. (2014), a data obfuscation approach for an 802.11s-based mesh network is proposed to securely distribute obfuscated values along the routes available via 802.11s. The obfuscation approach in Tonyali et al. (2016) tackles this problem through the advanced encryption standard (AES) scheme for hiding the power consumption data and uses elliptic-curve cryptography (ECC) for authenticating the obfuscation values that are distributed within the advanced metering infrastructure (AMI) network.
Contrary to the above work in smart grid state estimation, we focus on protecting both the power consumption data of the users and the grid topology. An open problem pointed out in Efthymiou and Kalogridis (2010); Li et al. (2010); Kim et al. (2011) is to provide a light-weight implementation of state estimation that can run on a smart meter platform. In this paper, we attempt to solve this problem by proposing Obfuscate(.), an efficient secure masking scheme based on randomization. In our scheme, the smart meters installed in a particular locality obfuscate their measurement data and send it to the lead smart meter of that locality. These lead smart meters, in turn, gather the randomized data and send it to the cloud service provider to perform the required computations.
The major contributions of our paper are as follows:
We propose Obfuscate(.), the first batch-wise state estimation scheme in smart grids with the goal of protecting both the power consumption data of the consumers and the grid topology. Our scheme is based on secure masking through obfuscated transformation and is proven to be efficient with no major computational overhead to the users.
We evaluate the performance of Obfuscate(.) with real-time hourly power consumption dataset of different smart meters. We use the dataset under the assumption that these meters are connected to an IEEE-14 bus test grid system and a fully measured 5 bus power system. Furthermore, we evaluate the illegibility of the obfuscated dataset with respect to the original dataset.
In the rest of the paper, first we discuss the necessary prerequisites on state estimation in smart grids and the adversarial models in "Background information" section. In "Secure state estimation with Obfuscate(.)" section, we explain Obfuscate(.) in detail. In "Analyses of Obfuscate(.)" section, we present the correctness, privacy, verification and complexity analyses of our scheme. In "Simulation results" section, we present the simulation results and we conclude the paper in "Conclusions and future work" section.
Static state estimation in electric grids
The static state estimation (SSE) in smart grids is a well established problem with well-known techniques that rely on a set of measurement data to estimate the states at regular time intervals (Schweppe and Wildes 1970; Schweppe and Rom 1970; Schweppe 1970). The state vector \(x = [x_{1}, x_{2}, \cdots x_{n}]^{T} \in \mathbb {R}^{n}\) represents the phase angles at each electric branch or system node, and the measurement data \(z \in \mathbb {R}^{m}\) denotes the power readings of the smart meters. The state vector x and the measurement data z are related by a nonlinear mapping function h such that z=h(x)+e, where the sensor measurement noise e is a zero-mean Gaussian noise vector. Typically, for state estimation a linear approximation of this equation is used (Kim et al. 2011; Liu et al. 2011; Gera et al. 2017) as z=Hx+e, where \(\mathbf {H} \in \mathbb {R}^{m \times n}\) is the full column rank (m>n) measurement Jacobian matrix determined by the grid structure and line parameters (Liang et al. 2017). The matrix H is known as the grid configuration or the power network topology matrix (Kim et al. 2011; Liang et al. 2017; Gera et al. 2017). In an electric grid m≫n (Zimmerman et al. 2009) and the best unbiased linear estimation of the state (Wood and Wollenberg 1996) is given by
$$ \hat{x} = \left(\mathbf{H}^{T} W \mathbf{H}\right)^{-1} \mathbf{H}^{T} W z, $$
where \(W^{-1} \in \mathbb {R}^{m \times m}\) represents the covariance matrix of the measurement noise. \(W^{-1}\) is taken to be a diagonal matrix \(W^{-1} = \sigma^{2} I\) (Wood and Wollenberg 1996), so Eq. 1 reduces to
$$ \hat{x} = \left(\mathbf{H}^{T} \mathbf{H}\right)^{-1} \mathbf{H}^{T} z. $$
The SSE technique reduces the computational complexity of performing state estimation in smart grids, where the estimates are usually updated on a periodic basis (Huang et al. 2012). Measurement devices in current transmission systems are installed specifically catering to the needs of SSE (Krause and Lehnhoff 2012). Recently developed phasor measurement units (PMUs) are able to measure voltage and line current phasors with high accuracy and sampling rates. However, deployment of a large number of PMUs across the system requires significant investments since the average overall cost per PMU ranges from $40k to $180k (Department of Energy 2014). Hence SSE will remain an important technique for estimating the state variables at medium and low voltage levels (Cosovic and Vukobratovic 2017). In practice, state estimation is run only every few minutes or only when a significant change occurs in the network (Cosovic and Vukobratovic 2017; Monticelli 2000).
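As a minimal numerical illustration of the estimate in Eq. 2 (not part of the protocol itself, and using randomly generated placeholder matrices rather than an actual grid), the following Python sketch computes \(\hat{x} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T} z\) for a small over-determined system:

```python
# Minimal sketch of the linear state estimate of Eq. 2; H, x and the noise
# level are illustrative placeholders, not data from a real power network.
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 4                                    # m measurements, n states (m >> n in practice)
H = rng.standard_normal((m, n))                 # stand-in for the full-column-rank topology matrix
x_true = rng.standard_normal(n)                 # unknown phase angles
z = H @ x_true + 0.01 * rng.standard_normal(m)  # z = Hx + e

# Best linear unbiased estimate with W^{-1} = sigma^2 I (Eq. 2)
x_hat = np.linalg.solve(H.T @ H, H.T @ z)
print(np.round(x_hat - x_true, 3))              # estimation error is at the noise level
```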
Bad measurement detection (BMD)
Bad measurements may be introduced due to meter failures or malicious attacks. They may affect the outcome of state estimation and can mislead the grid control algorithms, possibly causing catastrophic consequences such as blackouts in large geographical areas. For example, a large portion of the Midwest and Northeast United States and Ontario, Canada, experienced an electric power blackout affecting a population of about 50 million (n.a. 2003). The power outage cost about $80bn in the USA, and utility operators usually amortize such costs by increasing the energy tariff, which is unfortunately transferred to consumer expenses (Salinas and Li 2016). Thus, BMD is vital to ensure smooth and reliable operations in the grid.
The most common technique to detect bad measurements is to calculate the L2-norm \(\left\Vert z - \mathbf{H}\,\hat{x} \right\Vert\), and if \(\left\Vert z - \mathbf{H}\,\hat{x} \right\Vert > \tau\), where τ is the threshold limit, then the measurement z is considered to be bad. The reason is that, intuitively, normal sensor measurements yield estimates closer to their actual values, while abnormal ones deviate the estimated values away from their true values. This inconsistency check is used to differentiate the good and the bad measurements (Liu et al. 2011). However, this is not always the case, as exposing H could make the grid vulnerable to stealth attacks (Liu et al. 2011). Liu, Reiter and Ning proved that a malicious entity can exploit the row and column properties of H when exposed, and launch false data injection attacks without getting detected (Liu et al. 2011). The H matrix includes the arrangement of loads or generators, transmission lines, transformers, and status of system devices and is an integral part of state estimation, security, and power market design (Gera et al. 2017). Thus, there is a strong need to protect not just the power consumption data but also the power network topology during state estimation.
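A minimal sketch of this residual check (with H, z, \(\hat{x}\) and the threshold τ as placeholders to be supplied by the operator) could look as follows:

```python
# Residual-based bad measurement detection: flag the batch when
# ||z - H x_hat|| exceeds a chosen threshold tau (threshold selection is
# application-specific and not prescribed here).
import numpy as np

def bad_measurement_detected(H: np.ndarray, z: np.ndarray,
                             x_hat: np.ndarray, tau: float) -> bool:
    """Return True if the L2-norm of the measurement residual exceeds tau."""
    return float(np.linalg.norm(z - H @ x_hat)) > tau
```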
Cryptographic preambles
To understand the privacy goals of our problem, we state the following definitions:
Obfuscation (Shoukry et al. 2016) is the procedure of transforming the data into masked data through randomization and performing the necessary operations on this masked data. The obfuscated data can be unmasked by inverting the randomized transformation using the respective private keys.
Semi-honest Adversary (Lindell and Pinkas 2009) is an adversary who correctly follows the protocol specification but keeps track of all the information exchanged to possibly analyze it together with any other public information to leak sensitive data. It is also known as an honest-but-curious or passive adversary.
Malicious Adversary (Lindell and Pinkas 2009) is an adversary who can arbitrarily deviate from the protocol specification. Here the attacks are no longer restricted to eavesdropping since the adversary might actually inject or tamper with the data provided. It is also known as an active adversary.
Secure state estimation with Obfuscate(.)
In this section, we explain our secure state estimation protocol Obfuscate(.) along with the setup and the threat model.
Let an area \(\mathcal {A}\) consist of two localities (see Footnote 1) denoted by L1 and L2 as shown in Fig. 1. The symbol Sij refers to the smart meter installed at the household j situated in locality Li and \(X_{i} \in \mathbb {R}^{n_{i} \times T}\) denotes the state sequences of all the smart meters installed in Li for a given batch of time duration T. The electric grid configuration matrix of Li is represented as Hi and the coupling matrices between Li and Lj are denoted as Hij and Hji respectively. The symbol [·] denotes the obfuscation of a vector or matrix. For example, [Zi] represents the obfuscated value of the matrix \(Z_{i} \in \mathbb {R}^{m_{i} \times T}\) where mi is the number of smart meters in Li. The participating entities in our design are as follows:
Fig. 1 Proposed solution framework
Utility Provider\(\mathcal {U}\): provides utility services to \(\mathcal {A}\) and has access to the grid configuration matrix H. \(\mathcal {U}\) generates all the keys to initiate Obfuscate(.) and distributes a selected portion of these keys to the smart meters at each locality through a private channel to carry out obfuscation. \(\mathcal {U}\) is a decision-making entity performing any necessary action after receiving the state variables at regular intervals.
Lead Smart Meter Si1 receives the randomized masked data from the other meters connected to it and obfuscates the dynamics of the power consumption pattern of all the meters in its locality. It then sends the data to the cloud for state estimation. The lead meter at every locality is assumed to be a trusted node in the local network. A similar entity was proposed in Kim et al. (2011) where the lead meter is connected to all the meters based on the mesh topology network. The lead meter, for instance, could be the local distributed system operator (DSO) of a particular locality.
Other Smart Meters Sij (∀j≠1) are all the other meters in Li. They obfuscate their measurement data and send it to the lead meter Si1 to avoid leaking information about their respective consumptions to any potential eavesdropper.
Cloud \(\mathcal {C}\) is computationally powerful and hence provides computation services for \(\mathcal {A}\) by performing state estimation. As pointed out before, since most current cloud computations are performed in plaintext, modeling the cloud as a malicious entity is crucial in practice.
Threat model
The smart meters in Li and Lj, where j≠i, are considered to be semi-honest to each other, i.e., clients living in different localities are curious about each other's consumption data. This means that people who are situated geographically apart may try to learn information about people in other localities, such as energy consumption patterns, pricing, etc. Also, households living in the same locality are modeled as honest-but-curious. Although it is natural for people living in the same locality, next to each other, to have at least some prior knowledge about each other's activity patterns, it is not acceptable if the neighbors can deduce the usage of a particular appliance at a given time-stamp by applying techniques such as non-intrusive load monitoring (Zeifman and Roth 2011) to the original power consumption data. Thus, all the smart meters in a particular locality securely mask their consumption data before sending it to their respective lead meter.
Unlike the problem of protecting the user power consumption data from the utility provider for billing, data aggregation and other statistical purposes (Kursawe et al. 2011; Erkin 2015; Ge et al. 2018; Knirsch et al. 2017; Emura 2017; Danezis et al. 2013), here we study the problem of carrying out secure state estimation by outsourcing the data to an untrusted third party. These state variables with high accuracy are essential to the utility provider for effective decision-making and providing good quality services such as demand forecasting, optimal power flow, and contingency analysis. Hence \(\mathcal {U}\) here is not considered to be an adversarial entity and is non-colluding in nature. The utility provider's main objective is to earn the consumer trust by protecting their privacy and encouraging more user participation to install smart meters for business and commercial purposes. Investment in smart metering technology is directly impacted by customer trust in the utility operators (Commission 2014a). To protect the privacy of consumers, utility providers make use of secure communication channels and databases with access control (Kim et al. 2011). In addition, with EU's newly devised General Data Protection Regulation (GDPR), energy companies are liable to pay large penalties up to €20m (Hunt 2017), if customer data are misused. One might argue about the need to apply a similar compliance factor to the cloud service provider. However, the major problem specific to cloud computation services is that, with the current technology, most of the computations in the cloud are performed in plaintext (Ren et al. 2012; Deng 2017). Arbitrary computations on encrypted data using FHE schemes are still under active research for effective implementation (Tebaa and Hajji 2014). Providing data in the clear makes the cloud vulnerable to both active and passive attacks. According to the latest Microsoft security intelligence report (Simos 2017), the number of attacks in the cloud environment has increased by 300% which further justifies considering the cloud as a malicious entity in our problem setup.
Obfuscate(.)
The aim of our scheme is to protect the privacy of the power consumption data of the consumers Z and the grid configuration matrix H during state estimation, while outsourcing these pieces of information to an untrusted malicious third-party cloud. Our design goals are as follows:
Input/Output Privacy: Neither the input data sent nor the output data computed by the cloud should be inferred by the cloud.
Correctness: Any cloud server faithfully following the protocol must be able to compute an output that can be verified successfully.
Verification: If the cloud server acts maliciously, then it should not be able to pass the utility-side verification test with a high probability.
Efficiency: Computational overhead for the clients (\(\mathcal {U}\) and Sij) should be minimal.
Nevertheless, it is important to note that local smart meters in the localities cannot estimate the states on their own due to the coupling constraints (see Eq. 3). The efficiency criterion is mainly considered in order to exploit the nearly unlimited computational resources of the cloud. Furthermore, since the smart meters in different neighborhoods are semi-honest to each other, the designed protocol should also guarantee a very low probability of a particular neighbour inferring any sensitive information through eavesdropping and combining any other publicly available information of the localities.
Proposed scheme
Consider the proposed scheme depicted in Fig. 1. The equation z=Hx+e, can be rewritten as :
$$ \left[\begin{array}{l} Z_{1} \\ Z_{2} \\ \end{array}\right] \> = \>\underbrace{ \left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \\ \end{array}\right]}_{\mathbf{H}} \>\left[\begin{array}{l} X_{1} \\ X_{2} \\ \end{array}\right] \> + \>\left[\begin{array}{l} e_{1} \\ e_{2} \\ \end{array}\right], $$
where \(H_{1} \in \mathbb {R}^{m_{1} \times n_{1}}\) and \(H_{2} \in \mathbb {R}^{m_{2} \times n_{2}}\) are the grid configuration matrix of L1 and L2. The matrix \(H_{12} \in \mathbb {R}^{m_{1} \times n_{2}}\) and \(H_{21} \in \mathbb {R}^{m_{2} \times n_{1}}\) denote the coupling matrices. The measurement data and the states of Locality Li are represented by \(Z_{i} \in \mathbb {R}^{m_{i} \times T}\) and \(X_{i} \in \mathbb {R}^{n_{i} \times T}\) respectively. The solution to Eq. 3 is given by Eq. 2.
In general, the configuration of the power network H is not time-varying during the state estimation process (Schweppe and Wildes 1970; Schweppe and Rom 1970; Schweppe 1970; Wood and Wollenberg 1996), and hence the matrix \(\mathbf{H}^{+} = \left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\) can be pre-computed during the offline stage. Typically, this information is computed during the creation of the power network by the utility provider using a trusted party. Hence, the state estimation can be recast and reduced into \(\hat {X} = \mathbf {H}^{+} Z\), where \(\hat {X} \in \mathbb {R}^{n \times T}\), \(Z \in \mathbb {R}^{m \times T}\) and \(\textbf {H}^{+} \in \mathbb {R}^{n \times m}\) with \(m = m_{1} + m_{2}\) and \(n = n_{1} + n_{2}\). Thus, our privacy-aware state estimation problem can be recast into solving a matrix multiplication securely. The matrix H+ can be rewritten block-wise as follows:
$$ \begin{aligned} \mathbf{H}^{+} \> &= \>\left(\left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right]^{T} \left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right] \right)^{-1} \>\left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right]^{T} \> = \>\left[\begin{array}{ll} F_{1} & F_{12} \\ F_{21} & F_{2} \\ \end{array}\right], \end{aligned} $$
where \(F_{1} \in \mathbb {R}^{n_{1} \times m_{1}}, F_{2}\in \mathbb {R}^{n_{2} \times m_{2}}, F_{12} \in \mathbb {R}^{n_{1} \times m_{2}}\) and \(F_{21} \in \mathbb {R}^{n_{2} \times m_{1}}\). The exact expression of the F matrix is omitted here due to space constraints. Notice from \(\hat {X} = \mathbf {H}^{+} Z\) that it is not possible for the lead meter in each locality to carry out the estimation process locally due to the coupling constraints generated by the matrices H12 and H21. Namely, the state estimate \(\hat {X}_{1}\) also depends on the consumption data of the other locality Z2 and vice versa. Thus, the lead meter collects all the obfuscated measurement data from the other meters in its locality and sends it to the cloud. The matrix H+ is obfuscated by the utility provider and sent to the cloud. However, it is important that the matrix H+ is not completely randomized using a single key but is randomized block-wise with different keys for different blocks (see Eq. 4). The estimation problem can be further broken down into
$$ \left[\begin{array}{l} \hat{X_{1}} \\ \hat{X_{2}} \\ \end{array}\right] \> = \>\left[\begin{array}{l} F_{1}\, Z_{1} + F_{12} \, Z_{2} \\ F_{21}\, Z_{1} + F_{2} \, Z_{2} \\ \end{array}\right]. $$
Let us denote the matrix
$$ Y \> = \>\left[\begin{array}{llll} F_{1} Z_{1} & F_{12} Z_{2} \\ F_{21} Z_{1} & F_{2} Z_{2} \\ \end{array}\right] \> = \>\left[\begin{array}{llll} Y_{1} & Y_{12} \\ Y_{21} & Y_{2} \\ \end{array}\right]. $$
Using Eq. 5 for estimating the states, we solve the matrix multiplication of each blocks in Eq. 6 privately and then perform matrix addition.
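The block-wise computation of Eqs. 4–6 can be sketched in a few lines of Python; the dimensions match the 5-bus case study used later, but H and Z here are random placeholders rather than actual grid data:

```python
# Block-partitioned state estimation: split H+ into F1, F12, F21, F2, compute
# the four block products of Eq. 6 and recombine them as in Eq. 5.
import numpy as np

rng = np.random.default_rng(1)
m1, m2, n1, n2, T = 4, 6, 2, 2, 13
H = rng.standard_normal((m1 + m2, n1 + n2))      # placeholder topology matrix
Z = rng.standard_normal((m1 + m2, T))            # stacked measurements [Z1; Z2]

H_plus = np.linalg.pinv(H)                       # H+ = (H^T H)^{-1} H^T, precomputed offline
F1, F12 = H_plus[:n1, :m1], H_plus[:n1, m1:]
F21, F2 = H_plus[n1:, :m1], H_plus[n1:, m1:]
Z1, Z2 = Z[:m1, :], Z[m1:, :]

Y1, Y12, Y21, Y2 = F1 @ Z1, F12 @ Z2, F21 @ Z1, F2 @ Z2   # block products (Eq. 6)
X_hat = np.vstack([Y1 + Y12, Y21 + Y2])                    # recombination (Eq. 5)
print(np.allclose(X_hat, H_plus @ Z))                      # matches the unpartitioned estimate
```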
The matrix multiplication is a fundamental problem in cryptography and several solutions have been proposed to solve it (Atallah and Frikken 2010; Atallah et al. 2012; Fiore and Gennaro 2012; Zhang and Blanton 2014). However, these protocols are not designed for the cloud environment and hence do not consider the computational asymmetry of the cloud server and the client. Another drawback is that these protocols use advanced cryptography to encrypt the input and output dataset, which makes them unsuitable for the computation on the cloud with large datasets due to high overhead. Furthermore, the verification of the result, which is an essential requirement in a malicious cloud setting, is not considered in these protocols (Kumar et al. 2017). A secure multiparty computation (MPC) approach was considered in Dreier and Kerschbaum (2011); López-Alt et al. (2012), where the computation is divided among multiple parties without allowing any participating entity to access another individual's private information. However, this approach is not feasible for our problem setup since all the parties are required to have a comparable computing capability. Also, in MPC approach, the result verification is often troublesome since it requires expensive zero-knowledge proofs (Saia and Zamani 2015; Goldwasser et al. 2015).
Recently, a privacy-preserving, verifiable and efficient outsourcing algorithm for matrix multiplication to a malicious cloud was proposed in Kumar et al. (2017) utilizing linear transformation techniques. In our paper, we adopt a similar approach to the one prescribed in Kumar et al. (2017) to outsource the multiplication of the block matrices in Eq. 6 securely to the cloud. However, Obfuscate(.) is not a straightforward application of the protocol in Kumar et al. (2017). Kumar et al. (2017) considers only a single client and a cloud setup, where the client performs the key generation, problem transformation, re-transformation and verification on his/her own. In our scheme, there are multiple smart meters installed in different neighborhoods. The keys cannot be generated locally by the individual households because the smart meters have access only to their respective consumption data, which forms only a part of the information required for state estimation. Hence, besides the key generation, we also propose KeyDist, a key distribution scheme shown in Fig. 2, which is used by \(\mathcal {U}\) to distribute keys to the smart meters. Obfuscate(.) comprises eight subalgorithms which are explained in the rest of this section.
Fig. 2 A triangular key distribution scheme for a locality Li
The KeyGen(\(1^{\lambda}, m_{1}, n_{1}\)) algorithm (Algorithm 1) takes as input the security parameter λ and generates a total of n1+m1 non-zero random numbers, each of bit size λ. These numbers are used to generate the key matrices of size \(\mathbb {R}^{m_{1}}\) and \(\mathbb {R}^{n_{1}}\). Table 1 shows all the keys that are generated per batch.
Table 1 Key generation protocol run by \(\mathcal {U}\) per batch
After KeyDist() (Algorithm 2), the matrix transformation ψK() is carried out by the respective entities using their respective keys K. For every new input matrix, ψK() is invoked to securely mask the input through a linear transformation in order to preserve privacy. This operation dominates the client-side computation cost, but is not significant compared to the computations performed by the cloud. The matrix transformations for the input matrices F1 and Z1 are given by Algorithms 3 and 4, respectively. Table 2 summarizes the complete matrix transformation protocol; a minimal sketch of the key generation and masking steps follows below.
Table 2 Matrix transformation protocol run per batch
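As a rough sketch (working over floating-point numbers rather than λ-bit integers, and folding the per-meter scaling and the lead-meter step into a single transformation), key generation and the masking of one block pair (F1, Z1) could look as follows:

```python
# KeyGen produces random non-zero diagonal key matrices; the utility masks
# F1 as D1 F1 A1 (Algorithm 3) and the lead meter masks Z1 as A1^{-1} Z1 D2
# (Algorithm 4). All dimensions and values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

def keygen(size: int, low: float = 1.0, high: float = 1e5) -> np.ndarray:
    """Random diagonal key matrix with non-zero entries (simplified KeyGen)."""
    return np.diag(rng.uniform(low, high, size))

m1, n1, T = 4, 2, 13
A1, D1, D2 = keygen(m1), keygen(n1), keygen(T)   # distributed via KeyDist over a private channel

F1 = rng.standard_normal((n1, m1))               # block of H+, known to the utility
Z1 = rng.standard_normal((m1, T))                # measurements of locality L1

F1_masked = D1 @ F1 @ A1                         # utility-side transformation
Z1_masked = np.linalg.inv(A1) @ Z1 @ D2          # lead-meter-side transformation
# Only F1_masked and Z1_masked are ever sent to the cloud.
```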
Next, the obfuscated blocks of the matrix H+ and the masked measurement matrices [Zi] are sent by \(\mathcal {U}\) and Si1, respectively, to \(\mathcal {C}\) to perform the Computeψ([F1],[Z1]) algorithm given in Algorithm 5. This algorithm performs the computation on the cloud server. It computes the masked product as \(\psi (\left [F_{1}\right ], \left [Z_{1}\right ]) = (D_{1} F_{1} A_{1}) \cdot \left (A_{1}^{-1} Z_{1} D_{2}\right)\). Table 3 shows the Computeψ() protocol run by the cloud server for estimating the state samples.
Table 3 Computation protocol run by \(\mathcal {C}\) per batch
Upon computing the Y matrix, the cloud sends the computed result to the utility provider \(\mathcal {U}\) to execute the verification step. The Verify([Y],γ) algorithm computes Q=([F]·([Z]·γ))−([Y]·γ), where γ is a binary key vector of length T, i.e. \(\gamma \in \{0,1\}^{T}\). The algorithm introduces the binary column vector key γ to minimize the complexity of the computation, since matrix-vector multiplication only costs quadratic time. The verification protocol for Li is given in Algorithm 6.
It is important to note that the verification step serves as the BMD test in our setup and is run for all four block matrices given by Eq. 6. Table 4 presents the verification protocol; a sketch of the check follows below. The results are accepted only if the cloud server passes all four verification tests. If the verification is positive, then it means that no false data has been injected into the measurements by the cloud, which indicates the absence of bad measurements in the network.
Table 4 Verification Protocol run by \(\mathcal {U}\) per batch
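A minimal sketch of the verification step, repeated p times with fresh random binary vectors γ (the masked matrices below are placeholders), is:

```python
# Freivalds-style verification: accept [Y] only if [F]([Z] gamma) == [Y] gamma
# for p independent random binary vectors gamma. A tampered result fails the
# test except with probability at most 2^{-p}.
import numpy as np

rng = np.random.default_rng(5)

def verify(F_masked, Z_masked, Y_masked, p=20):
    T = Z_masked.shape[1]
    for _ in range(p):
        gamma = rng.integers(0, 2, size=(T, 1)).astype(float)
        Q = F_masked @ (Z_masked @ gamma) - Y_masked @ gamma
        if not np.allclose(Q, 0.0):
            return False
    return True

n1, m1, T = 2, 4, 13
F_masked = rng.standard_normal((n1, m1))
Z_masked = rng.standard_normal((m1, T))
Y_correct = F_masked @ Z_masked
Y_tampered = Y_correct + 0.1 * rng.standard_normal((n1, T))

print(verify(F_masked, Z_masked, Y_correct))     # True
print(verify(F_masked, Z_masked, Y_tampered))    # False with overwhelming probability
```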
After positive verification, the Unmask(Y,K) algorithm (Algorithm 7) is run by \(\mathcal {U}\). This algorithm returns the original values of the states \(\hat {X}\) by de-randomizing Y using the respective keys K. Table 5 summarizes the Unmask() protocol carried out for all four block matrices. Once all four blocks of Y are unmasked, \(\mathcal {U}\) carries out the protocol given in Algorithm 8 to obtain the final state estimates.
Table 5 Unmasking Protocol run by \(\mathcal {U}\)
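Putting the pieces together for a single block, a hedged end-to-end sketch (mask, cloud multiplication, unmask) that checks the recovered product against the plaintext F1 Z1 is:

```python
# End-to-end check for one block: [F1] = D1 F1 A1, [Z1] = A1^{-1} Z1 D2,
# the cloud computes [Y1] = [F1][Z1], the utility unmasks Y1 = D1^{-1} [Y1] D2^{-1}.
# Dimensions and values are placeholders chosen only for illustration.
import numpy as np

rng = np.random.default_rng(6)
n1, m1, T = 2, 4, 13
F1 = rng.standard_normal((n1, m1))
Z1 = rng.standard_normal((m1, T))

A1 = np.diag(rng.uniform(1.0, 1e5, m1))
D1 = np.diag(rng.uniform(1.0, 1e5, n1))
D2 = np.diag(rng.uniform(1.0, 1e5, T))

F1_masked = D1 @ F1 @ A1                                   # utility
Z1_masked = np.linalg.inv(A1) @ Z1 @ D2                    # lead meter
Y1_masked = F1_masked @ Z1_masked                          # cloud
Y1 = np.linalg.inv(D1) @ Y1_masked @ np.linalg.inv(D2)     # Unmask(Y, K) at the utility

print(np.allclose(Y1, F1 @ Z1))                            # True: Y1 equals F1 Z1
```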
Analyses of Obfuscate(.)
In this section, we show that Obfuscate(.) complies with the design goals stated in "Secure state estimation with Obfuscate(.)" section which are correctness, privacy, verifiability, and efficiency.
Correctness analysis
If the smart meters, utility provider, and the cloud correctly follow Obfuscate(.) as per the protocol, then Obfuscate(.) produces correct results for all the four matrix multiplications. This follows from a simple proof:
\(\mathcal {U}\) first transforms the matrix F1 into [F1]=D1F1A1 and the lead smart meter in L1 transforms the matrix \(Z^{\prime }_{1} = A_{1}^{-1} Z_{1}\) into \(\left [Z_{1}\right ] = A_{1}^{-1} Z_{1} D_{2}\). The cloud server computes \(\left [Y_{1}\right ] = \left [F_{1}\right ] \cdot \left [Z_{1}\right ] = (D_{1} F_{1} A_{1}) \cdot \left (A_{1}^{-1} Z_{1} D_{2}\right) = D_{1} Y_{1} D_{2}.\) Then, in the de-randomization step, \(\mathcal {U}\) computes Y1, where \(Y_{1} = D_{1}^{-1} \left [Y_{1}\right ] D_{2}^{-1} = F_{1} \cdot Z_{1}\). □
The above analysis holds for all the Computeψ(.) presented in Table 3, thereby proving the correctness of Obfuscate(.).
Privacy analysis
Input Privacy: Since \(\mathcal {C}\) only has access to the masked input matrices [F] and [Z], it cannot retrieve the original input matrices F and Z. Furthermore, the keys generated as in Table 1 do not leak any information about the original input, since the keys are completely random and have no dependency on the topology or the power consumption data. This can be seen from the following proof:
The key matrix A1 and A2 are diagonal matrices with each element being a random real number of λ bit long. There are \(\phantom {\dot {i}\!}2^{m_{i} \lambda }\) possibilities of Ai matrix where i∈{1,2}. For diagonal matrices D1 and D2, there are in total \(2^{n_{1} \lambda + T \lambda }\phantom {\dot {i}\!}\) possibilities. Thus for a single block F1 in Y, there are a total of \(\phantom {\dot {i}\!}2^{(m_{1} + n_{1} + T) \lambda }\) possible choices of key matrices, which is an exponential bound quantity in terms of (m1,n1,T). □
For example, consider a practical scenario where a locality has m1=1000,n1=600,T=400 for which we have 22000λ possibilities. Thus, with increase in m1,n1 and T, the cloud does not recover any meaningful information.
Output Privacy: Similar to the input privacy analysis, the output result is also protected. The resulting obfuscated matrix does not leak any information to \(\mathcal {C}\), even if it records all the computed results. Besides, for every batch, \(\mathcal {U}\) generates new keys given in Table 1 which makes our protocol resistant to any known-plain-text attack (KPA) or chosen-plain-text-attack (CPA) (Kumar et al. 2017).
Verification analysis
Since in a malicious threat model, the cloud server may deviate from the actual instructions of the given protocol, we equip Obfuscate(.) with a result verification algorithm to validate and verify the correctness of the result. The proof that a wrong or an invalid result never passes the verification step follows from the total probability theorem as followed in Kumar et al. (2017); Lei et al. (2013).
If the cloud produces the correct result, say Y1, then Q1 = [F1]·[Z1] − [Y1] = 0, i.e., every entry of Q1 is zero. If the cloud produces a wrong result, then Q1 = [F1]·[Z1] − [Y1] ≠ 0, i.e., there exists at least one row in Q1 which is not equal to zero. Writing \(Q_{1} \gamma _{1} = [q_{1},\cdots, q_{m_{1}}]^{T}\), let qi correspond to such a non-zero row, where
$$q_{i} = \sum_{j=1}^{T} Q_{1i,j} \cdot \gamma_{j} = Q_{1i,1} \cdot \gamma_{1} + \cdots + Q_{1i,k} \cdot \gamma_{k} + \cdots + Q_{1i,T} \cdot \gamma_{T}.$$
There exists at least one element in this row which is not equal to zero. Let Q1i,k≠0; then qi=Q1i,k·γk+Γ, where \(\Gamma = \sum _{j=1}^{T} Q_{1i,j} \cdot \gamma _{j} - Q_{1i,k} \cdot \gamma _{k}\). Applying the total probability theorem yields,
$$\begin{array}{*{20}l} &\Pr (q_{i} = 0) = \Pr [(q_{i} = 0) | (\Gamma = 0)] \Pr [\Gamma = 0] + \Pr [(q_{i} = 0) | (\Gamma \neq 0)] \Pr [\Gamma \neq 0] \\ &\Pr [(q_{i} = 0) | (\Gamma = 0)] = \Pr [\gamma_{k} = 0] = 1/2 \\ &\Pr [(q_{i} = 0) | (\Gamma \neq 0)] \leq \Pr [\gamma_{k} = 1] = 1/2 \end{array} $$
Substituting (8) in (7), we derive
$$ \begin{aligned} \Pr [(q_{i} = 0) ] &\leq 1/2 \Pr [\Gamma = 0] + 1/2 \Pr [\Gamma \neq 0],\\ \Pr [(q_{i} = 0) ] &\leq 1/2 (1 - \Pr [\Gamma \neq 0]) + 1/2 \Pr [\Gamma \neq 0], \\ \Pr [(q_{i} = 0) ] &\leq 1/2. \end{aligned} $$
If the verification process is run p times, then Pr[(qi=0)]≤1/2p. □
The value p reveals the trade-off between computational efficiency and verifiability. Theoretically, p≥80 is sufficient to ensure a negligible probability for the cloud to pass the verification test despite producing a wrong result. However, in practice, p=20 is acceptable, since \(1/2^{20}\) corresponds to roughly one in a million (Kumar et al. 2017; Lei et al. 2013): the verification process fails to detect a wrong result about once in a million runs.
Efficiency analysis
In this section, we carry out the computational complexity analysis to prove the efficiency of Obfuscate(.). The computational cost of each step in Obfuscate(.) is analyzed in Table 6. The KeyDist() protocol introduces an additional communication cost of O(m) since \(\mathcal {U}\) distributes the key aij to all the smart meters through a private channel for obfuscating their measurement data. From Table 6, it is clear that the computations performed by the client side are substantially lower than those of the cloud server. Due to the diagonal structure of the key matrices, the problem transformation step given by Algorithms 3 and 4 only costs O(nm+mT). The asymptotic complexity of the client side computation is only O(nm+mT+nT) (Kumar et al. 2017). Thus, outsourcing the computation yields a performance gain of \(O\left (\frac {1}{n} + \frac {1}{m} + \frac {1}{T}\right)\). Clearly, when n, m, and T increase, the clients will achieve a higher performance gain. In particular, with the increase in the number of smart meters m by the year 2020 targeted by the EU (Commission 2014b), Obfuscate(.) will significantly reduce the computational overhead of its clients in the long run.
Table 6 Computation complexity analysis of the protocol
Simulation results
In this section, we evaluate the degree of obscurity of Obfuscate(.) using two case studies: a fully measured 5-bus system and the IEEE 14-bus system with real-time power consumption data. We start with a fully measured 5-bus system; the structure of the H matrix for this system can be found in the Appendix. In this case, the total number of meters is m=10 and the number of state variables is n=4. We consider m1=4, m2=6 and n1=n2=2 and the duration of every batch to be 13 hours. Note that in practice, smart meters can sample at much higher frequencies (Chen et al. 2011). Research on disaggregating electricity load has been conducted on smart meter readings with a fine granularity of frequency between 1 Hz and 1 MHz (Chen et al. 2011). The authors in Kim et al. (2011) collected real-time power consumption data of both residential and office spaces with a sampling rate of 1 Hz. Hence, in practice the number of data points collected per batch T could be in the order of tens of thousands. However, due to the unavailability of such high-frequency measurement data, we restrict the size of T. Since we had access only to hourly power consumption data, we restrict T=13. Although the size of the matrix \(Z \in \mathbb {R}^{m \times T}\) is smaller than in practice, the state estimation still cannot be performed locally due to the coupling constraints between the two localities. Upon inspecting the power consumption values of all the meters, we found that these values are mostly 4 to 5 decimal digits long. To mask this data securely, we use a key size of \(\lambda = \log_{2}(10^{5}) + 80 \approx 16 + 80 = 96\) bits. The additional 80 bits ensure that Obfuscate(.) follows the National Institute of Standards and Technology (NIST) recommendations (see Footnote 2) for securely masking the data. Based on present computational capabilities, it is not possible to break our scheme, thereby demonstrating its robustness against attacks from a malicious adversary.
Figure 4 shows the illegibility of Obfuscate(.) for a fully measured 5-bus power system. Illegibility measures the level of difficulty of interpreting and mining the data for the malicious cloud server (Kim et al. 2011). In Fig. 4a, we can see that the original power consumption data of a household (blue) is always positive, whereas the obfuscated data (red) show negative power readings and behave more like random variables. The degree of obscurity becomes clearer when these datasets are transformed into the frequency domain. Figure 4b plots the Fast Fourier Transform (FFT) coefficients against various frequencies and shows that the original data consists mostly of low-frequency components, whereas the obfuscated data exhibits high-frequency components. This can also be inferred from the power spectral density plot shown in Fig. 4c. Clearly, we can see that the original data (top) has higher power in low-frequency regions, whereas the obfuscated dataset (bottom) behaves in exactly the opposite way, with higher power in high-frequency regions. Nevertheless, as can be seen from Fig. 4d, the states estimated from the obfuscated dataset are exactly the same as those obtained from the original measurement data. Thus, Obfuscate(.) does not degrade the quality of the estimate of the state variables. Furthermore, to evaluate the resilience of Obfuscate(.), we estimate the Pearson correlation coefficient, which gives a measure of the degree of similarity between two signals. The correlation coefficient between two identical signals in phase is always 1, while for two identical signals out of phase (phase difference = 180∘) it is −1. Figure 3 depicts the Pearson correlation coefficients of all the metering points of the 5-bus system. It can be seen that the correlation between the original and the obfuscated datasets is mostly smaller than 0.2 for almost all the metering points. This implies that it is very hard for any pattern recognition and data mining algorithm to infer information about the original power consumption pattern of the smart meters from the obfuscated datasets (Kim et al. 2011).
Fig. 3 Pearson correlation coefficients for all the metering points in a 5-bus power network
Fig. 4 Illustration of data obfuscation in a 5-bus power network. a Original and obfuscated time domain data from Meter #1. b Original and obfuscated frequency domain data from Meter #1. c Power spectral density of true and obfuscated measurement data. d Estimated state value at Branch #1. Estimation error between true and obfuscated dataset = 0
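The two illegibility checks used above (Pearson correlation and FFT spectra) can be reproduced with a few lines of Python; the trace below is a synthetic placeholder rather than the actual metering dataset, and the element-wise masking merely mimics the effect of the diagonal keys on one meter's row:

```python
# Pearson correlation and FFT magnitudes of an original vs. an obfuscated
# trace; a correlation close to zero and a flattened spectrum indicate that
# the obfuscated data reveals little about the original consumption pattern.
import numpy as np

rng = np.random.default_rng(3)
T = 13
original = 1.0 + 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # smooth, positive
a = rng.uniform(1.0, 1e5)                    # per-meter key (entry of A1^{-1})
d = rng.uniform(1.0, 1e5, T)                 # per-sample keys (diagonal of D2)
obfuscated = (original / a) * d              # effect of the masking on one row (illustrative)

pearson = np.corrcoef(original, obfuscated)[0, 1]
orig_spectrum = np.abs(np.fft.rfft(original))
obf_spectrum = np.abs(np.fft.rfft(obfuscated))
print(f"Pearson correlation: {pearson:.3f}") # typically small for random per-sample keys
```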
Next, we evaluate the degree of obscurity for an IEEE 14-bus system. The H matrix for the 14-bus system is extracted from MATPOWER (Zimmerman et al. 2011), an open-source tool for solving steady-state power system simulation and optimization problems. In this case, the number of metering points is m=31 and the number of state variables is n=13. We further partition the number of meters and state variables for L1 and L2 as m1=15, m2=16 and n1=6, n2=7. Figure 5 depicts the time domain data, the frequency domain data, and the estimated states from the original and obfuscated measurement data. Comparing Figs. 4 and 5, we arrive at similar conclusions for the 14-bus system as for the 5-bus system. Figure 6a shows the correlation coefficients of all 31 metering points for T=13, and it can be seen that the values are less than 0.3. Note from Fig. 6b that, as expected, when the number of measurement data samples is increased, i.e., when the value of T is increased from 13 to 360, the correlation coefficient is found to be less than 0.2, which makes this scheme practically secure for estimation with fine-granular high-frequency meter readings. Also, in this case, since each key size is 96 bits, a semi-honest neighbor trying to infer the power consumption of a household in the same locality has about \(2^{96} \approx 7.92 \times 10^{28}\) possibilities for every batch. Naturally, when the time duration per batch drops down to every few minutes with high-frequency datasets, it becomes almost impossible for a semi-honest adversary to deduce the appliance usage patterns of his/her neighbor living in the same locality.
Fig. 5 Illustration of data obfuscation in the IEEE 14-bus power network. a Original and obfuscated time domain data from Meter #27. b Original and obfuscated frequency domain data from Meter #27. c Power spectral density of true and obfuscated measurement data. d Estimated state value at Branch #7. Estimation error between true and obfuscated dataset = 0
Fig. 6 Pearson correlation coefficients of all the metering points in the IEEE 14-bus system. a T=13. b T=360
However, Obfuscate(.) still has a shortcoming since it cannot preserve the privacy of zero elements. The power grid topology matrix H is, in general, a full-column-rank, sparse matrix. However, H+ is less sparse than H and is likely to be dense. Upon inspecting the sparsity pattern of H+ for both the 5-bus and 14-bus power systems, we found that H+ for the 14-bus system was about 20% sparse, whereas H+ for the 5-bus power system was completely dense. Exposing the sparsity pattern of H+ to the cloud may, in turn, reveal some information about the structure of H, which is undesirable. Thus, to confront such sparse attacks, we introduce the matrix \(\mathbf {H}^{+}_{\Delta } = \mathbf {H}^{+} + \Delta \), where the matrix Δ is 100% dense. The utility provider \(\mathcal {U}\) sends \(\mathbf {H}^{+}_{\Delta }\) instead of H+ to the cloud, which computes \(X_{\Delta } = (\mathbf {H}^{+} + \Delta) Z\). Then, \(\mathcal {U}\) computes the product ΔZ by invoking Obfuscate(.) again. Later, the original state estimates can be retrieved by \(\mathcal {U}\) as \(\hat {X} = X_{\Delta } - \Delta Z\). Note that this step does not incur any major computational overhead on the utility provider since it requires only another simple invocation of Obfuscate(.).
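This countermeasure can be sketched as follows, with random placeholder matrices standing in for H+ and Z; the product ΔZ is shown as a plain multiplication, whereas in the protocol it would be obtained through another invocation of Obfuscate(.):

```python
# Hiding the sparsity pattern of H+: the utility sends H+ + Delta (Delta dense),
# the cloud returns X_Delta = (H+ + Delta) Z, and the utility subtracts Delta Z.
import numpy as np

rng = np.random.default_rng(4)
n, m, T = 4, 10, 13
H_plus = rng.standard_normal((n, m))
H_plus[rng.random((n, m)) < 0.3] = 0.0           # emulate a partially sparse H+
Z = rng.standard_normal((m, T))

Delta = rng.uniform(1.0, 10.0, (n, m))           # fully dense masking matrix
X_delta = (H_plus + Delta) @ Z                   # computed by the cloud
X_hat = X_delta - Delta @ Z                      # utility removes the Delta term
print(np.allclose(X_hat, H_plus @ Z))            # True: original estimate recovered
```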
Conclusions and future work
In this paper, we considered a privacy-aware batch-wise state estimation problem in power networks with the objective of protecting both the grid configuration and power consumption data of the smart meters. We formulated a weighted least-squares problem and reduced the state estimation problem of a power grid into a matrix multiplication problem of four block matrices. Our proposal, Obfuscate(.), exploits highly efficient and verifiable obfuscation-based cryptographic solutions. It supports error-free estimation between the original and obfuscated dataset without compromising the accuracy of the state variables essential to the utility provider and is proven to be correct and privacy-preserving. Complexity analysis shows the efficiency and the practical applicability of Obfuscate(.). We further evaluated the performance of Obfuscate(.) in terms of its illegibility and resilience with a real-time hourly power consumption data. Simulation results demonstrate a high level of obscurity making it hard for the malicious cloud server to infer any behavioral pattern from the obfuscated datasets. We also discussed the problem of revealing the sparsity structure of the pseudo-inverse of network topology matrix and proposed a solution to resist such sparse attacks.
Currently, our scheme does not take into account that the grid configuration matrix H, although time-invariant during the state estimation process, may still change over time. For example, consider a person living in a particular locality who is now motivated to install a smart meter at home for good security reasons, or a person living in one locality who moves to another locality. Such situations clearly result in a row being added to or deleted from the existing H matrix, and assuming a pre-computation of H+ at every stage is not reasonable. Hence, to deal with such instances, we require a protocol for computing the matrix \(A=(\mathbf {H}^{T}\mathbf {H})^{-1}\), i.e., for secure outsourcing of large matrix inversion to the cloud while ensuring the privacy of the sparsity pattern of the matrix. It is also important to point out that the proposed solution can be applied only to those classes of state estimation which essentially boil down to solving a matrix multiplication problem batch-wise or recursively.
Although the behavioral pattern and the power dynamics of the other smart meters in every locality are hidden from the malicious cloud, the respective lead meter has access to this information. The lead meter can access the scaled measurements z′=aij·z (Pearson coefficient = 1), whose dynamics are exactly the same as z. Hence, it was essential in our problem setup to consider a single non-colluding trusted node in every local network, termed the lead meter, to initiate the obfuscation of the measurement data dynamics. Future work may involve developing privacy-aware protocols without any such assumptions. Another possible direction is developing a statistical measure to quantify the degree of obscurity introduced by these obfuscation schemes, to understand how indistinguishable the obfuscated datasets are compared to the original measurement datasets.
Fig. 7 A fully measured 5-bus power system. Taken from (Deng et al. 2017)
A fully measured 5-bus power system is shown in Fig. 7. The total number of meters m is 10 and the meter measurements are \(z=[F_{12}, F_{23}, F_{24}, F_{35}, F_{45}, P_{1}, P_{2}, P_{3}, P_{4}, P_{5}]^{T}\), where Fij represents the branch (i,j) active power flow and Pj represents the bus j active power injection. The structure of the measurement matrix H is given in Eq. 10, where bij denotes the susceptance of the transmission line (i,j) (Deng et al. 2017). The susceptance is the imaginary part of the admittance, and the admittance matrix is obtained from (McCalley 2018). The matrix H+ is pre-computed from H, and the F blocks are partitioned according to their respective dimensions.
$$ {H =\left(\begin{array}{cccc} b_{12} & 0 & 0 & 0 \\-b_{23} & b_{23} & 0 & 0 \\-b_{24} & 0 & b_{24} & 0 \\0 &-b_{35} & 0 & b_{35} \\0& 0 & -b_{45} & b_{45} \\b_{12} & 0 & 0 & 0\\ -b_{12}-b_{23}-b_{24} & b_{23} & b_{24} & 0 \\b_{23} & - b_{23}-b_{35} & 0 & b_{35}\\ b_{24} & 0 & -b_{24}-b_{45} & b_{45} \\0 & -b_{35} & -b_{45} & b_{35} + b_{45} \end{array}\right)} $$
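For reference, a small sketch that assembles H from Eq. 10 and precomputes H+ is given below; the susceptance values are arbitrary placeholders and are not taken from (McCalley 2018):

```python
# Assemble the 5-bus measurement matrix H of Eq. 10 from line susceptances
# b_ij (placeholder values) and precompute H+ = (H^T H)^{-1} H^T offline.
import numpy as np

b12, b23, b24, b35, b45 = 10.0, 8.0, 9.0, 7.0, 6.0   # placeholder susceptances
H = np.array([
    [ b12,              0.0,        0.0,        0.0      ],
    [-b23,              b23,        0.0,        0.0      ],
    [-b24,              0.0,        b24,        0.0      ],
    [ 0.0,             -b35,        0.0,        b35      ],
    [ 0.0,              0.0,       -b45,        b45      ],
    [ b12,              0.0,        0.0,        0.0      ],
    [-b12 - b23 - b24,  b23,        b24,        0.0      ],
    [ b23,             -b23 - b35,  0.0,        b35      ],
    [ b24,              0.0,       -b24 - b45,  b45      ],
    [ 0.0,             -b35,       -b45,        b35 + b45],
])
H_plus = np.linalg.pinv(H)   # precomputed by the utility provider during the offline stage
print(H_plus.shape)          # (4, 10)
```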
Footnote 1: For brevity, here we assume that the area consists of only two localities. The protocol presented in this paper can easily be extended to an area consisting of more than two localities.
Footnote 2: https://www.keylength.com/en/
Atallah, MJ, Frikken KB (2010) Securely outsourcing linear algebra computations In: Proceedings of the 5th ACM Symposium on Information, Computer and Communications Security, ASIACCS 2010, Beijing, China, April 13-16, 2010, 48–59.. ACM, New York.
Atallah, MJ, Frikken KB, Wang S (2012) Private outsourcing of matrix multiplication over closed semi-rings In: SECRYPT 2012 - Proceedings of the International Conference on Security and Cryptography, Rome, Italy, 24-27 July, 2012, SECRYPT Is Part of ICETE - The International Joint Conference on e-Business and Telecommunications, 136–144.. SciTePress, Setúbal.
Beussink, A, Akkaya K, Senturk IF, Mahmoud M. M. E. A. (2014) Preserving consumer privacy on IEEE 802.11s-based smart grid AMI networks using data obfuscation In: 2014 Proceedings IEEE INFOCOM Workshops, Toronto, ON, Canada, April 27 - May 2, 2014, 658–663.. IEEE, New York.
Chen, F, Dai J, Wang B, Sahu S, Naphade MR, Lu C (2011) Activity analysis based on low sample rate smart meters In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21-24, 2011, 240–248.. ACM, New York.
Commission, E (2014a) Benchmarking Smart Metering Deployment in the EU-27 with a Focus on Electricity. https://ses.jrc.ec.europa.eu/publications/reports/benchmarking-smart-metering-deployment-eu-27-focus-electricity.
Commission, E (2014b) Energy. Smart Grids and Meters. https://ec.europa.eu/energy/en/topics/market-and-consumers/smart-grids-and-meters.
Cosovic, M, Vukobratovic D (2017) Fast real-time DC state estimation in electric power systems using belief propagation In: 2017 IEEE International Conference on Smart Grid Communications, SmartGridComm 2017, Dresden, Germany, October 23-27, 2017, 207–212.. IEEE, New York.
Danezis, G, Fournet C, Kohlweiss M, Béguelin SZ (2013) Smart meter aggregation via secret-sharing In: SEGS'13, Proceedings of the 2013 ACM Workshop on Smart Energy Grid Security, Co-located with CCS 2013, November 8, 2013, Berlin, Germany, 75–80.. ACM, New York.
Deng, R (2017) Why We Need to Improve Cloud Computing's Security? https://phys.org/news/2017-10-cloud.html.
Deng, R, Xiao G, Lu R (2017) Defending against false data injection attacks on power system state estimation. IEEE Trans Ind Inform 13(1):198–207.
Department of Energy, US (2014) Factors Affecting PMU Installation Costs. https://www.smartgrid.gov/files/PMU-cost-study-final-10162014_1.pdf.
Dreier, J, Kerschbaum F (2011) Practical privacy-preserving multiparty linear programming based on problem transformation In: PASSAT/SocialCom 2011, Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom), Boston, MA, USA, 9-11 Oct., 2011, 916–924.. IEEE, New York.
Efthymiou, C, Kalogridis G (2010) Smart grid privacy via anonymization of smart metering data In: 2010 First IEEE International Conference on Smart Grid Communications, 238–243.. IEEE, New York.
Emura, K (2017) Privacy-preserving aggregation of time-series data with public verifiability from simple assumptions In: Information Security and Privacy - 22nd Australasian Conference, ACISP 2017, Auckland, New Zealand, July 3-5, 2017, Proceedings, Part II, 193–213.. Springer, Cham.
Erkin, Z (2015) Private data aggregation with groups for smart grids in a dynamic setting using CRT In: 2015 IEEE International Workshop on Information Forensics and Security, WIFS 2015, Roma, Italy, November 16-19, 2015, 1–6.. IEEE, New York.
Fiore, D, Gennaro R (2012) Publicly verifiable delegation of large polynomials and matrix computations, with applications In: the ACM Conference on Computer and Communications Security, CCS'12, Raleigh, NC, USA, October 16-18, 2012, 501–512.. ACM, New York.
Ge, S, Zeng P, Lu R, Choo KR (2018) FGDA: fine-grained data analysis in privacy-preserving smart grid communications. Peer-to-Peer Netw Appl 11(5):966–978.
Gentry, C, Boneh D (2009) A fully homomorphic encryption scheme. PhD thesis, Stanford University, Stanford vol. 20, no. 09.
Gera, I, Yakoby Y, Routtenberg T (2017) Blind estimation of states and topology (BEST) in power systems In: 2017 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2017, Montreal, QC, Canada, November 14-16, 2017, 1080–1084.. IEEE, New York.
Goldwasser, S, Kalai YT, Rothblum GN (2015) Delegating computation: Interactive proofs for muggles. J ACM 62(4), Article 27.
Huang, Y, Werner S, Huang J, Kashyap N, Gupta V (2012) State estimation in electric power grids: Meeting new challenges presented by the requirements of the future grid. IEEE Signal Process Mag 29(5):33–43.
Hunt, G (2017) What Does GDPR Mean for Your Energy Business? https://www.siliconrepublic.com/enterprise/gdpr-energy-sector.
Krause, O, Lehnhoff S (2012) Generalized static-state estimation In: 2012 22nd Australasian Universities Power Engineering Conference (AUPEC), 1–6.. IEEE, New York.
Kumar, M, Meena J, Vardhan M (2017) Privacy preserving, verifiable and efficient outsourcing algorithm for matrix multiplication to a malicious cloud server. Cogent Eng 4(1).
Lei, X, Liao X, Huang T, Li H, Hu C (2013) Outsourcing large matrix inversion computation to A public cloud. IEEE Trans Cloud Comput 1(1).
Li, F, Luo B, Liu P (2010) Secure information aggregation for smart grids using homomorphic encryption In: 2010 First IEEE International Conference on Smart Grid Communications, 327–332.. IEEE, New York.
Liang, G, Zhao J, Luo F, Weller SR, Dong ZY (2017) A review of false data injection attacks against modern power systems. IEEE Trans Smart Grid 8(4):1630–1638.
Lindell, Y, Pinkas B (2009) Secure multi-party computation for privacy-preserving data mining. J Privacy Confidentiality 1(1):59–98.
Lisovich, MA, Mulligan DK, Wicker SB (2010) Inferring personal information from demand-response systems. IEEE Secur Priv 8(1):11–20.
Liu, Y, Ning P, Reiter MK (2011) False data injection attacks against state estimation in electric power grids. ACM Trans Inf Syst Secur 14(1), Article 13.
López-Alt, A, Tromer E, Vaikuntanathan V (2012) On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption In: Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, 1219–1234.. ACM, New York.
Kim, Y, Ngai ECH, Srivastava MB (2011) Cooperative state estimation for preserving privacy of user behaviors in smart grid In: IEEE Second International Conference on Smart Grid Communications, SmartGridComm 2011, Brussels, Belgium, October 17-20, 2011, 178–183.. IEEE, New York.
Knirsch, F, Engel D, Erkin Z (2017) A fault-tolerant and efficient scheme for data aggregation over groups in the smart grid In: 2017 IEEE Workshop on Information Forensics and Security, WIFS 2017, Rennes, France, December 4-7, 2017, 1–6.. IEEE, New York.
Kursawe, K, Danezis G, Kohlweiss M (2011) Privacy-friendly aggregation for the smart-grid In: Privacy Enhancing Technologies - 11th International Symposium, PETS 2011, Waterloo, ON, Canada, July 27-29, 2011. Proceedings, 175–191.. Springer, Heidelberg.
McCalley, JD (2018) The Power Flow Problem. Technical report, Iowa State University. https://home.engineering.iastate.edu/~jdm/ee553/PowerFlow.doc.
Molina-Markham, A, Shenoy PJ, Fu K, Cecchet E, Irwin DE (2010) Private memoirs of a smart meter In: BuildSys'10, Proceedings of the 2nd ACM Workshop on Embedded SensingSystems for Energy-Efficiency in Buildings, Zurich, Switzerland, November 3-5, 2010, 61–66.. ACM, New York.
Monticelli, A (2000) Electric power system state estimation. Proc IEEE 88(2):262–282.
n.a. (2003) U.S.-Canada Power System Outage Task Force. https://digital.library.unt.edu/ark:/67531/metadc26005/.
Rahman, MA, Venayagamoorthy GK (2017) Distributed dynamic state estimation for smart grid transmission system. IFAC-PapersOnLine 50(2):98–103.
Ren, K, Wang C, Wang Q (2012) Security challenges for the public cloud. IEEE Internet Comput 16(1):69–73.
Salinas, SA, Li P (2016) Privacy-preserving energy theft detection in microgrids: A state estimation approach. IEEE Trans Power Syst 31(2):883–894.
Saia, J, Zamani M (2015) Recent results in scalable multi-party computation In: SOFSEM 2015: Theory and Practice of Computer Science - 41st International Conference on Current Trends in Theory and Practice of Computer Science, Pec Pod Sněžkou, Czech Republic, January 24-29, 2015. Proceedings, 24–44.. Springer, Heidelberg.
Schweppe, FC (1970) Power system static-state estimation, part III: Implementation. IEEE Trans Power Appar Syst PAS-89(1):130–135.
Schweppe, FC, Rom DB (1970) Power system static-state estimation, part II: Approximate model. IEEE Trans Power Appar Syst PAS-89(1):125–130.
Schweppe, FC, Wildes J (1970) Power system static-state estimation, part I: Exact model. IEEE Trans Power Appar Syst PAS-89(1):120–125.
Shoukry, Y, Gatsis K, Al-Anwar A, Pappas GJ, Seshia SA, Srivastava MB, Tabuada P (2016) Privacy-aware quadratic optimization using partially homomorphic encryption In: 55th IEEE Conference on Decision and Control, CDC 2016, Las Vegas, NV, USA, December 12-14, 2016, 5053–5058.. IEEE, New York.
Simos, M (2017) Microsoft Security Intelligence Report. https://www.microsoft.com/en-us/security/Intelligence-report.
Tebaa, M, Hajji SE (2014) Secure cloud computing through homomorphic encryption. CoRR abs/1409.0829.
Tonyali, S, Cakmak O, Akkaya K, Mahmoud MMEA, Güvenç I (2016) Secure data obfuscation scheme to enable privacy-preserving state estimation in smart grid AMI networks. IEEE Internet Things J 3(5):709–719.
Wang, C, Ren K, Wang J (2011) Secure and practical outsourcing of linear programming in cloud computing In: INFOCOM 2011. 30th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 10-15 April 2011, Shanghai, China, 820–828.. IEEE, New York.
Wood, AJ, Wollenberg BF (1996) Power Generation, Operation, and Control. Wiley, Hoboken.
Zhang, Y, Blanton M (2014) Efficient secure and verifiable outsourcing of matrix multiplications In: Information Security - 17th International Conference, ISC 2014, Hong Kong, China, October 12-14, 2014. Proceedings, 158–178.. Springer, Cham Heidelberg New York.
Zeifman, M, Roth K (2011) Nonintrusive appliance load monitoring: Review and outlook. IEEE Trans Consum Electron 57(1):76–84.
Zimmerman, RD, Murillo-Sánchez CE, Thomas RJ (2009) Matpower's extensible optimal power flow architecture In: 2009 IEEE Power Energy Society General Meeting, 1–7.. IEEE, New York.
Zimmerman, RD, Murillo-Sánchez CE, Thomas RJ (2011) Matpower: Steady-state operations, planning, and analysis tools for power systems research and education. IEEE Trans Power Syst 26(1):12–19.
We would also like to thank Antans Sauhatas from Riga Technical University for sharing the real-time power consumption data of the smart meters.
About this supplement
This article has been published as part of Energy Informatics Volume 2 Supplement 1, 2019: Proceedings of the 8th DACH+ Conference on Energy Informatics. The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-2-supplement-1.
This work was supported by the TU Delft Safety and Security Institute under the DSyS Grant. Publication of this supplement was funded by Austrian Federal Ministry for Transport, Innovation and Technology.
CGI Nederland B.V, Rotterdam, The Netherlands
Cyber Security Group, Delft University of Technology, Delft, The Netherlands
Gamze Tillem
& Zekeriya Erkin
Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
Tamas Keviczky
Authors 1, 2, and 4 conceived and conceptualized the presented framework. Author 1 developed the theory, performed the simulations and analyses, and took the lead to write the manuscript. Author 2 helped in drafting the manuscript and in critical revision of the same. Authors 3 and 4 supervised the findings of this work and provided valuable feedback for the final version of the manuscript. All authors read and approved the final manuscript.
Correspondence to Lakshminarayanan Nandakumar.
Nandakumar, L., Tillem, G., Erkin, Z. et al. Protecting the grid topology and user consumption patterns during state estimation in smart grids based on data obfuscation. Energy Inform 2, 25 (2019) doi:10.1186/s42162-019-0078-y
Data obfuscation
Bioresources and Bioprocessing
Synthesis of silver nanoparticles using seed exudates of Sinapis arvensis as a novel bioresource, and evaluation of their antifungal activity
Mehrdad Khatami1,2,
Shahram Pourseyedi1,
Mansour Khatami2,
Hadi Hamidi1,
Mehrnaz Zaeifi3 &
Lida Soltani3
Bioresources and Bioprocessing volume 2, Article number: 19 (2015) Cite this article
In general, silver nanoparticles (AgNPs) are particles of silver with a size of less than 100 nm. In recent years, the synthesis of nanoparticles using plant extracts has gained much interest in nanobiotechnology. In this regard, this study investigates the green synthesis of AgNPs from silver nitrate using Sinapis arvensis as a novel bioresource of cost-effective, non-hazardous reducing and stabilizing compounds. A stock solution of silver nitrate (0.1 M) was prepared. Different concentrations of silver nitrate (1, 2.5, 4, and 5 mM) were prepared from this solution and then added to 5 mL of S. arvensis seed exudates. The mixtures were kept at 25°C. The synthesis of AgNPs was confirmed by the change in the color of the mixtures from light yellow to brown. The antifungal activity of the synthesized AgNPs was investigated in vitro.
The resulting AgNPs were characterized by UV-vis spectroscopy, X-ray diffraction (XRD), transmission electron microscopy (TEM), and Fourier transform infrared spectroscopy (FTIR). Formation of the AgNPs was confirmed by the change in mixture color from light yellow to brown and by a maximum absorption at 412 nm due to the surface plasmon resonance of AgNPs. The role of different functional groups in the formation of the AgNPs was shown by FTIR. X-ray diffraction showed that the AgNPs formed in our experiments were nanocrystalline, and TEM analysis showed spherical particles with an average size of 14 nm. Our measurements indicated that S. arvensis seed exudates can mediate a facile and eco-friendly biosynthesis of colloidal spherical AgNPs with a size range of 1 to 35 nm. The synthesized AgNPs showed significant antifungal activity against Neofusicoccum parvum cultures.
The AgNPs were synthesized using a biological source. This synthesis method is nontoxic, eco-friendly, and a low-cost technology for the large-scale production. The AgNPs can be used as a new generation of antifungal agents.
Silver nanoparticles (AgNPs) are particles of silver in the size range between 1 and 100 nm. Nanostructured materials exhibit unique physicochemical, biological, and environmental properties, including optical, magnetic, electronic, catalytic, and biological activity [1], which have increased their applications in medicine, agriculture, the environment, and industry [2]. AgNPs have high potential as commercial nanomaterials [3] and are an effective antimicrobial agent.
Several techniques for synthesizing AgNPs have been proposed. Generally, AgNPs are prepared by different kinds of chemical and physical methods, but the majority of these techniques are both expensive and environmentally hazardous [4]. Furthermore, the synthesized nanoparticles may be unstable and tend to agglomerate rapidly, becoming useless unless capping agents are applied for their stabilization [5]. Diverse chemical and physical methods have been used to prepare AgNPs with various sizes and shapes, such as UV irradiation [6,7], microwave irradiation [8], chemical reduction [9], electron irradiation [10], photochemical methods [11], and lithography [12]. However, most of these methods involve more than one step, high energy requirements, low material conversion, difficulty in purification, and hazardous chemicals [13]. The synthesis of nanoparticles by chemical methods may also lead to the production of toxic chemical compounds that have adverse effects on their applications [14].
The biological synthesis of nanoparticles can potentially eliminate these problems. Biological synthesis of nanoparticles is a nontoxic, eco-friendly, and low-cost technology for the large-scale production of well-characterized nanoparticles [14]. Therefore, there is a need to develop biological processes for nanoparticle synthesis. Recently, many living organisms such as bacteria, fungi, algae, and plants have been used for the synthesis of nanoparticles [15-17]. The reduction of Ag+ to Ag0 takes place through combinations of biomolecules such as proteins, polysaccharides, and flavonoids [18]. Green synthesis of AgNPs has become very important in recent years. Green AgNPs have the potential for large-scale applications in the formulation of dental resin composites, bone cement [19,20], water and air filters [21,22], clothing and textiles, medical devices and implants [23], cosmetics [4], and packaging [24]. Besides their antimicrobial properties, AgNPs and silver nanocomposites have other interesting characteristics that will further enable their use in catalysts, biosensors, conductive inks, and electronic devices [25,26]. They can be produced economically and on a large industrial scale [14].
In this paper, we report biosynthesis of stable colloidal AgNPs using Sinapis arvensis seed exudates. This plant is an important medicinal crop in the southern regions of Iran. Antifungal activities of synthesized AgNPs were also investigated. Details of biosynthesis, physical characterizations, and antifungal activity of AgNPs are described.
Silver nitrate and potato dextrose agar (PDA) medium were obtained from Merck, Darmstadt, Germany. Seeds of S. arvensis were obtained from Pakanbazr, Isfahan, Iran. Strain of the fungus N. parvum was prepared from the Department of Plant Protection, Shahid Bahonar University of Kerman, Iran.
Seed exudates preparation
The surfaces of S. arvensis seeds were disinfected using 30% sodium hypochlorite for 5 min and rinsed with sterile distilled water three times. In the next step, the seeds were placed in 70% alcohol for 2 min, washed four times with sterile distilled water, and then imbibed in deionized (DI) water (1 g dry weight/10 mL DI water). After being incubated at 25°C for 48 h in the dark, the seeds were removed from the soaking medium. The supernatant phase was collected, centrifuged at 4,500 rpm for 10 min to separate the liquid fraction from any large insoluble particles, and filtered through Whatman filter paper no. 1 (Sigma Aldrich, St. Louis, MO, USA). During the experiment, the pH was 4.5.
Biosynthesis of AgNPs
For this purpose, a 50 mL stock solution containing 0.1 M silver nitrate was prepared. Different concentrations of silver nitrate (1, 2.5, 4, and 5 mM) were prepared from this solution, added to 5 mL of S. arvensis seed exudates, and incubated at 25°C as described previously [27]. After treatment, the pale yellow color of the reaction mixture changed to brown, indicating the synthesis of AgNPs.
Characterization of AgNPs
UV-vis spectroscopy
The biosynthesis of AgNPs was monitored periodically using a UV-vis spectrophotometer (ScanDrop, Analytik Jena, Germany) at the different concentrations at room temperature. The measurements were performed at a resolution of 1 nm over the wavelength range of 300 to 650 nm [14].
The formation and quality of the compounds were investigated by the X-ray diffraction (XRD) technique. For this purpose, the AgNPs were centrifuged (at 13,000 rpm; 25°C) for 10 min, washed with DI water, and re-centrifuged in three cycles. The purified AgNPs were then dried and subjected to the XRD experiment. X-ray diffraction was performed with a STOE Stadi P diffractometer (STOE & Cie. GmbH, Darmstadt, Germany) (λ = 1.54178 Å). Scanning was done in the region of 2θ from 10° to 80° [17].
Transmission electron microscopy (TEM) was performed using a Carl Zeiss microscope (Jena, Germany, 80 kV) to determine the morphology and size of the AgNPs.
Inductively coupled plasma spectrometry
Inductively coupled plasma spectrometry (ICP; Varian BV ES-700, Sydney, Australia) was used to determine the remaining concentration of silver ions after synthesis of the AgNPs. The conversion rate of metal ions to nanoparticles was calculated by the following equation:
$$ Q=\left(\frac{C_0-{C}_f}{C_0}\right)\times 100 $$
where C0 and Cf stand for the initial and final concentrations of silver ions (mg/L), respectively, and Q is the conversion percentage of silver ions to AgNPs [28].
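As a worked example of this formula (with invented concentrations, not the measured ICP values of Table 1):

```python
def conversion_percentage(c0, cf):
    # Q = (C0 - Cf) / C0 * 100: share of silver ions converted to AgNPs
    return (c0 - cf) / c0 * 100.0

print(conversion_percentage(100.0, 4.0))   # 96.0, i.e. a conversion of more than 95%
```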
Antifungal activity of AgNPs
To determine the antifungal activity of the AgNPs, a mycelium growth inhibition test was used. Four concentrations of AgNPs (2.5, 5, 10, and 40 μg/mL) were prepared in PDA medium after autoclaving. Agar plugs (6 mm) of a fresh culture of N. parvum were prepared and transferred to the centers of media containing the different concentrations of synthesized AgNPs. Control plates contained no AgNPs. All plates were incubated at 28°C. When the fungus on the control plates had completely covered the entire surface of the medium, the mean radius of fungal growth on all plates was measured and recorded [29]. All treatments were performed in triplicate.
The following formula was used to assess the growth inhibition of the mycelium:
$$ \mathrm{Mycelium}\ \mathrm{growth}\ \mathrm{inhibition}\ \%\kern0.5em =\kern0.5em \frac{R-r}{R}\times 100 $$
where R is the mean radius of the control and r is the radius of the samples treated with nanoparticles. The data were processed in SAS using Duncan's test.
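A short worked example of the inhibition formula is given below; the radii are invented values chosen only to illustrate the calculation, not the measured data of this study.

```python
def mycelium_growth_inhibition(R, r):
    # Percent inhibition relative to the radius R of the untreated control
    return (R - r) / R * 100.0

# Hypothetical radii (mm) for AgNP-treated plates, with a 40 mm control radius
R_control = 40.0
for conc, r in [(2.5, 34.0), (5, 34.0), (10, 11.6), (40, 6.8)]:
    print(f"{conc} ug/mL -> {mycelium_growth_inhibition(R_control, r):.0f}% inhibition")
```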
Visual observation
Reduction of Ag+ to Ag0 was confirmed by color change of the reaction mixture from colorless to brown (Figure 1).
Reduction of Ag+ to Ag0. The color change of the S. arvensis exudates upon the formation of AgNPs from different concentrations of silver nitrate: (a) control, (b) 1 mM, (c) 2.5 mM, (d) 4 mM, and (e) 5 mM.
Ultraviolet visible scanning spectroscopy studies
It was observed that the maximum absorbance of reaction mixture occurs at 412 nm, indicating that AgNPs were produced. Figure 2 shows the UV-vis absorption spectra of synthesized AgNPs with different concentrations of silver nitrate.
UV-vis absorption spectra of AgNPs synthesized by treating the exudates with 1, 2.5, 4, and 5 mM silver nitrate, recorded after 1 week.
FTIR analysis
The Fourier transform infrared spectroscopy (FTIR) spectrum of the biosynthesized AgNPs shows absorption peaks at 3,429, 2,928, 1,632, 1,406, 1,103, and 617 cm−1 (Figure 3). The strong absorption peak at 3,429 cm−1 results from stretching of the N-H bond of amino groups or indicates the presence of O-H groups due to alcohols, phenols, carbohydrates, etc. The peak that appeared around 2,928 cm−1 is related to stretching of the C-H bonds [30]. The peaks at 1,632 and 1,406 cm−1 are assigned to aliphatic amines. The absorption peak at 1,632 cm−1 is close to that reported for native proteins [31]. The peak at 1,406 cm−1 corresponds to C-C stretching vibrations of the aromatic ring [32]. The FTIR study indicates that the carboxyl (-C=O), hydroxyl (-OH), and amine (N-H) groups in the seed exudates are probably the main groups involved in the reduction of Ag+ ions to Ag0 nanoparticles.
FTIR spectra of the S. arvensis exudates before and after the synthesis of AgNPs.
XRD analysis
The formation of the nanocrystalline AgNPs was further confirmed by the XRD analysis, as shown in Figure 4.
XRD pattern of AgNPs synthesized by treating silver nitrate with S. arvensis seed exudates.
Strong peaks were observed at 2θ values of 38.09°, 44.15°, 64.67°, and 77.54°, corresponding to the (111), (200), (220), and (311) Bragg reflections of the face-centered-cubic (fcc) crystal structure of AgNPs [33]. The broadening of the Bragg peaks indicates the formation of AgNPs. The XRD pattern thus shows that the AgNPs formed by the reduction of Ag+ ions by S. arvensis seed exudates are crystalline in nature.
Transmission electron microscopy analysis
TEM was used to determine the size and shape of the nanoparticles. TEM images of the prepared AgNPs at the 55 and 90 nm scales are shown in Figure 5a1 and a2. The images show that the particles have a spherical shape. The particle size distribution histogram determined from TEM is shown in Figure 5b. The AgNPs range in size from 1 to 35 nm.
The TEM images of the prepared AgNPs. (a1, a2) TEM images of AgNPs synthesized by S. arvensis seed exudates at different scales. (b) Histogram of the particle size distribution of the biosynthesized AgNPs.
ICP analysis revealed complete reduction of Ag ions within 50 days of the reaction, and more importantly, it showed that the conversion percentage of metal ion to metal nanoparticles is more than 95% (Table 1).
Table 1 ICP analysis
Antifungal activity of synthesized AgNPs
Inhibitory effects on fungal growth in PDA medium containing 2.5, 5, 10, and 40 μg/mL of AgNPs were studied. The results showed a very significant effect of the synthesized AgNPs on mycelium growth of the fungus N. parvum. More than 83% mycelium growth inhibition was obtained when the fungus N. parvum was treated with 40 μg/mL of AgNPs. The lowest level of growth inhibition, 15%, was observed at a concentration of 2.5 μg/mL of AgNPs. The growth inhibition at concentrations of 5 and 10 μg/mL was 15% and 71%, respectively. The results clearly showed that the inhibitory effect on fungal mycelium growth increased with increasing concentration of AgNPs (Figure 6).
Inhibitory effects on fungal mycelium growth. Fungal growth on PDA medium containing (a) no AgNPs (control), (b) 2.5 μg/mL, (c) 5 μg/mL, (d) 10 μg/mL, and (e) 40 μg/mL of AgNPs.
The percentage of growth inhibition due to the effect of AgNPs was analyzed with the statistical software SAS. The results confirmed a significant effect of AgNPs on fungal growth inhibition at the 1% significance level.
Metallic nanoparticles are traditionally synthesized by wet chemical techniques in which the chemicals used are quite often toxic and flammable. In this study, however, S. arvensis seed exudates were successfully used for the single-pot biosynthesis of spherical AgNPs under ambient conditions, with a size range from 1 to 35 nm as inferred from the TEM imaging. This was achieved without the use of external stabilizing or capping agents. We conclude that S. arvensis seed exudates act as a bioreductant and capping agent, as well as being an easily available plant source, and play an important role in the synthesis of highly stable AgNPs. The X-ray diffraction pattern strongly indicated a high purity of the biosynthesized AgNPs. This pristine method is facile, cost-effective, clean, and green, and is therefore applicable for a variety of purposes. Moreover, it is easy to scale up the production of AgNPs to an industrial scale using this method.
This green chemistry approach towards the synthesis of AgNPs has many advantages, such as the ease with which the process can be scaled up and its economic viability. The applicability of such nanoparticles in medicine and other fields makes this method useful for the large-scale synthesis of other inorganic nanomaterials. Toxicity studies of AgNPs on pathogens open the door to a new range of antifungal and antibacterial agents. In the present study, we demonstrated that AgNPs have significant antifungal activity. The growth inhibitory effect of AgNPs strongly depends on their concentration, and the inhibitory effect on fungal growth increased with increasing AgNP concentration in the medium. AgNPs can therefore be used as an excellent new antifungal agent.
Phanjom P, Ahmed G (2015) Biosynthesis of silver nanoparticles by Aspergillus oryzae (MTCC No. 1846) and its characterizations. Nanoscience and Nanotechnology 5:14-21
Song JY, Kim BS (2009) Rapid biological synthesis of silver nanoparticles using plant leaf extract. Bioprocess Biosyst Eng 32:79–84
Chaloupka K, Malam Y, Seifalian AM (2010) Nanosilver as a new generation of nanoproduct in biomedical applications. Trends Biotechnol 28:580–588
Sintubin L, Verstraete W, Boon N (2012) Biologically produced nanosilver: current state and future perspectives. Biotechnol Bioeng 109:2422–2436
Kassaee M, Akhavan A, Sheikh N, Sodagar A (2008) Antibacterial effects of a new dental acrylic resin containing silver nanoparticles. J Appl Polym Sci 110:1699–1703
Alt V, Bechert T, Steinrücke P, Wagener M, Seidel P, Dingeldein E, Doman E, Schnettler R (2004) An in vitro assessment of the antibacterial properties and cytotoxicity of nanoparticulate silver bone cement. Biomaterials 25:4383–4391
Jain P, Pradeep T (2005) Potential of silver nanoparticle-coated polyurethane foam as an antibacterial water filter. Biotechnol Bioeng 90:59–63
Sharma VK, Yngard RA, Lin Y (2009) Silver nanoparticles: green synthesis and their antimicrobial activities. Adv Colloid Interfac 145:83–96
de Mel A, Chaloupka K, Malam Y, Darbyshire A, Cousins B, Seifalian AM (2012) A silver nanocomposite biomaterial for blood-contacting implants. J Biomed Mater Res Part A 100:2348–2357
Kokura S, Handa O, Takagi T, Ishikawa T, Naito Y, Yoshikawa T (2010) AgNPs as a safe preservative for use in cosmetics. Nanomedicine 6:570–574
Azeredo H (2009) Nanocomposites for food packaging applications. Food Res Int 42:1240–1253
Tsuji M, Gomi S, Maeda Y, Matsunaga M, Hikino S, Uto K, Tsuji T, Kawazumi H (2012) Rapid transformation from spherical nanoparticles, nanorods, cubes, or bipyramids to triangular prisms of silver with PVP, citrate, and H2O2. Langmuir 28:8845–8861
Ghaffari-Moghaddam M, Hadi-Dabanlou R, Khajeh M, Rakhshanipour M, Shameli K (2014) Green synthesis of AgNPs using plant extracts. Korean J Chem Engineering 31:548–557
Banerjee P, Satapathy M, Mukhopahayay A, Das P (2014) Leaf extract mediated green synthesis of silver nanoparticles from widely available Indian plants: synthesis, characterization, antimicrobial property and toxicity analysis. Bioresources and Bioprocessing 1:3
Mohammadinejad R, Pourseyedi S, Baghizadeh A, Ranjbar S, Mansoori G (2013) Synthesis of silver nanoparticles using Silybum marianum seed extract. Int J Nanosci Nanotechnol 9:221–226
Swamy M.K, Sudipta K, Jayanta K, Balasubramanya S (2015) The green synthesis, characterization, and evaluation of the biological activities of silver nanoparticles synthesized from Leptadenia reticulata leaf extract. Appl. Nanosci 5:1–9
Lei B, Zhang X, Zhu M, Tan W (2014) Effect of fluid shear stress on catalytic activity of biopalladium nanoparticles produced by Klebsiella Pneumoniae ECU-15 on Cr(VI) reduction reaction. Bioresources and Bioprocessing 1:28
Speth TF, Varma RS (2011) Microwave-assisted green synthesis of silver nanostructures. Acc Chem Res 44:469–478
Song K, Lee S, Park T, Lee B (2009) Preparation of colloidal silver nanoparticles by chemical reduction method. Korean J Chem Eng 26:153–155
Li K, Zhang FS (2010) A novel approach for preparing silver nanoparticles under electron beam irradiation. J Nanopart Res 12:1423–1428
Harada M, Kawasaki C, Saijo K, Demizu M, Kimura Y (2010) Photochemical synthesis of silver particles using water-in-ionic liquid microemulsions in high-pressure CO2. J Colloid Interface Sci 343:537–545
Jensen TR, Malinsky MD, Haynes CL, Van Duyne RP (2000) Nanosphere lithography: tunable localized surface plasmon resonance spectra of silver nanoparticles. J Phys Chem B 104:10549–10561
Philip D (2011) Mangifera indica leaf-assisted biosynthesis of well-dispersed AgNPs. Spectrochimica Acta 78:327–331
Bankar A, Joshi B, Kumar AR, Zinjarde S (2009) Banana peel extract mediated novel route for synthesis of AgNPs. Colloid Surf A Physicochem Eng 368:58–63
Dwivedi AD, Gopal K (2010) Biosynthesis of silver and gold nanoparticles using Chenopodium album leaf extract. Colloid Surf A Physicochem Eng Aspect 369:27–33
Velmurugan P, Shim J, Kamala-Kannan S, Lee KJ, Oh BT, Balachandar V (2011) Crystallization of silver through reduction process using Elaeis guineensis biosolid extract. Biotechnol Prog 27:273–279
Singh K, Panghal M, Kadyan S, Chaudhary U, Parkash YJ (2014) Antibacterial activity of synthesized silver nanoparticles from Tinospora cordifolia against multi drug resistant strains of pseudomonas aeruginosa isolated from burn patients. J Nanomed Nanotechnol 5:2
Dubey M, Bhadauria S, Kushwah BS (2009) Green synthesis of nanosilver particles from extract of Eucalyptus hybrida (safeda) leaf. Dig J Nanomater Bios 4(34):537–543
Kim SW, Jung JH, Lamsal K, Kim YS, Min JS, Lee YS (2012) Antifungal effects of silver nanoparticles (AgNPs) against various plant pathogenic fungi. Koren j of Microbiol 40:53–58
Mulvaney P (1996) Surface plasmon spectroscopy of nanosized metal particles. Langmuir 12:788–800
Khatami M, Pourseyedi S (2015) Phoenix dactylifera (date palm) pit aqueous extract mediated novel route for synthesis high stable AgNPs with high antifungal and antibacterial activity. IET Nanobiotechnol 9:1–7
Yilmaz M, Turkdemir H, Kilic MA, Bayram E, Cicek A, Mete A, Ulug B (2011) Biosynthesis of AgNPs using leaves of Stevia rebaudiana. Mater Chem Phys 130:1195–1202
Kaviya S, Santhanalakshmi J, Viswanathan B, Muthumary J, Srinivasan K (2011) Biosynthesis of silver nanoparticles using Citrus sinensis peel extract and its antibacterial activity. Spectrochimica Acta 79:594–598
The authors are thankful to all members of the Department of Biotechnology Science, University of Kerman. We also acknowledge Mr. Abazari, Laboratory of Science, University of Kerman, for helping us. Also, the authors are thankful to the Iran Nanotechnology Council. Dr. Jian He Xu and Dr. Zolala J have made substantive intellectual contributions to this study, substantial contributions to the conception and design of it as well as to the acquisition, analysis, and interpretation of data.
Department of Biotechnology, Shahid Bahonar University of Kerman, End of 22-Bahman Blvd, 76169–133, Kerman, 76189-18951, Iran
Mehrdad Khatami, Shahram Pourseyedi & Hadi Hamidi
Department of Enviroment, The Enviromental Researches Center, Havaniroz Street, Jomhori Blvd, Kerman, 76189-18951, Iran
Mehrdad Khatami & Mansour Khatami
Department of Plant Protection, Shahid Bahonar University of Kerman, End of 22-Bahman Blvd, 76169–133, Kerman, 76169-14111, Iran
Mehrnaz Zaeifi & Lida Soltani
Mehrdad Khatami
Shahram Pourseyedi
Mansour Khatami
Hadi Hamidi
Mehrnaz Zaeifi
Lida Soltani
Correspondence to Mehrdad Khatami.
All of them were also involved in the drafting and revision of the manuscript. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Khatami, M., Pourseyedi, S., Khatami, M. et al. Synthesis of silver nanoparticles using seed exudates of Sinapis arvensis as a novel bioresource, and evaluation of their antifungal activity. Bioresour. Bioprocess. 2, 19 (2015). https://doi.org/10.1186/s40643-015-0043-y
AgNPs
Spherical AgNPs
Mutations in ponA, the Gene Encoding Penicillin-Binding Protein 1, and a Novel Locus, penC, Are Required for High-Level Chromosomally Mediated Penicillin Resistance in Neisseria gonorrhoeae
Patricia A. Ropp, Mei Hu, Melanie Olesky, Robert A. Nicholas
Patricia A. Ropp
Departments of Chemistry
Mei Hu
Pharmacology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7365
Melanie Olesky
Robert A. Nicholas
For correspondence: [email protected]
DOI: 10.1128/AAC.46.3.769-777.2002
Chromosomally mediated penicillin resistance in Neisseria gonorrhoeae occurs in part through alterations in penicillin-binding proteins (PBPs) and a decrease in outer membrane permeability. However, the genetic and molecular mechanisms of transformation of a penicillin-susceptible strain of N. gonorrhoeae to high-level penicillin resistance have not been clearly elucidated. Previous studies suggested that alterations in PBP 1 were involved in high-level penicillin resistance. In this study, we identified a single amino acid mutation in PBP 1 located 40 amino acids N terminal to the active-site serine residue that was present in all chromosomally mediated resistant N. gonorrhoeae (CMRNG) strains for which MICs of penicillin were ≥1 μg/ml. PBP 1 harboring this point mutation (PBP 1*) had a three- to fourfold lower rate of acylation (k2/K') than wild-type PBP 1 with a variety of β-lactam antibiotics. Consistent with its involvement in high-level penicillin resistance, replacement of the altered ponA gene (ponA1) in several CMRNG strains with the wild-type ponA gene resulted in a twofold decrease in the MICs of penicillin. Surprisingly, transformation of an intermediate-level penicillin-resistant strain (PR100; FA19 penA4 mtr penB5) with the ponA1 gene did not increase the MIC of penicillin for this strain. However, we identified an additional resistance locus, termed penC, which was required along with ponA1 to increase penicillin resistance of PR100 to a high level (MIC = 4 μg/ml). The penC locus by itself, when present in PR100, increases the MICs of penicillin and tetracycline twofold each. These data indicate that an additional locus, penC, is required along with ponA1 to achieve high-level penicillin resistance.
Up until 1987, Neisseria gonorrhoeae was treated with a single dose of penicillin, which kills the bacteria by inhibiting the penicillin-binding proteins, or PBPs, that synthesize the cell wall peptidoglycan. N. gonorrhoeae has four PBPs, designated PBP 1, 2, 3, and 4 (1; P. A. Ropp and R. A. Nicholas, unpublished data.). Of these, only PBPs 1 and 2 are essential for cell viability and thus are potential antibiotic killing targets in N. gonorrhoeae. Because penicillin G has an approximately 10-fold higher rate of acylation with PBP 2 than with PBP 1, penicillin G kills N. gonorrhoeae at its MIC by inactivation of PBP 2 (1).
While early isolates of N. gonorrhoeae were extremely sensitive to penicillin (MICs ≅ 0.004 to 0.01 μg/ml), this sensitivity gradually decreased such that by the late 1970s strains emerged that were resistant to penicillin (MICs ≥ 2 μg/ml). Resistance to other antibiotics, including tetracycline and erythromycin, also increased during this time. Penicillin resistance in the gonococci arose by two independent mechanisms: the plasmid-mediated production of a penicillinase (TEM-1 β-lactamase) and the chromosomally mediated expression of multiple resistance genes. In 1998, 29.4% of the clinical gonococcal isolates collected throughout the United States by the Gonococcal Isolate Surveillance Project overseen by the Centers for Disease Control and Prevention (CDC) were resistant to either penicillin, tetracycline, or both (6). While the percentage of penicillinase-producing N. gonorrhoeae organisms (PPNG) declined significantly during the years 1991 to 1998, the percentage of chromosomally mediated resistant N. gonorrhoeae organisms (CMRNG) rose.
The genetic mechanisms of chromosomally mediated resistance to penicillin have been investigated in some detail (8, 11). Because N. gonorrhoeae is naturally transformable, penicillin-resistant strains are capable of transferring their resistance genes to susceptible strains in a stepwise manner via transformation and homologous recombination (Fig. 1). The first step in transformation to high-level penicillin resistance (MIC ≥ 2 μg/ml) is mediated by the penA gene, which encodes altered forms of PBP 2 that display 5- to 10-fold decreases in their rate of acylation by penicillin (28). The second step of transformation is mediated by the mtr locus, which confers nonspecific resistance to erythromycin, rifampin, and detergents through increased expression of the MtrC-MtrD-MtrE efflux pump (17). Transfer of the penB locus leads to further increases in both penicillin and tetracycline resistance (5). Recently, the penB phenotype has been correlated to mutations in the porin P1B allele that presumably decrease the porin-mediated flux of antibiotics across the outer membrane (15).
Stepwise acquisition of resistance genes in gonococci by DNA uptake and homologous recombination. The stepwise transfer of resistance genes occurs in the order shown, and each step increases resistance to at least one of the three antibiotics shown on the right. The first three resistance genes (penA, mtr, and penB) are well characterized, and transformants with these genes are easy to obtain in the laboratory. In contrast, transformation of an FA19 penA mtr penB strain up to a level of resistance equal to that of the donor strain has been very hard to achieve in the laboratory. The last two steps of transformation to high-level resistance (in blue) are mediated by two resistance loci, penC and ponA1, that are the subjects of this study.
Unexpectedly, transformation of a penA mtr penB strain (for which the MIC of penicillin is 0.5 to 1.0 μg/ml) to a level of penicillin resistance equal to that of the donor strain (MIC = 4 μg/ml) has been difficult to achieve in the laboratory (8, 11). Because penicillin-resistant strains for which MICs are ≥2 μg/ml appear to express an altered form of PBP 1 that displays a lower rate of acylation by penicillin, the PBP 1 gene likely is involved in transformation to high-level penicillin resistance (7). Dougherty was able to obtain high-level penicillin-resistant transformants at low frequency by using a modified transformation procedure and by selecting transformants at penicillin concentrations two- to fourfold less than the MIC for the donor strain (7). Assessment of the binding of [3H]penicillin G to membranes prepared from these transformants revealed an apparent heterogeneity in the affinity of PBP 1 for penicillin. One group expressed a PBP 1 with an affinity for penicillin G equal to that of the donor strain, whereas another group expressed a PBP 1 with an intermediate affinity. These results suggested that PBP 1 plays a role in mediating high-level penicillin resistance in CMRNG.
We recently cloned the ponA gene encoding PBP 1 from a penicillin-susceptible strain of N. gonorrhoeae (24). In the present study, we isolated and sequenced the ponA gene from several clinical isolates of N. gonorrhoeae for which penicillin MICs were ≥1 μg/ml in order to determine the changes in the amino acid sequence that presumably lead to a decreased rate of acylation by penicillin. In addition, we investigated the role of the altered ponA gene in mediating transformation of intermediate-level penicillin-resistant transformants to high-level penicillin resistance. Our results indicate that both the ponA1 gene encoding an altered form of PBP 1 and another resistance locus, termed penC, are required to achieve high-level chromosomally mediated penicillin resistance.
Bacterial strains and plasmids.Clinical isolates of N. gonorrhoeae for which penicillin G MICs were ≥1 μg/ml were obtained from several sources. Strains 2227, 0387, 3391, 5611, and 9634 were kindly provided by Marcia Hobbs and Myron S. Cohen, University of North Carolina at Chapel Hill, from a Wilson county (North Carolina) surveillance program (12). FA19 (21) and FA6140 (for which the penicillin MIC was 4 μg/ml) (5) were obtained from Fred Sparling, University of North Carolina at Chapel Hill. Piliated FA6140 was obtained from William Shafer, Emory University. Strains 111, 114, 131, 151, and 154 were obtained from Joan Knapp at the CDC. Chromosomal DNA from CMRNG strains CDC77-124615 (8) and CDC84-060418 (7) was kindly provided by Brian Spratt, Imperial College, London.
The plasmids pPR16 and pPR17 harbor the coding regions of the wild-type and mutant ponA genes, respectively, with an extra 546 bp of downstream sequence to facilitate homologous recombination. To aid in selection, each plasmid also contained the Ω fragment encoding spectinomycin and streptomycin resistance (23), inserted 68 bp downstream of the ponA stop codon (see Fig. 2). pMutS-erm, which contained the gonococcal mutS gene inactivated by insertion of an erythromycin resistance cassette, was provided by Janne Cannon, University of North Carolina at Chapel Hill.
Location of the Leu-421→Pro mutation in PBP 1*. (A) The ponA gene is shown along with the locations of the two functional domains, the transglycosylase domain and the transpeptidase domain. Locations of the three highly conserved sequence motifs found in all penicillin-interacting proteins are shown above the transpeptidase domain. The location of the Leu-421→Pro mutation is shown by an asterisk. Also shown are the DNA sequences of the ponA and ponA1 genes. The mutation fortuitously disrupts a unique PstI restriction site. (B) Schematic showing the construct (pPR17) used to create the strains containing the ponA1 gene. The Ω fragment, which encodes resistance to both spectinomycin and streptomycin, was used to identify transformants containing the ponA1 mutation (see text for details). pPR16 is identical to pPR17, except that it contains the wild-type ponA gene.
Media, growth conditions, and MIC determinations.N. gonorrhoeae strains were grown on GC medium base (GCB) agar (Difco Laboratories, Detroit, Mich.) with supplements I and II (20) at 37°C in 5% CO2. Liquid cultures were grown in GCB broth containing supplements I and II, supplement B, and 10 mM NaHCO3 in a shaking incubator at 37°C. MICs were determined according to the method of Sparling et al. (27) or by the spot method. Briefly, cells were resuspended in GCB broth, and 1,000 colonies were spread onto GCB agar plates containing increasing concentrations of the appropriate antibiotic. The MIC was defined as the minimal concentration of antibiotic at which no more than five colonies were observed after 24 h. Alternatively, colonies were suspended in GCB broth and 5 × 104 colonies in 5 μl were spotted onto GCB agar plates containing increasing twofold concentrations of the antibiotic. The MIC was defined as the minimal concentration of antibiotic that inhibited growth after 24 h. The two methods gave very similar results.
General methods.Chromosomal DNA was isolated by ethidium bromide-CsCl equilibrium centrifugation by a modification of the method of Dowson et al. (10). In some instances, chromosomal DNA was isolated with the DNAzol reagent (Gibco/BRL, Gaithersburg, Md.) following the manufacturer's instructions. p-Trimethylstannyl-penicillin V (a gift from Larry Blaszczak; Eli Lilly and Co.) was converted to [125I]iodopenicillin ([125I]IPV) as described previously (2). Protein concentrations were determined by Bradford assay (Bio-Rad, Hercules, Calif.).
Isolation and sequencing of the ponA gene from penicillin-resistant strains.The ponA1 gene was cloned initially from FA6140. The complete coding region of the ponA gene was amplified by PCR from FA6140 DNA with primers GC23 (5'-GTGAGTAACCGTTTCGGTATCC-3'), which hybridizes 124 bp upstream of the ATG start codon, and GC44 (5'-TTAAAACAGGGAATCCAACTGC-3'), an antisense oligonucleotide that hybridizes to the last 22 bp of the ponA gene. The amplified product was subcloned into pUC19-K (pUC19 containing the kanamycin resistance gene). Both strands of the plasmid harboring the ponA gene as well as the ponA PCR product were sequenced with the Amplicycle CS sequencing kit (Perkin-Elmer, Foster City, Calif.). The same sequence was obtained from multiple independent amplifications, verifying that the mutation was not a PCR artifact. Sequences of the ponA genes from the other penicillin-resistant strains examined in this study were amplified by PCR and sequenced directly as described above.
Genetic transformation.N. gonorrhoeae was transformed as described previously (25). Briefly, cells were passaged on GCB agar, and a single piliated colony was streaked onto a fresh GCB plate and allowed to grow overnight. Cells were scraped from the plate and gently resuspended in prewarmed GCB broth containing supplements I and II and 10 mM MgCl2. Resuspended cells were diluted in prewarmed GCB broth to a cell density of 108/ml (optical density at 560 nm, 0.18), followed by the addition of NaHCO3 to a final concentration of 10 mM. Aliquots (900 μl) of diluted cells were mixed gently with 100 μl of 20-μg/ml donor DNA (either plasmid or chromosomal DNA) and allowed to incubate for 5 h at 37°C in a humidified 5% CO2 atmosphere. Various amounts of cells were then plated onto GCB agar containing the appropriate antibiotic concentrations (described below) and grown 24 to 48 h at 37°C in 5% CO2.
PR100 (FA19 penA4 mtrR penB5; see Table 3) was constructed by transforming FA19 with FA6140 genomic DNA and selecting for each resistance gene at the following antibiotic concentrations: (i) penA4, 0.16 μg of penicillin G per ml; (ii) mtr, 0.5 μg of erythromycin per ml; and (iii) penB5, 0.37 μg of tetracycline per ml. Each of these genes was amplified by PCR from PR100 genomic DNA and sequenced; all three genes corresponded exactly to the sequences obtained from FA6140. Other transformants were selected on GCB agar containing antibiotic concentrations as indicated. Gonococci transformed with pPR16 or pPR17 plasmid DNA harboring the Ω fragment were selected on GCB agar containing 100 μg of spectinomycin per ml, whereas cells transformed with plasmid constructs containing the erythromycin resistance gene (i.e., for disruption of mutS) were selected on 10 μg of erythromycin per ml. Since pUC plasmids are not replicative in N. gonorrhoeae, transformation occurred via homologous recombination into the genome. To test for the presence of the ponA1 mutation in transformation experiments, individual colonies were resuspended in 100 μl of 10 mM Tris-HCl-1 mM EDTA, pH 8.0, placed in a boiling water bath for 10 min, and then centrifuged. The ponA gene was then amplified from 1 μl of the boiled lysates with the following primers: 5'-CGCGGTGCGGAAAACTGATATCGAT-3' (bp 955 to 978 of ponA open reading frame) and 5'-AGCCCGGATCGGTTACCATACGTT-3' (bp 2218 to 2195 of ponA open reading frame). Aliquots (5 μl) of the amplification reaction mixture were used for subsequent PstI digestion without further purification.
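The logic of the PstI screen can be captured in a few lines: the ponA1 transition destroys the unique PstI recognition sequence (CTGCAG), so its absence from the amplified fragment marks a ponA1 transformant. The flanking bases below are invented for illustration; only the behavior of the mutated codon region is meaningful.

```python
PSTI_SITE = "CTGCAG"

def carries_ponA1_marker(amplicon: str) -> bool:
    # ponA1 transformants have lost the unique PstI site within the amplified ponA fragment
    return PSTI_SITE not in amplicon.upper()

# Toy amplicons with invented flanking sequence around the Leu-421 codon
wild_type_ponA = "GCCAAACTGCAGGTT"   # CTG (Leu) completes the PstI site
ponA1_allele   = "GCCAAACCGCAGGTT"   # T->C transition (Leu->Pro) destroys the site
print(carries_ponA1_marker(wild_type_ponA), carries_ponA1_marker(ponA1_allele))   # False True
```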
To examine if the increased resistance in penC strains was due to an additional mutation(s) in one of the genes comprising the mtr operon, we amplified the mtrR + mtrC, mtrD, mtrE, and mtrF genes from the mtr operon by PCR from 1 ng of PR102 DNA and tested the ability of each fragment to transform PR100 to PR100 penC. The primers were designed such that each amplified fragment contained at least one uptake sequence. Following amplification, the fragments were purified from an agarose gel and mixed with piliated PR100 cells, and resistant transformants were selected on GCB plates containing 0.95 μg of penicillin per ml. As a positive control, the mtrR + mtrC fragment (which contains the original mtr mutation) was used to transform FA19 penA4 to FA19 penA4 mtr (transformation frequency, ≈3 × 10−5).
Purification of recombinant PBP 1 and PBP 1*.The coding sequences of the ponA genes from both FA19 and FA6140 were amplified by PCR with Taq DNA polymerase (Gibco/BRL), cloned into pET15b-K (pET15b containing the kanamycin resistance gene in place of the β-lactamase gene), and transformed into Escherichia coli BL21(DE3) as previously described (24). These constructs result in the fusion of 20 additional amino acids, including a hexahistidine tag, to the amino termini of PBP 1 and PBP 1* (PBP 1 containing the Leu-421→Pro mutation).
E. coli BL21(DE3) cells containing PBP 1 expression plasmids were grown in Luria-Bertani medium supplemented with kanamycin (50 μg/ml). Overnight growth resulted in significant expression of PBP 1 in the absence of induction. Cells were pelleted, resuspended in cold lysis buffer (50 mM sodium phosphate buffer [pH 8.0], 10% glycerol, 0.5 mM phenylmethylsulfonyl chloride), and lysed by three passes through an Aminco French pressure cell (Champaign, Ill.) at 16,000 lb/in2. Membranes were isolated from the cell lysate by centrifugation at 225,000 × g, washed, resuspended, and stored at −20°C. Crude membranes were solubilized with an equal volume of 50 mM sodium phosphate buffer (pH 8.0)-2 M NaCl-2% Triton X-100-40 mM imidazole for 1 h with stirring at room temperature.
Following centrifugation at 225,000 × g, the supernatants containing solubilized PBP 1 or PBP 1* were loaded directly onto Ni2+-NTA columns (Qiagen, Chatsworth, Calif.) and washed with 50 mM sodium phosphate (pH 8.0)-1 M NaCl-10% glycerol-0.1% Triton X-100-15 mM imidazole. PBP 1 and PBP 1* were eluted with an increasing linear gradient of imidazole (15 to 500 mM) in the above buffer. Fractions containing PBP 1 or PBP 1* were pooled, concentrated by ultrafiltration, and then dialyzed extensively against 50 mM sodium phosphate (pH 8.0)-0.5 M NaCl-10% glycerol-0.1% Triton X-100 to remove imidazole. The purified proteins were stored at −80°C at a concentration of 2 to 4 mg/ml.
Determination of the kinetic constants for the interaction of β-lactam antibiotics with PBP 1 and PBP 1*.k2/K' constants of recombinant PBP 1 and PBP 1* for [125I]IPV were determined from time courses of acyl-enzyme formation as described by Frere et al. (14). Briefly, purified proteins were incubated at 30°C with [125I]IPV concentrations ranging from 0.625 to 5 mM in 50 mM sodium phosphate-1 mM EDTA-0.1% Triton X-100 (pH 7.4), and the reaction was stopped by the addition of sodium dodecyl sulfate-polyacrylamide gel electrophoresis sample buffer at 0.5- to 1-min time intervals. The samples were run on a sodium dodecyl sulfate-8% polyacrylamide gel, and the dried gel was exposed to a phosphorimaging screen for 12 to 24 h. Levels of acyl-enzyme formed were quantitated with ImageQuant software following imaging on a Storm 840 Phosphorimager (Molecular Dynamics, Sunnyvale, Calif.). The k2/K' constant was derived from the slope of a plot of the apparent first-order rate constant ka versus the concentration of [125I]IPV. The k2/K' values for penicillin, ceftriaxone, and cephaloridine were determined by the competition method as described previously (14).
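To make the fitting procedure concrete, the sketch below simulates noiseless pseudo-first-order acylation time courses at the [125I]IPV concentrations used here, extracts the apparent rate constant ka at each concentration from the slope of −ln(1 − fraction acylated) versus time, and recovers k2/K' as the slope of ka versus concentration. The rate constant used to generate the data is an arbitrary assumption for the demonstration, not a measured value.

```python
import numpy as np

times = np.array([0.5, 1.0, 1.5, 2.0])                # min
concentrations = np.array([0.625, 1.25, 2.5, 5.0])     # mM of [125I]IPV

assumed_k2_over_K = 0.4                                # mM^-1 min^-1 (demo value only)
ka_values = []
for conc in concentrations:
    fraction_acylated = 1.0 - np.exp(-assumed_k2_over_K * conc * times)
    ka, _ = np.polyfit(times, -np.log(1.0 - fraction_acylated), 1)   # slope = ka
    ka_values.append(ka)

k2_over_K, _ = np.polyfit(concentrations, np.array(ka_values), 1)    # slope = k2/K'
print(round(k2_over_K, 3))                             # ~0.4, recovering the assumed value
```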
k3 values were determined by monitoring the formation of free PBP from samples of PBP-antibiotic complexes. PBPs were incubated with unlabeled antibiotics for 15 min at 30°C, and excess antibiotics were removed by dialysis at 4°C against the same buffer as above with the addition of 10% glycerol. The addition of glycerol was necessary to ensure the stability and activity of PBP 1 proteins during prolonged incubations. Following dialysis, the protein complex was incubated at 30°C, aliquots were removed at various times, and the amount of free PBP was assessed by incubation with a saturating concentration of [125I]IPV. The relative amounts of labeled PBPs were derived by phosphorimager analysis as described above. No significant hydrolysis was observed over 3 days of incubation.
Identification of an altered form of the ponA gene from CMRNG strains.Previous studies have shown that high-level penicillin resistance in N. gonorrhoeae is correlated with expression of PBP 1 displaying an apparent decrease in its affinity for penicillin (7, 8). These data suggested that one or more mutations were present in the primary sequence of PBP 1 that decreased its rate of acylation with penicillin. To test this hypothesis, we isolated the ponA gene from the high-level penicillin-resistant strain FA6140 (5) (penicillin MIC = 4 μg/ml) by PCR amplification of genomic DNA and compared its nucleotide sequence to that of the ponA gene from the penicillin-susceptible strain FA19 (24). Sequence analyses of several independent clones isolated from separate amplification reactions identified a single point mutation, a T-to-C transition, at nucleotide 1261 of the ponA coding region (hereafter designated ponA1) (Fig. 2A). This mutation, which results in the change of Leu-421→Pro, was the only difference observed in the entire 2,400-bp coding region.
To determine if this mutation occurs in the ponA gene from other penicillin-resistant strains, the ponA genes from 10 geographically and temporally distinct CMRNG isolates for which penicillin MICs were ≥1 μg/ml were amplified and sequenced. Serovars of these strains suggest that these strains were descended from different lineages (Table 1). Nine of these strains harbored the ponA1 gene containing a point mutation identical to that observed in the ponA gene from FA6140 (Table 1). Two of these strains, CDC120177 and FA6140, were isolated in the late 1970s; strains 111, 114, and 131 were isolated in Cincinnati, Ohio, in 1994; and strains 0387, 3391, 5611, and 9634 were isolated in rural North Carolina in 1993 from a gonococcal surveillance program. As we observed with the ponA1 gene from FA6140, there were no other differences in the ponA genes from these other strains. CDC84-060418, the only strain that did not contain the mutation, is known to be different from the majority of CMRNG strains in that it expresses an altered PBP 2 with an extremely low rate of acylation by penicillin, but with little to no change in the rate of acylation of PBP 1 (7). Thus, with the exception of CDC84-060418, all CMRNG strains for which the MIC of penicillin was ≥1 μg/ml showed the presence of the altered codon in the ponA gene.
Characteristics of the N. gonorrhoeae strains used in this study
Interaction of PBP 1 and PBP 1* with β-lactam antibiotics.The T-to-C transition in the ponA1 gene results in a change of Leu-421→Pro in the PBP 1 amino acid sequence (hereafter referred to as PBP 1*; Fig. 2A). Leu-421 is 40 amino acids to the amino-terminal side of the active-site serine residue (Ser-461) and thus is near (but not necessarily within) the active-site cavity. To assess the functional consequences of the Leu-421→Pro amino acid mutation in PBP 1*, both the wild-type and mutant forms of PBP 1 were expressed in E. coli and purified, and the kinetic constants for their interaction with β-lactam antibiotics were determined. β-Lactam antibiotics interact with PBPs according to the following scheme: $$ \mathrm{E} + \mathrm{S} \underset{k_{-1}}{\overset{k_{+1}}{\rightleftharpoons}} \mathrm{E}\cdot\mathrm{S} \xrightarrow{k_{2}} \mathrm{E}{-}\mathrm{S}' \xrightarrow{k_{3}} \mathrm{E} + \mathrm{P} $$ where E is the enzyme, S is a β-lactam antibiotic, and P is the inactive degradation product (13, 22). The effectiveness of an antibiotic is defined by two parameters: (i) the second-order specificity constant, k2/K', where K' = k−1/k+1; and (ii) the rate constant for hydrolysis of the acyl-enzyme complex, k3. The Leu-421→Pro mutation results in a fourfold decrease in the k2/K' constant for [125I]IPV and approximately three- to fourfold decreases in the k2/K' constants for penicillin G, ceftriaxone, and cephaloridine (Table 2). No changes in the rates of hydrolysis of the acyl-enzyme complex were observed, indicating that resistance arises from a lower rate of acylation by β-lactam antibiotics and not by an increase in the rate of deacylation.
Kinetic constants of PBP 1 and PBP 1* for interaction with β-lactam antibiotics
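The table referred to above (Table 2) is not reproduced in this text, so here is a minimal numerical sketch of what a three- to fourfold drop in k2/K' means in practice. The rate constants and antibiotic concentration below are hypothetical, not the measured values from the study; the only assumption carried over from the text is that, with [S] well below K' and negligible deacylation, acylation is pseudo-first-order with rate constant (k2/K')[S], so the time to acylate a given fraction of the enzyme scales inversely with k2/K'.

import math

# Hypothetical values for illustration only; not the constants reported in Table 2.
K2_OVER_K_WT = 4.0e3    # M^-1 s^-1, assumed wild-type PBP 1 acylation efficiency
K2_OVER_K_MUT = 1.0e3   # M^-1 s^-1, PBP 1* with a fourfold lower k2/K'
PENICILLIN = 1.0e-5     # 10 uM antibiotic, assumed to satisfy [S] << K'

def time_to_fraction_acylated(k2_over_k, s, fraction=0.9):
    # With [S] << K' and k3 ~ 0, the acylated fraction follows 1 - exp(-(k2/K')[S] t);
    # solve for t at the requested fraction.
    return -math.log(1.0 - fraction) / (k2_over_k * s)

for label, k in [("wild-type PBP 1", K2_OVER_K_WT), ("PBP 1*", K2_OVER_K_MUT)]:
    t = time_to_fraction_acylated(k, PENICILLIN)
    print(f"{label}: ~{t:.0f} s to reach 90% acylation")

With these assumed numbers the mutant takes roughly four times longer to reach the same degree of acylation, which is the sense in which a modest change in k2/K' can shift which PBP is inactivated first at a given drug concentration.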
Role of the ponA1 gene in transformation of an intermediate-level resistant strain to high-level penicillin resistance. The identification of a mutation in PBP 1 that decreases its rate of acylation with β-lactam antibiotics suggests that the ponA1 gene is involved in conferring high-level penicillin resistance. To determine the role of PBP 1* in resistance, we first produced an intermediate-level penicillin-resistant strain (PR100; FA19 penA4 mtr penB5) as described in Materials and Methods. PCR amplification and sequencing of the penA4, mtr, and penB5 genes from PR100 verified that these genes were identical to those in FA6140 (data not shown). For PR100, both penicillin and tetracycline have a MIC of 1.0 μg/ml, and this strain also shows increased resistance (relative to FA19) to both ceftriaxone and cephaloridine (Table 3).
Derivation of and MICs for strains of N. gonorrhoeae used in this study
PR100 was transformed with the plasmid pPR17, which contains the entire ponA1 gene and 546 bp of the 3' flanking sequence with the Ω fragment (encoding spectinomycin resistance) (23) inserted 68 bp downstream of the ponA1 stop codon (Fig. 2B). The transformed cells were plated on GCB plates containing either 2 μg of penicillin per ml or 100 μg of spectinomycin per ml. No colonies grew on the GCB-penicillin plates, even though spectinomycin-resistant colonies were isolated at a frequency of ≈10−5. We screened the spectinomycin-resistant transformants for the successful recombination of the linked ponA1 mutation by PCR amplification of the ponA gene and subsequent digestion of the PCR products with PstI. Fortuitously, the ponA1 mutation destroys a unique PstI site within the ponA gene, and thus loss of the PstI site in the amplified fragments is diagnostic for the presence of ponA1 (Fig. 2A). Approximately half of the spectinomycin-resistant colonies had incorporated the ponA1 mutation (data not shown). Surprisingly, there was no increase in the MIC of penicillin for spectinomycin-resistant transformants containing the ponA1 mutation (PR101; Table 3). Thus, acquisition of the ponA1 gene by an intermediate-level penicillin-resistant strain does not increase the MIC of penicillin to the level seen with FA6140.
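As a toy illustration of this diagnostic, the screen can be mimicked in silico. The 30-bp "amplicon" sequences below are invented for the example; only the PstI recognition sequence (CTGCAG) and the fact that the ponA1 change destroys that site are taken from the text.

PSTI_SITE = "CTGCAG"  # PstI recognition sequence

# Hypothetical fragments spanning codon 421; all flanking bases are made up for the example.
amplicons = {
    "transformant A": "AAGGTACCGATCTGCAGGTTCACGATAAGC",  # retains the PstI site -> wild-type ponA
    "transformant B": "AAGGTACCGATCCGCAGGTTCACGATAAGC",  # T-to-C change destroys the site -> ponA1
}

for name, seq in amplicons.items():
    genotype = "wild-type ponA (cut by PstI)" if PSTI_SITE in seq else "ponA1 (PstI site lost)"
    print(f"{name}: {genotype}")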
To determine whether PBP 1* is involved in high-level penicillin resistance at all, we replaced the ponA1 gene in FA6140 with the wild-type ponA gene and determined the MIC of penicillin for the resulting isogenic strain. FA6140 was transformed with pPR16, a plasmid containing the wild-type ponA gene and the linked Ω fragment, and spectinomycin-resistant colonies were screened for the presence of the wild-type gene by PstI digestion of the amplified ponA gene. For several such transformants, designated FA6140 ponA-wt, consistent twofold decreases in the MIC of penicillin (2 μg/ml; Table 3) were observed, indicating that the ponA1 gene is involved in high-level penicillin resistance in FA6140. The MIC of tetracycline was unchanged. Identical results were obtained with the CMRNG strains 111, 114, and 131.
Identification of penC, a previously uncharacterized resistance gene. In our attempts to obtain penicillin-resistant colonies following transformation of PR100 with either pPR17 plasmid DNA or FA6140 chromosomal DNA, we observed the growth of a few (5 to 25) colonies following plating on GCB plates containing 0.95 μg of penicillin per ml. For all of these colonies, twofold increases in the MICs of both penicillin (2.0 μg/ml) and tetracycline (2.0 μg/ml) and similar increases in the MICs of ceftriaxone and cephaloridine (Table 3) were observed. PCR amplification of the ponA gene and digestion with PstI showed that none of these colonies had recombined the ponA1 gene (data not shown), which was consistent with our previous data showing that the ponA1 gene does not increase the MIC of penicillin when transformed into PR100. Moreover, we were able to isolate penicillin- and tetracycline-resistant colonies for which the MICs showed twofold increases simply by plating PR100 (FA19 penA mtr penB) on GCB-penicillin or GCB-tetracycline plates (i.e., in the absence of DNA transformation). The increase in MICs observed with these colonies (designated PR102; Table 3) was stable to repeated passages on GCB agar over several days with no antibiotics present, strongly suggesting that resistance was due to a genetic event and not to an induction of an enzyme in response to antibiotic challenge.
These data suggest that the twofold increase in penicillin and tetracycline resistance is due to a low-frequency, spontaneous mutation within the population of PR100. If true, then plating PR100 containing a disrupted mutS gene (which increases mutation frequency) on selective media should result in a large increase in the numbers of penicillin- and tetracycline-resistant colonies. Consistent with this hypothesis, plating PR103 (PR100 mutS) on selective media produced colonies at a frequency 17-fold higher on average than for PR100 (n = 3; data not shown). Moreover, the MICs of both penicillin and tetracycline for all of the resistant colonies derived from PR103 increased twofold. This experiment provides additional evidence that the increased resistance is due to a spontaneously arising genetic mutation in an as yet unidentified locus. This locus has been designated penC.
Repeated attempts to transform PR100 to PR100 penC with DNA from the CMRNG strains FA6140, 114, and 131 were unsuccessful. However, we were able to transform penC into PR100 with chromosomal DNA isolated from PR102 (PR100 penC; Table 3) at a frequency of ≈10−5. (While this transformation frequency is lower than that obtained when penA4 is transformed into FA19 [≈3 × 10−4], similar transformation frequencies were obtained when PR100 was transformed with either plasmid or chromosomal DNA containing different resistance markers. These results suggest that the lower transformation frequency of PR100 is a characteristic of the strain and is not related to the penC locus.) The inability to transform the penC locus into PR100 from FA6140, 114, and 131 indicates that these CMRNG strains do not contain the penC locus and suggests that a novel mechanism or another as yet unidentified mutation is involved in high-level penicillin resistance in these strains. However, given that penC can arise spontaneously, we suspect that the penC locus will be found in other CMRNG strains when a more extensive search is conducted.
Since both ponA1 and penC are required to mediate high-level penicillin resistance in FA19, we attempted to identify a similar resistance locus in FA6140 by transforming PR101 (PR100 ponA1) with FA6140 DNA to high-level penicillin resistance. However, these attempts were unsuccessful, suggesting that in FA6140 an additional resistance locus that is not easily transformable or a novel genetic mechanism is involved in high-level resistance.
The penC locus does not encode additional mutations in previously characterized resistance genes. We also considered the possibility that additional mutations in the three known resistance loci (penA, mtr, and penB) might underlie the increased resistance observed in penC strains. To test this possibility, we sequenced the penA and penB genes from both PR100 and PR102 (PR100 penC). No differences were noted in either gene obtained from the two strains. To determine if mutation(s) in one or more of the genes encoding the Mtr efflux pump might be responsible for the increased resistance observed in penC strains, we amplified the mtrR + mtrC, mtrD, mtrE, and mtrF genes from PR102 by PCR and used these fragments in an attempt to transform PR100 into PR100 penC. None of the fragments gave rise to penicillin- or tetracycline-resistant colonies at a frequency higher than that of a no-DNA control, while DNA from PR102 transformed PR100 into PR102 with a frequency of ≈10−5. These data are consistent with the idea that penC is a novel resistance locus and not an already-characterized resistance gene that has acquired an additional mutation.
Transformation of a penC strain to high-level penicillin resistance with ponA1. Even though acquisition of the ponA1 gene by PR100 by itself did not increase penicillin resistance, acquisition of penC may be required before the ponA1 gene can increase penicillin resistance. To test this hypothesis, we transformed the penC strain PR102 with pPR17 and screened spectinomycin-resistant transformants for the presence of the linked ponA1 gene as described above. For the resulting transformants, designated PR105, the MIC of penicillin was 4 μg/ml, the same as for FA6140 (Table 3). The MIC of cephaloridine increased following acquisition of ponA1, whereas no increases were observed for ceftriaxone and tetracycline. We were unable to transform PR102 to higher penicillin resistance with FA6140 chromosomal DNA or pPR17 plasmid DNA by selecting on GCB-penicillin plates, but this may simply reflect the difficulty in selecting transformants having such a small increase in penicillin resistance (7). These data demonstrate that both ponA1 and penC are necessary to transform an intermediate-level resistant recipient strain to high-level penicillin resistance. Moreover, the penC locus must already be present in order for ponA1 to increase resistance. Thus, increasing the penicillin resistance of FA19 to a high level (MIC = 4 μg/ml) requires mutations in five genes or loci (penA, mtr, penB, penC, and ponA1), including both essential PBPs (Fig. 1).
The mechanism by which chromosomally mediated penicillin-resistant strains arise in N. gonorrhoeae is different from that utilized by most other bacteria. Susceptible strains of N. gonorrhoeae become resistant to penicillin by acquiring multiple resistance genes in a stepwise fashion. As each gene is acquired, penicillin resistance increases until treatment failure occurs. While the genes involved in transforming N. gonorrhoeae to an intermediate level of penicillin resistance are well established, those that mediate high-level resistance have not been identified. In this study, we show that two loci, the ponA1 gene encoding an altered form of PBP 1 and a newly identified locus, penC, are required to transform an intermediate-level penicillin-resistant strain to high-level resistance.
The ponA1 gene encodes PBP 1 containing a single amino acid mutation, Leu-421→Pro, which decreases the rate of acylation with β-lactam antibiotics three- to fourfold compared to the wild type. An identical mutation was observed in the ponA genes from penicillin-resistant strains isolated 17 years apart, from geographically distinct regions, and with different serovars. Transformation of an intermediate-level penicillin-resistant strain with the ponA1 gene increased penicillin resistance only when an additional locus, penC, was already present. These data demonstrate that a penicillin-susceptible strain must acquire five resistance genes, penA, mtr, penB, penC, and ponA1, to reach the level of resistance of a high-level, chromosomally mediated penicillin-resistant strain such as FA6140.
The mechanism of the emergence and propagation of the ponA1 mutation in resistant strains of N. gonorrhoeae is in marked contrast to that described for the penA gene encoding PBP 2. PenA is the first resistance gene in the stepwise transfer of resistance genes from a resistant donor strain to a susceptible recipient strain (11). Analysis of the penA genes from penicillin-resistant strains of N. gonorrhoeae has shown that the coding sequences contain blocks of DNA with high sequence divergence from penA genes of susceptible strains (28). Further studies showed that in the closely related Neisseria species N. meningitidis, the divergent blocks of DNA in altered penA genes arose by horizontal transfer of resistant penA genes from commensal strains such as N. flavescens or N. cinerea (3, 29, 30). These alterations lead to multiple amino acid changes in PBP 2, with the most important change being an amino acid insertion (Asp-345a) that lowers the rate of acylation of PBP 2 with penicillin four- to fivefold (4, 26).
In contrast to the high sequence divergence observed in altered forms of PBP 2, alteration of PBP 1 is due to a single base change in the ponA gene that results in mutation of Leu-421 to proline. We have sequenced the ponA genes from at least 10 different CMRNG isolates, and the only difference within the entire 2,400 bp of coding sequence is at codon 421. These isolates include one of the original CMRNG isolates from 1977, CDC77-124615, and resistant strains isolated nearly 20 years later in both Cincinnati and North Carolina. This mutation evidently arose in N. gonorrhoeae, since the ponA genes from several Neisseria commensal species show significant sequence divergence from the ponA gene of N. gonorrhoeae, with overall DNA sequence identities of 88% (N. lactamica; accession no. AF085689), 85% (N. cinerea; AF085340), and 73% (N. flavescens; AF087677) (Ropp and Nicholas, unpublished). Although it is possible that the mutation arose in N. meningitidis, whose ponA gene is 100% identical to its gonococcal homologue in the region of the mutation (24), it seems unlikely since high-level penicillin resistance in this species has not yet been noted.
Substitution of Leu-421 with proline causes a three- to fourfold decrease in the acylation rate of the protein when assessed with a variety of antibiotics. One of the most unusual aspects of this mutation is that the alteration is located on the amino-terminal side of the active-site serine residue. In other PBPs whose acylation rates have been altered by amino acid mutations, these mutations occur on the C-terminal side of the active-site serine residue. For example, mutations in altered forms of PBP 2 are located near two conserved active-site sequence motifs, the SXN triad and the KTG triad (28), and a similar location of alterations is observed in the PBPs from penicillin-resistant strains of Streptococcus pneumoniae (9, 16, 18). Because no high-resolution crystal structure of a class A PBP is available, it is not possible at this time to understand at a molecular level how this mutation causes a decrease in the rate of acylation. Although it is possible that Leu-421 is near the active site, mutation to proline may simply impart a structural perturbation (due to the structural effects of proline) that is propagated from a distance and disturbs the active-site architecture. However, in the absence of structural information for PBP 1 or one of its homologues, the mechanism by which the mutation decreases acylation remains highly speculative. Because the fold decreases in k2/K' for each of the antibiotics tested were very similar, i.e., approximately three- to fourfold, it appears that the structural alteration is general in nature and not specific to penicillin. However, a more extensive analysis with additional antibiotics is necessary to confirm this hypothesis.
The original reports detailing the stepwise transformation of susceptible strains of N. gonorrhoeae by donor DNA from high-level penicillin-resistant strains noted that transformation of recipient strains to a level of penicillin resistance equivalent to that of the donor strain could not be achieved (8, 11). However, Dougherty was able to isolate transformants at low frequency for which the MICs of penicillin were equal to those observed with the donor strain by using high concentrations of DNA and a modified transformation protocol and by selecting at penicillin concentrations fourfold below the MIC (7). Given the low frequency of transformation, Dougherty concluded that possibly two genes were involved in high-level penicillin resistance. Our study confirms and extends this hypothesis. We have shown that acquisition of ponA1 increases the MIC only when the penC mutation is present. It is probable that Dougherty also isolated colonies containing spontaneous mutations in penC in his experiments, since well over half of the transformants isolated were cross resistant to tetracycline, even though no selection for tetracycline was carried out (7).
Despite multiple attempts, we could not obtain colonies containing the penC locus (with frequencies greater than those of no-DNA controls) by transforming PR100 with donor DNA from several high-level penicillin-resistant strains. Our inability to obtain penC transformants was not due to penC being a nontransformable locus, since we were able to transform PR100 to PR100 penC with PR102 DNA at reasonably high frequencies (i.e., ≈10−5). This result indicates that the penC mutation is not present in the CMRNG isolates that we tested and suggests that a novel mechanism or perhaps a different genetic background is responsible for resistance in these strains. It is interesting that the penicillin MIC for FA6140 ponA-wt is 2 μg/ml, twofold higher than that of FA19 containing the first three resistance genes from FA6140 (PR100; FA19 penA4 mtr penB5). The fact that the MIC for FA6140 ponA-wt is higher than for PR100 implies that the novel mechanism or genetic background involved in high-level penicillin resistance in FA6140 increases the MIC of penicillin above that which can be accounted for by the penA4, mtr, and penB5 genes. This scenario is reminiscent of the effects of the penC locus.
Introduction of the ponA1 gene into FA19, FA19 penA4, and FA19 penA4 mtr penB5 (PR100) had no effect on the MIC of penicillin (Table 3), indicating that penicillin kills these strains by inhibition of PBP 2. These data also show that PBP 2 remains the killing target even after its rate of acylation is markedly reduced by alterations in its coding sequence (i.e., penA2). However, PBP 1* clearly has a role in clinically relevant high-level penicillin resistance, since replacing the ponA1 gene with the wild-type gene in FA6140 (Table 3) and three other CMRNG strains resulted in twofold decreases in the penicillin MICs for these strains. The fact that introduction of the ponA1 gene into PR102 (PR100 penC) increases the MIC of penicillin indicates that the penC gene must in some way decrease the rate of acylation of PBP 2 such that PBP 1 becomes the killing target. A similar argument can be made for CMRNG strains such as FA6140, although the modifying gene(s) in these strains is not penC but some other as yet unknown gene or genetic mechanism. It is tempting to speculate that in both cases the membrane surface is altered in such a way that the rate of acylation of PBP 2 within the cell division complex is decreased below that of PBP 1, allowing PBP 1* to increase the MIC of penicillin.
In conclusion, this study is the first to show unequivocally that PBP 1 is involved in penicillin resistance. However, transformation experiments demonstrate that a decrease in the rate of acylation of PBP 1 is still not sufficient to impart high-level penicillin resistance (MIC = 2 to 4 μg/ml) to an intermediate-level resistant strain, implicating the involvement of an additional resistance locus in the transformation of gonococci to high-level resistance. In our studies, that additional locus is penC; in several clinical isolates, however, the additional locus involved is not penC and remains unidentified. Studies are in progress to identify the penC locus and to define the mechanism by which it increases both penicillin and tetracycline resistance.
This work was supported by grant AI-36901 from the National Institutes of Health.
We gratefully acknowledge the help and advice of Janne Cannon and Joanne Demsey, Janne Cannon for comments on the manuscript, and Marcia Hobbs for helping with serotyping the resistant strains.
Received 15 June 2001.
Accepted 6 December 2001.
American Society for Microbiology
Barbour, A. G. 1981. Properties of penicillin-binding proteins in Neisseria gonorrhoeae. Antimicrob. Agents Chemother. 19:316-322.
Blaszczak, L. C., N. G. Halligan, and D. E. Seitz. 1989. Radioiododestannylation. Convenient synthesis of a stable penicillin derivative for rapid penicillin binding protein (PBP) assay. J. Label. Compd. Radiopharm. 27:401-406.
Bowler, L. D., Q. Y. Zhang, J. Y. Riou, and B. G. Spratt. 1994. Interspecies recombination between the penA genes of Neisseria meningitidis and commensal Neisseria species during the emergence of penicillin resistance in N. meningitidis: natural events and laboratory simulation. J. Bacteriol. 176:333-337.
Brannigan, J. A., I. A. Tirodimos, Q. Y. Zhang, C. G. Dowson, and B. G. Spratt. 1990. Insertion of an extra amino acid is the main cause of the low affinity of penicillin-binding protein 2 in penicillin-resistant strains of Neisseria gonorrhoeae. Mol. Microbiol. 4:913-919.
Danielsson, D., H. Faruki, D. Dyer, and P. F. Sparling. 1986. Recombination near the antibiotic resistance locus penB results in antigenic variation of gonococcal outer membrane protein I. Infect. Immun. 52:529-533.
Division of STD Prevention. 1998. Sexually Transmitted Disease Surveillance. Supplement: Gonococcal Isolate Surveillance Project (GISP) Annual Report-1998. Department of Health and Human Services, Public Health Service. Centers for Disease Control and Prevention, Atlanta, Ga.
Dougherty, T. J. 1986. Genetic analysis and penicillin-binding protein alterations in Neisseria gonorrhoeae with chromosomally mediated resistance. Antimicrob. Agents Chemother. 30:649-652.
Dougherty, T. J., A. E. Koller, and A. Tomasz. 1980. Penicillin-binding proteins of penicillin-susceptible and intrinsically resistant Neisseria gonorrhoeae. Antimicrob. Agents Chemother. 18:730-737.
Dowson, C. G., A. Hutchison, and B. G. Spratt. 1989. Extensive re-modelling of the transpeptidase domain of penicillin-binding protein 2B of a penicillin-resistant South African isolate of Streptococcus pneumoniae. Mol. Microbiol. 3:95-102.
Dowson, C. G., A. E. Jephcott, K. R. Gough, and B. G. Spratt. 1989. Penicillin-binding protein 2 genes of non-β-lactamase-producing, penicillin-resistant strains of Neisseria gonorrhoeae. Mol. Microbiol. 3:35-41.
Faruki, H., and P. F. Sparling. 1986. Genetics of resistance in a non-β-lactamase-producing gonococcus with relatively high-level penicillin resistance. Antimicrob. Agents Chemother. 30:856-860.
Fox, K. K., J. C. Thomas, D. H. Weiner, R. H. Davis, P. F. Sparling, and M. S. Cohen. 1999. Longitudinal evaluation of serovar-specific immunity to Neisseria gonorrhoeae. Am. J. Epidemiol. 149:353-358.
Frere, J. M., J. M. Ghuysen, and H. R. Perkins. 1975. Interaction between the exocellular DD-carboxypeptidase-transpeptidase from Streptomyces R61, substrate and β-lactam antibiotics. A choice of models. Eur. J. Biochem. 57:353-359.
Frere, J. M., M. Nguyen-Disteche, J. Coyette, and B. Joris. 1992. Mode of action: interaction with the penicillin binding proteins., p. 148-196. In M. I. Page (ed.), The chemistry of β-lactams. Chapman & Hall, Glasgow, United Kingdom.
Gill, M. J., S. Simjee, K. Al-Hattawi, B. D. Robertson, C. S. Easmon, and C. A. Ison. 1998. Gonococcal resistance to β-lactams and tetracycline involves mutation in loop 3 of the porin encoded at the penB locus. Antimicrob. Agents Chemother. 42:2799-2803.
Grebe, T., and R. Hakenbeck. 1996. Penicillin-binding proteins 2b and 2x of Streptococcus pneumoniae are primary resistance determinants for different classes of β-lactam antibiotics. Antimicrob. Agents Chemother. 40:829-834.
Hagman, K. E., W. Pan, B. G. Spratt, J. T. Balthazar, R. C. Judd, and W. M. Shafer. 1995. Resistance of Neisseria gonorrhoeae to antimicrobial hydrophobic agents is modulated by the mtrRCDE efflux system. Microbiology 141:611-622.
Hakenbeck, R., C. Martin, C. Dowson, and T. Grebe. 1994. Penicillin-binding protein 2b of Streptococcus pneumoniae in piperacillin-resistant laboratory mutants. J. Bacteriol. 176:5574-5577.
Hobbs, M. M., T. M. Alcorn, R. H. Davis, W. Fischer, J. C. Thomas, I. Martin, C. Ison, P. F. Sparling, and M. S. Cohen. 1999. Molecular typing of Neisseria gonorrhoeae causing repeated infections: evolution of porin during passage within a community. J. Infect. Dis. 179:371-381.
Kellogg, D. S., W. L. Peacock, W. E. Deacon, L. Brown, and C. I. Pirkle. 1963. Neisseria gonorrhoeae. I. Virulence genetically linked to colonial variation. J. Bacteriol. 85:1274-1279.
Maness, M. J., and P. F. Sparling. 1973. Multiple antibiotic resistance due to a single mutation in Neisseria gonorrhoeae. J. Infect. Dis. 128:321-330.
Martin, M. T., and S. G. Waley. 1988. Kinetic characterization of the acyl-enzyme mechanism for β-lactamase I. Biochem. J. 254:923-925.
Prentki, P., and H. M. Krisch. 1984. In vitro insertional mutagenesis with a selectable DNA fragment. Gene 29:303-313.
Ropp, P. A., and R. A. Nicholas. 1997. Cloning and characterization of the ponA gene encoding penicillin-binding protein 1 from Neisseria gonorrhoeae and Neisseria meningitidis. J. Bacteriol. 179:2783-2787.
Sarubbi, F. A., Jr., E. Blackman, and P. F. Sparling. 1974. Genetic mapping of linked antibiotic resistance loci in Neisseria gonorrhoeae. J. Bacteriol. 120:1284-1292.
Schultz, D. E., B. G. Spratt, and R. A. Nicholas. 1991. Expression and purification of a soluble form of penicillin-binding protein 2 from both penicillin-susceptible and penicillin-resistant Neisseria gonorrhoeae. Protein Expr. Purif. 2:339-349.
Sparling, P. F., F. A. Sarubbi, Jr., and E. Blackman. 1975. Inheritance of low-level resistance to penicillin, tetracycline, and chloramphenicol in Neisseria gonorrhoeae. J. Bacteriol. 124:740-749.
Spratt, B. G. 1988. Hybrid penicillin-binding proteins in penicillin-resistant strains of Neisseria gonorrhoeae. Nature 332:173-176.
Spratt, B. G., L. D. Bowler, Q. Y. Zhang, J. Zhou, and J. M. Smith. 1992. Role of interspecies transfer of chromosomal genes in the evolution of penicillin resistance in pathogenic and commensal Neisseria species. J. Mol. Evol. 34:115-125.
Spratt, B. G., Q. Y. Zhang, D. M. Jones, A. Hutchison, J. A. Brannigan, and C. G. Dowson. 1989. Recruitment of a penicillin-binding protein gene from Neisseria flavescens during the emergence of penicillin resistance in Neisseria meningitidis. Proc. Natl. Acad. Sci. USA 86:8988-8992.
Antimicrobial Agents and Chemotherapy Mar 2002, 46 (3) 769-777; DOI: 10.1128/AAC.46.3.769-777.2002
Mutations in ponA, the Gene Encoding Penicillin-Binding Protein 1, and a Novel Locus, penC, Are Required for High-Level Chromosomally Mediated Penicillin Resistance in Neisseria gonorrhoeae
|
CommonCrawl
|
I am not attempting to define measure here; for this thread I plan to stick entirely to the concept of null set, which does turn out to be equivalent to a set of Lebesgue measure zero. Actually, first things first, let me state clearly what I am not doing. The topic I chose is (obviously) Lebesgue null sets, but if there's interest, I'd be happy to do this with pretty much any topic in analysis (I'll try with other fields but can't promise to know any useful answers...). A subset of $\mathbb{R}^n$ is a null set if, for every $\epsilon > 0$, it can be covered with countably many products of $n$ intervals whose total volume is at most $\epsilon$. This was also the original motivation for the definition and was given by Lebesgue in his PhD dissertation. It was in French so I can't swear by the translation, but I think he called them negligible sets. Certainly he didn't use his own name. The Cantor set is uncountable but is Lebesgue null.
Theorem: A countable union of null sets is a null set. Set $B = \cup_{n=1}^{\infty} B_{n}$; that is, $B$ is the union of the countable collection of null sets $B_{n}$. Let $\epsilon > 0$ be arbitrary. For each $n$, since $B_{n}$ is a null set, applying the definition of null set to $B_{n}$ and $\epsilon 2^{-n}$, there exist open intervals $A_{n,m}$ for $m = 1, 2, 3, \ldots$ such that $B_{n} \subseteq \cup_{m=1}^{\infty} A_{n,m}$ and $\sum_{m=1}^{\infty} |A_{n,m}| \leq \epsilon 2^{-n}$. The collection of open intervals $\{ A_{n,m} : n, m \in \mathbb{N} \}$ is countable (since $\mathbb{N} \times \mathbb{N}$ is countable) and $\sum_{n,m=1}^{\infty} |A_{n,m}| \leq \sum_{n=1}^{\infty} \epsilon 2^{-n} = \epsilon$ (the terms are all positive and the series is absolutely convergent, so there are no issues with splitting up the double sum). QED. Now $\mathbb{Q} = \{ q_{n} : n \in \mathbb{N} \} \subseteq \cup_{n} A_{n}$, since for each $n$, $q_{n} \in A_{n}$. The proof of that fact is too long for this post but details can be found here: http://www.math.ncku.edu.tw/~rchen/Advanced%20Calculus/Lebesgue%20Criterion%20for%20Riemann%20Integrability.pdf. There are some light LaTeX issues: the symbol for infinity is \infty, not \inf, and your epsilon needs a backslash. If I missed an inf in my edit, please tell me. See also this; I've edited the title as suggested. I don't have much to ask on null sets, but please do one on ultrafilters!
It seems to me that Borel measurability (rather than Lebesgue) is widely used in probability theory and statistics; why is that the case? What's the use for complete measures? On the other hand, when considering stochastic processes, people often take completions of their sigma algebras, why? Is there something glaring missing from taking the Lebesgue measure restricted to Borel/Baire sets? Both types of measurability offer advantages, and which to use depends on what you are trying to study. The most direct answer is that we really, really want to be able to say that every continuous function is a random variable, but this is only true if we use the Borel algebra on $\mathbb{R}$ (or $\mathbb{C}$). A random variable is a function $X$ from our sample space $S$ to $\mathbb{R}$ such that the preimage of every measurable set in $\mathbb{R}$ is measurable in $S$, so enlarging the algebra of measurable sets on $\mathbb{R}$ makes it more difficult to be measurable (just as a finer topology on the target space makes for fewer continuous functions). This function $f$ is continuous since $s$ is, but $f^{-1}$(Lebesgue null sets) often fails to be even Lebesgue measurable. When studying function spaces in particular (probability, functional analysis, etc.), using the Borel structure makes it easier to classify functions (e.g. continuous implies measurable) and also easier to prove things about all functions, since the Borel algebra is countably generated (whereas the Lebesgue algebra is not). Borel measurability is better suited to "softer analysis" (implicit approximation). This is actually something that should get more attention in (rigorous) probability courses, in my opinion.
Is it the case that for every non-Lebesgue-measurable set $A \subset [0, 1]$, there exists a countable family $\{ f_n \}_{n \in \mathbb{N}} \subset F$ such that $\bigcup_{n \in \mathbb{N}} f_n(A)$ contains a Lebesgue-measurable set of positive measure? If so, then there is no natural way to extend the Lebesgue measure to include more null sets. (Conversely, if not, then it seems reasonable to regard all the counterexemplary sets as "kind-of-null sets".) Does it make sense to regard the graph of any function as being a "sort-of-null set"? Regarding the conversely part, you will have trouble proving countable additivity of the resulting notion of "sort-of-null" sets. Covering measure one sets by closed null sets. Such a set exists because the Lebesgue measure is the completion of the Borel measure.
To begin the construction, observe that there are continuum many Borel functions and therefore continuum many countable families $\{f_n\}$ of bijective Borel functions … $\langle \{f_n^\alpha\}_n, B_\alpha\rangle$, for $\alpha<\mathfrak{c}$, of all pairs of such objects … process of length continuum. At stage $\alpha$, we consider first the possibility that $B_\alpha$ might be a measure-zero Borel set containing the set $A$ we aim to construct. In order to prevent this, if $B_\alpha$ is measure … $c_\alpha\in A$. This will ensure that $A$ is not contained in this particular Borel measure-zero set, and therefore, since all such Borel measure-zero sets will eventually be considered, it will ensure that $A$ does not have measure zero. Next, still at stage $\alpha$, we consider the possibility that $B_\alpha$ might be a positive-measure set contained in $\bigcup_n f_n(A)$ for any Borel bijections $f_n$. But if $B_\alpha$ has … a real $b_\alpha\in B_\alpha$ about whose pre-images … certain elements are in $A$, in order to ensure that $A$ will be non-null, and that other elements are not in $A$, in such a way so as to prevent $\bigcup_n f_n(A)$ from containing a particular positive-measure Borel set.
If the assumptions hold only $\mu$-almost everywhere, then there exists a $\mu$-null set $N \in \Sigma$ such that the functions $f_n \mathbf{1}_{S \setminus N}$ satisfy the assumptions everywhere on $S$. Then the function $f(x)$, defined as the pointwise limit of $f_n(x)$ for $x \in S \setminus N$ and by $f(x) = 0$ for $x \in N$, is measurable and is the pointwise limit of this modified function sequence.
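The sentence above asserting that $\mathbb{Q} = \{ q_{n} : n \in \mathbb{N} \} \subseteq \cup_{n} A_{n}$ has lost its setup in the extraction; the standard covering it presumably refers to takes, for each rational $q_n$, the open interval
\[ A_{n} = \left( q_{n} - \frac{\epsilon}{2^{n+1}},\; q_{n} + \frac{\epsilon}{2^{n+1}} \right), \qquad \sum_{n=1}^{\infty} |A_{n}| = \sum_{n=1}^{\infty} \frac{\epsilon}{2^{n}} = \epsilon, \]
so that the rationals are covered by countably many open intervals of total length at most $\epsilon$, i.e. $\mathbb{Q}$ is a null set.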
|
CommonCrawl
|
Chemistry - How to assign overlapping multiplets in 1H NMR spectra?
Solution 1:
Referring to your comment on Buttonwood's answer:
Unfortunately, in this case I only have 1D HNMR and IR available to analyse the product
If this is really the case, then you are essentially out of luck. It is not possible to extract much information from a series of overlapping multiplets, which may well be coupled to each other (these strong coupling effects are likely the cause of the "complex splitting" you mention). It may be the case that by going to a higher field you can alleviate some of this, as multiplets will be more dispersed along the frequency axis. However, you will quickly run into a hard limit this way, since our maximum field (as of the time of writing) is ~1.2 GHz and even that is likely not enough to fully resolve your overlapped multiplets.
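To make the field-strength point concrete, here is a minimal sketch; the chemical shifts and coupling constant below are assumed for illustration and are not taken from the spectrum in question. The separation between two multiplet centres in Hz grows linearly with the spectrometer frequency, whereas J does not, so the ratio Δν/J, which governs how badly the multiplets overlap and how pronounced the second-order (strong-coupling) distortions are, improves at higher field.

# Assumed values for illustration only.
DELTA_PPM = (1.46, 1.48)   # two hypothetical overlapping multiplet centres, in ppm
J_HZ = 7.0                 # a typical 3J(H,H) coupling constant, in Hz

for spectrometer_mhz in (400, 600, 1200):
    # chemical-shift separation in Hz = separation in ppm x spectrometer frequency in MHz
    separation_hz = abs(DELTA_PPM[0] - DELTA_PPM[1]) * spectrometer_mhz
    print(f"{spectrometer_mhz} MHz: delta-nu = {separation_hz:.0f} Hz, delta-nu / J = {separation_hz / J_HZ:.1f}")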
With that out of the way, let's talk about some other NMR techniques which will be useful in this situation. I want to preface this with a few points:
These are general suggestions which are broadly applicable to a variety of different problems; they are not tailored to the specific molecule which you asked about, and indeed you may find that some of them are less suitable for your specific compound. For the specific case at hand I would suggest talking to a NMR specialist at your institution, if this is at all possible.
There is a caveat in that many of these techniques have slightly poorer performance in situations where there is strong coupling, although it by no means makes them useless.
You may find that you need more than one of these to get a complete understanding of the $\ce{^1H}$ spectrum (especially in light of the previous caveat).
I assume here that the overlapping multiplets belong to the same species, as is the case in the question. In the situation where you are studying a mixture of different species which may have peaks that overlap with each other, the following techniques will still remain very useful, but there are more options which will help to separate the different subspectra based on molecular properties (e.g. relaxation rates or diffusion coefficients). These are outside the scope of the current answer.
Finally, for further reading about all of these techniques I recommend Claridge, T. D. W. High-Resolution NMR Techniques in Organic Chemistry, 3rd ed.; Elsevier: Amsterdam, 2016.
(1) 2D correlation NMR
This has already been mentioned by Buttonwood, and is the standard answer to issues of resolution in 1D spectra. The downside is that 2D spectra are very, very rarely run with high resolution, partly because of time constraints, but especially so if heteronuclear decoupling is used (e.g. in an HSQC). Thus, instead of measuring well-resolved multiplet structure (which would be transferable to the 1D spectrum, thus allowing you to assign the 1D), you are more likely to get broad blobs that are centred on a certain frequency. This tells you nothing about the shape of the multiplet in the actual proton spectrum; it only tells you that at (say) 1.46 ppm you have some kind of multiplet that corresponds to a certain proton in your structure.
Despite this, one should not neglect the amount of information that is provided by standard 2D spectra (COSY, HSQC, HMBC, NOESY). Often even having a blob is enough to figure out which proton is which, even if it may not tell you the exact shape of the underlying multiplet.
(2) Pure shift NMR
This is a series of techniques which remove all coupling information, thereby collapsing multiplets into singlets and giving a 1D spectrum with exactly one singlet per (chemically inequivalent) proton. I would consider the TSE-PSYCHE method[1,2] to be the "state of the art" in this regard, especially with regard to strong coupling. This is very powerful but suffers from the same drawback as 2D NMR does: it removes all multiplet structure, so on its own does not lend any insight into the original $\ce{^1H}$ spectrum. Nevertheless, I would say that it is really worth having a pure shift spectrum in addition to some of the other methods, as it can greatly aid in assigning peaks.
Here's an example of a pure shift spectrum, versus the original proton spectrum of the same compound (personal data).
(3) 2D J NMR
A 2D J spectrum shows proton chemical shifts along one axis and proton multiplet structure along another axis. What this means is that every multiplet in a $\ce{^1H}$ spectrum is separated along a secondary axis according to its chemical shift. This allows you to not only figure out the position of each multiplet (which both of the above already provide you) but also the shape of each individual multiplet.
In particular, pure shift J spectra[3] are really the best versions in this regard as these provide much higher resolution and do not take a lot of time. To be technical, these don't provide "pure shift" characteristics because the 2D J $f_2$ axis (the horizontal axis) is already "pure shift", with the coupling information already separated out onto the $f_1$ axis. What they do is to convert the lineshapes in the spectrum from a "phase twist", which has long "tails" in both dimensions, to a pure absorption Lorentzian which is a single sharp peak. This is an example of how it looks (personal data again):
You can extract vertical traces from this spectrum to view each individual multiplet, with its chemical shift given simply by the value along the $f_2$ axis (horizontal axis).
(4) Selective excitation experiments
This refers to a family of 1D experiments where you selectively excite one multiplet and then perform some kind of mixing in order to look at spins that are coupled / near to the spin that you excited. A selective 1D NOE experiment (for example) would tell you which spins are in close proximity to a given spin that you chose to excite; it is analogous to acquiring only one row of a 2D NOESY. You could also run selective 1D TOCSY experiments, which would tell you which spins belong to the same spin system as the one you excited. By choosing an appropriate TOCSY mixing time you can limit the "spin diffusion" to one or two coupled spins away.
The benefit of this over the original 2D experiment is twofold: first, it takes much less time; and secondly, you can acquire much higher-resolution data. This allows you to disentangle different overlapping multiplets based on their coupling partners or spatial proximity to other spins. For example let's say you have two multiplets, corresponding to protons A and B, that are overlapping, but only proton A is in close proximity to proton C (whose multiplet is well-resolved). Then you can run a selective 1D NOE experiment where you excite proton C, and the resulting spectrum should include only multiplet A and not B. The same can be applied for scalar couplings via 1D TOCSY.
(5) Chemical shift-selective filters
If you find that you need to excite a peak inside an overlapped region, then the idea of a chemical shift selective filter may be useful. This seeks to excite one multiplet out of a series of overlapping ones, and traditionally involves recording multiple spectra and adding them up, allowing the multiplet of interest to add up while averaging all other multiplets to zero by virtue of them having a different phase. However, there is a recent implementation called GEMSTONE[4] which does this very cleanly in just a single experiment without needing to sum anything up. In my limited experience it works quite well although some of the experimental parameters need to be optimised for each peak to obtain best performance. This particular demonstration that I have (personal data, again) isn't so impressive because the overlap isn't as severe, but the paper has slightly more interesting examples.
Foroozandeh, M.; Adams, R. W.; Meharry, N. J.; Jeannerat, D.; Nilsson, M.; Morris, G. A. Ultrahigh-Resolution NMR Spectroscopy. Angew. Chem. Int. Ed. 2014, 53 (27), 6990–6992. DOI: 10.1002/anie.201404111.
Foroozandeh, M.; Morris, G. A.; Nilsson, M. PSYCHE Pure Shift NMR Spectroscopy. Chem. Eur. J. 2018, 24 (53), 13988–14000. DOI: 10.1002/chem.201800524
Foroozandeh, M.; Adams, R. W.; Kiraly, P.; Nilsson, M.; Morris, G. A. Measuring couplings in crowded NMR spectra: pure shift NMR with multiplet analysis. Chem. Commun. 2015, 51 (84), 15410–15413. DOI: 10.1039/c5cc06293d.
Kiraly, P.; Kern, N.; Plesniak, M. P.; Nilsson, M.; Procter, D. J.; Morris, G. A.; Adams, R. W. Single‐Scan Selective Excitation of Individual NMR Signals in Overlapping Multiplets. Angew. Chem. Int. Ed. 2021, 60 (2), 666–669. DOI: 10.1002/anie.202011642
Recording NMR is not limited to $\mathrm{^1H}$; isotopes of other nuclei may be observed, too, provided their nuclear spin is not zero. $\mathrm{^{13}C}$, for example, is another routinely recorded nucleus.
You are not limited to 1D either: you may record correlation spectra (a non-exhaustive list), either between nuclei of the same type (e.g., $\{\mathrm{^1H,^1\! H}\}$-COSY) or between different types, like a $\{\mathrm{^1H,^{13}\! C}\}$-HSQC, in 2D. Depending on the sample, the spectrometer, and the NMR experiment, these maps may resolve the overlap of superimposed signals better (or completely) compared with your 1D experiment, if you select a good trace along one of the two dimensions and a direction of observation (projection). For example, in the $\{\mathrm{^1H,^{13}\! C}\}$-HSQC below:
you not only see the sum of all signals in the $\mathrm{^1H}$ domain on top, but you may see this map «in slices» for different resonance frequencies / ppm for the $\mathrm{^{13}C}$ domain (here: vertical direction). Thus the signal (6,6') has a correlation around $\pu{3.8 ppm}$ ($\mathrm{^1H}$) and $\pu{72 ppm}$ ($\mathrm{^{13}C}$); while (5,5') displays a correlation around $\pu{3.9 ppm}$ ($\mathrm{^1H}$) and $\pu{82 ppm}$ ($\mathrm{^{13}C}$).
Rules you have learned, e.g. about coupling constants of vicinal protons and coupling patterns, apply again. But the topic is too large for a one-size-fits-all answer on ChemSE. For example, for structure elucidation of proteins, correlation spectra routinely observe three domains simultaneously (H, C, N) in 3D NMR, too.
It is the careful combination of multiple NMR experiments which eventually leads to the assignment of a structure; typically in conjunction with other techniques (MS, IR; X-ray) and knowledge of the sample's history (previous steps of isolation / synthesis).
I'll try to give some information that would help you according to your statement in the question:
I've actually been trying to find more generalised information to help with the assignment of a different (but similar) cyclohexene compound.
First of all, you may have found by now from the two answers given elsewhere that there is no way you can assign all peaks without additional information such as 2D-COSY or other experiments. However, in general, for cyclohexane (or cyclohexene) ring protons, equatorial protons give rise to resonances downfield from their axial counterparts. For instance, consider the axial and equatorial homoallylic protons in cyclohexene (e.g., the two protons at $\ce{C}$-5 in your structure), which give rise to a resonance at $\pu{1.65 ppm}$ at room temperature (the sample is a rapidly inverting mixture of conformers at room temperature). If you are able to gradually reduce the working temperature, the corresponding axial and equatorial resonances would appear to be separated, at very low temperatures, by approximately $\pu{0.4 ppm}$ (this value corresponds to $\pu{160 Hz}$ in a $\pu{400 MHz}$ spectrum). It is interesting that the value is not greatly different from that in cyclohexane itself (Ref.1). Of course, this is the case only in the absence of complicating factors. The following cases are a few exceptions to the general rule:
The axial 2-proton in 2-bromo-4-tert-butylcyclohexanone (where the 2-bromo and 4-tert-butyl groups are in a cis-orientation) displays a resonance at around $\pu{4.50 ppm}$, while the equatorial 2-proton in its stereoisomer (where the 2-bromo and 4-tert-butyl groups are in a trans-orientation) displays the corresponding resonance at around $\pu{4.26 ppm}$ (Ref.1).
Similarly, the axial 2-proton in 2-bromo-4-phenylcyclohexanone (where the 2-bromo and 4-phenyl groups are in a cis-orientation) displays a resonance at around $\pu{4.87 ppm}$, while the equatorial 2-proton in its stereoisomer (where the 2-bromo and 4-phenyl groups are in a trans-orientation) displays the corresponding resonance at around $\pu{4.38 ppm}$ (Ref.1).
Also, if you consider 2-methoxy-trans-decalin-1-one, there are two stereoisomers in which the 2-methoxy group is either axial or equatorial. Since the trans-decalin conformation is relatively "fixed," its corresponding 2-proton is also either equatorial or axial, respectively. Accordingly, the axial 2-proton gives rise to a resonance at around $\pu{3.58 ppm}$ while its equatorial counterpart gives rise to a resonance at around $\pu{3.36 ppm}$ (Ref.1).
Now, let us concentrate on your spectrum: as you stated, one can easily assign the resonances at around $\pu{5.80 ppm}$ (1H) and $\pu{3.34 ppm}$ (2H) to the 2-H olefinic proton and the methylene group of the $\ce{CH2-CCl3}$ function, respectively. The two singlets at around $\pu{1.80 ppm}$ (3H) and $\pu{1.75 ppm}$ (3H) are also assigned to the two methyl groups of the $\ce{(CH3)2CBr}$ function. Keep in mind that these two methyl groups are diastereotopic since the function is attached to a chiral carbon.
Then, the rest of the protons are in the ring. A three-proton multiplet at around $2.48$-$\pu{2.23 ppm}$ and a two-proton multiplet at around $2.15$-$\pu{2.00 ppm}$ can be tentatively assigned as allylic protons on the basis of their chemical shift values. However, the structure has only four remaining allylic protons (two protons each on $\ce{C}$3 and $\ce{C}$6). Thus, the extra proton should be one of the remaining three homoallylic protons (one on the chiral $\ce{C}$4 and two on $\ce{C}$5). A one-proton multiplet at around $1.48$-$\pu{1.35 ppm}$ strongly suggests the homoallylic proton at $\ce{C}$4. However, I hesitate to assign it as such because we don't know the stereochemistry of $\ce{C}$4. Finally, another one-proton multiplet at around $1.68$-$\pu{1.58 ppm}$ can also be tentatively assigned as the last homoallylic proton. However, it is worth noting that the correct assignments of these ring protons depend highly on advanced 2D analysis.
L. M. Jackman, S. Sternhell, In International Series in Organic Chemistry, Volume 10: Applications of Nuclear Magnetic Resonance Spectroscopy in Organic Chemistry, 2nd Edition; Pergamon Press, Ltd: Oxford, United Kingdom, 1969 (ISBN: 0 08 012542 5).
|
CommonCrawl
|
BMC Ecology
Composition, uniqueness and connectivity across tropical coastal lagoon habitats in the Red Sea
Zahra Alsaffar1,2,
João Cúrdia1,
Xabier Irigoien1,3,4 &
Susana Carvalho1 ORCID: orcid.org/0000-0003-1300-1953
BMC Ecology volume 20, Article number: 61 (2020)
Tropical habitats and their associated environmental characteristics play a critical role in shaping macroinvertebrate communities. Assessing patterns of diversity over space and time and investigating the factors that control and generate those patterns is critical for conservation efforts. However, these factors are still poorly understood in sub-tropical and tropical regions. The present study applied a combination of uni- and multivariate techniques to test whether patterns of biodiversity, composition, and structure of macrobenthic assemblages change across different lagoon habitats (two mangrove sites; two seagrass meadows with varying levels of vegetation cover; and an unvegetated subtidal area) and between seasons and years.
In total, 4771 invertebrates were identified belonging to 272 operational taxonomic units (OTUs). We observed that macrobenthic lagoon assemblages are diverse and heterogeneous, and that the most evident biological pattern was spatial rather than temporal. To investigate whether macrofaunal patterns within the lagoon habitats (mangrove, seagrass, unvegetated area) changed through time, we analysed each habitat separately. The results showed high seasonal and inter-annual variability in the macrofaunal patterns. However, the seagrass beds, which are characterized by variable vegetation cover through time, showed comparatively higher stability (with the lowest values of inter-annual variability and a high number of resident taxa). These results support the theory that seagrass habitat complexity promotes diversity and density of macrobenthic assemblages. Despite the structural and functional importance of seagrass beds documented in this study, the results also highlighted the small-scale heterogeneity of tropical habitats that may serve as biodiversity repositories.
Comprehensive approaches at the "seascape" level are required for improved ecosystem management and to maintain connectivity patterns amongst habitats. This is particularly true along the Saudi Arabian coast of the Red Sea, which is currently experiencing rapid coastal development. Also, considering the high temporal variability (seasonal and inter-annual) of tropical shallow-water habitats, monitoring and management plans must include temporal scales.
Coastal lagoons are important transition systems providing essential socio-economic goods and services (e.g. shore protection, fisheries, carbon sequestration) [1,2,3]. Coastal lagoons harbour well-adapted and sometimes unique assemblages of species, which play a vital role directly supporting local populations. These ecosystems are naturally stressed on daily to annual-time scales [4,5,6,7,8] and display high environmental variability (e.g. temperature, salinity, primary productivity, nutrients, dissolved oxygen). Such variability is reflected in the biological patterns that alter in response to the new environmental conditions. Lagoon ecosystems are also being increasingly affected by human disturbances that can compromise their ecological and socio-economic values [5, 9,10,11,12].
Subtropical and tropical coastal lagoons encompass a range of essential soft-substrate habitats, such as mangroves, seagrasses and unvegetated bottoms. These habitats are associated with different environmental conditions, resulting not only from their location along the depth profile but also from their structural complexity and biological assemblages [13,14,15,16]. However, while these habitats contain a diverse range of organisms, spatial distribution patterns and connectivity in subtropical and tropical lagoon habitats have mainly been assessed using fish and other mobile marine fauna [17,18,19,20,21,22,23]. Studies describing and comparing macrobenthic distribution patterns and the strength of connectivity linkages across different shallow-water tropical lagoon habitats are particularly limited compared to temperate systems (e.g. [15, 24,25,26,27]). Spatial differences in the community can provide information regarding the ecological requirements of species. For example, species able to colonize multiple habitats will most likely be less sensitive to environmental changes, whereas those more directly associated with a specific habitat may be less tolerant to environmental changes. In general, harsher environmental conditions are observed in the intertidal area, dominated by mangrove trees, with conditions being attenuated with increasing depth, a pattern that is associated with a consistent increase in species richness and abundance [28, 29]. Indeed, mangrove habitats are characterized as unfavourable environments influenced by high salinity, high fluctuation of temperature, desiccation, and poor soil condition (depleted oxygen) [30]. On the other hand, if undisturbed, seagrass habitats provide comparatively more stable environmental conditions through time [31,32,33] as well as protection from predators [34].
Furthermore, the knowledge about the role of temporal variability in driving macrobenthic patterns is still scarce [35,36,37,38,39,40,41]. While seasonal changes in tropical regions are comparatively less distinct than in temperate regions [42], temporal variability in benthic patterns exists [39, 43, 44]. Investigating temporal variability patterns is essential to obtain a deeper knowledge of the dynamics and processes regulating lagoon communities. Indeed, considering the current scenario of global climate change, it is critical to better understand how the distribution patterns of organisms in these habitats are changing and particularly how they respond to changes in temperature and other key environmental drivers [1, 45]. Temporal variation patterns in the abundance and composition of macrofaunal invertebrates have been intensively studied in temperate coastal ecosystems in relation to environmental variables [46,47,48,49]. Temporal variability in temperature and food availability, for example, can influence recruitment events with consequences for the structure, distribution, and abundance of the community [50,51,52]. Similarly, sediment composition, organic matter, and vegetation cover, which may vary in time, are also main drivers of observed ecological patterns. However, most of those studies have been conducted in temperate regions and, more recently in polar habitats (e.g. [53,54,55,56,57]). Comparatively, less attention has been dedicated to sub-tropical and tropical areas (e.g. [58,59,60,61]). This is even more striking in regards to the assessment of inter-annual variability (but see [40, 62, 63]).
Assuming that harsher environmental conditions occur towards the intertidal area (i.e. mangrove habitats), we hypothesise (i) a decrease in species richness (i.e. the total number of species) and in the number of exclusive species from subtidal to intertidal areas, as less resistant species are progressively excluded along the environmental gradient. We also hypothesise that (ii) shallow-water seagrass meadows will harbour higher numbers of species, particularly compared with unvegetated bottoms, as a result of habitat complexity, protection from predators and food availability [64,65,66]. Likewise, we hypothesise (iii) that temporal changes will be less evident in subtidal (vegetated and unvegetated) than in intertidal habitats [30, 67] and that subtidal seagrass areas will support more stable communities through time. Ecologically related management decisions require a sound knowledge of the biodiversity of the ecosystem. By assessing the variability in spatial and temporal patterns of macrobenthic organisms, we expand the existing knowledge on tropical coastal lagoons, which are sensitive as well as ecologically and economically valuable systems.
Macrobenthic community composition: general characterization and connectivity among habitats
A total of 4771 invertebrates were identified within the different habitats surveyed in the lagoon (Fig. 1a), belonging to 272 operational taxonomic units (OTUs) distributed among 11 phyla, 16 classes, 40 orders, and 80 families. Annelida dominated both in abundance and in number of taxa, contributing 51.0% and 42.0% of the total values, respectively. Sipuncula (15.0%), Arthropoda (13.0%), Mollusca (12.0%), and Echinodermata (7.0%) also contributed to the overall density. Regarding the number of species, Arthropoda (28.0%) and Mollusca (18.0%) were, along with Annelida, the phyla contributing the most to the total number of species.
a Map showing the locations of the habitats in the lagoon. b Annual variability in sea surface temperature in the lagoon during the study period. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv.). SU1 and SU2, summer sampling dates 1 and 2; W1 and W2, winter sampling dates 1 and 2. The map was produced by the authors using data freely available (http://www.thematicmapping.org/downloads/world_borders.php; https://www.gadm.org/download_country_v3.html, Saudi Arabia)
At the species level, the sipunculid Phascolion (Phascolion) strombus strombus (12.2% of the total abundance) was the most abundant species, followed by the polychaetes Simplisetia erythraeensis (5.8%), Eunice indica (4.4%), Ceratocephale sp. (3.3%), Aonides sp. (2.7%), Lumbrineris sp.1 (2.7%), and Lysidice unicornis (2.6%), the amphipod Metaprotella africana (3.3%), and the bivalves Barbatia foliata (2.7%) and Paphies angusta (2.4%). Most of these taxa were found in at least four of the studied sites, except for Metaprotella africana (exclusive to S1) and Barbatia foliata (exclusive to the seagrass habitats, S1 and S2). All the remaining taxa contributed less than 2% of the total abundance.
Only eight taxa (3% of the total number of taxa) spanned across the five habitats. Most of them were polychaetes (Capitellethus sp., Drillonereis sp., Euclymene spp., Lumbrineris sp.1, Lysidice unicornis, Notomastus spp.). Nemertea (und.) and the sipunculid Phascolion (Phascolion) strombus strombus were also observed across the five sites. Simplisetia erythraeensis was absent from the unvegetated site. There were 62 taxa shared between intertidal and subtidal sites, and only 18 species exclusive to the mangrove habitats (as a whole), representing 6.6% of the gamma diversity (2.2%, M2; 4.4%, M1). On the other hand, subtidal habitats showed a rather consistent percentage of exclusive species: 29.4% in S1, 32.3% in S2 and 33.8% in the unvegetated area (S1: 12.8%; S2: 18.4%; unvegetated: 15.1% of the gamma diversity, i.e. the total number of taxa observed in the lagoon).
Both seagrass habitats showed a higher percentage of resident species (i.e. species present in over 85% of the sampling dates in a certain habitat) compared to the mangrove and unvegetated areas (Table 2). In terms of the number of individuals, those taxa contributed 45.0% (S1) and 34.0% (S2) of each site's total abundance. S2 showed a more balanced distribution of the four habitat preference traits analysed (i.e. resident, frequent, occasional, rare) and relatively stable numbers throughout the study period (Table 2). Regardless of the habitat, occasional species accounted for more than 12.6% of the total number of species.
Macrobenthic patterns of variability across the lagoon seascape show that the community was structured by habitat, with limited seascape ecological connectivity across the different habitats (Fig. 2a). The environmental data gathered partially explained the multivariate variability of the biological data, with the first two axes of the distance-based redundancy analysis (dbRDA) explaining more than half of the constrained variability but only 19.1% of the total variability of the biological communities. The dbRDA plot reinforces a clear separation of the communities inhabiting the mangrove areas, S1, and the unvegetated habitat, whereas S2 presented affinities (i.e. higher connectivity) with either S1 or the mangrove stations depending on the sampling period (Fig. 2b). Samples from the unvegetated habitat were associated with depth and the percentages of medium and fine sand. Seagrass habitats (particularly S1) were separated based on their higher silt and clay (fine particles) content, whereas mangrove habitats presented a slightly higher percentage of coarse sand. Multivariate patterns suggest that the nature of the biotope itself drives the composition and structure of macrobenthic communities. The investigation of temporal variability was therefore undertaken for each habitat separately.
Multivariate analysis of the community data. a Ordination (non-metric multidimensional scaling) and classification diagram of the sampling habitats based on the Bray–Curtis dissimilarity on non-transformed data. b Distance-based redundancy analysis (dbRDA) plot based on a set of environmental variables (salinity, temperature, depth, grain size fractions (coarse sand, medium sand, fine sand, fines), organic matter (LOI, %) and chlorophyll a) applied to the biological data from the lagoon habitats; M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv). The points represent the sampling events (winter 1, winter 2, summer 1, and summer 2) for 2014 and 2015. Coarse sand and fines data are square-root transformed and LOI loge transformed. The length and direction of vectors indicate the strength and direction of the relationship
Temporal variability within habitats
The high variability in seagrass biomass over the study period (Fig. 3) was reflected in the biological changes but was not fully aligned with the temporal pattern in sea water temperature (Fig. 1b). When analysing the full dataset, and regardless of the diversity metric considered, S2 consistently presented the highest number of taxa (155, observed; 184.8–219.7, estimated), whereas M2 was the site with the fewest taxa. Density was also highest at S2 (801.9 ind. m−2) and lowest at the unvegetated area (388.8 ind. m−2) (Table 1).
Biomass of seagrass plants along the study period (2014–2015) in both seagrass stations. SU1 and SU2, summer sampling dates 1 and 2; W1 and W2, winter sampling dates 1 and 2. S1 and S2, seagrass sites
Table 1 Total number of Operational Taxonomic Units (OTUs), estimated number of taxa based on the Chao, Jackknife (1st order) and Bootstrap estimators, and average density (ind. m−2) per habitat. M1 and M2, mangrove; S1 and S2, seagrass
In general, higher numbers of OTUs were observed in the subtidal habitats than in the intertidal mangrove areas (Fig. 4a), with M2 showing a consistently lower number of taxa across all sampling dates. Abundance was also generally higher within the seagrass meadows (Fig. 4b). M2 also presented the lowest Shannon–Wiener diversity, whereas, in general, higher values were observed at S2 or at the unvegetated habitat (Fig. 4c).
Alpha-diversity metrics per habitat and over time. a Number of Operational Taxonomic Units (OTUs), b density, and c Shannon–Wiener diversity. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv)
Biological similarity within each habitat was markedly low, ranging from 14% (M2) to 25% (S1) (Table 2). These two habitats also showed higher dominance, with only four and six species, respectively, contributing over 62% of the habitat's abundance. In the remaining habitats, a minimum of 13 taxa was needed to reach the same level of abundance (Table 2). Except for S1, where none of the dominant taxa was a polychaete, this group dominated all the other habitats. S1 was dominated by a sipunculid (Phascolion (Phascolion) strombus strombus), two bivalves (Barbatia foliata and Cardiolucina semperiana), one amphipod (Metaprotella africana) and two echinoderms (Aquilonastra burtoni and Amphioplus cyrtacanthus).
Table 2 Cumulative percentage of the taxa (Cum %) contributing to more than 60% of each habitat's total abundance
Temporal variation in the structure of macrobenthic assemblages within each habitat, examined on the basis of the Bray–Curtis and Jaccard resemblance measures, indicated different patterns depending on the habitat under analysis. Major differences were not detected between the two metrics and therefore only the plots for the Bray–Curtis matrices are presented (Fig. 5). The results of the Permutational Multivariate Analysis of Variance (PERMANOVA) confirmed different temporal trajectories in the analysed habitats (Table 3). Both resemblance metrics applied to the M1 and S1 datasets showed a significant interaction of the main factors (Year × Season). The pair-wise tests indicated, for M1, significant inter-annual differences both in winter and in summer. For S1, inter-annual differences were only detected in winter. With regard to seasonal differences, S1 presented significant variability in both years (except for composition, i.e. Jaccard, in 2015), whereas in M1 differences were only detected in 2014 (Table 3). Macrobenthic communities at M2 and S2 showed significant inter-annual variability (except for S2 with presence/absence data) (Table 3). Finally, the unvegetated area showed significant and independent seasonal and inter-annual variability (Table 3).
Non-metric multidimensional scaling (nMDS) plots of temporal variation in the structure of macrobenthic assemblages within each habitat, based on Bray–Curtis dissimilarity matrices computed on untransformed data. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv)
Table 3 Two-way PERMANOVA model and pair-wise tests based on Bray–Curtis and Jaccard matrices within habitats, among seasons and years (Year and Season interaction; Yr × Se)
This study investigated the distribution patterns of macrobenthic communities inhabiting adjacent shallow-water habitats in a tropical coastal lagoon, with a particular focus on how they are connected and how communities within each habitat vary over time. Even though ecological seascape connectivity has been previously demonstrated, particularly for fish, information on benthic dynamics in tropical lagoons is still scarce. The Al Qadimah lagoon, like other tropical lagoons, encompasses a wide range of habitats, including both hard (not addressed here) and soft substrates. Within the latter, changes in the vegetation cover result in a mosaic of habitats with different sedimentary properties that determine the structure of local macrobenthic communities [68]. Here, we observed a clear zonation of the benthic communities, driven by habitat-related factors acting at varying spatial scales [69]. The present results also provided new insights into the temporal variability (seasonal and inter-annual) of different shallow-water lagoon habitats in a tropical seascape.
Uniqueness of lagoon habitats within the seascape
A clear pattern of habitat-dependent association was observed, with the different habitats harbouring distinct macrobenthic assemblages. The high spatial variability of macrofaunal patterns is most likely linked to the heterogeneity of the seascape and to the high contribution of rare species to the overall abundance. Recent studies showed that biological variability is driven by the relatively high contribution of rare and common species, with rare species playing a major role in the temporal patterns as a result of their vulnerability to fluctuations in environmental conditions (e.g. [70, 71]).
Subtidal habitats harboured 70% of the total number of species. Overall, seagrass habitats showed the highest number of taxa, which agrees with previous studies [65, 68, 72,73,74]. Variability was, however, high and significant differences within the subtidal area were not detected. The structural complexity provided by the seagrass canopy and the developed rhizome and root systems that contribute to sediment stability may favour the development of diverse communities [70, 75, 76]. In the tropics, the canopy can play an additional critical role by providing shade that can attenuate the effects of sea water temperature [8], which in the study region can exceed 32 °C in the summer. Yet, we found that denser seagrass meadows are not always the most favourable habitats for several invertebrates, even though this result may be site-dependent [77,78,79,80]. Indeed, the site displaying the highest variability in cover during the study period showed the highest number of taxa, density of individuals, and number of exclusive species (32.3% of the site's total number of species). Dense vegetation can physically obstruct the movement of large burrowing macroinvertebrates [68, 81]. Also, despite the increased aeration within the sediment due to the developed root system [82], the decomposition of high amounts of organic matter requires increased oxygen consumption and results in anoxic regions and the accumulation of toxic products [83, 84]. Therefore, vegetated areas with comparatively lower cover might harbour higher species numbers as a result of species avoiding toxic anoxic conditions in densely covered areas [85].
Within mangrove habitats, species encounter harsh physical environmental conditions (e.g. high salinity, hypoxia, desiccation, high concentration of toxins) and, in general, nitrogen limitation (C/N ratio often > 100; although mangroves in the Red Sea are carbon limited compared to other locations [86]) due to the low nutritional value of the main source of organic matter, i.e. leaf litter [25]. Under these conditions, populations of a few tolerant/opportunistic species dominate the macrobenthic communities [25, 87]. In the present study, the deepest mangrove area (M2) was dominated by only four species: the polychaetes Simplisetia erythraeensis, Ceratocephale sp. and Paucibranchia adenensis, and the bivalve Paphies angusta, which together contributed over 60% of the total abundance. In the shallowest mangrove area, despite the dominance of polychaetes, the sipunculid Phascolion (Phascolion) strombus strombus and some decapods (Diogenes costatus and Thalamita poissonii) were also co-dominant. Decapods are critical players in the ecosystem functioning of these habitats, processing leaf litter and oxygenating the sediment through their burrows [88, 89], and therefore their dominance in this habitat is not surprising. As observed elsewhere [90, 91], mangrove habitats showed the lowest number of species compared to the nearby seagrass and unvegetated substrates.
Connectedness and stability at the scale of the seascape
In the present study, the nearby seagrass meadows differed in cover and depth, which might have resulted in the limited similarity of their faunal communities (both habitats shared 35.0% of the total number of species). Higher similarities (i.e. higher seascape connectivity) were detected among subtidal habitats than between those and the mangroves (intertidal habitats). Nevertheless, 62 taxa, representing 22.8% of the gamma diversity, were shared between intertidal and subtidal habitats, suggesting that several species may utilize contrasting yet adjacent habitats within the lagoon seascape. Even though the overlap of species across the five habitats is lower (eight taxa; 2.9% of the total number of taxa) than previously reported [92, 93], the present study suggests connectivity between intertidal and subtidal areas and the need for integrated management measures. The results obtained may reflect the low hydrodynamic conditions present, but information on the hydrographic patterns is non-existent. The effect of tides can result in the displacement of specimens through water movement [94] and, depending on their height, can also expose organisms to desiccation for variable periods of time, which may hinder the distribution of most species towards the intertidal area. In particular, when analysed together, mangrove habitats contributed 6.6% (M1, 4.4%; M2, 2.2%) of the gamma diversity, contrasting with the unvegetated subtidal area and the seagrass meadows, which supported, respectively, 15.1% and 31.3% (S1, 12.9%; S2, 18.4%).
Mangrove forests can produce relatively large amounts of organic matter through the conversion of leaf litter into detritus [64], which is later exported to nearby habitats [95,96,97]. Therefore, the proximity of the mangrove stands to the shallow-water seagrass meadows most likely contributes to the higher biodiversity and, particularly, the higher density observed within the seagrasses. The populations of suspension-feeders, such as Barbatia foliata, which was dominant in the seagrass meadow (S1), support the idea of a higher availability of suspended particulate organic matter derived from, among others, nearby mangrove canopies; this higher availability will also support more resident organisms [68, 99]. Despite the high temporal variability observed in all habitats, highlighted by the dissimilarity indices, seagrass habitats showed comparatively higher stability, with the lowest values of inter-annual variability, similar to previous studies in temperate areas [8, 98]. These habitats also supported the highest number of resident species (i.e. those present in over 85% of the sampling periods). At the lagoon entrance, the exclusive presence of Schizaster gibberulus, a sea urchin previously associated with the near-shore coastal biotope in the region [16], suggests that the unvegetated area may be located along a corridor connecting offshore and lagoon communities, with patterns likely dependent on hydrodynamic processes [99]. Its position between the lagoon and the open coastal water may also explain the high number of species observed (121), with a large proportion being exclusively associated with this habitat (33.9%). It is worth noting that, given the generally low densities observed in the Red Sea [16, 100], future studies will require increased replication across multiple spatial scales to fully understand the dynamics of benthic macroinvertebrates under low nutrient, high temperature, and high salinity conditions. Therefore, conclusions related to abundance and diversity should be interpreted with caution.
The present findings reinforce the need for an integrated understanding of shallow-water habitats from a seascape perspective, as opposed to a fragmented analysis of isolated habitats [21, 101, 102]. Whereas the latter may be relevant when looking at particular species, the contribution of each habitat to the dynamics of the whole macrobenthic assemblage is relevant and should not be disregarded by managers aiming for marine biodiversity conservation. Indeed, in tropical regions, seagrass beds and mangroves have been reported as key nursery areas for several reef fishes, such as parrotfishes (Labridae, Scarini), grunts (Haemulidae) and snappers (Lutjanidae) [103,104,105,106], which rely on the macrobenthos as a food resource. Large-scale migrations (over 30 km) by juvenile snappers between inshore nursery habitats and reefs in the central Red Sea have been reported [22]. Also, mangrove forests have been linked to enhanced biomass and biodiversity of coral reef fishes [18, 21, 104, 107, 108]. Sustained connectivity of these habitats may enhance the capacity of coral populations to recover after disturbance [107]. Therefore, disturbing the corridors connecting coral reefs with other inshore habitats may even have consequences for reef conservation at the local scale.
Overall, the present study confirmed a decreasing gradient in the total number of species and in the number of exclusive species towards the mangrove habitats. It also supports the role of seagrass habitat complexity in promoting the diversity and density of organisms. Nevertheless, high and stable seagrass cover does not necessarily result in the highest biodiversity levels, although the presence of these plants plays an essential role in the biodiversity of coastal lagoons. Seagrass habitats, in contrast to the mangrove forests and the unvegetated area, showed lower inter-annual variability and a higher number of resident species, suggesting more stable communities.
Current findings highlight habitat-structured patterns and persistent patchiness, evidenced by a limited number of overlapping species (dominance of habitat specialists over generalists) within the seascape. This is particularly relevant considering the proximity of the analysed habitats, but may result from the low dominance levels compared to temperate regions [92, 98, 109]. Nevertheless, 22.8% of the gamma diversity was represented by taxa occurring in both subtidal and intertidal habitats. Hence, holistic (i.e. interconnected) seascape management approaches, rather than those focusing on single habitats, should be prioritized to protect biodiversity and fisheries [22, 110, 111].
Study area and sampling design
The present study was carried out in the Al Qadimah lagoon (22° 22′ 39.3″ N, 39° 07′ 47.2″ E), located in the central region of the Saudi Arabian Red Sea (Fig. 1a). This shallow lagoon (average depth 2.19 m) has an approximate area of 14 km2 and is not impacted by the direct anthropogenic disturbances typical of other coastal lagoons (e.g. freshwater or sewage discharges, fisheries, habitat destruction from coastal development). It is, however, situated between two urbanized areas that are increasing in size (King Abdullah University of Science and Technology, 7000 inhabitants; King Abdullah Economic City, currently 5000 inhabitants but expected to reach 50,000 in the near future) but that are not directly connected with the lagoon. Hence, it offers a rare opportunity to study the natural roles of environmental drivers in shaping the macrobenthic communities inhabiting such critical wetlands.
Well-developed mangrove stands of Avicennia marina are scattered along the extent of its margins. The bottom of the lagoon, particularly in the inner areas, is characterized by more or less fragmented seagrass meadows. To depths of approximately 50 cm, Cymodocea rotundata is the dominant species, with smaller patches of Cymodocea serrulata also being present. Below this depth, seagrass meadows are mainly characterized by mono-specific stands of Enhalus acoroides down to 2 m depth. Towards the sea, unvegetated bottoms with either sponges mixed with coral rubble or sand progressively replace the seagrass meadows.
In the Red Sea, there are two marked seasons (Fig. 1b): winter (November–April) and summer (May–October). In order to investigate inter-annual and seasonal changes in macrobenthic patterns, samples were collected in two different periods in winter (January; March) and summer (June; September) of 2014 and 2015. Five permanent soft-sediment habitats typical of tropical coastal lagoons were selected: 1. upper mangrove area (M1); 2. deeper mangrove area (M2); 3. shallow seagrass meadow (S1, mixed meadows of Cymodocea serrulata interspersed with Cymodocea rotundata; relatively high cover all year round); 4. deeper seagrass meadow (S2, monospecific stands of Enhalus acoroides with high variability in vegetation cover throughout the study period); and 5. unvegetated soft sediments (Fig. 1a). The unvegetated sandy substrate was located at between 8 and 10 m depth. Because of the widespread distribution of seagrasses and mangroves, and in order to minimize the direct influence of those habitats on the colonization patterns of unvegetated areas, this site was located at the entrance of the lagoon.
Sampling strategy
At each habitat and sampling period, conductivity, temperature, and depth (CTD) casts were carried out with a multiparameter probe (OCEAN SEVEN 316 Plus and 305 Plus); the casts also recorded oxygen saturation in the water column. Water samples for the analysis of chlorophyll a (chl a) were collected using a Niskin bottle at each station (2 L per station). Sediment samples were collected using a 0.1 m2 Van Veen grab in the seagrass meadows and the unvegetated area (subtidal stations), whereas in the mangrove habitats (intertidal), samples were collected using hand corers (3 × 10 cm i.d. making one replicate; total area per replicate ~ 0.024 m2). In 2014, two replicates were taken at each site and sampling date for the study of macrobenthic communities, with additional samples being collected for the study of environmental variables (grain particle-size distributions and organic matter content). In 2015, the same approach was followed, increasing the number of replicates for the study of macrobenthic communities to three. Macrobenthic samples were sieved through 1 mm mesh screens and preserved in 96% ethanol.
Laboratory analyses
In order to estimate the primary production in the sampling area, the concentration of chl a was quantified by fluorescence using EPA method 445.0 [112]. Water samples were filtered using GF/F filters immediately upon arrival at the laboratory. The filters were then preserved at −80 °C until pigment extraction. For each extract, 10 mL of 90% acetone was used and left for 24 h in cold and dark conditions; the whole procedure was undertaken in low light to minimize pigment degradation. A Turner Trilogy® fluorometer (Turner Designs) equipped with an acidification module was used to quantify the chl a content. The degradation of chlorophyll a to phaeophytin was accomplished by acidifying the sample with 60 µL of 0.1 N HCl.
Sediment samples were sorted after all vegetation associated with the sediment was removed. Organisms were identified to the species level whenever possible. Vegetation biomass (seagrass leaves, roots, and mangrove material) was quantified per replicate.
Grain particle-size distribution was quantified after initial wet sieving of the samples (63 μm mesh) to separate the silt and clay fraction from the sandy fractions and gravel. The retained fractions were dried at 80 °C for 24–48 h. The dried sandy and gravel sample was then mechanically sieved through a column of sieves to separate the sandy fractions and the gravel as follows: < 63 μm, silt–clay; 63–125 μm, fine sand; 250–500 μm, medium sand; 1000–2000 μm, coarse sand; > 2000 μm, gravel.
The organic content of the sediments was determined by loss on ignition (LOI). Sediments were dried for 24–48 h at 60 °C and the samples were then placed in a muffle furnace at 450 °C for 4 h. After cooling in a desiccator for 30 min, samples were weighed and the LOI was calculated using the following equation [113]:
$$\text{LOI} = \frac{W_\text{i} - W_\text{f}}{W_\text{i}} \times 100$$
where LOI = organic matter content (%); Wi = initial weight of the dried sediment subsample; Wf = final weight after ignition.
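For illustration, the calculation can be written as a one-line helper; this is only a sketch, and the argument names (w_initial, w_final) and the example values are ours rather than part of the original protocol.

```r
# Loss on ignition (LOI, %) from the dried weight (w_initial) and the
# post-ignition weight (w_final) of a sediment subsample, as in the equation above.
loi_percent <- function(w_initial, w_final) {
  (w_initial - w_final) / w_initial * 100
}

loi_percent(25.0, 23.8)  # e.g. 25.0 g dried, 23.8 g after ignition -> 4.8% organic matter
```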
General patterns
Macrobenthic patterns were analysed through a combination of univariate and multivariate techniques. Several univariate metrics were calculated, including the total number of taxa (S, species richness), density (ind. m−2), and Shannon–Wiener diversity (H′). Considering the different sampling methods and the dependency of species richness on sample size [114], estimates of species diversity were also calculated and compared with S. The nonparametric species richness estimators used (Chao 1, first-order Jackknife, and Bootstrap) all follow an asymptotic approach to estimate undetected species richness. These estimators are commonly used in ecological studies because they are simple, intuitive, relatively easy to use and perform reasonably well [115]. The bias-corrected form of the Chao 1 estimator [114, 116] uses the number of singletons and doubletons to estimate the lower bound of species richness. The first-order Jackknife estimator [117] assumes that the number of species that are missed equals the number seen only once (singletons). The Bootstrap estimator is based on the assumption that, if the same data are resampled with replacement, the number of missing species after resampling will be similar to the number missed originally [117]. All estimators were calculated using the open-source software R [118] with the function "specpool" from the "vegan" package [119]. Abundance data were used for the calculation of all estimators. In order to have a balanced number of replicates, the analyses were conducted on two replicates, with those collected in 2015 being randomly selected. Preliminary analysis showed that the same general patterns in composition and alpha-diversity were obtained for the 2014 and 2015 datasets.
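Since the estimators are reported to have been obtained with the "specpool" function of the vegan package, this step can be sketched as below. The object names (comm, a samples-by-OTUs abundance matrix, and habitat, the corresponding grouping factor) are hypothetical placeholders, not objects from the original analysis.

```r
library(vegan)

# comm: samples x OTUs abundance matrix; habitat: factor giving the habitat
# (M1, M2, S1, S2, Unv) of each sample. Both names are placeholders.
pool <- specpool(comm, pool = habitat)
pool[, c("Species", "chao", "jack1", "boot")]  # observed S and the three estimators

# Per-sample univariate metrics used alongside the estimators
S <- specnumber(comm)                    # number of OTUs (species richness)
H <- diversity(comm, index = "shannon")  # Shannon-Wiener diversity
```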
To visualize multivariate patterns of abundance in the macrobenthic communities within the seascape, non-metric multidimensional scaling (nMDS) was applied based on Bray–Curtis dissimilarities. Given the differences among habitats for some dominant species, when comparing habitats (i.e. the full dataset), Bray–Curtis dissimilarity matrices were calculated using untransformed abundance data. Separate nMDS plots were generated for each site for a better visualization of the temporal variability; these analyses were also based on untransformed data. Within each site, significant variability in the multivariate patterns over time was initially analysed according to a three-factor design (Year; Season; Date, nested within Season) using Permutational Multivariate Analysis of Variance (PERMANOVA). As the factor "Date" was found not to be significant, and to increase the power of the analysis, a two-factor PERMANOVA was applied. Whenever significant differences in the interaction term were detected (i.e. Year × Season), pair-wise tests were conducted.
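The ordination and PERMANOVA steps could be reproduced along the following lines with the vegan package; the text does not state which software was used for these particular analyses, so the function calls below (metaMDS, adonis2) are an assumption, and comm and meta are hypothetical object names.

```r
library(vegan)

# comm: untransformed abundance matrix; meta: data frame with Year and Season
# for each sample. Bray-Curtis dissimilarities on untransformed data.
bray <- vegdist(comm, method = "bray")

# nMDS (autotransform = FALSE keeps the abundances untransformed)
ord <- metaMDS(comm, distance = "bray", k = 2, autotransform = FALSE, trymax = 50)
plot(ord, type = "t")

# Two-factor PERMANOVA with the Year x Season interaction
adonis2(bray ~ Year * Season, data = meta, permutations = 9999)

# Pair-wise tests after a significant interaction are not built into vegan and
# would have to be run separately, e.g. on the relevant subsets of the data.
```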
Connectedness within the seascape and stability patterns over time
A preliminary investigation of the patterns of variability across the seascape was carried out to identify generalist versus specialist taxa, i.e. those that span multiple habitats versus those that are particularly associated with a specific habitat, respectively. We aimed to characterize the main differences in community patterns in terms of shared and exclusive species that could determine the degree of connectivity across the lagoon. This analysis was conducted on the whole dataset, disregarding seasonal and annual changes, as our main question related to the constancy of spatial changes in the different habitats.
Finally, we analysed the frequency of occurrence of species in each habitat during the study period. Species were classified based on their Habitat Preference Trait as follows: (i) resident, present in over 85% of the sampling dates (i.e. eight events); (ii) frequent, observed in between 50% and 85% of the dates; (iii) occasional, presence registered in between 25% and 50% of the sampling occasions; (iv) rare, observed in less than or equal to 25% of the sampling dates; (v) seasonal, only observed in one season but in both years.
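The classification amounts to a simple threshold rule on the proportion of sampling dates in which a taxon occurs. The sketch below illustrates it; how the boundary values (exactly 25%, 50% or 85%) are assigned follows our reading of the definitions above, and the "seasonal" category, which depends on per-season records, is not handled.

```r
# freq: proportion of the eight sampling dates in which a taxon was recorded.
classify_occurrence <- function(freq) {
  if (freq > 0.85) "resident"
  else if (freq >= 0.50) "frequent"
  else if (freq > 0.25) "occasional"
  else "rare"
}

classify_occurrence(7 / 8)  # "resident"
classify_occurrence(3 / 8)  # "occasional"
classify_occurrence(2 / 8)  # "rare"
```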
Community stability was also examined over the sampling period within each habitat, based on the Bray–Curtis (community structure) and Jaccard (presence/absence; composition) indices. Within each habitat, variability between all pairwise comparisons among the terms of interest (e.g. within and between seasons; within and between years) was analysed. We considered that low levels of similarity indicate high variability in the macrobenthic communities over time, whereas high similarity is indicative of more stable communities.
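As an illustration, within-habitat stability can be summarised as the mean pairwise similarity among sampling events, using either resemblance measure; comm_M2 below is a hypothetical abundance matrix restricted to one habitat, and the function name is ours.

```r
library(vegan)

# Mean pairwise similarity (%) among the sampling events of a single habitat;
# similarity is taken as 1 minus the corresponding dissimilarity.
mean_similarity <- function(comm_h, method = "bray", binary = FALSE) {
  d <- vegdist(comm_h, method = method, binary = binary)
  100 * mean(1 - d)
}

mean_similarity(comm_M2)                             # structure (Bray-Curtis)
mean_similarity(comm_M2, "jaccard", binary = TRUE)   # composition (Jaccard, presence/absence)
```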
Relationships between environmental variables and assemblage structure
Distance-based redundancy analysis (dbRDA) was used to assess the relationship between each environmental variable and the variation in community structure (given by the direction and length of the vectors for each variable). The variables used for the analysis were salinity, temperature, depth, grain size fractions, organic matter content (% LOI), and chl a. Three of the variables were transformed to reduce skewness, namely the fines and coarse sand fractions of the sediment (square root) and the organic matter content (natural log). Marginal tests were used to show the significance of each variable individually in the model, and sequential tests show the best subset of explanatory variables that explain the biological patterns.
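A dbRDA of this kind can be fitted in R with the vegan package, for instance with the dbrda function; the call below is a sketch under the assumption that env is a data frame holding the listed variables (the column names are ours) and comm the abundance matrix, so it is not the authors' exact analysis.

```r
library(vegan)

# Transformations as described in the text: square root for the fines and
# coarse sand fractions, natural log for organic matter (LOI).
env$fines_sqrt  <- sqrt(env$fines)
env$coarse_sqrt <- sqrt(env$coarse_sand)
env$loi_log     <- log(env$loi)

mod <- dbrda(comm ~ salinity + temperature + depth + coarse_sqrt + medium_sand +
               fine_sand + fines_sqrt + loi_log + chla,
             data = env, distance = "bray")

anova(mod, by = "margin", permutations = 9999)  # marginal test of each variable
anova(mod, by = "terms",  permutations = 9999)  # sequential tests (formula order)
# Stepwise selection of a reduced set of variables, if desired, could use ordistep().
plot(mod)  # dbRDA ordination with vectors for the environmental variables
```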
C/N: Carbon to Nitrogen ratio
Chl a: Chlorophyll a
CTD: Conductivity, temperature, and depth casts
dbRDA: Distance-based redundancy analysis
H′: Shannon–Wiener diversity
LOI: Loss on ignition
M1: Upper mangrove area
M2: Deeper mangrove area
nMDS: Non-metric multidimensional scaling
OTUs: Operational taxonomic units
PERMANOVA: Permutational Multivariate Analysis of Variance
S1: Shallow seagrass meadow
S2: Deeper seagrass meadow
Unv: Unvegetated soft-sediments
Harley CD, Randall Hughes A, Hultgren KM, Miner BG, Sorte CJ, Thornber CS, et al. The impacts of climate change in coastal marine systems. Ecol Lett. 2006;9:228–41.
Waycott M, Duarte CM, Carruthers TJB, Orth RJ, Dennison WC, Olyarnik S, et al. Accelerating loss of seagrasses across the globe threatens coastal ecosystems. PNAS. 2009;106:12377–81.
Camacho-Valdez V, Ruiz-Luna A, Ghermandi A, Nunes PA. Valuation of ecosystem services provided by coastal wetlands in northwest Mexico. Ocean Coast Manage. 2013;78:1–11.
Carvalho S, Moura A, Gaspar MB, Pereira P, da Fonseca LC, Falcão M, et al. Spatial and inter-annual variability of the macrobenthic communities within a coastal lagoon (Óbidos lagoon) and its relationship with environmental parameters. Acta Oecol. 2005;27:143–59.
Como S, Magni P. Temporal changes of a macrobenthic assemblage in harsh lagoon sediments. Estuar Coast Shelf Sci. 2009;83:638–46.
Kennish MJ, Paerl HW. Coastal lagoons: critical habitats of environmental change. CRC Marine Science: CRC Press; 2010.
Pereira P, de Pablo H, Carvalho S, Vale C, Pacheco M. Daily availability of nutrients and metals in a eutrophic meso-tidal coastal lagoon (Óbidos lagoon, Portugal). Mar Pollut Bull. 2010;60:1868–72.
Tagliapietra D, Pessa G, Cornello M, Zitelli A, Magni P. Temporal distribution of intertidal macrozoobenthic assemblages in a Nanozostera noltii-dominated area (Lagoon of Venice). Mar Environ Res. 2016;114:31–9.
Tagliapietra D, Pavan M, Wagner C. Macrobenthic Community Changes Related to Eutrophication in Palude della Rosa (Venetian Lagoon, Italy). Estuar Coast Shelf Sci. 1998;47:217–26.
Magni P, Micheletti S, Casu D, Floris A, De Falco G, Castelli A. Macrofaunal community structure and distribution in a muddy coastal lagoon. Chem Ecol. 2004;20:397–409.
Como S, Magni P, Casu D, Floris A, Giordani G, Natale S, et al. Sediment characteristics and macrofauna distribution along a human-modified inlet in the Gulf of Oristano (Sardinia, Italy). Mar Pollut Bull. 2007;54:733–44.
Newton A, Icely J, Cristina S, Brito A, Cardoso AC, Colijn F, et al. An overview of ecological status, vulnerability and future perspectives of European large shallow, semi-enclosed coastal systems, lagoons and transitional waters. Estuar Coast Shelf Sci. 2014;140:95–122.
Nagelkerken I. Evaluation of nursery function of mangroves and seagrass beds for tropical decapods and reef fishes: Patterns and underlying mechanisms. In: Nagelkerken I, editor. Ecological connectivity among tropical coastal ecosystems. Dordrecht: Springer; 2009. p. 357–99.
Pusceddu A, Gambi C, Corinaldesi C, Scopa M, Danovaro R. Relationships between meiofaunal biodiversity and prokaryotic heterotrophic production in different tropical habitats and oceanic regions. PLoS ONE. 2014;9:e91056.
Navarro-Barranco C, Guerra-García JM. Spatial distribution of crustaceans associated with shallow soft-bottom habitats in a coral reef lagoon. Mar Ecol. 2016;37:77–87.
Alsaffar Z, Cúrdia J, Borja A, Irigoien X, Carvalho S. Consistent variability in beta-diversity patterns contrasts with changes in alpha-diversity along an onshore to offshore environmental gradient: the case of Red Sea soft-bottom macrobenthos. Mar Biodivers. 2019;49:247–62.
Nagelkerken I, Van Der Velde G. Connectivity between coastal habitats of two oceanic Caribbean islands as inferred from ontogenetic shifts by coral reef fishes. Gulf Caribb Res. 2003;14:43–59.
Mumby PJ, Edwards AJ, Arias-González JE, Lindeman KC, Blackwell PG, Gall A, et al. Mangroves enhance the biomass of coral reef fish communities in the Caribbean. Nature. 2004;427:533.
Mumby PJ. Connectivity of reef fish between mangroves and coral reefs: algorithms for the design of marine reserves at seascape scales. Biol Conserv. 2006;128:215–22.
Dorenbosch M, Verberk W, Nagelkerken I, Van der Velde G. Influence of habitat configuration on connectivity between fish assemblages of Caribbean seagrass beds, mangroves and coral reefs. Mar Ecol Prog Ser. 2007;334:103–16.
Berkström C, Gullström M, Lindborg R, Mwandya AW, Yahya SA, Kautsky N, Nyström M. Exploring 'knowns' and 'unknowns' in tropical seascape connectivity with insights from East African coral reefs. Estuar Coast Shelf Sci. 2012;107:1–21.
McMahon KW, Berumen ML, Thorrold SR. Linking habitat mosaics and connectivity in a coral reef seascape. PNAS. 2012;38:15372–15376.
Unsworth RK, De León PS, Garrard SL, Jompa J, Smith DJ, Bell JJ. High connectivity of Indo-Pacific seagrass fish assemblages with mangrove and coral reef habitats. Mar Ecol Prog Ser. 2008;353:213–24.
Sheridan P. Benthos of adjacent mangrove, seagrass and non-vegetated habitats in Rookery Bay, Florida, USA. Estuar Coast Shelf Sci. 1997;44:455–69.
Lee SY. Mangrove macrobenthos: assemblages, services, and linkages. J Sea Res. 2008;59:16–29.
Kathiresan K, Alikunhi NM. Tropical coastal ecosystems: rarely explored for their interaction. Ecologia. 2011;1:1–22.
Skilleter GA, Loneragan NR, Olds A, Zharikov Y, Cameron B. Connectivity between seagrass and mangroves influences nekton assemblages using nearshore habitats. Mar Ecol Prog Ser. 2017;573:25–43.
Kristensen E, Bouillon S, Dittmar T, Marchand C. Organic carbon dynamics in mangrove ecosystems: a review. Aquat Bot. 2008;89:201–19.
Dissanayake N, Chandrasekara U. Effects of mangrove zonation and the physicochemical parameters of soil on the distribution of macrobenthic fauna in Kadolkele mangrove forest, a tropical mangrove forest in Sri Lanka. Adv Ecol. 2014;564056.
Amarasinghe MD. Misconceptions of mangrove ecology and their implications on conservation and management. Sri Lanka J Aquat Sci. 2018;23:29–35.
Short FT, Wyllie-Echeverria S. Natural and human-induced disturbance of seagrasses. Environ Conserv. 1996;23:17–27.
Mateo MA, Romero J, Pérez M, Littler MM, Littler DS. Dynamics of millenary organic deposits resulting from the growth of the Mediterranean seagrass Posidonia oceanica. Estuar Coast Shelf Sci. 1997;44:103–10.
Reusch TB, Boström C, Stam WT, Olsen JL. An ancient eelgrass clone in the Baltic. Mar Ecol Prog Ser. 1999;183:301–4.
Leopardas V, Uy W, Nakaoka M. Benthic macrofaunal assemblages in multispecific seagrass meadows of the southern Philippines: variation among vegetation dominated by different seagrass species. J Exp Mar Biol Ecol. 2014;457:71–80.
Paiva PC. Spatial and temporal variation of a nearshore benthic community in southern Brazil: implications for the design of monitoring programs. Estuar Coast Shelf Sci. 2001;52:423–33.
Taddei D, Frouin P. Short-term temporal variability of macrofauna reef communities (Reunion Island, Indian Ocean). In: Proceedings of 10th International Coral Reef Symposium (ICRS). Japanese Coral Reef Society, Okinawa, Japan; 2004. p. 52–57.
Bigot L, Conand C, Amouroux JM, Frouin P, Bruggemann H, Grémare A. Effects of industrial outfalls on tropical macrobenthic sediment communities in Reunion Island (Southwest Indian Ocean). Mar Pollut Bull. 2006;52:865–80.
Pech D, Ardisson P-L, Hernández-Guevara NA. Benthic community response to habitat variation: a case of study from a natural protected area, the Celestun coastal lagoon. Cont Shelf Res. 2007;27:2523–33.
Lamptey E, Armah AK. Factors affecting macrobenthic fauna in a tropical hypersaline coastal lagoon in Ghana West Africa. Estuar Coast. 2008;31:1006–19.
Magni P, Draredja B, Melouah K, Como S. Patterns of seasonal variation in lagoonal macrozoobenthic assemblages (Mellah lagoon, Algeria). Mar Environ Res. 2015;109:168–76.
Belal AAM, El-Sawy MA, Dar MA. The effect of water quality on the distribution of macro-benthic fauna in Western Lagoon and Timsah Lake Egypt. I. Egypt J Aquat Res. 2016;42:437–48.
O'Reilly CM. Seasonal dynamics of periphyton in a large tropical lake. Hydrobiologia. 2006;553:293–301.
Posey MH, Alphin TD, Banner S, Vose F, Lindberg W. Temporal variability, diversity and guild structure of a benthic community in the northeastern Gulf of Mexico. B Mar Sci. 1998;63:143–55.
Rosa LC, Bemvenuti CE. Variabilidad temporal de la macrofauna estuarina de la Laguna de los Patos Brasil. Rev Biol Mar Oceanog. 2006;41:1–9.
Norderhaug KM, Gundersen H, Pedersen A, Moy F, Green N, Walday MG, et al. Effects of climate and eutrophication on the diversity of hard bottom communities on the Skagerrak coast 1990–2010. Mar Ecol Prog Ser. 2015;530:29–46.
Ysebaert T, Herman PM. Spatial and temporal variation in benthic macrofauna and relationships with environmental variables in an estuarine, intertidal soft-sediment environment. Mar Ecol Prog Ser. 2002;244:105–24.
Biles CL, Solan M, Isaksson I, Paterson DM, Emes C, Raffaelli DG. Flow modifies the effect of biodiversity on ecosystem functioning: an in situ study of estuarine sediments. J Exp Mar Biol Ecol. 2003;285:165–77.
Giberto DA, Bremec CS, Acha EM, Mianzan H. Large-scale spatial patterns of benthic assemblages in the SW Atlantic: the Río de la Plata estuary and adjacent shelf waters. Estuar Coast Shelf Sci. 2004;61:1–13.
Shojaei MG, Gutow L, Dannheim J, Rachor E, Schröder A, Brey T. Common trends in German Bight benthic macrofaunal communities: assessing temporal variability and the relative importance of environmental variables. J Sea Res. 2016;107:25–33.
Desroy N, Retière C. Long-term changes in muddy fine sand community of the Rance Basin: role of recruitment. J Mar Biol Assoc UK. 2001;81:553–64.
Reiss H, Kröncke I. Seasonal variability of infaunal community structures in three areas of the North Sea under different environmental conditions. Estuar Coast Shelf Sci. 2005;65:253–74.
Van Hoey G, Vincx M, Degraer S. Temporal variability in the Abra alba community determined by global and local events. J Sea Res. 2007;58:144–55.
Wlodarska-Kowalczuk M, Pearson TH. Soft-bottom macrobenthic faunal associations and factors affecting species distributions in an Arctic glacial fjord (Kongsfjord, Spitsbergen). Polar Biol. 2004;27:155–67.
Wlodarska-Kowalczuk M, Pearson TH, Kendall MA. Benthic response to chronic natural physical disturbance by glacial sedimentation in an Arctic fjord. Mar Ecol Prog Ser. 2005;303:31.
Mincks SL, Smith CR. Recruitment patterns in Antarctic Peninsula shelf sediments: evidence of decoupling from seasonal phytodetritus pulses. Polar Biol. 2007;30:587–600.
Glover AG, Smith CR, Mincks SL, Sumida PY, Thurber AR. Macrofaunal abundance and composition on the West Antarctic Peninsula continental shelf: evidence for a sediment 'food bank' and similarities to deep-sea habitats. Deep-Sea Res Pt. 2008;55:2491–501.
Pawłowska J, Włodarska-Kowalczuk M, Zajączkowski M, Nygård H, Berge J. Seasonal variability of meio- and macrobenthic standing stocks and diversity in an Arctic fjord (Adventfjorden, Spitsbergen). Polar Biol. 2011;34:833–45.
Rueda JL, Fernández-Casado M, Salas C, Gofas S. Seasonality in a taxocoenosis of molluscs from soft bottoms in the Bay of Cádiz (southern Spain). J Mar Biol Assoc UK. 2001;81:903–12.
Guzmán-Alvis AI, Lattig P, Ruiz JA. Spatial and temporal characterization of soft bottom polychaetes in a shallow tropical bay (Colombian Caribbean). Boletin de Investig Mar y Costeras. 2006;35:19–36.
Hernández-Guevara NA, Pech D, Ardisson P-L. Temporal trends in benthic macrofauna composition in response to seasonal variation in a tropical coastal lagoon, Celestun Gulf of Mexico. Mar Freshwater Res. 2008;59:772–9.
Kanaya G, Suzuki T, Kikuchi E. Spatio-temporal variations in macrozoobenthic assemblage structures in a river-affected lagoon (Idoura Lagoon, Sendai Bay, Japan): influences of freshwater inflow. Estuar Coast Shelf Sci. 2011;92:169–79.
McCarthy SA, Laws EA, Estabrooks WA, Bailey-Brock JH, Kay EA. Intra-annual variability in Hawaiian shallow-water, soft-bottom macrobenthic communities adjacent to a eutrophic estuary. Estuar Coast Shelf Sci. 2000;50:245–58.
Nicolaidou A, Petrou K, Kormas KAr, Reizopoulou S. Inter-annual variability of soft bottom macrofaunal communities in two Ionian Sea lagoons. In: Martens K, Queiroga H, Cunha MR, Cunha A, Moreira MH, Quintino V, Rodrigues AM, Serôdio J, Warwick RM, editors. Marine biodiversity. Developments in Hydrobiology. Dordrecht: Springer Netherlands; 2006. p. 89–98.
Jackson EL, Rowden AA, Attrill MJ, Bossy SF, Jones MB. Comparison of fish and mobile macroinvertebrates associated with seagrass and adjacent sand at St. Catherine Bay, Jersey (English Channel): emphasis on commercial species. B Mar Sci. 2002;71:1333–1341.
Fredriksen S, Backer AD, Boström C, Christie H. Infauna from Zostera marina L. meadows in Norway. Differences in vegetated and unvegetated areas. Mar Biol Res. 2010;6:189–200.
Barnes RSK, Barnes MKS. Shore height and differentials between macrobenthic assemblages in vegetated and unvegetated areas of an intertidal sandflat. Estuar Coast Shelf Sci. 2012;106:112–20.
Daniel PA, Robertson AI. Epibenthos of mangrove waterways and open embayments: community structure and the relationship between exported mangrove detritus and epifaunal standing stocks. Estuar Coast Shelf Sci. 1990;31:599–619.
Sokołowski A, Ziółkowska M, Zgrundo A. Habitat-related patterns of soft-bottom macrofaunal assemblages in a brackish, low-diversity system (southern Baltic Sea). J Sea Res. 2015;103:93–102.
Pearman JK, Irigoien X, Carvalho S. Extracellular DNA amplicon sequencing reveals high levels of benthic eukaryotic diversity in the central Red Sea. Mar Gen. 2016;26:29–39.
Ellingsen KE, Hewitt JE, Thrush SF. Rare species, habitat diversity and functional redundancy in marine benthos. J Sea Res. 2007;58:291–301.
Benedetti-Cecchi L, Bertocci I, Vaselli S, Maggi E, Bulleri F. Neutrality and the response of rare species to environmental variance. PLoS ONE. 2008;3:e2777.
Edgar GJ. The influence of plant structure on the species richness, biomass and secondary production of macrofaunal assemblages associated with Western Australian seagrass beds. J Exp Mar Biol Ecol. 1990;137:215–40.
Nakamura Y, Sano M. Comparison of invertebrate abundance in a seagrass bed and adjacent coral and sand areas at Amitori Bay, Iriomote Island Japan. Fisheries Sci. 2005;71:543–50.
Włodarska-Kowalczuk M, Jankowska E, Kotwicki L, Balazy P. Evidence of season-dependency in vegetation effects on macrofauna in temperate seagrass meadows (Baltic Sea). PLoS ONE. 2014;9:e100788.
Hendriks IE, Sintes T, Bouma TJ, Duarte CM. Experimental assessment and modeling evaluation of the effects of the seagrass Posidonia oceanica on flow and particle trapping. Mar Ecol Prog Ser. 2008;356:163–73.
Herkül K, Kotta J. Effects of eelgrass (Zostera marina) canopy removal and sediment addition on sediment characteristics and benthic communities in the Northern Baltic Sea. Mar Ecol. 2009;30:74–82.
Schneider FI, Mann KH. Species specific relationships of invertebrates to vegetation in a seagrass bed. I. Correlational studies. J Exp Mar Biol Ecol. 1991;145:101–117.
Barberá-Cebrián C, Sánchez-Jerez P, Ramos-Esplá A. Fragmented seagrass habitats on the Mediterranean coast, and distribution and abundance of mysid assemblages. Mar Biol. 2002;141:405–13.
González-Ortiz V, Egea LG, Jiménez-Ramos R, Moreno-Marín F, Pérez-Lloréns JL, Bouma TJ, et al. Interactions between seagrass complexity, hydrodynamic flow and biomixing alter food availability for associated filter-feeding organisms. PLoS ONE. 2014;9:e104949.
McCloskey RM, Unsworth RKF. Decreasing seagrass density negatively influences associated fauna. PeerJ. 2015;3:e1053.
Ringold P. Burrowing, root mat density, and the distribution of fiddler crabs in the eastern United States. J Exp Mar Biol Ecol. 1979;36:11–21.
Mateo MA, Cebrián J, Dunton K, Mutchler T. Carbon flux in seagrass ecosystems. in: Larkum AWD, Orth RJ, Duarte CM, editors. Seagrasses: Biology, Ecology and Conservation. Dordrecht: Springer Netherlands; 2006. p. 159–192.
Santschi P, Höhener P, Benoit G, Buchholtz-ten Brink M. Chemical processes at the sediment-water interface. Mar Chem. 1990;30:269–315.
Hyland J, Balthis L, Karakassis I, Magni P, Petrov A, Shine J, et al. Organic carbon content of sediments as an indicator of stress in the marine benthos. Mar Ecol Prog Ser. 2005;295:91–103.
Pihl L, Svenson A, Moksnes P-O, Wennhage H. Distribution of green algal mats throughout shallow soft bottoms of the Swedish Skagerrak archipelago in relation to nutrient sources and wave exposure. J Sea Res. 1999;41:281–94.
Almahasheer H, Serrano O, Duarte CM, Arias-Ortiz A, Masque P, Irigoien X. Low carbon sink capacity of Red Sea mangroves. Sci Rep. 2017;7:9700.
Ingole B, Sivadas S, Nanajkar M, Sautya S, Nag A. A comparative study of macrobenthic community from harbours along the central west coast of India. Environ Monit Assess. 2009;154:135.
Geist SJ, Nordhaus I, Hinrichs S. Occurrence of species-rich crab fauna in a human-impacted mangrove forest questions the application of community analysis as an environmental assessment tool. Estuar Coast Shelf Sci. 2012;96:69–80.
Fusi M, Giomi F, Babbini S, Daffonchio D, McQuaid CD, Porri F, Cannicci S. Thermal specialization across large geographical scales predicts the resilience of mangrove crab populations to global warming. Oikos. 2015;124:784–95.
Dittmann S. Abundance and distribution of small infauna in mangroves of Missionary Bay, North Queensland Australia. Rev Biol Trop. 2001;49:535–44.
Alfaro AC. Benthic macro-invertebrate community composition within a mangrove/seagrass estuary in northern New Zealand. Estuar Coast Shelf Sci. 2006;66:97–110.
Ludovisi A, Castaldelli G, Fano EA. Multi-scale spatio-temporal patchiness of macrozoobenthos in the Sacca di Goro lagoon (Po River Delta, Italy). Transit Water Bull. 2013;7:233–44.
Pante E, Adjeroud M, Dustan P, Penin L, Schrimm M. Spatial patterns of benthic invertebrate assemblages within atoll lagoons: importance of habitat heterogeneity and considerations for marine protected area design in French Polynesia. Aquat Living Resour. 2006;19:207–17.
Norkko A, Cummings VJ, Thrush SF, Hewitt JE, Hume T. Local dispersal of juvenile bivalves: implications for sandflat ecology. Mar Ecol Prog Ser. 2001;212:131–44.
Gong WK, Ong JE, Wong CH, Dhanarajan C. Productivity of mangrove trees and its significance in a managed mangrove ecosystem in Malaysia. In: Universiti Malaya, Kuala Lumpur (Malaysia). Asian Symposium on Mangrove Environment: Research and Management. Kuala Lumpur (Malaysia). 25-29 Aug 1980.
Camilleri JC. Leaf-litter processing by invertebrates in a mangrove forest in Queensland. Mar Biol. 1992;114:139–45.
Demopoulos AW, Cormier N, Ewel KC, Fry B. Use of multiple chemical tracers to define habitat use of Indo-Pacific mangrove crab, Scylla serrata (Decapoda: portunidae). Estuar Coast. 2008;31:371–81.
Mistri M. Persistence of benthic communities: a case study from the Valli di Comacchio, a Northern Adriatic lagoonal ecosystem (Italy). ICES J Mar Sci. 2002;59:314–22.
Irlandi EA, Ambrose WG Jr, Orlando BA. Landscape ecology and the marine environment: how spatial configuration of seagrass habitat influences growth and survival of the bay scallop. Oikos. 1995;72:307–13.
Alsaffar Z, Pearman JK, Curdia J, Calleja MLl, Ruiz-Compean P, Roth F, Villalobos R, Jones BH, Ellis J, Móran AX, Carvalho S. The role of seagrass vegetation and local environmental conditions in shaping benthic bacterial and macroinvertebrate communities in a tropical coastal lagoon. Scientific Reports, in press.
Lundberg J, Moberg F. Mobile link organisms and ecosystem functioning: implications for ecosystem resilience and management. Ecosystems. 2003;6:87–98.
Tano SA, Eggertsen M, Wikström SA, Berkström C, Buriyo AS, Halling C. Tropical seaweed beds as important habitats for juvenile fish. Mar Freshwater Res. 2017;68:1921–34.
Nagelkerken I, van der Velde G, Gorissen MW, Meijer GJ, Van't Hof T, den Hartog C. Importance of mangroves, seagrass beds and the shallow coral reef as a nursery for important coral reef fishes, using a visual census technique. Estuar Coast Shelf Sci. 2000;51:31–44.
Lugendo BR, Pronker A, Cornelissen I, de Groene A, Nagelkerken I, Dorenbosch M, et al. Habitat utilisation by juveniles of commercially important fish species in a marine embayment in Zanzibar Tanzania. Aquat Living Resour. 2005;18:149–58.
Gullström M, Bodin M, Nilsson PG, Öhman MC. Seagrass structural complexity and landscape configuration as determinants of tropical fish assemblage composition. Mar Ecol Prog Ser. 2008;363:241–55.
Berkström C, Jörgensen TL, Hellström M. Ecological connectivity and niche differentiation between two closely related fish species in the mangrove-seagrass-coral reef continuum. Mar Ecol Prog Ser. 2013;477:201–15.
Mumby PJ, Hastings A. The impact of ecosystem connectivity on coral reef resilience. J Appl Ecol. 2008;45:854–62.
Saenger P, Gartside D, Funge-Smith S. A review of mangrove and seagrass ecosystems and their linkage to fisheries and fisheries management. RAP: FAO; 2013.
Khedhri I, Djabou H, Afli A. Trophic and functional organization of the benthic macrofauna in the lagoon of Boughrara–Tunisia (SW Mediterranean Sea). J Mar Biol Assoc UK. 2015;95:647–59.
Perry DC, Staveley TA, Gullström M. Habitat connectivity of fish in temperate shallow-water seascapes. Front Mar Sci. 2017;4:440.
Whitfield AK. The role of seagrass meadows, mangrove forests, salt marshes and reed beds as nursery areas and food sources for fishes in estuaries. Rev Fish Biol Fisher. 2017;27:75–110.
Arar EJ, Collins GB. Method 445.0: In vitro determination of chlorophyll a and pheophytin a in marine and freshwater algae by fluorescence. United States Environmental Protection Agency, Office of Research and Development, National Exposure Research Laboratory, Washington, DC, USA; 1997.
Heiri O, Lotter AF, Lemcke G. Loss on ignition as a method for estimating organic and carbonate content in sediments: reproducibility and comparability of results. J Paleolimnol. 2001;25:101–10.
Chao A. Estimating the population size for capture-recapture data with unequal catchability. Biometrics. 1987;783–791.
Gotelli NJ, Colwell RK. Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecol Lett. 2001;4:379–91.
Chiu C-H, Wang Y-T, Walther BA, Chao A. An improved nonparametric lower bound of species richness via a modified good–turing frequency formula. Biometrics. 2014;70:671–82.
Smith EP, van Belle G. Nonparametric estimation of species richness. Biometrics. 1984;40:119–29.
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. 2018.
Oksanen J, Blanchet FG, Kindt R, Legendre P, Minchin PR, O'hara RB, et al. vegan: Community Ecology Package R package version 2.5-2. 2018.
The authors would like to thank Saskia Kurten, Richard Payumo, Miguel Viegas, and Holger Anlauf for their assistance in the field and the laboratory analyses. We would also like to thank Carlos Navarro, Leandro Sampaio, Joana Oliveira, and Ascensão Ravara for their help in taxonomic identification. The authors are also indebted to the skippers and staff of the Coastal and Marine Resources Core Lab for their invaluable support in fieldwork activities. We are also grateful to Dr. John Pearman for proofreading this manuscript and providing comments that, along with those provided by the reviewers and the editor, greatly improved it. This research was partially supported by baseline funding provided by KAUST to Prof. Xabier Irigoien and by SAKMEO, the Saudi Aramco/KAUST Center for Marine Environmental Observations.
This research was supported by baseline funding provided by KAUST to Prof. Xabier Irigoien. SC and JC are funded by the Saudi Aramco/KAUST Center for Marine Environmental Observations (SAKMEO).
Red Sea Research Centre, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
Zahra Alsaffar, João Cúrdia, Xabier Irigoien & Susana Carvalho
Chemistry Department, College of Science, King Saud University (KSU), Riyadh, P.O. Box 2455, 11451, Saudi Arabia
Zahra Alsaffar
AZTI - Marine Research, Herrera Kaia, Pasaia, 20100, Spain
Xabier Irigoien
IKERBASQUE, Basque Foundation for Science, Bilbao, 48013, Spain
João Cúrdia
Susana Carvalho
ZA and SC designed the study. ZA, JC, XI and SC conducted the research, analysed and interpreted the data. ZA wrote the manuscript with the contribution of all the co-authors. All authors read and approved the final manuscript.
Correspondence to Susana Carvalho.
The conducted research followed the policies as approved by the King Abdullah University of Science and Technology (KAUST).
Alsaffar, Z., Cúrdia, J., Irigoien, X. et al. Composition, uniqueness and connectivity across tropical coastal lagoon habitats in the Red Sea. BMC Ecol 20, 61 (2020). https://doi.org/10.1186/s12898-020-00329-z
DOI: https://doi.org/10.1186/s12898-020-00329-z
Inter-annual variability
Spatial variability
Macrobenthic communities
Tropical habitats
Seascape connectivity
Conservation ecology and biodiversity research
Sustainable Environment Research
Research | Open | Published: 30 April 2019
Simulation of sustainable solid waste management system in Khulna city
Md Shofiqul Islam1 &
S. M. Moniruzzaman2
Sustainable Environment Research volume 29, Article number: 14 (2019)
Municipal solid waste management (MSWM) is the major environmental concern for Khulna, the third largest city of Bangladesh. The aim of the study was to determine the most environmentally friendly option of the MSWM system for Khulna city. The present system of MSWM in Khulna city was chosen as the baseline scenario, in which recycling, composting and landfilling account for 9.1, 4.4 and 86.5%, respectively, of the total managed waste. Different scenarios were developed by varying the percentages of recycling, composting and landfilling. The life cycle inventory analysis of the MSWM system was carried out with the integrated waste management model for each scenario. The model outputs of each scenario were classified into impact categories: emissions of greenhouse gases, acidic gases, smog precursors, and heavy metals and organics to air and to water, as well as the quantity of residual waste and the energy consumption or recovery. In the context of the aforesaid impact categories, scenario 7, consisting of 71% composting, 13.6% recycling and 15.4% landfilling, is the most favorable alternative for Khulna city.
Sustainable management of municipal solid waste (MSW) is a critical issue for the municipal authorities of most cities in the world because of the growing volume of waste and the presence of harmful chemicals and additives in different waste fractions [1,2,3]. In Bangladesh, the MSW management (MSWM) system is not well organized and is generally based on the collection and dumping of MSW [4]. In Khulna city, the quantity of total generated MSW is 420 to 520 t d−1 and the Khulna City Corporation (KCC) authority is responsible for waste management [5]. Through a door-to-door collection system, MSW is generally deposited at secondary disposal sites (SDS), either by the dwellers themselves or by community-based organizations or non-government organizations [6]. KCC performs MSWM through the transportation of MSW from SDS to the final disposal sites (FDS) at Rajbandh, about 7 km away from the main city [7]. The existing practice of MSWM has led to various emissions of greenhouse gases (GHG), such as carbon dioxide from the production of new materials and methane from the decomposition of organic waste in landfills [8]. Similarly, uncontrolled disposal of MSW is a latent cause of water pollution, public health problems, explosions and landslides.
The Waste Framework Directive does not state which assessment method should be used when deviating from the waste hierarchy, but one of the possibilities is life cycle assessment (LCA), which started as an assessment method for products and has, since the early 1990s, also been applied to waste management [9]. LCA is an effective decision-supporting tool for assessing different approaches to waste management by examining the environmental impacts associated with a product, process or service from cradle to grave, i.e., from the production of the raw materials to the final disposal of wastes [10,11,12,13]. For Khulna city, only a few studies have assessed sustainable MSWM by applying the LCA methodology. The aim of the present study is to determine a sustainable solid waste management system for Khulna city, with emphasis on recycling and composting, through LCA.
In Bangladesh, Khulna is situated below the Tropic of Cancer, around the intersection of latitude 22.49° N and longitude 89.34° E. The third largest city of the country, it has an estimated population of 1.5 million. The city has 31 wards, an estimated total land area of 47 km2, and a population density of 67,994 km−2 [5]. The whole city area was selected as the survey area. There is a separate department for MSWM in KCC, namely the conservancy department. The location of the study area in the context of Bangladesh is shown in Fig. 1.
Location of study area in context of Bangladesh
Survey in study area
A series of field surveys was carried out to find the amount of MSW used for landfilling, composting and recycling. The field surveys were conducted at each location of the SDS, large hauled container points (LHCP), small hauled container points (SHCP), and distinct collection routes (DCR) throughout the city. Numerous questionnaire surveys were conducted with the drivers of waste collection vehicles, employees of the conservancy department of KCC, workers of waste collection vehicles and landfill management staff to collect the quantity of fuel used in the collection and transportation of MSW. It is to be noted that the three major seasons in Bangladesh are the winter season (December to February), the summer season (March to May) and the rainy season (June to September). For the simplicity of the research, the year was sub-divided into two seasons, i.e., the dry season (October to March) and the wet season (April to September). Moreover, the amount of MSW from each location of SDS, LHCP, SHCP and DCR was recorded throughout the entire month of November 2016 for the dry season and throughout the entire month of July 2017 for the wet season.
Life cycle inventory analysis
The life cycle inventory analysis was done with the Integrated Waste Management (IWM)-2.0 model, an Excel™ model with a Visual Basic graphical interface [14]. In Europe, South America and Asia, the IWM model has been used as a decision-supporting tool to choose between various options for waste management in industry as well as in local government [15,16,17,18].
The major input values of the model were the composition of MSW, the amounts of recycled, composted and landfilled MSW, the average distance driven by collection and transportation vehicles, and the quantity of fuel consumed. The flow diagram for the life cycle inventory analysis is given in Fig. 2. The total quantity of waste collected at the curb (recyclables, organics and garbage) and the composition of the total waste stream were entered in input screen A of the model. In input screen B, the waste flow data, such as the quantity of waste sent for recycling, composting, land application, energy recovery and landfilling, were entered. The data related to the collection and transportation of waste in the system, such as the distance driven by collection trucks, the type of fuel used and the fuel efficiency, were entered in input screen C. In input screen D, users have the option of choosing the mix of power generation methods, or the average mix of power generation methods. Alternatively, a user can specify a custom grid by selecting the 'custom' option on screen D. The 'Custom' button was selected, allowing the user to input the percentage of power generated by each of the generating methods. Input screen E only appears if the user has entered a number greater than zero for the quantity of waste recycled. The data related to recovery rates were entered on this screen. The data related to energy consumption, percent residue, residue management, distance to markets and distance from the material recovery facility to the landfill were entered in input screen F. The data entered on input screen G include a breakdown, in tons, of the materials sent for composting, the composition of yard waste, energy consumption and the distance from the composting facility to the landfill. Input screen H only appears if the user has entered a number greater than zero for the quantity of waste land-applied in input screen B. The data entered in this screen were the composition of yard waste and energy consumption. The energy recovered and the energy recovery efficiency were entered in input screen I. In input screen J, the data related to gas recovery, energy recovery, annual precipitation and energy consumption were entered. All the data were entered in input screens A to J for each modelled scenario. Owing to space constraints, only the data entered for the baseline scenario are shown in Table 1.
Flow diagram for life cycle inventory analysis
Table 1 Details of input data for baseline scenario in IWM model
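To give a compact picture of how these inputs fit together, the following sketch groups them into a single record mirroring input screens A to J. It is an illustration only: the field names are hypothetical, the composition and waste flows are those quoted later in the paper, and the IWM-2.0 model itself is an Excel/Visual Basic tool with no such programming interface.

```python
from dataclasses import dataclass, field

@dataclass
class IWMInputs:
    """Illustrative grouping of the IWM-2.0 input screens A to J (field names are hypothetical)."""
    waste_composition: dict        # screen A: fractions of food waste, recyclables, etc.
    waste_flows_tpd: dict          # screen B: t/d sent to recycling, composting, landfilling, ...
    collection: dict               # screen C: distance driven, fuel type, fuel efficiency
    power_grid_mix: dict           # screen D: share of each power generation method
    recycling: dict = field(default_factory=dict)         # screens E-F: recovery rates, residues, distances
    composting: dict = field(default_factory=dict)        # screen G: feedstock breakdown, energy use
    land_application: dict = field(default_factory=dict)  # screen H: yard waste composition, energy use
    energy_recovery: dict = field(default_factory=dict)   # screen I: energy recovered and efficiency
    landfill: dict = field(default_factory=dict)          # screen J: gas/energy recovery, precipitation

# Baseline scenario: waste flows (t/d) and composition percentages as quoted later in the paper,
# the remaining entries are placeholders
baseline = IWMInputs(
    waste_composition={"food_and_vegetable": 0.789, "recyclable": 0.142, "other": 0.069},
    waste_flows_tpd={"recycling": 37.23, "composting": 18.0, "landfilling": 356.0},
    collection={"fuel": "diesel"},
    power_grid_mix={"custom": True},
)
```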
The outputs of each scenario were calculated using the IWM model and classified into impact categories: emission of GHGs, emission of acidic gases, emission of smog precursors, emission of heavy metals and organics to air, emission of heavy metals and organics to water, quantity of residual waste, and energy consumption or recovery. The percent reduction of emissions in the aforementioned categories compared to the baseline scenario was calculated by Eq. (1) as follows:
$$ ER\;\left(\%\right)=\frac{EB-EM}{EB}\times 100 $$
where ER = emission reduction, EB = emission of the baseline scenario, and EM = emission of the modelled scenario.
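For illustration, Eq. (1) can be evaluated directly; the short Python function below is only a restatement of the formula (it is not part of the IWM model), and the sample values are the GHG emissions of S-0 and S-7 reported in the results section.

```python
def emission_reduction(eb, em):
    """Percent emission reduction of a modelled scenario relative to the baseline, Eq. (1)."""
    return (eb - em) / eb * 100.0

# GHG emissions (t CO2 eq per day): baseline S-0 = 719.6, scenario S-7 = 172.4
print(round(emission_reduction(719.6, 172.4)))  # -> 76, the reduction reported for S-7
```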
Quantity of collected and transported MSW
The field survey reveals that there are 11 SHCPs with a capacity of 3000 kg each and 27 LHCPs with a capacity of 5000 kg each, as well as 12 DCRs and 17 SDSs at different locations in Khulna city. The study also finds that the total quantity of MSW collected and transported from SDS, LHCP, SHCP and DCR to the FDS is 374 t d−1, as shown in Table 2. From the FDS, only 18 t d−1 of MSW is used directly for composting purposes by a non-government organization. Therefore, the quantity of MSW managed through landfilling at the FDS is 356 t d−1.
Table 2 Quantity of collected and transported MSW by KCC in Khulna city
Modelled scenarios
Table 3 presents the description of the seven modelled scenarios for a sustainable waste management system in Khulna city. The present practice of MSWM in Khulna is chosen as the baseline scenario, in which recycling is taken as 9.1% (37.23 t d−1) from another study by the authors [7], composting as 4.4% (18 t d−1) from the field investigation, and landfilling as 86.5% (356 t d−1) from the field survey, out of the total managed waste (411.23 t d−1). Incineration is not considered in the modelled scenarios because no such facility is currently in operation in Khulna city. The baseline scenario is used as the reference against which modelled scenarios 1 to 4 are measured. Scenarios 5 to 7 represent combinations of different percentages of the MSW management techniques. It is to be noted that, based on the composition of MSW in Khulna city, the maximum percentage of compostable and recyclable MSW is considered in modelled scenario 7.
Table 3 Description of the modelled scenarios
It is estimated from the composition of solid waste of Khulna city that the recyclable waste in the city is about 14.2%, and the compostable food and vegetable waste is about 78.9% [19]. In scenario 1 (S-1), composting is increased to six times that of the baseline scenario (26.3%) because of the existing composting facility operated by a non-government organization named Rural Unfortunates Safely Talisman Illumination Cottage, locally called RUSTIC; recycling is kept at the same level as the baseline scenario (9.1%) and landfilling is decreased to 64.6%. This scenario emphasizes the composting technique of MSW in Khulna city [11]. Similarly, in scenario 2 (S-2), composting is increased to twelve times that of the baseline scenario (52.6%), recycling is kept at the same level as the baseline scenario (9.1%) and landfilling is decreased (38.3%). The reason for the further increment of the percentage of composting of MSW is to compare the amount of emission reduction of the different environmental parameters. In scenario 3 (S-3), composting is increased to its highest level of 71.0% (i.e., 90% of the compostable food and vegetable waste), corresponding to the quantity of compostable MSW available after excluding losses in collection, transportation and sorting from other MSW; recycling is kept at the same level as the baseline scenario (9.1%) and landfilling is decreased (19.9%).
In scenario 4 (S-4), recycling is increased to its highest level of 13.6% (the maximum recyclable MSW, excluding 5 to 6% material losses), composting is kept at the same level as the baseline scenario (4.4%) and landfilling is decreased (82.0%).
In scenario 5 (S-5), a combination is made by setting the composting level as in S-1 and the recycling level as in S-4. In scenario 6 (S-6), the composting level is set as in S-2 and the recycling level as in S-4. In scenario 7 (S-7), the composting level is set as in S-3 and the recycling level as in S-4. Figure 3 represents the amount of managed waste in the different modelled scenarios.
Amount of managed waste at different modelled scenarios
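The eight waste splits can also be summarised compactly in code; the snippet below merely restates Table 3 (the landfilling shares of S-5 and S-6 are inferred by difference, since the text gives only their composting and recycling levels), and the consistency check is an illustrative addition.

```python
# (composting %, recycling %, landfilling %) of the total managed waste
scenarios = {
    "S-0": (4.4, 9.1, 86.5),    # baseline
    "S-1": (26.3, 9.1, 64.6),
    "S-2": (52.6, 9.1, 38.3),
    "S-3": (71.0, 9.1, 19.9),
    "S-4": (4.4, 13.6, 82.0),
    "S-5": (26.3, 13.6, 60.1),  # landfilling inferred by difference
    "S-6": (52.6, 13.6, 33.8),  # landfilling inferred by difference
    "S-7": (71.0, 13.6, 15.4),
}

total_tpd = 411.23  # total managed waste in t/d
for name, shares in scenarios.items():
    assert abs(sum(shares) - 100.0) < 0.1, name  # the three shares must sum to 100%
    tonnes = [round(total_tpd * s / 100.0, 1) for s in shares]
    print(name, tonnes)  # t/d sent to composting, recycling and landfilling
```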
Based on the data gathered at the inventory analysis stage, the IWM model was run for a total managed waste of 411.23 t d−1 in each scenario. The results of the simulation were evaluated with respect to the environmental aspects for all the scenarios, as described below. It is to be noted that, in all tables, positive values indicate energy consumed or emissions released and negative values indicate energy recovered or emissions reduced.
Table 4 shows the summary of GHG emissions from the different modelled scenarios. The highest emission of GHGs (719.6 t CO2 eq d−1) was found in S-0 due to the highest percentage of landfilling (86.5%) and the lowest percentages of recycling (9.1%) and composting (4.4%). On the other hand, the lowest emission of GHGs (172.4 t CO2 eq d−1) was found in S-7 due to the lowest percentage of landfilling (15.4%) and the highest percentages of recycling (13.6%) and composting (71.0%). The maximum reduction of GHGs, as calculated by Eq. (1), was found in S-7 to be 76% compared to the baseline scenario.
Table 4 Emission of GHGs from modelled scenarios in net LCI
Table 5 shows the acidic gas emissions from the different modelled scenarios for the total waste management system. The emissions of acidic gases such as nitrogen oxides (NOx), sulfur oxides (SOx) and hydrochloric acid (HCl) were calculated by the model. For the total waste management system, the highest emission of acidic gases was found in S-0 due to the highest percentage of landfilling (86.5%) and the lowest percentages of recycling (9.1%) and composting (4.4%). On the other hand, the lowest emission of acidic gases was found in S-7 due to the lowest percentage of landfilling (15.4%) and the highest percentages of recycling (13.6%) and composting (71.0%). In S-7, the maximum reductions of NOx, SOx and HCl emissions were found to be 12, 50 and 78%, respectively, compared to the baseline scenario.
Table 5 Emission of acidic gases in total waste management system
Table 6 shows the emission of smog precursors such as NOx, particulate matter (PM) and volatile organic compounds (VOCs). For the total waste management system, the highest emission of smog precursors was found in S-0. On the other hand, the lowest emission of smog precursors was found in S-7 due to the lowest percentage of landfilling and the highest percentages of recycling and composting. The maximum reductions of NOx, PM and VOC emissions were also found in S-7, at 12, 28 and 69%, respectively, compared to the baseline scenario.
Table 6 Emission of smog precursors in total waste management system
Table 7 presents the emission of heavy metals and organics to air for the total waste management system. In the case of lead emission, the highest value was found in S-0, at 1858 mg d−1, due to the highest percentage of landfilling. Conversely, the lowest value was found in S-7, at 639 mg d−1, which is 65.6% lower than the baseline scenario. In the same way, the maximum emission reductions of mercury, cadmium and dioxins were found in S-7, at 47, 71 and 76%, respectively, compared to the baseline scenario.
Table 7 Emission of heavy metal and organics to air in total waste management system
Table 8 Emission of heavy metal and organics to water in total waste management system
In the case of the emission of heavy metals and organics to water for the total waste management system, the lowest emission was found in S-7, as shown in Table 8. In S-7, the maximum emission reductions of lead, mercury, cadmium, biochemical oxygen demand and dioxins were found to be approximately 75% compared to the baseline scenario.
Table 9 shows the quantity of residual waste for the total waste management system. For S-0, the maximum residual waste was found to be 358.8 t d−1 due to the larger quantity of landfilling. On the other hand, the minimum residual waste was found in S-7, at 80.8 t d−1. In addition, the maximum reduction of residual waste was found in S-7, at 78%.
Table 9 Quantity and the reduction of residual waste
Table 10 presents the amount of energy consumption or recovery of the modelled scenarios for the various waste management techniques. The maximum net energy recovery was found in S-4 (−1491 GJ d−1), S-5 (−1494 GJ d−1), S-6 (−1499 GJ d−1) and S-7 (−1502 GJ d−1), considering the large contribution of the virgin material displacement credit (−2276 GJ d−1). The variation of net energy recovery among these scenarios was insignificant, while the minimum net energy recovery was found in S-0 (−962 GJ d−1), S-1 (−966 GJ d−1), S-2 (−971 GJ d−1) and S-3 (−974 GJ d−1). In all the scenarios, net energy recovery increases with the increase in the percentage of recycling, although the corresponding amount of energy is insignificant compared to the other waste management techniques.
Table 10 Energy consumption in different modelled scenarios
The main conclusions drawn from the present study are as follows:
Scenario 7 has the lowest emissions of greenhouse gases, acidic gases, smog precursors, and heavy metals and organics to air and to water among all scenarios.
Scenarios 4 to 7 consume less energy than the other scenarios.
Scenario 7 has the minimum residual waste among all scenarios.
Therefore, it can be concluded that scenario 7 is the best waste management system for Khulna city of Bangladesh.
Jeswani HK, Azapagic A. Assessing the environmental sustainability of energy recovery from municipal solid waste in the UK. Waste Manag. 2016;50:346–63.
Tulokhonova A, Ulanova O. Assessment of municipal solid waste management scenarios in Irkutsk (Russia) using a life cycle assessment-integrated waste management model. Waste Manage Res. 2013;31:475–84.
Demirbas A. Waste management, waste resource facilities and waste conversion processes. Energ Convers Manage. 2011;52:1280–7.
Islam MS, Moniruzzaman SM, Alamgir M. Simulation of sustainable solid waste management system of Khulna city in Bangladesh through life cycle assessment. In: 16th International Waste Management and Landfill Symposium. Cagliari; 2017 Oct 2–6.
Ahsan A, Alamgir M, El-Sergany MM, Shams S, Rowshon MK, Nik Daud NN. Assessment of municipal solid waste management system in a developing country. Chin J Eng. 2014;2014:561935.
Alamin M, Hassan KM. Life cycle assessment of solid wastes in a university campus in Bangladesh. In: Wastesafe 2013 - 3rd International Conference on Solid Waste Management in Developing Countries. Khulna; 2013 Feb 10–12.
Moniruzzaman SM, Bari QH, Fukuhara T. Recycling practices of solid waste in Khulna city, Bangladesh. J Solid Waste Tech Manag. 2011;37:1–15.
Bari QH, Mahbub Hassan K, Haque R. Scenario of solid waste reuse in Khulna city of Bangladesh. Waste Manag. 2012;32:2526–34.
European Commission. Directive 2008/98/EC of the European parliament and of the council of 19 November 2008 on waste and repealing certain directives. Official J Eur Union. 2008;312:3–30.
Ozeler D, Yetis U, Demirer GN. Life cycle assesment of municipal solid waste management methods: Ankara case study. Environ Int. 2006;32:405–11.
Ogundipe FO, Jimoh OD. Life cycle assessment of municipal solid waste management in Minna, Niger state, Nigeria. Int J Environ Res. 2015;9:1305–14.
Al-Salem SM, Lettieri P. Life cycle assessment (LCA) of municipal solid waste management in the state of Kuwait. Eur J Sci Res. 2009;34:395–405.
Seo ESM, Kulay LA. Life cycle assessment: management tool for decision-making. J Integr Manag Occup Health Env. 2006;1:1–24.
White P, Franke M, Hindle P. Integrated solid waste management: a life cycle inventory. 2nd ed. Gaithersburg: Aspen Publication; 1999.
Rodriguez-Iglesias J, Maranon E, Castrillon L, Riestra P, Sastre H. Life cycle analysis of municipal solid waste management possibilities in Asturias, Spain. Waste Manage Res. 2003;21:535–48.
McDougall FR, Hruska JP. Report: the use of life cycle inventory tools to support an integrated approach to solid waste management. Waste Manage Res. 2000;18:590–4.
McDougall FR. Life cycle inventory tools: supporting the development of sustainable solid waste management systems. Corp Env Strat. 2001;8:142–7.
Clift R, Doig A, Finnveden G. The application of life cycle assessment to integrated solid waste management: part 1 - methodology. Process Saf Environ. 2000;78:279–87.
Alamgir M, Ahsan A, Bari QH, Upreti BN, Bhatttari TN, Glawe U, et al. Present scenario of municipal solid waste and its management. In: Alamgir M, McDonald C, Roehl KE, Ahsan M, editors. Integrated management and safe disposal of solid waste in least developed Asian countries - a feasibility study. Khulna: Wastesafe Publication; 2005. p. 135–228.
The authors wish to express their thanks to Khulna University of Engineering & Technology for the financial support to complete this research. The authors also wish to thank all officers and staff of the conservancy department of Khulna City Corporation for providing relevant data and assistance in this study.
Institute of Disaster Management, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
Md Shofiqul Islam
Department of Civil Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
S. M. Moniruzzaman
Both authors read and approved the final manuscript.
Correspondence to Md Shofiqul Islam.
Life cycle assessment
Birkhäuser Mathematics
Trends in Mathematics
Advances in Commutative Algebra
Dedicated to David F. Anderson
Editors: Badawi, Ayman, Coykendall, Jim (Eds.)
Presents a collection of research from David F. Anderson as well as other experts in the field
Provides a valuable source of cutting-edge research in a number of subfields of commutative algebra
Is useful to graduate students and researchers
Included format: EPUB, PDF
Hardcover 88,39 €
This book highlights the contributions of the eminent mathematician and leading algebraist David F. Anderson in wide-ranging areas of commutative algebra. It provides a balance of topics for experts and non-experts, with a mix of survey papers to offer a synopsis of developments across a range of areas of commutative algebra and outlining Anderson's work. The book is divided into two sections—surveys and recent research developments—with each section presenting material from all the major areas in commutative algebra. The book is of interest to graduate students and experienced researchers alike.
AYMAN BADAWI is Professor at the Department of Mathematics and Statistics, the American University of Sharjah, the United Arab Emirates. He earned his Ph.D. in Algebra from the University of North Texas, USA, in 1993. He is an active member of the American Mathematical Society and honorary member of the Middle East Center of Algebra and its Applications. His research interests include commutative algebra, pi-regular rings, and graphs associated to rings.
JIM COYKENDALL is Professor of Mathematical Sciences at Clemson University, South Carolina, USA. He earned his Ph.D. from Cornell University in 1995, and has held various academic positions at the California Institute of Technology, the University of Tennessee, Cornell University, Lehigh University, and North Dakota State University. He has successfully guided 12 Ph.D. students. His research interests include commutative algebra and number theory.
David Anderson and His Mathematics
Anderson, D. D.
On $\star$-Semi-homogeneous Integral Domains
Anderson, D. D. (et al.)
t-Local Domains and Valuation Domains
Fontana, Marco (et al.)
Strongly Divided Pairs of Integral Domains
Ayache, Ahmed (et al.)
Finite Intersections of Prüfer Overrings
Olberding, Bruce
Strongly Additively Regular Rings and Graphs
Lucas, Thomas G.
On t-Reduction and t-Integral Closure of Ideals in Integral Domains
Kabbaj, Salah
Local Types of Classical Rings
Klingler, L. (et al.)
How Do Elements Really Factor in $\mathbb{Z}[\sqrt{-5}]$?
Chapman, Scott T. (et al.)
David Anderson's Work on Graded Integral Domains
Chang, Gyu Whan (et al.)
Divisor Graphs of a Commutative Ring
LaGrange, John D.
Isomorphisms and Planarity of Zero-Divisor Graphs
Smith, Jesse Gerald, Jr.
Book Subtitle: Dedicated to David F. Anderson
Editors: Ayman Badawi, Jim Coykendall
Publisher: Birkhäuser Basel
Copyright Holder: Springer Nature Singapore Pte Ltd.
DOI: 10.1007/978-981-13-7028-1
Series ISSN
Number of Pages: XXII, 263
Number of Illustrations: 14 b/w illustrations, 2 illustrations in colour
Topics: Commutative Rings and Algebras
Precautionary approach vs fat tails: flawed politics vs solid facts
We had an exchange with AngularMan in the Greene vs Taleb thread. He basically said that it's risky to talk about the fat tails because this discussion encourages the precautionary principle and the ban of nuclear energy and fossil fuels.
Well, the "fat tails" and "precautionary principle" are sometimes conflated. The most sophisticated part of the defenders of the "precautionary principle" knows something about "fat tails" which is why they may use fat tails as an argument in favor of the precautionary principle. And this justification may sometimes be legitimate.
But in the full generality, these two phrases, "fat tails" and "precautionary principle", are completely different and independent things. The differences depend on the definitions of these two concepts – and various people may use different definitions. But with the most widespread definitions, one qualitative difference is self-evident: "fat tails" are a property that may exist or be absent and whose existence may be justified by legitimate rational arguments as a "positive statement" (what is true) while the "precautionary principle" is a legal or political principle i.e. basically a "normative statement" (how people should behave).
Let us be more specific. A fat-tail distribution is a distribution \(\rho(x)\) whose decrease for \(x\to \infty\) is slower than a Gaussian (normal) or any exponential decrease; the simplest fat-tail distributions often behave approximately as power laws \(\rho(x)\sim C/x^\alpha\) for \(x\to\infty\). Note that convergence of \(\int\rho\) requires \(\alpha\gt 1\).
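To see the difference numerically, here is a toy comparison (just an illustration, not a part of the argument itself): compare the tail probability \(P(X\gt x)\) of a normal distribution with that of a power-law (Pareto) distribution.

```python
import math

def normal_tail(x, sigma=1.0):
    """P(X > x) for a zero-mean normal distribution: a thin, faster-than-exponential tail."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

def pareto_tail(x, alpha=2.0, xmin=1.0):
    """P(X > x) for a Pareto distribution with density ~ 1/x^(alpha+1) above xmin: a fat tail."""
    return (xmin / x) ** alpha if x > xmin else 1.0

for x in (3.0, 5.0, 10.0, 30.0):
    print(x, f"normal: {normal_tail(x):.2e}", f"pareto: {pareto_tail(x):.2e}")
# The Gaussian tail dies super-exponentially while the Pareto tail only falls as 1/x^2,
# so events that are "impossible" under the Gaussian remain perfectly plausible under the fat tail.
```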
On the other hand, the precautionary principle says that one is obliged and authorities are obliged to assume that "a thing is dangerous and therefore banned" if no proof (or solid evidence) about the safety of "the thing" exists in one way or another. Wikipedia helpfully tells us that the European Union has adopted this crazy principle as a "statutory requirement" in whole areas of law.
Can you spot the difference? I hope that you can. The fat tails are a property of probability distributions that we may rationally discuss while the precautionary principle is just a religious dogma that some unelected officials worship and it can't be rationally discussed because it's stupid and because the unelected officials don't tolerate any rational thinking about these matters.
I think that with the definitions I have described, most people would wonder why "fat tails" and the "precautionary principle" have anything in common at all.
Fine, let us begin with a discussion why the precautionary principle (or precautionary approach) as defined above is idiotic. The principle assumes that if you have two possible laws, "A" and "non A", one of them may be labeled as the "potentially dangerous" and it's the one that must be avoided in the absence of evidence. But laws and propositions don't come with these God-given signs. The set of all possible propositions (or possible laws or policies) can't be divided to the "positive ones" and "negative ones". A statement "A" can't be proven to be an "a priori safe" or the "right default one" or the "positive one". You can't say that "A" is a positive statement because it doesn't contain "non". After all, "A" is exactly equivalent to "non(non(A))".
Let me give you an example. A smoking ban may be adopted because it hasn't been proven that smoking in restaurants doesn't lead to the death of a whole nation. However, the safety of the smoking ban hasn't been proven, either. The ban itself may also lead to the death of a nation. The smokers will feel terrible and kill everyone else before they kill themselves. And maybe the people start to rapidly collapse after they and their ancestors have lived without the vital vitamin called nicotine for 137 years. So the precautionary principle really means that the violent non-smokers are "in charge" while the smokers are second class citizens. So the violent non-smokers may declare the smokers dangerous and the application of the precautionary principle means that the smokers may be suppressed. But there's no logical justification that it has to be like that. The smokers could also be "in charge" and declare all the violent non-smokers dangerous.
The precautionary principle is nothing else than a regulation that places a class of citizens above others and it's generally assumed that everyone knows what is the safer part of the citizens that must be "in charge": the lazy people who aren't doing anything creative or anything that would make them deviate from the most average people, Luddites, environmentalists, and similar folks. The precautionary principle has been largely adopted or hijacked by these movements that we generally consider left-wing which is why the "precautionary principle" says nothing else than that the left-wingers and NGOs should be in charge. As we saw, the "precautionary principle" becomes much more subtle when we talk e.g. about mass immigration. Mass immigration obviously carries some significant risks and the situation is analogous – except that it is much more justified – as an example of a situation in which the precautionary principle should be used. But it's not being used because everyone knows that the "precautionary principle" should always be a tool to support the left-wing ideologies and the people paid by George Soros, among related filth.
So the precautionary principle is nothing else than a dishonesty, a deliberately introduced asymmetry in the thinking. It can't be consistently applied to questions about policies. After all, we may say that it's logically self-contradictory. The proof is analogous to other proofs of various incarnations of the "liar paradox".
The question is whether we may prove that the precautionary principle allows the society and mankind to survive. There's no proof that the society may survive with that so according to the precautionary principle, the precautionary principle must be banned! ;-)
OK, while the precautionary principle – as defined by Wikipedia or the EU – is self-evidently a dishonest and irrational distortion of the rational thinking, a fat tail is meant to be something else, namely a property of a statistical distribution that may be fully justifiable or provable in many cases.
The rational decision not only in policymaking is based on the cost-benefit analysis. Imagine that you're deciding whether you should adopt a new law, "A", or keep the current status "non A". (In general, we may be comparing more options.) We don't know what will exactly happen given "A" or "non A". The consequences may be good (positive) or bad (negative). Imagine that there are possible scenarios labeled by parameters \(\lambda_i\) and we group them by the overall well-being \(W(\lambda_i)\) that we evaluate in some way.
The rational decision whether we adopt "A" or keep "non A" is based on the cost-benefit analysis. We compute the expectation value\[
\langle W\rangle_A = \int d^n \lambda_i\,\rho(\lambda_i) W(\lambda_i)_A
\] where \(\rho(\lambda_i)\) is the probability density that the parameters have the values around \(\lambda_i\). The probability distribution is normalized so that the integral above is equal to one if \(W(\lambda_i)\) is replaced by \(1\). OK, things are obvious and rational: if \(\langle W\rangle_A\gt\langle W\rangle_{{\rm non}\,A}\), then it's a good idea to adopt the policy "A", otherwise it's not. (Let's ignore the "infinitely unlikely" case in which the cost-benefit analysis ends up ambiguously; in that case, no rational decision may be justified.)
Note that the cost-benefit analysis doesn't require you to say which law is "A" and which law is "non A". If you exchange the meaning of "A" and "non A", the expectation values get exchanged as well, and whenever the first was greater than the second, the first will be smaller than the second, and vice versa. So you will obviously end up with the same recommendations for the laws.
OK, can it have any relationship with the precautionary principle? In the precautionary principle, when it's at least slightly justified, it's assumed that the distribution \(\rho(\lambda_i)\) isn't really known. And the function of well-being \(W(\lambda_i)\) may be unknown, too. But there may still exist arguments that\[
\exists \lambda_i:\quad \rho(\lambda_i)\neq 0, \,\,W(\lambda_i)\to-\infty
\] So for some possible choice of the parameters \(\lambda_i\), some future that can't be excluded, the well-being is minus infinite. The latter statement typically means that the whole civilization dies or at least someone dies etc. If the law "A" introduces some significantly nonzero risk that everything we like will die (e.g. the mankind), and this risk didn't exist for the law "non A", then it's a better idea not to adopt the law "A".
That's a variation of the precautionary principle that is actually justified – it's justified by the cost-benefit analysis, a rational attitude to all these "should we adopt A" questions.
Again, don't forget that this is not how the "precautionary principle" is usually used. The precautionary principle is being used even in situations in which the worst-case scenario is much less dramatic than the destruction of the mankind. And it is being used even in the situation in which the risk of total destruction exists even with the law "non A" and no one can actually show that \(\langle W\rangle\) would get worse under the law "A".
In other words, the "precautionary principle" may sometimes mean a policy that may be shown to be wise – and "refined versions" of this argumentation exist. But much more likely, it is applied as a policy to distort the behavior in a way that cannot be justified at all. The principle is used as an illegitimate tool to strengthen the power of a predetermined "winner". These two levels of the precautionary principle are often being conflated. In some cases, this kind of reasoning looks OK, so the whole public is often being brainwashed and led into thinking that the general precautionary approach is always wise or safer. Except that it is not.
So far, I've mentioned that the cost-benefit analysis is the rational way to decide whether it's a good idea to adopt "A". Sometimes, it may justify the precautionary principle but it most cases when people refer to the principle, the cost-benefit analysis doesn't justify it. For anyone who understands these things – what it means to think rationally in the presence of uncertainty – the rest is all about examples. Is the usage of the precautionary principle or warnings about fat tails legitimate in one particular situation or another?
Aside from the situations in which people respond relatively rationally, I would find examples in which people are ignoring the "fat tails" even though they shouldn't. And on the contrary, they are sometimes mentioning them even though they don't help them either because they don't exist or because they're not fat enough.
AngularMan stated that "fat tails" justify the ban on nuclear energy or fossil fuels. I don't think so. There's no plausible way of getting "globally destructive" or even "huge" losses because of either of them. Chernobyl was bad enough but it killed some 50 people directly, the indirect later deaths are at most a few thousand, and the direct losses were $15 billion while the indirect ones were $250 billion in the subsequent 30 years.
Just in the U.S., nuclear energy produced 800 terawatt-hours in 2015. Kilo, mega, giga, tera. You see that it's 800 billion kilowatt-hours. Count at least $0.12 per kilowatt-hour and you will see that nuclear energy has revenues of $100 billion a year or so just in the U.S. A sizable fraction of it is profit. No doubt, the damages of Chernobyl have been repaid. Chernobyl was really a worst-case scenario. You expect a future accident – that will materialize at some point – to be much less harmful. In many cases, one may give a near proof that things won't be as bad as in Chernobyl.
Fossil fuels won't destroy the world, either. They won't destroy it directly but they can't destroy it indirectly, e.g. through global warming, either. The effect of CO2 on the temperature is proportional to the "climate sensitivity" \(\Delta T\), the warming per doubling of the CO2 concentration. Its value isn't known too accurately – if it is known at all. The simplest feedback-free calculation gives about \(\Delta T\sim 1.2\,{\rm K}\). The IPCC says that this figure gets approximately doubled by positive feedbacks, so \(\Delta T\sim 2\,{\rm K}\).
Two degrees of warming (and even 10 degrees if you could get them in some way) won't lead to the end of the world or the mankind, of course, which is why the assumption of the precautionary principle that you need an "infinite destruction" to make the argument valid isn't obeyed. All conceivable consequences have thin tails.
The climate sensitivity itself is unknown and you could suggest that the probability distribution \(\rho(\Delta T)\) has a fat tail. Does it?
If you only used some evidence – more precisely, one particular theoretical method to calculate the distribution for \(\Delta T\) which ignores everything else – you could conclude that the climate sensitivity has a fat tail. Why? Because we may write \(\Delta T\) in terms of the feedback-free part \(\Delta T_0\) and the feedback coefficient \(f\):\[
\Delta T = \frac{\Delta T_0}{1-f}
\] The factor \(1/(1-f)=1+f+f^2+\dots\) may be visualized as this geometric series, as the sum of the "correction \(f\)" and the "correction arising from the correction", and so on. When \(f\lt 0\), we talk about negative feedbacks and \(\Delta T\lt \Delta T_0\). When \(1\gt f\gt 0\), the net feedbacks are positive and \(\Delta T\gt \Delta T_0\). When \(f\gt 1\), it's even worse because the geometric series is divergent (although formally, the sum is negative), and what you get is a runaway behavior: the deviation of the temperature from the equilibrium grows exponentially for some time, before this effective description breaks down.
If \(f\) has a distribution that has a nonzero probability to be between \(0.99\) and \(1.01\), for example, then \(1/(1-f)\) and therefore \(\Delta T\) has a distribution with a fat tail near \(\Delta T\to \infty\) which arises from \(f\to 1\). It could easily happen, with the probability comparable to \(p=1/10,000\), that \(f\sim 0.9999\), and therefore \(\Delta T\) could be \(10,000\) times greater than \(\Delta T_0\), formally 10,000 degrees. A typical fat tail. Of course, we don't want the civilization to end in a 10,000 °C hell with the probability as high as \(p=1/10,000\) which is why a CO2 ban could be justified.
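The mechanism is easy to check numerically. In the toy sampling below, the distribution of \(f\) is completely made up, but you can see that the bulk of the draws give a modest sensitivity while the rare draws with \(f\) close to one dominate the upper tail of \(\Delta T = \Delta T_0/(1-f)\).

```python
import random

random.seed(1)
dT0 = 1.2            # feedback-free sensitivity in kelvins
samples = []
for _ in range(100_000):
    f = random.gauss(0.5, 0.17)   # made-up distribution of the feedback factor
    if f < 1.0:                   # discard the (runaway) draws with f >= 1
        samples.append(dT0 / (1.0 - f))

samples.sort()
n = len(samples)
print("median sensitivity:", round(samples[n // 2], 2), "K")
print("99.9th percentile :", round(samples[int(0.999 * n)], 1), "K")
# The median stays near 2-3 K but the extreme percentiles are huge because 1/(1-f)
# blows up as f -> 1: that is the fat tail induced by the feedback formula.
```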
But as I said, this "fat tail" only survives if you refuse to acknowledge any other evidence – whether it's empirical evidence or other theoretical considerations. For example, in 2010, I argued that the sensitivity can't be high and positive i.e. \(f\to 1\) is virtually impossible because if the probability were substantial for \(f=0.9999\), the values \(f=1.0001\) would have to be similarly likely, too. In fact, \(f\) isn't a universal physical constant but probably evolves with the conditions on Earth. And if \(f\) had been (sufficiently) above one, it would have happened that during the 5-billion-years history, the Earth would have already experienced the lethal runaway behavior of the global warming.
The evidence arguably shows that it couldn't have happened for 5 billion years. That's why \(\rho(f)\) for \(f\sim 1.01\) or whatever must be basically zero (the inverse of the longevity of the Earth); by continuity (or fluctuations of \(f\)), \(f\sim 0.99\) is also ruled out, and that's why we may rule out sensitivities of order a hundred degrees – and rule them out much more safely than to make the probability \(1/100\).
This was an extreme argument – and how far it gets you depends on your assumption on the continuity of \(\rho(f)\) and/or the size of the fluctuations of \(f\) during the Earth's history. There are saner ways to rule out the huge sensitivities, of course. If the sensitivity were above 5 °C, then the predicted warming per decade in 8 recent decades would be around 0.3 °C. The probability that you would get (as we observed) less than 0.2 °C per decade in each of these 8 decades would be something like \(p\sim (1/3)^8 \sim 0.00015\) so with the certainty around 99.99%, you may say that this argument is enough to be convinced that the sensitivity must be smaller than 5 °C. There are other, partially but not completely independent, arguments excluding high sensitivities which may help you to rule out even smaller sensitivities. My basic argument will be getting increasingly strong if the mild warming (or cooling) will continue, of course. The longer history you observe, the more accurately you may eliminate noise – the more reliably you may identify the measured trend with the "real underlying" one.
At the end, the fat tail just isn't there if you take a sufficient amount of theoretical arguments and empirical data into account. In other words, "really big" values of the climate sensitivity are excluded at a huge significance level. The tail is basically thin. Maybe it's a power law but it would have to be a quickly decreasing power law. It is therefore legitimate to assume that the sensitivity isn't insane and the Gaussian distribution for \(\Delta T\) is good for almost all purposes. The value of \(\Delta T\) is some 1 °C plus minus 1 °C or so. Richard Lindzen and a collaborator have claimed to derive a much narrower error margin around a figure that is close to (but a bit smaller than) 1 °C. But almost everyone else has error margins comparable to 1 °C. If Lindzen is wrong, no one has really done a better job than the 1 °C plus minus 1 °C that I have mentioned. And given this big uncertainty, it doesn't really make much sense to be more accurate. Everything between –1 °C and +3 °C is somewhat realistically possible, values around +1 or +2 °C are the most likely and the "linearized" analysis is OK.
The damages caused by a 0.5 °C or 1.0 °C warming between 2017 and 2100 – which follows from the 1 °C or 2 °C sensitivity, respectively – surely has a vastly lower magnitude than those caused by the ban of a majority of fossil fuels etc. over the following decades (just compare how much you would personally lose if the temperature increased by one degree; and if you couldn't use any fossil fuels or things that required them – you may multiply both numbers by 7 billion if it makes it easier for you to understand that this is exactly the global questions we're discussing) which is why the cost-benefit analysis unambiguously says that when it comes to the fight against climate change, the only rationally justifiable policy is to have courage and do nothing. Comments about fat tails are just wrong because the tail isn't fat here. There's a significant uncertainty in the climate sensitivity but all the conceivable values are qualitatively analogous – the sensitivity is at most of order one degree Celsius.
The really dangerous phenomena have the fat tail. In many cases, it's because the damages basically grow exponentially for some time. The damages are\[
|\langle W\rangle | = \exp(D)
where \(D\) is the effective number of \(e\)-foldings over which the problems grow exponentially. The quantity \(D\) itself has some distribution and its width may be e.g. \(10\). But when the exponent changes by ten, the exponential changes multiplicatively by a factor of \(\exp(10)\sim 22,000\) or so. That's why the uncertainty in \(D\) is very important and the more extreme yet conceivable values of \(D\) completely dominate the formula for the expected damages \(|\langle W\rangle |\).
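To be concrete, assume for a moment that \(D\) is normally distributed with the mean \(\mu\) and the width \(\sigma\). The standard log-normal identity then gives\[
\langle \exp(D)\rangle = \exp\left(\mu+\frac{\sigma^2}{2}\right)
\] so a width \(\sigma\sim 10\) makes the expected damages larger than the naive estimate \(\exp(\mu)\) by a factor of \(\exp(50)\); it is the poorly known upper range of \(D\), not its typical value, that controls \(\langle |W|\rangle\).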
That's when the precautionary principle is actually justified. If you can't prove that \(D\sim 20\) is impossible, you should better assume that it's possible.
Again, this danger only exists in situations in which "some exponential growth" of some bad or dangerous things may be shown to be possible. Pandemics. Mass conversion of Muslims to the radical Islam – which would be a special case of pandemics, too. Or something of the sort. Yes, nuclear energy did potentially contain similar threats in which the precautionary principle could have been applied.
For some time after the war, even some top physicists weren't certain that it was impossible for the thermonuclear weapons to ignite a chain reaction in the atmosphere and burn the whole atmosphere or the Earth. A nuclear explosion does involve some exponential reaction – a neutron breaks a larger number of nuclei that produce a larger number of neutrons, and so on. But can't the whole atmosphere become one giant bomb when a good enough thermonuclear weapon is detonated?
At the end, a rather simple calculation is enough to see that it can't happen. But it's right to check such dangers when you sell your first thermonuclear weapons, among other things. However, when the analysis of possible processes and threats is already done accurately, it's a good idea not to deny these "things are OK" arguments. The most widespread usage of the "precautionary principle" is when some people simply deny all "things are safe" arguments altogether. They shouldn't be using fancy phrases such as the precautionary principle in these situations at all – instead of a principle, what they're doing is just plain dishonesty.
In a complete discussion of these matters, there would be a big chapter dedicated to financial risks, financial black swans, and similar things. Technically, it's surely correct to say that many tails in the financial distributions are fat – in the sense of decreasing much more slowly than exponentially, e.g. as power laws. So many people often assume that big changes are really impossible even though they are not as impossible. These are matters that everyone who is doing some risk management should know. Also, the fat tail discussion may often be important because exponentially growing "chain reactions" of problems and bankruptcies similar to the nuclear blast may take place in the financial world – that's why the talk about the domino effect may sometimes be legitimate.
On the other hand, the realistic power laws are often enough to be rather safe. And the chain reactions and domino effects are usually impossible even when lots of people say that they are possible. Companies ultimately are – or should be – mostly independent entities that are created and that die in isolation from others. Every company (and every individual) should be primarily responsible for itself (or himself). The efforts to link and include everyone into one holistic bloc may look "nice" to someone – because "unity" is so nice and politically correct – but they actually increase the vulnerability of the whole system which is normally resilient partly thanks to the isolation between companies, individuals, nations, and civilizations. The domino effects sometimes emerge but it's because of a self-fulfilling prophecy: traders think that everyone is connected, and therefore they bring everyone into trouble (all similar banks go bust etc.). But it doesn't have to be so and in a functional capitalist economy with rational players, it shouldn't be so. An unhealthy chain reaction may exponentially grow in a bank but a competing bank is already "outside the bomb" and won't continue in the spreading of the fire, just like the atmosphere isn't a continuation of the H-bomb.
So while I think that there exist people who underestimate fat tails and risks in the financial world (and lots of people and especially collectives underestimated the risks before the 2008 downturn or before various flights of space shuttles etc.), I think that it's much more typical these days for people to overestimate the potential for big problems and the fatness of the tails.
Forecasts of future scenarios for airport noise based on collection and processing of web data
Marco Pretto ORCID: orcid.org/0000-0003-1194-0658 1,
Pietro Giannattasio1,
Michele De Gennaro2,
Alessandro Zanon2 &
Helmut Kuehnelt2
European Transport Research Review volume 12, Article number: 4 (2020)
This paper presents an analysis of short-term (2025) scenarios for noise emission from civil air traffic in airport areas.
Flight movements and noise levels at a given airport are predicted using a web-data-informed methodology based on the ECAC Doc.29 model. This methodology, developed by the authors in a previous work, relies on the collection and processing of air traffic web data to reconstruct flight events to be fed into the ECAC model. Three new elements have been included: i) topographic information from digital elevation models, ii) a fleet substitution algorithm to estimate the impact of newer aircraft, and iii) a generator of flight events to simulate the expected traffic increase.
The effects of these elements are observed in 2025 scenarios for the airports of London Heathrow, Frankfurt and Vienna-Schwechat. The results quantify the noise reduction from new aircraft and its increment due to the air traffic growth forecast by EUROCONTROL.
Since 2015, air transport in the world has been growing at a steady rate of about 7% per year, with almost 4.1 billion passengers carried by scheduled flights in 2017 [1]. Around 26% of them were served in Europe, which in the same year saw almost 11 million flights and more than 21 million flight operations, expected to increase by up to 84% by 2040 [2] also thanks to the contribution of low-cost carriers [3]. This large development, however, poses important threats such as the increase in air pollution and noise, which the EU has addressed by setting out ambitious goals in its Flightpath 2050 [4]. According to this plan, future aircraft should lead to a reduction by 75% in CO2 emissions, 90% in NOx emissions and 65% in perceived noise compared to the average new aircraft in 2000. Improvements in this regard have already been accomplished in the past [5], and new practices such as aircraft electrification or biofuel adoption appear promising [6, 7].
Alongside technical developments, achieving the target of a sustainable growth also requires quantifying the effect of present and future air traffic, and this is typically carried out by using suitable prediction models. Concerning aircraft noise, a large number of models with different degrees of accuracy and complexity have been developed in the past [8]. Among them are best-practice methods, which rely on standardised datasets to enable fast computation of aircraft noise in large airport areas, and are therefore used by national aviation agencies in many countries.
Independently of the model, a key requirement for effective noise prediction is extensive information on flight movements and aircraft models, which proved difficult to retrieve up until a few years ago. In recent times, however, the introduction of ADS-B transponders has given rise to flight tracking websites such as Flightradar24 [9] and FlightAware [10], which use and rearrange information from these transponders and other sources to provide the public with flight movement data in real time. The available amount of information, steadily growing thanks to EU's obligation to install ADS-B on all large aircraft by 2020 [11], is already large enough to enable statistical analysis of aircraft performance [12]. Moreover, when these data are paired with additional Internet-based sources such as aircraft model databases, it becomes possible to define flight events at an airport and use them as an input to a best-practice model. This was recently done by the present authors, who used the ECAC Doc.29 model and Internet-based data sources to compute historical noise contours at multiple European airports [13, 14].
Having demonstrated the viability of large-scale noise computation from web-based data, the present work aims to show that this approach can be used also for short-term noise forecasts. To do this, reliable predictions of future aircraft fleet composition and flight movements at a given airport are required. Two algorithms are introduced for updating to 2025 the aircraft fleet and reconstructing additional flight events due to the traffic increase expected for the same year. These algorithms are applied to historical flight movement data at three European airports, and the resulting flight events are used to predict the future airport noise contours. The present approach, upgraded to account for topographic data from digital elevation models, can be used for airports of different size and passenger volume if appropriate traffic forecasts are available.
The paper is structured as follows. Section 2 illustrates the noise computation methodology with special emphasis on how web data are used and what improvements from the previous application have been made. Then, the two algorithms used for addressing future air traffic scenarios are described. Section 3 reports the results of the application of the present approach to the airports of Heathrow, Frankfurt and Vienna-Schwechat. The conclusions of the work are drawn in Section 4.
The approach described in this paper is based on the procedure of flight event reconstruction and noise computation introduced by Pretto et al. [14]. This procedure is here extended to i) account for the topography of the airport area and ii) enable an efficient prediction of the future noise levels due to variations in aircraft fleet composition and air traffic volume. The key steps of this extended approach are summarised in the flowchart of Fig. 1, aimed to support the reader in understanding the methodological steps described below.
Flowchart describing the key steps of the present approach. The input data are listed on the left-hand side
Summary of noise computation procedure
This subsection briefly describes the main steps that enable the computation of airport noise contours using the ECAC noise model and web-based air traffic data, with special focus on the aspects that affect the operations described from Section 2.2 onwards. The entire procedure is detailed by Pretto et al. [14].
ECAC Doc.29 model and ANP database
The ECAC Doc.29 model [15] is a best-practice segmentation aircraft noise prediction model that enables calculation of noise levels and contours around airports due to aircraft movements during a specified time period. At any selected airport, the model computes the desired cumulative noise metrics, such as LAeq,day, LAeq,night, LDEN, and Lmax,avg, by superposing the effects of single flight events, i.e. departures and arrivals. For each of them, single-event sound levels SEL and LAmax are computed using a grid of sound receivers in the region of interest around the airport. Each of these two sound levels is computed by superposing the effects of a set of flight path segments, which represent the 3D aircraft motion over time during the event. These segments are obtained by merging the ground track, which represents the ground projection of the aircraft motion, with the flight profile, which contains information on the vertical motion above the ground track and the related flight parameters (e.g. calibrated airspeed and engine thrust).
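As a rough illustration of this superposition (a simplified sketch, not the authors' implementation), an equivalent continuous level such as LAeq can be accumulated from single-event SEL values by an energy summation over the assessment period; the receiver values below are hypothetical.

```python
import math

def laeq_from_sel(sel_values_db, period_seconds):
    """Equivalent continuous sound level obtained by energy-summing single-event SEL values
    (each SEL referenced to 1 s) and spreading them over the assessment period."""
    energy = sum(10.0 ** (sel / 10.0) for sel in sel_values_db)
    return 10.0 * math.log10(energy / period_seconds)

# Hypothetical receiver: 200 movements in a 16-hour day, each producing SEL = 85 dB(A)
day_seconds = 16 * 3600
print(round(laeq_from_sel([85.0] * 200, day_seconds), 1), "dB(A)")  # about 60.4 dB(A)
```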
For a single event, the ground track and the flight profile can be generated either by analysis of flight movement data or by synthesis from appropriate procedural information. In the case of flight profiles, this information consists of a series of procedural steps, which prescribe how the aircraft must be flown during a single operation (departure or arrival) in terms of speed, altitude and flap settings. These procedural steps are listed in the ANP database [16], which contains appropriate sets of flight profiles for around 140 reference aircraft models known as proxies. A flight profile is calculated using mechanical and kinematic equations that require knowledge of such profile sets, basic aircraft model features (e.g. aircraft weight) also provided by ANP, and atmospheric conditions, allowing the computation of engine thrust, height, and true and calibrated airspeeds above the ground track [17].
Once the segmented flight path for a single flight event has been obtained, the calculation of segment noise levels is performed in the ECAC noise engine by taking into account the aircraft performance inside the given segment and the location of a receiver. First, the baseline noise levels are interpolated from reference levels, known as "Noise-Power-Distance" (NPD) data and valid for a straight, infinitely long flight path flown at fixed speed, using the current values of engine thrust (power) and segment-receiver distance. Then, adjustments are made to account for atmospheric conditions, non-reference speed, position of aircraft engines, bank angle, finite segment length, sound directivity during runway movements, and reverse thrust. All segment noise levels are then superposed and SEL and LAmax are found at a single receiver point. The process is repeated for all the receivers, thus completing the single event noise computation.
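To make the final superposition step concrete, the short Python sketch below combines per-segment contributions at a single receiver into the single-event levels SEL and LAmax, assuming the segment levels have already been computed with all the adjustments listed above; the function name and the numerical values are illustrative and not part of the ECAC specification.

import math

def single_event_levels(segment_levels_db):
    """Combine per-segment levels (dB) into SEL and LAmax at one receiver.

    segment_levels_db: list of (exposure_level, max_level) pairs, one per
    flight path segment, assumed to be already corrected for distance,
    speed, lateral attenuation, etc.
    """
    # SEL: energy summation of the per-segment exposure contributions
    sel = 10.0 * math.log10(sum(10.0 ** (le / 10.0) for le, _ in segment_levels_db))
    # LAmax: the loudest level produced by any single segment
    lamax = max(lm for _, lm in segment_levels_db)
    return sel, lamax

# Example: three segments of a departure as seen from one receiver
print(single_event_levels([(72.0, 68.5), (78.3, 75.1), (74.9, 70.0)]))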
Integration with web-based air traffic data
The application of the ECAC model to the calculation of single event noise levels requires a complete description of the flight event. This is obtained through data collection from the Internet. The core information comes from flight tracker Flightaware, which was searched in June 2018 to collect raw air traffic data in nine European airports, retrieving around 11,000 flight histories. Each flight history contains the 3D locations and speeds, ordered in time and spaced by 15 s, of a certain aircraft, normally identified via its registration and ICAO type designator. All airport locations and runways were retrieved from website OurAirports [18], while website Airlinerlist [19] was used to build an offline database that associates the registration with the specific aircraft model.
As the raw flight histories were sometimes incorrect, often lacked any trace of non-airborne movements, and never reported the aircraft model explicitly, the retrieved flight data were pre-processed using the runway and aircraft information mentioned above to reconstruct the flight movement and to recover the departure/arrival runway and the aircraft model. The latter was then used to enter the main ANP substitution table, a tool that associates a specific model with a suitable ANP proxy, thus enabling noise computation via the ECAC model. Many configurations are listed for a given model-proxy pair, which differ primarily in engine variant and weight, and hence in noise output. Therefore, multiple values of a correction factor called the "number of equivalent events", Neq, are provided in the ANP tables to modify the proxy noise levels according to the specific aircraft configuration. Since the different configurations could not be retrieved, an average configuration was built for each model, and two average numbers of equivalent events (different for departures and arrivals) were assigned to the proxy. When the aircraft registration was not available, a second ANP substitution table could be used to obtain a direct ICAO designator-proxy association, as only one configuration is listed and no averaging is needed.
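As an illustration of the averaging step just described, the sketch below builds an "average configuration" for one model-proxy pair by averaging the departure and arrival numbers of equivalent events over the listed configurations; the table excerpt and all Neq values are invented for the example and are not taken from the ANP database.

from statistics import mean

# Hypothetical excerpt of the main ANP substitution table: one aircraft model,
# several configurations mapped to the same proxy with different Neq values.
substitution_rows = [
    {"model": "A321-231", "proxy": "A321-232", "neq_dep": 1.10, "neq_arr": 1.05},
    {"model": "A321-231", "proxy": "A321-232", "neq_dep": 1.25, "neq_arr": 1.12},
    {"model": "A321-231", "proxy": "A321-232", "neq_dep": 0.95, "neq_arr": 0.98},
]

def average_configuration(rows):
    """Collapse all configurations of one model into a single average one."""
    return {
        "proxy": rows[0]["proxy"],
        "neq_dep": mean(r["neq_dep"] for r in rows),
        "neq_arr": mean(r["neq_arr"] for r in rows),
    }

print(average_configuration(substitution_rows))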
The reconstructed flight movements during a single departure/arrival event at the selected airport are used, together with the aircraft information, for the construction of the segmented flight path. In each flight event, the ground track is built via analysis of the 2D position data, while the flight profile is synthesised from the ECAC procedural steps, as the time spacing between consecutive flight recordings (15 s) is too large to ensure reliable engine thrust reconstruction solely from speed and height information.
Generation of noise contour maps
In the original application each airport was studied separately, and all the flight events occurring on a given day were identified. For each event, the segmented flight path was built, and its contribution to airport noise was computed on a square grid of 11,881 receivers spaced approximately 450 m apart in both the x and y directions, at the same altitude as the airport reference point (ARP). Finally, the sound levels due to all flight events were superposed to obtain daily cumulative noise metrics, and hence daily noise contours in the airport area.
Noise computation accounting for topographic data
Local topography (i.e. the elevation of land surfaces around the airport) may have a non-negligible influence on the noise levels around an airport, mainly due to the elevation of the receiver points, which affects their distance from the flight path segments. Furthermore, the knowledge of local elevations allows for an improved description of the airport runways, and the reconstruction of aircraft ground movements can also be influenced. The next subsections explain how terrain elevation is accounted for in the present noise computation procedure.
Acquisition and implementation of topographic data
The source of topographic data for this analysis is a series of digital elevation models (DEMs) of the European territory, which includes all the airports studied. Around 1500 DEMs, each 1-degree wide in both latitude and longitude and with a 3 arc-second resolution, were downloaded from the website WebGIS [20] and suitably post-processed to obtain a single elevation map covering the whole of Europe in the form of a 2D grid. The elevations of all ARPs and runways were computed by bilinear interpolation of the grid data, and each runway was assigned a single elevation value (that of its mid-point) and a gradient (computed from the elevations of its two ends). This is because the ECAC mechanical model relies on flat runways, but can account for runway gradients during a take-off. The same interpolation was performed around each airport for each receiver point involved in the noise computation procedure.
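A minimal sketch of the bilinear interpolation used for ARPs, runways and receivers is given below, assuming the post-processed DEM is available as a regular latitude-longitude grid; the grid values and coordinates are illustrative only.

import math

def bilinear_elevation(dem, lat0, lon0, dlat, dlon, lat, lon):
    """Bilinearly interpolate the terrain elevation at (lat, lon).

    dem[i][j] is the elevation at latitude lat0 + i*dlat and longitude
    lon0 + j*dlon (a regular 2D grid, as obtained from the DEM tiles).
    """
    fi = (lat - lat0) / dlat
    fj = (lon - lon0) / dlon
    i, j = int(math.floor(fi)), int(math.floor(fj))
    ti, tj = fi - i, fj - j
    return ((1 - ti) * (1 - tj) * dem[i][j]
            + (1 - ti) * tj * dem[i][j + 1]
            + ti * (1 - tj) * dem[i + 1][j]
            + ti * tj * dem[i + 1][j + 1])

# Toy 2x2 grid: elevation rises from 100 m to 130 m across the cell
dem = [[100.0, 110.0], [120.0, 130.0]]
print(bilinear_elevation(dem, 48.0, 16.0, 1.0, 1.0, 48.25, 16.75))  # 112.5 m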
Line-of-sight blockage adjustment
Line-of-sight (LOS) blockage is the sound attenuation due to the presence of an obstruction along the direct propagation path between the source and the receiver. Natural structures such as mountains and hills may act as "sound shields", diffracting sound waves and thus considerably lowering noise levels behind them. The ECAC model does not account for this effect, but FAA's AEDT does through a specific LOS adjustment [21]. As the AEDT noise computation is based on the ECAC model, a straightforward implementation of this adjustment could be performed in the present methodology.
According to AEDT, the LOS adjustment, LOSadj, is calculated together with the engine installation, ΔI(φ), and the lateral attenuation, Λ(β,l), for each pair of flight path segment and receiver (for the definitions of ΔI, Λ, depression angle φ, elevation angle β and lateral displacement l see [17]). Then, these values are compared in order to estimate their overall effect through a "lateral correction", LAcorr, to be used in the ECAC noise engine:
$$ LA_{corr}=\max\left[\,LOS_{adj},\;-\Delta_I(\varphi)+\Lambda(\beta,l)\,\right] $$
The computation of LOSadj requires determining, for each segment-receiver pair, whether the direct sound propagation path is obstructed and, if so, by how much. This is done in the present application by comparing the local altitude of the direct propagation path (a simple straight segment connecting the flight path and the receiver) with the terrain elevation. To account for the terrain, a sample point is taken approximately every 300 m and its elevation is computed by bilinear interpolation using the four surrounding receiver points. Finally, the differences between the local terrain elevation and the propagation path altitude are computed, and the maximum value is used to calculate LOSadj according to the AEDT procedure.
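The obstruction test can be sketched as follows, assuming a callback that returns the locally interpolated terrain elevation (for instance the bilinear lookup above). The sampling step and the final max() combination follow the description and the lateral correction formula given earlier, while LOSadj itself is passed in as a value because its AEDT formula is not reproduced here; all coordinates are illustrative.

import math

def max_path_obstruction(src, rcv, terrain_elev, step=300.0):
    """Largest height (m) by which the terrain rises above the straight
    source-receiver propagation path, sampled roughly every `step` metres.

    src, rcv: (x, y, z) in metres; terrain_elev(x, y) returns the locally
    interpolated ground elevation. A positive result means the direct
    path is obstructed.
    """
    dx, dy, dz = (rcv[i] - src[i] for i in range(3))
    length = math.hypot(dx, dy)
    n = max(int(length // step), 1)
    worst = -math.inf
    for k in range(1, n):
        t = k / n
        x, y = src[0] + t * dx, src[1] + t * dy
        path_alt = src[2] + t * dz          # altitude of the direct path
        worst = max(worst, terrain_elev(x, y) - path_alt)
    return worst

def lateral_correction(los_adj, delta_i, lam):
    """Combine the LOS adjustment with the engine installation and lateral
    attenuation terms, as in the formula above."""
    return max(los_adj, -delta_i + lam)

# Flat terrain at 150 m between a flight path point at 600 m and a receiver
flat = lambda x, y: 150.0
print(max_path_obstruction((0.0, 0.0, 600.0), (5000.0, 0.0, 160.0), flat))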
Fleet substitution algorithm
For any assessment of the future noise impact of aviation, a major aspect to be taken into account is the change in fleet composition. In fact, when an old aircraft can no longer be operated, it is retired and substituted with a newer, generally quieter, model. A fleet substitution algorithm has been developed in the present application to update the aircraft fleet from 2018 to 2025, relying on the ANP database as the source of noise and performance data for the newer aircraft models. The substitution algorithm is split into three steps:
identification of the aircraft to be substituted;
identification of the substitute aircraft models;
assignment of the new model to old flight events.
In the first step, the age of every aircraft at the time of the flight event is recovered using the offline aircraft model database mentioned in Section 2.1.2, and a new database for 2025 is built by increasing the age of each aircraft by 7 years. Then, all aircraft whose age exceeds 22 years are deemed fit for substitution. The cut-off age derives from a slight simplification of the fleet mix model used for the UK aviation forecasts [22].
The second step consists of deciding which aircraft models are best suited to represent the future fleet. In this regard, two aspects must be considered: i) while new-generation aircraft are expected to dominate the market in the next few years (e.g. the A320neo), some current-generation models are still being sold [23]; ii) as the ANP database was last updated in February 2018, some of the new-generation models expected by 2025 are not yet listed, primarily because they lacked official noise certification at the time.
In light of the above considerations, the supply pool containing the substitute aircraft models is built as follows. First, the pool is split into 10 categories according to aircraft size, represented by maximum weight and approximate number of seats. Second, for each category the aircraft models that are best in class in terms of noise output are identified and retrieved from the first ANP substitution table, and an average configuration for each model is built as explained in Section 2.1.2. The results are listed in Table 1, which also shows that multiple models are chosen for a single category. This is done either because such models have a similar noise output, or to better represent the weight variability within a given category.
Table 1 Supply pool of best-in-class ANP-available aircraft models for the new aircraft fleet in 2025
The third and final step is the actual fleet modification. Each aircraft fit for substitution is assigned the MTOW of its original ANP proxy, and this parameter is used to identify the supply pool category. The new model is selected randomly, except for the < 190,000 category, where it was decided to preserve the 2018 market split between the leading manufacturers Airbus and Boeing by substituting the older aircraft with models from the same company. Note that selecting within the same category ensures that the old ground track is always compatible with the new aircraft, particularly as regards ground movements and radii of turns.
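A minimal Python sketch of the three substitution steps is given below. The supply pool is a heavily abridged, purely illustrative stand-in for Table 1 (three size categories instead of ten, invented MTOW bounds and model names), and the field names of the fleet records are assumptions made only for this example.

import random

# Illustrative supply pool: (MTOW upper bound, candidate models); bounds and
# models are invented and do not reproduce Table 1.
SUPPLY_POOL = [
    (80_000,  ["Airbus A320neo", "Boeing 737 MAX 8"]),
    (190_000, ["Airbus A330-900", "Boeing 787-9"]),
    (400_000, ["Airbus A350-900", "Boeing 777-9"]),
]
MAX_AGE = 22   # retirement threshold (years)
HORIZON = 7    # 2018 -> 2025

def substitute(fleet):
    """fleet: list of dicts with keys 'age_2018', 'proxy_mtow', 'manufacturer'."""
    new_fleet = []
    for ac in fleet:
        if ac["age_2018"] + HORIZON <= MAX_AGE:
            new_fleet.append(ac)                       # young enough: keep
            continue
        for bound, candidates in SUPPLY_POOL:          # size category from proxy MTOW
            if ac["proxy_mtow"] <= bound:
                break
        if bound == 190_000:                           # preserve Airbus/Boeing split
            candidates = [m for m in candidates if ac["manufacturer"] in m] or candidates
        new_fleet.append(dict(ac, model=random.choice(candidates), substituted=True))
    return new_fleet

fleet = [{"age_2018": 21, "proxy_mtow": 150_000, "manufacturer": "Boeing"},
         {"age_2018": 4,  "proxy_mtow": 70_000,  "manufacturer": "Airbus"}]
print(substitute(fleet))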
Generation of additional flight events
Besides accounting for the aircraft fleet evolution, forecasts of future air traffic scenarios should also consider a possible increase in the number of flight movements. However, while aircraft are retired on an individual basis, the number and characteristics of new flight events depend on multiple factors on global, national and local levels. In the present application, global and national factors are accounted for by using official 7-year EUROCONTROL traffic forecasts [24], which are applied locally to the airport of interest checking whether the predicted increment is compatible with its features and constraints (e.g. maximum runway system capacity).
After selecting an airport and retrieving its expected traffic increase, a flight event generation algorithm is used to create the required number of additional aircraft movements. This algorithm is applied to the events of a single day after the fleet substitution, and makes use of the existing data assets to simulate the traffic increment. It is composed of three steps:
separation of existing flight events in 60 sub-classes according to three parameters;
retrieval of the number of new flight events in each sub-class;
generation of the flight events for each sub-class.
In the first step, the flight events are classified according to the three parameters reported in Table 2. The 60 (2 × 10 × 3) sub-classes express the traffic split at a given airport, showing which operations are most common for aircraft of a given size during a given part of the 24-h day. This split reflects the way the selected airport operates, emphasising inherent restrictions (e.g. avoiding departures of large aircraft at night) that result in zero events being registered in some sub-classes. Therefore, the classification in Table 2 enables a strategy for coherently increasing the air traffic at the airport.
Table 2 Parameters used for classifying existing flight events
In the second step, a known percentage of traffic increment is applied to all the 60 sub-classes, and for each of them the number of flight events to be added is found. As these numbers are not integers, all 60 values are floored, and the remaining fractional parts are redistributed across the sub-classes having a number of events closest to an integer. This step implies the assumption that air traffic in 2025 will preserve the split of flight events observed at the selected airport in 2018.
In the third step, the new flight events are generated separately for each sub-class. If m is the number of additional events for a given sub-class, the n events recorded in 2018 for that sub-class are identified, and m among them are randomly chosen and duplicated. This operation, performed across all sub-classes, yields all the events needed to simulate the increased airport traffic.
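The sketch below implements one plausible reading of the three steps above: events are grouped into sub-classes, the per-class targets are floored, the leftover whole events are assigned to the sub-classes whose fractional target is largest, and the required events are duplicated by sampling without replacement. The event dictionaries and sub-class keys are illustrative assumptions.

import math
import random
from collections import defaultdict

def classify(event):
    """Sub-class key: (operation, size category, time of day), as in Table 2."""
    return (event["op"], event["size_cat"], event["tod"])

def generate_additional_events(events, increase):
    by_class = defaultdict(list)
    for ev in events:
        by_class[classify(ev)].append(ev)

    targets = {k: increase * len(v) for k, v in by_class.items()}
    extra = {k: math.floor(t) for k, t in targets.items()}
    # Redistribute the leftover whole events to the sub-classes whose
    # fractional target is closest to the next integer
    leftover = round(sum(targets.values())) - sum(extra.values())
    for k in sorted(targets, key=lambda c: targets[c] - extra[c], reverse=True)[:leftover]:
        extra[k] += 1

    new_events = []
    for k, m in extra.items():
        new_events.extend(random.sample(by_class[k], m))   # duplicate m existing events
    return new_events

day = [{"op": "dep", "size_cat": 3, "tod": "day"}] * 40 \
    + [{"op": "arr", "size_cat": 3, "tod": "day"}] * 40 \
    + [{"op": "arr", "size_cat": 5, "tod": "night"}] * 6
print(len(generate_additional_events(day, 0.139)))          # ~12 additional events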
As a final remark, this algorithm was devised with the sole purpose of computing cumulative noise metrics under forecast traffic scenarios, and therefore does not take into account ATC-related practices such as traffic separation or temporal rearrangement of events for accommodating new flight movements. Possible airport constraints, such as runway system capacity, are duly considered upon application of the algorithm.
The noise computation procedure outlined in Section 2.1, updated to account for the topography of the airport area, was applied to predict the noise levels due to air traffic in 2025 at three European airports, using the algorithms for fleet substitution and new flight event generation described in Sections 2.3 and 2.4, respectively. The analysis is based on the flight movement data collected for the previous application [14] at the airports of London Heathrow, Frankfurt, and Vienna-Schwechat. The analysis at Heathrow Airport focuses on the effectiveness of the fleet substitution algorithm, and the noise forecasts are validated against official UK noise predictions. At Frankfurt Airport, the effect of an increase in air traffic is added as well, and the noise results are compared with their 2018 counterparts. Finally, multiple traffic forecasts are considered for Vienna International Airport, showing the comparative impact of the air traffic increase and of the noise reduction due to quieter aircraft, as well as the contribution of terrain elevation. Unlike in the original application, the dimensions of the 2D grid of receivers were tailored to each airport, while the receiver density was kept unaltered.
Only fleet substitution: Heathrow airport
Heathrow Airport, located 23 km west of London, is one of the largest airports in the world. It served around 80 million passengers in 2018 with 477,604 aircraft movements [25], causing the current two-runway system to operate near its full capacity of 480,000 movements [22]. Although a third runway is expected to be operational by 2030, the number of movements cannot increase significantly in the next few years despite the expected air traffic growth in the UK [24], which makes this airport a suitable test case for the fleet substitution algorithm.
For Heathrow Airport, official noise forecasts based on 2016 traffic volume and the ANCON model are available [26]. The key cumulative metrics are LAeq,day, LAeq,night and LDEN, considered for both the average summer 24-h day and the average day of the entire year, and the noise contour area (surface area enclosed by a given contour line) is provided for several noise levels as computed for 2016 and predicted for 2025. Although multiple traffic scenarios for 2025 are considered in the official forecasts, minor differences arise among them, and therefore the so-called "Central Scenario" is chosen as a reference. As for the present calculation, the flight movements on 13 June 2018 (westerly operations) and 11 June 2018 (easterly operations) were updated to 2025 considering fleet renewal but no traffic increase. The resulting noise levels were blended according to a 70%–30% modal split [14] to build single-day cumulative metrics, which are assumed to be representative of both summer average and annual average aircraft noise.
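The blending step is not spelled out above, so the short sketch below assumes it is an energy-weighted average of the westerly- and easterly-operation levels at each receiver with 0.7/0.3 weights; this is an assumption made for illustration, not a statement of the exact procedure of [14].

import math

def blend_levels(level_west_db, level_east_db, w_west=0.7, w_east=0.3):
    """Energy-weighted blend of two single-day cumulative levels (dB) at one
    receiver under an assumed 70%-30% westerly/easterly modal split."""
    return 10.0 * math.log10(w_west * 10.0 ** (level_west_db / 10.0)
                             + w_east * 10.0 ** (level_east_db / 10.0))

print(round(blend_levels(55.0, 52.0), 1))   # about 54.3 dB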
The comparison between official and present predictions is reported in Table 3. The values in km2 represent the noise contour areas enclosed by the specified contour level. It is worth noting that the traffic increase after 2016 (+ 2.9%) is due both to an actual growth in aircraft movements before 2018 (+ 1.1%) and to the flight allocation algorithm used in the official forecasts. This algorithm redistributes the expected countrywide increase in air traffic across all airports, accounting for their residual capacity but forcing the allocation of at least a few additional flights to each airport to ensure algorithm convergence. Concerning noise, as explained in the previous paper [14], the present methodology underestimates cumulative levels by 1 to 3 dB. As observed in Table 3, even such small differences can lead to large variations in contour areas for levels as low as 45–55 dB, which are found far from the runways, where noise decays slowly with distance. However, when the relative area changes are examined, very good agreement is observed between the present and official forecasts. In particular, the decrease in contour areas for LAeq,day and LDEN is predicted quite well, whereas larger deviations are observed for LAeq,night, which is, however, more susceptible to individual flight movements due to the limited number of night-time events.
Table 3 Official and present predictions at Heathrow Airport for 2025
Figures 2, 3, and 4 show the variations in noise levels, ΔdB, predicted using 6731 receivers that cover a 49-by-24 km2 airport area. The 2018 noise contours for which contour areas are provided in Table 3 are superimposed on each map. A moderate LAeq,day decrease is forecast to the north and south-west of the airport, whereas for LAeq,night a significant noise reduction to the south and north-east of it is partially offset by an increment in the south-eastern region. As expected, the map for LDEN shows variations much more similar to those of LAeq,day. In all cases, the largest changes occur mostly far away from the airport, where noise levels are relatively low, whereas an average reduction of around 1 dB is found within the contour areas. Finally, all three maps contain regions where the computation yields a slight-to-moderate increase in noise levels, which is unexpected given the substitution of old aircraft with newer and quieter ones. In fact, some new aircraft correspond to proxies that are different from those of the retired airplanes, and thus may require different ANP procedures, especially for approach. In particular, some proxies are required to perform a continuous 3° descent from 6000 ft AGL, while others are also prescribed to fly at 3000 ft AGL for several kilometres, resulting in a longer flight profile and hence in noise increments that are strongest at locations not covered by the shorter procedure.
Map of predicted variations in LAeq,day at Heathrow Airport from 2018 to 2025 due to fleet substitution (ΔdB = LAeq,day,2025 - LAeq,day,2018). 51-dB and 54-dB LAeq,day contour lines computed for 2018 are superimposed
Map of predicted variations in LAeq,night at Heathrow Airport from 2018 to 2025 due to fleet substitution (ΔdB = LAeq,night,2025 - LAeq,night,2018). 45-dB, 48-dB and 50-dB LAeq,night contour lines computed for 2018 are superimposed
Map of predicted variations in LDEN at Heathrow Airport from 2018 to 2025 due to fleet substitution (ΔdB = LDEN,2025 – LDEN,2018). 50-dB and 55-dB LDEN contour lines computed for 2018 are superimposed
The results above hinge solely on the fleet substitution algorithm, which proves to be successful in light of the following considerations. First, the aircraft age distribution in Fig. 5 shows that about 25% of the airplanes in June 2018 were less than 5 years old, a fleet renewal trend that is consistent with the substitution of 37% of the aircraft over 7 years performed by the present algorithm. Second, the good predictions of contour area changes in Table 3 are obtained despite the very small number of new aircraft models (see Table 1) compared to the official supply pool [27]. This suggests that the key to a good prediction is the separation into appropriate aircraft size categories, whereas the number of new aircraft models is much less important, provided at least one of them is used in each category.
Age distribution of the aircraft operated at Heathrow Airport on 11 June and 13 June 2018
Fleet substitution and additional flight events: Frankfurt airport
Frankfurt Airport is the largest airport in Germany, with around 69 million passengers served and 512,115 aircraft movements in 2018 [28]. Unlike at Heathrow, the four-runway system and the soon-to-be three terminals will be able to handle the future growth in the number of passengers, which is expected to approach 80 million by 2025 [29]. Assuming that average aircraft size and passenger load factors remain unchanged, this forecast is in line with the baseline EUROCONTROL forecast, which indicates a 13.9% increase in aircraft movements for Germany over the next seven years. Therefore, this percentage was used in the flight event generation algorithm, which was applied together with the fleet substitution algorithm.
Starting from the flight movements collected for 11 June 2018, the computation led to the results reported in Table 4. First, the number of flight movements predicted for 2025 is 1512, which is compatible not only with the planned runway system capacity of 126 movements/h, but even with the current 104 movements/h (Fraport, 2019). Concerning the contour areas, if the fleet substitution is considered without additional movements, the noise reduction is similar to that of Heathrow Airport, although the improvement is slightly lower on average. However, when the traffic increase is considered, the areas become almost as large as in 2018, or even larger in the case of LAeq,night.
Table 4 Air traffic and noise contour areas at Frankfurt Airport in 2018 and 2025, considering or not traffic increase
The effects of the two contributions on airport noise are shown in Fig. 6, which reports the variations in noise levels, ΔdB, predicted using 10,355 receivers located on a 2200 km2 airport area. While a simple fleet upgrade causes an overall decrease in noise levels (see Fig. 6(a)), the addition of new flight movements tends to cancel out this improvement (see Fig. 6(b)), leading to very similar 2018 and 2025 LDEN contour sets. As observed and discussed for Heathrow, the ΔdB map in Fig. 6(a) also shows some regions where noise increases despite the fleet renewal. These increments are even stronger in Fig. 6(b) due to the contribution of the additional air traffic.
Maps of predicted variations in LDEN at Frankfurt Airport from 2018 to 2025 (ΔdB = LDEN,2025 – LDEN,2018). a Effect of fleet substitution, 2018 LDEN contours. b Effects of fleet substitution and traffic increase, 2025 LDEN contours
Unlike for Heathrow Airport, no official noise forecasts are available for Frankfurt Airport. Therefore, a reasonable justification for the observed trend at Frankfurt is provided on the basis of the following arguments. The considered noise metrics refer to cumulative sound exposure, and exposure scales linearly with the number of flight operations performed by the same aircraft. If E is the sound exposure resulting only from the fleet renewal, the expected sound level increase, ΔL, due to a traffic increase ΔI is given by:
$$ \Delta L=10\log_{10}\big(E\,(1+\Delta I)\big)-10\log_{10}(E)=10\log_{10}(1+\Delta I) $$
With a 13.9% traffic increase, the expected ΔL is 0.565 dB. In fact, the average increments for metrics LAeq,day, LAeq,night, and LDEN range from 0.54 to 0.59 dB, showing that the present algorithm yields good results as long as the additional movements generate the same average noise emissions as the original flight events.
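The quoted figure can be checked directly from the formula above:

import math

delta_i = 0.139                        # 13.9% traffic increase
delta_l = 10 * math.log10(1 + delta_i)
print(round(delta_l, 3))               # 0.565 dB, as stated above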
Multiple traffic forecasts: Vienna international airport
As shown in the previous subsection, an increase in traffic volume at a given airport is liable to offset the decrease in noise levels due to the entry into service of new-generation aircraft. The relation between these two effects is examined in more detail at Vienna International Airport (also known as Vienna-Schwechat Airport or Vienna Airport), which is the largest airport in Austria, with two runways and about 27 million passengers served in 2018 [30]. With reference to the flight movements collected for 10 June 2018, three EUROCONTROL traffic forecasts for Austria, namely "Low", "Baseline", and "High", were used to determine how large a traffic increase can be sustained without worsening noise levels around the airport.
Since three different traffic forecasts had to be considered, the fleet substitution was performed only once, but the generation of additional flight events was repeated three times with the appropriate increments. A preliminary check showed that the runway system capacity of 74 movements/h is large enough to accommodate all events even in the worst case, the "High" scenario. The results for LDEN are reported in Table 5, where an upward trend in contour areas is observed as the number of flight movements increases. The relative changes in contour areas are linearly regressed in the plot of Fig. 7 for the three different sets of LDEN values, showing that the increase in traffic volume necessary to cancel out the improvements due to fleet substitution is about 23%. This value is not only higher than the most likely "Baseline" traffic forecast, but also well above the − 2.1% registered from 2011 to 2018 [30], suggesting that aircraft noise might not be the worst problem for the airport to face in the short term. However, the situation may change if a third runway is built [31], as the expanded airport capacity could lead to an unpredictably large increase in flight movements.
Table 5 LDEN at Vienna in 2018 and 2025 under four assumptions (N = no traffic increase, L = low, B = baseline, H = high)
Linear regression of the contour area changes for the LDEN values listed in Table 5
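The break-even traffic increase can be obtained from a simple least-squares fit of the relative contour area change against the traffic increment, solving for zero change, as sketched below; the four data points are invented for illustration (they are not the Table 5 values), although they reproduce a break-even close to the 23% quoted above.

# Illustrative only: relative change in LDEN contour area (%) versus traffic
# increase (fraction) for the four 2025 cases; the numbers are made up.
traffic = [0.00, 0.08, 0.139, 0.19]
area_change = [-7.5, -4.9, -3.0, -1.3]

n = len(traffic)
mean_x = sum(traffic) / n
mean_y = sum(area_change) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(traffic, area_change)) \
        / sum((x - mean_x) ** 2 for x in traffic)
intercept = mean_y - slope * mean_x

break_even = -intercept / slope        # traffic increase giving zero area change
print(f"break-even traffic increase = {break_even:.1%}")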
The variation of LDEN in the airport area is examined in Fig. 8, which shows the effects of fleet renewal alone (without additional movements) and of the three traffic scenarios. The predictions refer to an area of about 2075 km2 covered with 9898 receivers. As expected under the current assumptions, the contour areas grow while keeping almost the same shape as the traffic volume increases, but it is worth noting that under the "High" scenario in Fig. 8(d) there is a slight increase in noise (close to 1 dB) also outside the narrow strips along the typical arrival and departure routes.
Maps of predicted variations in LDEN from 2018 to 2025, and 2025 LDEN contours at Vienna International Airport for (a) unaltered traffic volume, (b) low increase, (c) baseline increase, (d) high increase (ΔdB = LDEN,2025 – LDEN,2018). Fleet substitution is applied in all the four cases
Similarly to Frankfurt Airport, markedly higher noise levels due to both different approach procedures and traffic increase can be observed locally (e.g. north-east of the runways).
Finally, since Vienna International Airport lies close to the Alps, its hilly surroundings enable a meaningful analysis of the influence of topography on noise levels. In general, the introduction of terrain features both alters the elevation of the runways and causes a vertical displacement of the sound receivers. The maps of LDEN and Lmax,avg variations in Fig. 9 show that the combination of these two effects at Vienna Airport leads to a slight reduction in the noise levels along most of the aircraft routes when compared to the previous results obtained for flat terrain at ARP elevation [14]. However, in the vicinity of the most elevated regions, LDEN rises by up to 2 dB, as shown in Fig. 9(a). This increase occurs primarily because the receivers are closer to the flying aircraft, so that the average distance between path segments and sound receivers decreases. The along-route reduction and the local increments in noise levels are intensified for metrics based on maximum sound levels, such as Lmax,avg in Fig. 9(b), because these metrics, instead of depending on the cumulative exposure from all flight path segments, are dominated by the noisiest path segment of each flight event. As the noisiest segment is usually the closest to the sound receiver, terrain elevation becomes a considerable fraction of this segment-receiver distance, leading to a stronger reduction in the along-route noise but causing local increments that approach or exceed 3 dB, as detected to the south and north-west of the airport.
Variations in LDEN (a) and Lmax,avg (b) at Vienna International Airport in 2018 due exclusively to the implementation of terrain elevation data, without fleet substitution (ΔdB = level considering elevation – level considering flat terrain). The elevated regions (above 200 or 300 m) are enclosed by contour lines
The approach presented in this paper is an evolution of the original methodology devised by the authors [14] for the computation of noise in airport areas based on ECAC noise model and Internet-based information sources. Besides accounting for the topographic features of the airport area, the present approach introduces two algorithms for aircraft fleet renewal and reconstruction of new flight events, which are used to forecast airport noise according to future air traffic scenarios. Predictions of noise contours for 2025 have been carried out for three large European airports (London Heathrow, Frankfurt, Vienna-Schwechat), focusing on fleet renewal at Heathrow, air traffic increase at Frankfurt, and multiple traffic scenarios at Vienna-Schwechat.
Data from digital elevation models have been collected, processed into usable terrain elevation maps, and implemented in the noise computation methodology. In particular, the addition of line-of-sight blockage to the ECAC noise engine makes it possible to account for the shielding effect of terrain features. The two algorithms for fleet substitution and generation of new flight events use and re-adapt, under reasonable assumptions, historical data on flight movements, aircraft models and airports. The key merit of these algorithms is the classification of the current aircraft fleet into 10 size categories, which allows a coherent redistribution of new aircraft and flight events within the same categories to model future air traffic scenarios. The forecasts for Heathrow Airport show that the fleet substitution algorithm is able to provide reliable estimates of the relative changes in noise levels, while reasonable results have been obtained for Frankfurt Airport when an increase in flight movements is also considered. The analysis of multiple traffic scenarios at Vienna International Airport has made it possible to identify the air traffic volume that balances the noise increase from additional flight events against the use of quieter aircraft. Finally, the inclusion of terrain elevation around Schwechat results in a modest variation in the exposure-based noise metrics, while the maximum levels in the most elevated regions increase by more than 3 dB.
The present application makes it possible to forecast airport noise by rearranging past and present publicly available web data to simulate future air traffic scenarios for any civil airport in the world. It therefore represents a widely applicable and general approach, whose main limitation remains a moderate underestimation of the absolute noise levels, as explained by Pretto et al. [14]. In addition, the procedure is quite flexible, as changes can easily be made to account for other new-generation aircraft (once officially certified) and for different MTOW distributions of future aircraft fleets, while sensitivity analyses can be carried out by simply modifying parameters such as the traffic increment or the maximum aircraft age. All these aspects make the present application a lean and powerful tool for assessing changes in airport noise under different future scenarios and, as such, well suited for aviation policy-makers.
All symbols in the equations are defined in the text.
ADS-B Automatic Dependent Surveillance – Broadcast
AEDT Aviation Environmental Design Tool
AGL Above Ground Level
ANCON Aircraft Noise Contour [Model]
ANP Aircraft Noise and Performance
ARP Airport Reference Point
ATC Air Traffic Control
CAA Civil Aviation Authority
DEM Digital Elevation Model
ECAC European Civil Aviation Conference
EU European Union
FAA Federal Aviation Administration
ICAO International Civil Aviation Organization
LOS Line-Of-Sight
MTOW Maximum Take-Off Weight
NPD Noise-Power-Distance
Noise metrics
SEL A-weighted sound exposure level generated by a single flight event.
LAmax Maximum A-weighted sound level generated by a single flight event.
LAeq,W Time-weighted equivalent sound level. It is the level of the average sound intensity due to N flight events during measurement period T0. Time-of-day weighting factor Δi is added to single event level SELi to account for increased noise annoyance during evening and night. The time-weighted level is calculated as follows:
$$ L_{Aeq,W}=10\log_{10}\left(\frac{1}{T_0}\sum_{i=1}^{N}10^{\left(SEL_i+\Delta_i\right)/10}\right) $$
From this expression, the three cumulative noise metrics below are defined.
LAeq,day 16-hour day-average sound level. T0 = 57,600 s (07:00-23:00) and Δi = 0 dB.
LAeq,night 8-hour night-average sound level. T0 = 28,800 s (23:00-07:00) and Δi = 0 dB.
LDEN Day-evening-night average sound level. T0 = 86,400 s, Δi = 5 dB in the evening (19:00-23:00), Δi = 10 dB at night (23:00-07:00), Δi = 0 dB otherwise.
Lmax,avg Average maximum sound level. It is calculated as follows:
$$ L_{max,avg}=10\log_{10}\left(\frac{1}{N}\sum_{i=1}^{N}10^{L_{Amax,i}/10}\right) $$
where LAmax,i is the maximum level of the i-th flight event and N is the number of events.
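A minimal sketch of how these cumulative metrics follow from single-event levels, transcribing the two formulas above; the SEL and LAmax values are illustrative.

import math

def laeq(sel_list, t0, weights=None):
    """Time-weighted equivalent level from single-event SELs (dB); weights
    are the time-of-day penalties Delta_i in dB."""
    weights = weights or [0.0] * len(sel_list)
    return 10 * math.log10(sum(10 ** ((s + w) / 10)
                               for s, w in zip(sel_list, weights)) / t0)

def lmax_avg(lamax_list):
    """Average maximum sound level from single-event LAmax values (dB)."""
    return 10 * math.log10(sum(10 ** (lev / 10) for lev in lamax_list) / len(lamax_list))

# Two daytime departures and one night arrival (illustrative values in dB)
sels, deltas = [92.0, 95.0, 88.0], [0.0, 0.0, 10.0]     # LDEN-style penalties
print(round(laeq(sels, t0=86_400, weights=deltas), 1))  # LDEN over 24 h
print(round(lmax_avg([78.0, 81.5, 74.0]), 1))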
Flight tracking data from FlightAware are available from the corresponding author on reasonable request. Aircraft model databases, airport data, and DEMs are available on websites Airlinerlist, OurAirports, and WebGIS, respectively. The ANP database is accessible upon permission from EUROCONTROL.
ICAO. (2019). Presentation of 2017 Air Transport Statistical Results. Retrieved April 17, 2019, from https://www.icao.int/annual-report-2017/Pages/the-world-of-air-transport-in-2017-statistical-results.aspx
EUROCONTROL. (2018). European Aviation in 2040 - Challenges of Growth. Retrieved April 2, 2019, from https://www.eurocontrol.int/articles/challenges-growth
Jimenez, E., Claro, J., Pinho de Sousa, J., & de Neufville, R. (2017). Dynamic evolution of European airport systems in the context of low-cost carriers growth. Journal of Air Transport Management, 64, 68–76. https://doi.org/10.1016/j.jairtraman.2017.06.027.
European Commission. (2011). Flightpath 2050 - Europe's vision for aviation. Publications Office of the European Union. https://doi.org/10.2777/50266.
Grampella, M., Lo, P. L., Martini, G., & Scotti, D. (2017). The impact of technology progress on aviation noise and emissions. Transp Res A, 103, 525–540. https://doi.org/10.1016/j.tra.2017.05.022.
Baharozu, E., Soykan, G., & Ozerdem, M. B. (2017). Future aircraft concept in terms of energy efficiency and environmental factors. Energy, 140, 1368–1377. https://doi.org/10.1016/j.energy.2017.09.007.
Staples, M. D., Suresh, P., Hileman, J. I., & Barrett, S. R. (2018). Aviation CO2 emissions reductions from the use of alternative jet fuels. Energy Policy, 114, 342–354. https://doi.org/10.1016/j.enpol.2017.12.007.
Filippone, A. (2014). Aircraft noise prediction. Progress in Aerospace Sciences, 68, 27–63. https://doi.org/10.1016/j.paerosci.2014.02.001.
Flightradar24. (2019). https://www.flightradar24.com/. Retrieved April 4, 2019
FlightAware. (2019). https://flightaware.com/. Retrieved April 3, 2019
European Commission. (2017). Commission implementing regulation (EU) 2017/386 of 6 March 2017 amending implementing regulation (EU) no 1207/2011 laying down requirements for the performance and the interoperability of surveillance for the single European sky. Official Journal of the European Union, 60, 34–36.
Sun, J., Ellerbroek, J., & Hoekstra, J. M. (2019). WRAP: An open-source kinematic aircraft performance model. Transportation Research Part C, 98, 118–138. https://doi.org/10.1016/j.trc.2018.11.009.
De Gennaro, M., Zanon, A., Kuehnelt, H., Pretto, M., & Giannattasio, P. (2018). Big data for low-carbon transport: an overview of applications for designing the future of road and aerial transport. 7th European Transport Research Arena 2018. Vienna. https://doi.org/10.5281/zenodo.1440969.
Pretto, M., Giannattasio, P., De Gennaro, M., Zanon, A., & Kühnelt, H. (2019). Web data for computing real-world noise from civil aviation. Transportation Research Part D, 69, 224-249. https://doi.org/10.1016/j.trd.2019.01.022.
European Civil Aviation Conference. (2016a). Doc 29: Report on Standard Method of Computing Noise Contours around Civil Airports (4th ed., Vol. 1: Applications guide). Retrieved November 2, 2017, from https://www.ecac-ceac.org/ecac-docs
EUROCONTROL. (2019). The Aircraft Noise and Performance (ANP) Database : An international data resource for aircraft noise modellers. Retrieved March 5, 2019, from https://www.aircraftnoisemodel.org/
European Civil Aviation Conference. (2016b). Doc 29: Report on Standard Method of Computing Noise Contours around Civil Airports (4th ed., Vol. 2: Technical guide). Retrieved November 2, 2017, from https://www.ecac-ceac.org/ecac-docs
OurAirports. (2019). http://ourairports.com/. Retrieved February 22, 2019
Airlinerlist. (2019). http://www.planelist.net/. Retrieved April 2, 2019
WebGIS. (2019). http://www.webgis.com/. Retrieved February 27, 2019
Federal Aviation Administration. (2017). Aviation Environmental Design Tool (AEDT) Version 2d, Technical Manual. Retrieved January 15, 2019, from https://aedt.faa.gov/2d_information.aspx
Department for Transport. (2017). UK Aviation Forecasts. Retrieved March 15, 2019, from https://www.gov.uk/government/publications/uk-aviation-forecasts-2017
Airbus. (2019). Orders and Deliveries - The month in review: March 2019. Retrieved April 13, 2019, from https://www.airbus.com/aircraft/market/orders-deliveries.html
STATFOR Team. (2019). EUROCONTROL Seven-Year Forecast February 2019. EUROCONTROL. Retrieved April 2, 2019, from https://www.eurocontrol.int/publications/seven-year-forecast-feb-2019
CAA. (2019). Airport data 2018. Retrieved March 27, 2019, from https://www.caa.co.uk/Data-and-analysis/UK-aviation-market/Airports/Datasets/UK-Airport-data/Airport-data-2018/
Environmental Research and Consultancy Department. (2019). Aviation Strategy: Noise Forecast and Analyses - Version 2. Civil Aviation Authority. Retrieved March 13, 2019, from http://publicapps.caa.co.uk/modalapplication.aspx?appid=11&mode=detail&id=8958
Ricardo Energy and Environment. (2017). A Review of the DfT Aviation Fleet Mix Model. Retrieved March 18, 2019, from https://www.gov.uk/government/publications/dft-aviation-fleet-mix-model-a-review
Fraport AG. (2019a). Monthly Traffic Results Frankfurt Airport. Retrieved April 11, 2019, from https://www.fraport.com/content/fraport/en/our-company/investors/traffic-figures.html
Fraport AG. (2019b). Visual Fact Book 2018. Retrieved April 11, 2019, from https://www.fraport.com/content/fraport/en/our-company/investors/events-und-publications/publications/visual-fact-book.html
Vienna International Airport. (2019). Traffic results. Retrieved April 15, 2019, from https://www.viennaairport.com/en/company/investor_relations/news/traffic_results
Flughafen Wien AG. (2011). Zukunft Flughafen 3. Piste. Retrieved April 8, 2019, from https://www.viennaairport.com/en/company/flughafen_wien_ag/third_runway_project
Dipartimento Politecnico di Ingegneria e Architettura, University of Udine, Via delle Scienze 206, 33100, Udine, Italy
Marco Pretto & Pietro Giannattasio
AIT Austrian Institute of Technology GmbH, Center for Low-Emission Transport, Giefinggasse 2, 1210, Vienna, Austria
Michele De Gennaro, Alessandro Zanon & Helmut Kuehnelt
MDG, MP and PG conceived the work, identified the key focus points, and defined the scenarios to be analysed. Under PG's supervision, MP devised and implemented the algorithms for fleet substitution and flight event generation, besides applying DEMs to account for the topography of airport areas. The results were obtained by MP and analysed by MP and PG, who also drafted the manuscript. MDG, AZ and HK provided a significant contribution towards the overall revision of the manuscript and the improvement of the data/results rendering format. All authors approved the paper.
Correspondence to Marco Pretto.
Pretto, M., Giannattasio, P., De Gennaro, M. et al. Forecasts of future scenarios for airport noise based on collection and processing of web data. Eur. Transp. Res. Rev. 12, 4 (2020) doi:10.1186/s12544-019-0389-x
Keywords: Web data, ECAC Doc.29 model, Aircraft fleet, Future air traffic
Highlights of the 2020 Transport Research Arena conference
High Dimensional Probability V: The Luminy Volume
Editor(s) Christian Houdré, Vladimir Koltchinskii, David M. Mason, Magda Peligrad
Inst. Math. Stat. (IMS) Collect., 5: 356pp. (2009). DOI: 10.1214/imsc/1265119251
The term High Dimensional Probability in the title of this volume refers to a circle of ideas and problems that originated in Probability in Banach Spaces and the Theory of Gaussian Processes more than forty years ago. Initially, the main focus was on the study of necessary and sufficient conditions for the continuity of Gaussian processes and of classical limit theorems: laws of large numbers, laws of the iterated logarithm and central limit theorems in Banach spaces.
Gradually, it was realized that solving these problems requires taking into account some important geometric structures associated with random variables in high dimensional and infinite dimensional spaces. For instance, in the case of Gaussian processes, it was understood that a natural way to characterize the properties of their sample paths (boundedness, continuity, etc.) is to relate them to certain geometric characteristics (metric entropy, majorizing measures, generic chaining) of the parameter space equipped with the metric induced by the covariance structure of the process. Similar considerations turned out to be very useful and powerful in the study of limit theorems in Banach spaces and empirical processes. It was also understood that the crux of the problem is related to rather general probabilistic phenomena in high dimensional spaces such as, for instance, measure concentration. Parallel developments occurred in some other areas of mathematics such as convex geometry, Banach spaces, asymptotic geometric analysis, combinatorics, random matrices and stochastic processes. Moreover, the methods of high dimensional probability were found to have a number of important applications in these areas as well as in Statistics and Computer Science. This breadth is very well illustrated by the contributions present in this volume.
Most of the papers in this volume were presented at the Vth International Conference on High Dimensional Probability (HDP V) held at le Centre International de Rencontres Mathématiques, in Luminy, France on May 26-May 30, 2008. This was the fifteenth in a series of conferences that began in Strasbourg in 1973 and continued with nine conferences on Probability in Banach Spaces and five conferences on High Dimensional Probability.
The participants of this conference are grateful for the support of the C.I.R.M., N.S.F. and N.S.A. and for the publication of the proceedings of HDP V by the I.M.S.
Institute of Mathematical Statistics Collections, Volume 5
Rights: Copyright © 2009, Institute of Mathematical Statistics
First available in Project Euclid: 2 February 2010
Digital Object Identifier: 10.1214/imsc/1265119251
Title and Copyright Pages
Institute of Mathematical Statistics Collections Vol. 5, i-ii (2009).
Institute of Mathematical Statistics Collections Vol. 5, iii-iv (2009).
Contributor's List
Institute of Mathematical Statistics Collections Vol. 5, v-vi (2009).
Institute of Mathematical Statistics Collections Vol. 5, vii-vii (2009).
Institute of Mathematical Statistics Collections Vol. 5, viii-viii (2009).
On weighted isoperimetric and Poincaré-type inequalities
Sergey G. Bobkov , Michel Ledoux
Institute of Mathematical Statistics Collections Vol. 5, 1-29 (2009). https://doi.org/10.1214/09-IMSCOLL501
KEYWORDS: Isoperimetric inequalities, weighted Poincaré and Cheeger-type inequalities, Pareto distributions, localization technique
Weighted isoperimetric and Poincaré-type inequalities are studied for κ-concave probability measures (in the hierarchy of convex measures).
A note on positive definite norm dependent functions
Alexander Koldobsky
Institute of Mathematical Statistics Collections Vol. 5, 30-36 (2009). https://doi.org/10.1214/09-IMSCOLL502
Let $K$ be an origin symmetric star body in $\mathbb{R}^n$. We prove, under very mild conditions on the function $f:[0,\infty)\to\mathbb{R}$, that if the function $f(\|x\|_K)$ is positive definite on $\mathbb{R}^n$, then the space $(\mathbb{R}^n,\|\cdot\|_K)$ embeds isometrically in $L_0$. This generalizes the solution to Schoenberg's problem and leads to progress in characterization of $n$-dimensional versions, i.e. random vectors $X=(X_1,\ldots,X_n)$ in $\mathbb{R}^n$ such that the random variables $\sum a_iX_i$ are identically distributed for all $a\in\mathbb{R}^n$, up to a constant depending on $\|a\|_K$ only.
Gaussian approximation of moments of sums of independent symmetric random variables with logarithmically concave tails
Rafał Latała
KEYWORDS: sums of independent random variables, moments, logarithmically concave tails, Gaussian approximation, 60E15, 60F05
We study how well moments of sums of independent symmetric random variables with logarithmically concave tails may be approximated by moments of Gaussian random variables.
Gaussian integrals involving absolute value functions
Wenbo V. Li , Ang Wei
We provide general formulas to compute the expectations of the absolute value and sign of Gaussian quadratic forms, i.e. $\mathbb{E}\,|\langle X,AX\rangle+\langle b,X\rangle+c|$ and $\mathbb{E}\,\operatorname{sgn}(\langle X,AX\rangle+\langle b,X\rangle+c)$ for a centered Gaussian random vector $X$, fixed matrix $A$, vector $b$ and constant $c$. Products of Gaussian quadratic forms are also discussed, followed by several interesting applications.
Weak invariance principle and exponential bounds for some special functions of intermittent maps
Jérôme Dedecker , Florence Merlevède
KEYWORDS: intermittency, weak invariance principle, exponential inequalities, 37E05, 60F17, 37C30
We consider a parametric class $T_\gamma$ of expanding maps of $[0,1]$ with a neutral fixed point at 0 for which there exists a unique invariant absolutely continuous probability measure $\nu_\gamma$ on $[0,1]$. On the probability space $([0,1],\nu_\gamma)$, we prove the weak invariance principle for the partial sums of $f\circ T_\gamma^i$ in some special cases involving non-standard normalization. We also prove new moment inequalities and exponential bounds for the partial sums of $f\circ T_\gamma^i$ when $f$ is a Hölder function such that $f(0)=\nu_\gamma(f)$.
Interpolation spaces and the CLT in Banach spaces
Jim Kuelbs , Joel Zinn
KEYWORDS: central limit theorems, best approximations, interpolation spaces, 60F05, 60F17
Necessary and sufficient conditions for the classical central limit theorem (CLT) for i.i.d. random vectors in an arbitrary separable Banach space require not only assumptions on the original distribution, but also on the sample. What we do here is to continue our study of the CLT in terms of the original distribution. Of course, some new ingredient must be introduced, so we allow slight modifications of the random vectors. In particular, we restrict our modifications to be continuous, and to be no larger than a fixed small number, or in some cases a fixed small proportion of the magnitude of the individual elements of the sample. We find that if we use certain interpolation space norms to measure the magnitude of such modifications, then the CLT can be improved. Examples of our result are also included.
Uniform Central Limit Theorems for pregaussian classes of functions
Dragan Radulović , Marten Wegkamp
Institute of Mathematical Statistics Collections Vol. 5, 84-102 (2009). https://doi.org/10.1214/09-IMSCOLL507
KEYWORDS: kernel density estimator, Fourier series density estimator, empirical processes, uniform central limit theorems, weak convergence
We study weak convergence of general (smoothed) empirical processes indexed by classes of functions $\mathcal{F}$ under minimal conditions. We present a general result that, applied to specific situations, enables us to prove uniform central limit theorems under a $P$-pregaussian assumption on $\mathcal{F}$ only.
A note on bounds for VC dimensions
Jon A. Wellner , Aad van der Vaart
Institute of Mathematical Statistics Collections Vol. 5, 103-107 (2009). https://doi.org/10.1214/09-IMSCOLL508
KEYWORDS: Vapnik-Chervonenkis class, combining classes, inequality, entropy, 60B99, 62G30
We provide bounds for the VC dimension of classes of sets formed by unions, intersections, and products of VC classes of sets $\mathcal{C}_1,\ldots,\mathcal{C}_m$.
Limit theorems and exponential inequalities for canonical U- and V-statistics of dependent trials
Igor S. Borisov , Nadezhda V. Volodko
KEYWORDS: stationary sequence of random variables, mixing, multiple orthogonal series, canonical U- and V-statistics, 60H05, 60F05, 62G20
The limit behavior is studied for the distributions of normalized U- and V-statistics of an arbitrary order with canonical (degenerate) kernels, based on samples of increasing sizes from a stationary sequence of observations satisfying φ- or α-mixing. The case of m-dependent sequences is separately studied. The corresponding limit distributions are represented as infinite multilinear forms of a centered Gaussian sequence with a known covariance matrix. Moreover, under φ-mixing, exponential inequalities are obtained for the distribution tails of these statistics with bounded kernels.
Functional Limit Laws of Strassen and Wichura type for multiple generations of branching processes
Jim Kuelbs , Anand N. Vidyashankar
KEYWORDS: multi-generational limit theorems, Strassen functional LIL, Chung-Wichura functional LIL, small ball probabilities, 60F10, 60F17, 60J80
This paper is concerned with the study of functional limit theorems constructed from multiple generations of a supercritical branching process. The results we present include infinite dimensional functional laws of Strassen and Chung-Wichura type in the space $(C_0[0,1])^{\infty}$.
On Stein's method for multivariate normal approximation
Elizabeth Meckes
KEYWORDS: multivariate analysis, normal approximation, Stein's method, eigenfunctions, Laplacian, 60F05, 60D05
The purpose of this paper is to synthesize the approaches taken by Chatterjee-Meckes and Reinert-Röllin in adapting Stein's method of exchangeable pairs for multivariate normal approximation. The more general linear regression condition of Reinert-Röllin allows for wider applicability of the method, while the method of bounding the solution of the Stein equation due to Chatterjee-Meckes allows for improved convergence rates. Two abstract normal approximation theorems are proved, one for use when the underlying symmetries of the random variables are discrete, and one for use in contexts in which continuous symmetry groups are present. A first application is presented to projections of exchangeable random vectors in ℝn onto one-dimensional subspaces. The application to runs on the line from Reinert-Röllin is reworked to demonstrate the improvement in convergence rates, and a new application to joint value distributions of eigenfunctions of the Laplace-Beltrami operator on a compact Riemannian manifold is presented.
A remark on the maximum eigenvalue for circulant matrices
Włodek Bryc , Sunder Sethuraman
KEYWORDS: maximum, eigenvalue, circulant, Gumbel, random, matrix, 15A52
We point out that the method of Davis-Mikosch [Ann. Probab. 27 (1999) 522–536] gives, for a symmetric circulant $n\times n$ matrix composed of i.i.d. entries with mean 0 and finite $(2+\delta)$-moments in the first half-row, that the maximum eigenvalue is of order $\sqrt{2n \log n}$ and the fluctuations are Gumbel.
On the longest increasing subsequence for finite and countable alphabets
Christian Houdré , Trevis J. Litherland
KEYWORDS: longest increasing subsequence, Brownian functional, Functional Central Limit Theorem, Tracy-Widom distribution, 60C05, 60F05, 60F17, 60G15, 60G17, 05A16
Let $X_1, X_2, \ldots, X_n, \ldots$ be a sequence of iid random variables with values in a finite ordered alphabet $\{\alpha_1,\ldots,\alpha_m\}$. Let $LI_n$ be the length of the longest increasing subsequence of $X_1, X_2, \ldots, X_n$. Properly centered and normalized, the limiting distribution of $LI_n$ is expressed as various functionals of $m$ and $(m-1)$-dimensional Brownian motions. These expressions are then related to similar functionals appearing in queueing theory, allowing us to further describe asymptotic behaviors when, in turn, $m$ grows without bound. The finite alphabet results are then used to treat the countable (infinite) alphabet case.
Some results on random circulant matrices
Mark W. Meckes
KEYWORDS: random matrix, circulant matrix, eigenvalues, 15A52, 60F05
This paper considers random (non-Hermitian) circulant matrices, and proves several results analogous to recent theorems on non-Hermitian random matrices with independent entries. In particular, the limiting spectral distribution of a random circulant matrix is shown to be complex normal, and bounds are given for the probability that a circulant sign matrix is singular.
Conditional expectations and martingales in the fractional Brownian field
Vladimir Dobrić , Francisco M. Ojeda
KEYWORDS: fractional Brownian motions, fractional Brownian field, fundamental martingales, 60G15, 60G18, 60G44
Conditional expectations of a fractional Brownian motion with Hurst index H respect to the filtration of a fractional Brownian motion with Hurst index H′, both contained in the fractional Brownian field, are studied. A stochastic integral representation of those processes is constructed from the covariance structure of the underlying fractional Brownian field. As processes, the conditional expectations contain martingale components and for dual pairs of Hurst indices the processes become pure martingales which, up to a multiplicative constant, coincide with the fundamental martingales of fractional Brownian motions.
Stochastic compactness of Lévy processes
Ross Maller , David M. Mason
KEYWORDS: Lévy processes, infinitely divisible, Feller class, centered Feller class, domain of attraction, large times, 62E17, 62E20, 60F15
We characterize stochastic compactness and convergence in distribution of a Lévy process at "large times", i.e., as t→∞, by properties of its associated Lévy measure, using a mechanism for transferring between discrete (random walk) and continuous time results. We thereby obtain also domain of attraction characterisations for the process at large times. As an illustration of the stochastic compactness ideas, semi-stable laws are considered.
An almost sure limit theorem for Wick powers of Gaussian differences quotients
Michael B. Marcus , Jay Rosen
Let $G=\{G(x),\ x\in\mathbb{R}_{+}\}$, $G(0)=0$, be a mean zero Gaussian process with $E(G(x)-G(y))^{2}=\sigma^{2}(x-y)$. Let $\rho(x)=\frac{1}{2}\,\frac{d^{2}}{dx^{2}}\sigma^{2}(x)$, $x\neq 0$. When $\rho^{k}$ is integrable at zero and satisfies some additional regularity conditions,
$$\lim_{h\downarrow 0}\int {:}\left(\frac{G(x+h)-G(x)}{h}\right)^{k}{:}\ g(x)\,dx = {:}(G')^{k}{:}\,(g) \quad \text{a.s.}$$
for all $g\in\mathcal{B}_{0}(\mathbb{R}_{+})$, the set of bounded Lebesgue measurable functions on $\mathbb{R}_{+}$ with compact support. Here $G'$ is a generalized derivative of $G$ and ${:}(\,\cdot\,)^{k}{:}$ is the $k$-th order Wick power.
Bernstein inequality and moderate deviations under strong mixing conditions
Florence Merlevède , Magda Peligrad , Emmanuel Rio
KEYWORDS: deviation inequality, moderate deviations principle, weakly dependent sequences, strong mixing, 60E15, 60F10, 62G07
In this paper we obtain a Bernstein type inequality for a class of weakly dependent and bounded random variables. The proofs lead to a moderate deviations principle for sums of bounded random variables with exponential decay of the strong mixing coefficients that complements the large deviation result obtained by Bryc and Dembo (1998) under superexponential mixing rates.
Asymptotic distribution of the most powerful invariant test for invariant families
Miguel A. Arcones
KEYWORDS: invariant tests, separate families, most powerful test, 62F05, 60F03, 62E20
We obtain the limit distribution of the test statistic of the most powerful invariant test for location families of densities. As an application, we obtain the consistency of this test. From these results similar results are obtained for the test statistic of the most powerful invariant test for scale families.
Uniform in bandwidth consistency of kernel regression estimators at a fixed point
Julia Dony , Uwe Einmahl
KEYWORDS: kernel estimation, Nadaraya–Watson, regression, uniform in bandwidth, consistency, empirical processes, exponential inequalities, moment inequalities, 62G08
We consider pointwise consistency properties of kernel regression function type estimators where the bandwidth sequence is not necessarily deterministic. In some recent papers uniform convergence rates over compact sets have been derived for such estimators via empirical process theory. We now show that it is possible to get optimal results in the pointwise case as well. The main new tool for the present work is a general moment bound for empirical processes which may be of independent interest.
Asymptotics of statistical estimators of integral curves
Vladimir Koltchinskii , Lyudmila Sakhanenko
KEYWORDS: integral curves, Nadaraya-Watson estimators, optimal convergence rates, diffusion tensor imaging, 60K35
The problem of estimation of integral curves of a vector field based on its noisy observations is studied. For Nadaraya-Watson type estimators, several results on asymptotics of the shortest distance from the estimated curve to a specified region have been proved. The problem is motivated by applications in diffusion tensor imaging where it is of importance to test various hypotheses of geometric nature based on the estimated distances.
Uniform central limit theorems for sieved maximum likelihood and trigonometric series estimators on the unit circle
Richard Nickl
KEYWORDS: density estimation, orthogonal series estimator, sieved maximum likelihood estimator, uniform central limit theorem, 60F17, 46E35
Given an i.i.d. sample from the law ℙ on the unit circle, we obtain uniform central limit theorems for the random measures induced by trigonometric series and sieved maximum likelihood density estimators. The limit theorems are uniform over balls in Sobolev-Hilbert spaces of order s>1/2.
Well-posedness for the three dimensional stochastic planetary geostrophic equations of large-scale ocean circulation
Bo You
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, P. R. China
Received March 2020 Revised August 2020 Published September 2020
Fund Project: This work was supported by the National Science Foundation of China Grant (11401459, 11871389), the Natural Science Foundation of Shaanxi Province (2018JM1012) and the Fundamental Research Funds for the Central Universities (xjj2018088)
The objective of this paper is to study the well-posedness of solutions of the three dimensional planetary geostrophic equations of large-scale ocean circulation with additive noise. Since the strong coupling terms and the noise term create some difficulties in showing the existence of weak solutions, we first show the existence of weak solutions by monotonicity methods when the initial data satisfies some "regular" condition. For general initial data, we establish the existence of weak solutions by taking a sequence of "regular" initial data and proving convergence in probability, as well as some weak convergence, of the corresponding solution sequences. Finally, we establish the existence of weak $ \mathcal{D} $-pullback mean random attractors in the framework developed in [11,25].
Keywords: Well-posedness, Planetary geostrophic equations, Additive noise, Weak $ \mathcal{D} $-pullback mean random attractors.
Mathematics Subject Classification: Primary: 35R60, 37L55, 60H30; Secondary: 35Q86.
Citation: Bo You. Well-posedness for the three dimensional stochastic planetary geostrophic equations of large-scale ocean circulation. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020332
L. Arnold, Random Dynamical Systems, Springer-Verlag, Berlin, 1998. doi: 10.1007/978-3-662-12878-7. Google Scholar
V. I. Arnol'd, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer-Verlag, New York, 1988. doi: 10.1007/978-1-4612-1037-5. Google Scholar
Z. Brzeźniak, E. Hausenblas and J. H. Zhu, 2D stochastic Navier-Stokes equations driven by jump noise, Nonlinear Anal., 79 (2013), 122-139. doi: 10.1016/j.na.2012.10.011. Google Scholar
C. S. Cao and E. S. Titi, Global well-posedness and finite-dimensional global attractor for a 3-D planetary geostrophic viscous model, Comm. Pure and Appl. Math., 56 (2003), 198-233. doi: 10.1002/cpa.10056. Google Scholar
H. Crauel, A. Debussche and F. Flandoli, Random attractors, J. Dynam. Differential Equations, 9 (1997), 307-341. doi: 10.1007/BF02219225. Google Scholar
Z. Dong and R. R. Zhang, Long-time behavior of 3D stochastic planetary geostrophic viscous model, Stoch. Dyn., 18 (2018), 1850038, 48pp. doi: 10.1142/S0219493718500387. Google Scholar
J. P. Eckmann and D. Ruelle, Ergodic theory of chaos and strange attractors, Rev. Modern Phys., 57 (1985), 617-656. doi: 10.1103/RevModPhys.57.617. Google Scholar
H. J. Gao and H. Liu, Well-posedness and invariant measures for a class of stochastic 3D Navier-Stokes equations with damping driven by jump noise, J. Differential Equations, 267 (2019), 5938-5975. doi: 10.1016/j.jde.2019.06.015. Google Scholar
N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland Publishing, Tokyo, 1989. Google Scholar
P. E. Kloeden and J. A. Langa, Flattening, squeezing and the existence of random attractors, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 463 (2007), 163-181. doi: 10.1098/rspa.2006.1753. Google Scholar
P. E. Kloeden and T. Lorenz, Mean-square random dynamical systems, J. Differential Equations, 253 (2012), 1422-1438. doi: 10.1016/j.jde.2012.05.016. Google Scholar
M. Metivier, Stochastic Partial Differential Equations in Infinite Dimensional Spaces, Quaderni, Scuola Normale Superiore di Pisa, 1988. Google Scholar
J. Pedlosky, The equations for geostrophic motion in the ocean, Journal of Physical Oceanography, 14 (1984), 448-455. doi: 10.1175/1520-0485(1984)014<0448:TEFGMI>2.0.CO;2. Google Scholar
J. Pedlosky, Geophysical Fluid Dynamics, Springer-Verlag, New York, 1987. Google Scholar
R. M. Samelson, R. Temam and S. Wang, Some mathematical properties of the planetary geostrophic equations for large-scale ocean circulation, Appl. Anal., 70 (1998), 147-173. doi: 10.1080/00036819808840682. Google Scholar
R. M. Samelson, R. Temam and S. Wang, Remarks on the planetary geostrophic model of gyre scale ocean circulation, Differential Integral Equations, 13 (2000), 1-14. Google Scholar
R. M. Samelson and G. K. Vallis, A simple friction and diffusion scheme for planetary geostrophic basin models, Journal of Physical Oceanography, 27 (1997), 186-194. doi: 10.1175/1520-0485(1997)027<0186:ASFADS>2.0.CO;2. Google Scholar
B. Schmalfuss, Qualitative properties for the stochastic Navier-Stokes equation, Nonlinear Anal., 28 (1997), 1545-1563. doi: 10.1016/S0362-546X(96)00015-6. Google Scholar
A. V. Skorohod, Studies in the Theory of Random Processes, Addison-Wesley Publishing Co., Inc., Reading, Mass, 1965. Google Scholar
R. Temam, Infinite-dimensional Dynamical Systems in Mechanics and Physics, Springer-Verlag, New York, 1997. doi: 10.1007/978-1-4612-0645-3. Google Scholar
B. Wang, Sufficient and necessary criteria for existence of pullback attractors for non-compact random dynamical systems, J. Differential Equations, 253 (2012), 1544-1583. doi: 10.1016/j.jde.2012.05.015. Google Scholar
B. Wang, Existence and upper semicontinuity of attractors for stochastic equations with deterministic non-autonomous terms, Stoch. Dyn., 14 (2014), 1450009, 31pp. doi: 10.1142/S0219493714500099. Google Scholar
B. Wang, Random attractors for non-autonomous stochastic wave equations with multiplicative noise, Discrete Contin. Dyn. Syst., 34 (2014), 269-300. doi: 10.3934/dcds.2014.34.269. Google Scholar
B. Wang, Dynamics of fractional stochastic reaction-diffusion equations on unbounded domains driven by nonlinear noise, J. Differential Equations, 268 (2019), 1-59. doi: 10.1016/j.jde.2019.08.007. Google Scholar
B. Wang, Weak pullback attractors for mean random dynamical systems in Bochner spaces, J. Dynam. Differential Equations, 31 (2019), 2177-2204. doi: 10.1007/s10884-018-9696-5. Google Scholar
B. You, Random attractors for the three dimensional stochastical planetary geostrophic equations of large-scale ocean circulation, Stochastics, 89 (2017), 766-785. doi: 10.1080/17442508.2016.1276913. Google Scholar
B. You, Large deviation principle for the three dimensional planetary geostrophic equations of large-scale ocean circulation with small multiplicative noise, arXiv, (2020), 3312831. Google Scholar
B. You and F. Li, Random attractor for the three-dimensional planetary geostrophic equations of large-scale ocean circulation with small multiplicative noise, Stoch. Anal. Appl., 34 (2016), 278-292. doi: 10.1080/07362994.2015.1126184. Google Scholar
Beyond the pixel plane: sensing and learning in 3D
Imagine you're building a self-driving car that needs to understand its surroundings. How would you enable your car to perceive pedestrians, bikers, and other vehicles around it in order to move safely? You could use a camera for this, but that doesn't seem particularly effective: you'd be taking a 3D environment, "squashing" it down into a 2D image captured from the camera, and then trying to recover the 3D information you actually care about (like the distances to pedestrians or cars in front of you) from that 2D image. By squashing your 3D surroundings down to a 2D image, you're throwing out a lot of the information that matters most to you. Trying to piece this information back together is difficult and, even for state-of-the-art algorithms, error-prone.
Instead, you'd ideally be able to augment your 2D view of the world with 3D data. Rather than trying to estimate distances to pedestrians or other vehicles from a 2D image, you would be able to locate these objects directly through your sensor. But now another part of the pipeline becomes difficult: the perception. How do you actually recognize objects like people, bicyclists, and cars in 3D data? Traditional deep learning techniques like convolutional neural networks (CNNs), which would make identifying these objects in a 2D image straightforward, need to be adapted to work in 3D. Luckily, the problem of perception in 3D has been studied quite a bit over the past few years, and our mission in this article will be to give a brief overview of this work.
In particular, we'll focus on recent deep learning techniques that enable 3D object classification and semantic segmentation. We'll begin by reviewing some background information on common ways to capture and represent 3D data. We'll then describe fundamental deep learning methods for three different representations of 3D data. Finally, we'll describe promising new research directions and conclude with our perspective on where the field is headed.
How do we capture and represent 3D data?
It's clear that we need computer vision methods that can operate directly in 3D, but this presents three clear challenges: sensing, representing, and understanding 3D data.
The process of capturing 3D data is complex. While 2D cameras are inexpensive and widespread, specialized hardware setups are typically needed for 3D sensing.
Stereo vision uses multiple cameras and measures the shift in perceived object position to compute depth information (Source: University of Edinburgh)
Stereo fixes two or more cameras at specific positions relative to one another and uses this setup to capture different images of a scene, match the corresponding pixels, and compute how each pixel's position differs between the images to calculate its position in 3D space. This is roughly how humans perceive the world — our eyes capture two separate "images" of the world, and then our brains look at how an object's position differs between the views from our left and right eyes to determine its 3D position. Stereo is attractive because it involves simple hardware — just two or more ordinary cameras. However, the approach isn't great in applications where accuracy or speed matter, since using visual details to match corresponding points between the camera images is not only computationally expensive but also error-prone in environments that are textureless or visually repetitive.
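To make the disparity-to-depth relationship concrete, here is a minimal sketch (Python/NumPy) of converting a stereo disparity map to metric depth via depth = f · B / disparity. The focal length, baseline, and disparity values are made-up numbers for illustration, not any particular stereo pipeline.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) to metric depth via depth = f * B / disparity."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)   # unmatched pixels get "infinite" depth
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Toy 2x3 disparity map from a hypothetical rig with f = 700 px and B = 0.12 m.
disp = np.array([[35.0, 70.0, 0.0],
                 [14.0, 28.0, 56.0]])
print(disparity_to_depth(disp, focal_length_px=700.0, baseline_m=0.12))
```

The hard part in practice is producing the disparity map itself (the matching step described above); the conversion to depth is the easy, deterministic tail end of the pipeline.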
RGB-D cameras output a four-channel image that contains color information along with per-pixel depth (Source: Kyushu University)
RGB-D involves using a special type of camera that captures depth information ("D") in addition to a color image ("RGB"). Specifically, it captures the same type of color image you'd get from a normal 2D camera, but, for some subset of the pixels, also tells you how far in front of the camera the object is. Internally, most RGB-D sensors work through either "structured light," which projects an infrared pattern onto a scene and senses how that pattern has warped onto the geometric surfaces, or "time of flight," which looks at how long projected infrared light takes to return to the camera. Some RGB-D cameras you might have heard of include the Microsoft Kinect and the iPhone X's Face ID sensor. RGB-D is great because these sensors are relatively small and low-cost but also fast and immune to visual matching errors. However, RGB-D cameras often have many holes in their depth output due to occlusion (where objects in the foreground block projection onto objects behind them), pattern sensing failures, and range issues (both projection and sensing become difficult further away from the camera).
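The per-pixel depth an RGB-D camera produces is usually back-projected into 3D before further processing. Below is a small illustrative sketch (Python/NumPy, pinhole camera model, hypothetical intrinsics) of that conversion; a real pipeline would also carry the color channels along and handle sensor-specific distortion.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an (N, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Pixels with missing depth (<= 0), e.g. occlusion holes, are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Toy 4x4 depth map with two holes (zeros) and hypothetical intrinsics.
depth = np.array([[1.2, 1.2, 0.0, 1.3],
                  [1.2, 1.1, 1.1, 1.3],
                  [0.0, 1.1, 1.0, 1.2],
                  [1.4, 1.3, 1.2, 1.2]])
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (14, 3): the two zero-depth pixels were discarded
```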
LIDAR uses several laser beams (sensing in concentric circles) to directly perceive the geometric structure of an environment (Source: Giphy)
LiDAR involves firing rapid laser pulses at objects and measuring how much time they take to return to the sensor. This is similar to the "time of flight" technology for RGB-D cameras we described above, but LiDAR has significantly longer range, captures many more points, and is much more robust to interference from other light sources. Most 3D LiDAR sensors today have several (up to 64) beams aligned vertically, spinning rapidly to see in all directions around the sensor. These are the sensors used in most self-driving cars because of their accuracy, range, and robustness, but the problem with LiDAR sensors is that they're often large, heavy, and extremely expensive (the 64-beam sensor that most self-driving cars use costs $75,000!). As a result, many companies are currently trying to develop cheaper "solid state LiDAR" systems that can sense in 3D without having to spin.
3D Representations
Once you've captured 3D data, you need to represent it in a format that makes sense as an input to the processing pipeline you're building. There are four main representations you should know:
The different representations of 3D data. (a) point cloud (source: Caltech), (b) voxel grid (source: IIT Kharagpur), (c) triangle mesh (source: UW), (d) multi-view representation (source: Stanford)
a. Point clouds are simply collections of points in 3D space; each point is specified by an (xyz) location, optionally along with some other attributes (like rgb color). They're the raw form that LiDAR data is captured in, and stereo and RGB-D data (which consist of an image labeled with per-pixel depth values) are usually converted into point clouds before further processing.
b. Voxel grids are derived from point clouds. "Voxels" are like pixels in 3D; think of voxel grids as quantized, fixed-sized point clouds. Whereas point clouds can have an infinite number of points anywhere in space with floating-point coordinates, voxel grids are 3D grids in which each cell, or "voxel," has a fixed size and discrete coordinates (a simple voxelization sketch follows this list).
c. Polygon meshes consist of a set of polygonal faces with shared vertices that approximate a geometric surface. Think of point clouds as a collection of sampled 3D points from an underlying continuous geometric surface; polygon meshes aim to represent those underlying surfaces in a way that can be easily rendered. While originally created for computer graphics, polygon meshes can also be useful for 3D vision. There are several methods to obtain polygon meshes from point clouds, including Kazhdan et al.'s Poisson surface reconstruction[1] (2006).
d. Multi-view representations are collections of 2D images of a rendered polygon mesh captured from different simulated viewpoints ("virtual cameras") to convey the 3D geometry in a simple way. The difference between simply capturing images from multiple cameras (like in stereo) and constructing a multi-view representation is that multi-view requires actually building a full 3D model and rendering it from several arbitrary viewpoints to fully convey the underlying geometry. Unlike the other three representations above, which are used for both storage and processing of 3D data, multi-view representations are typically only used to turn 3D data into an easy format for processing or visualization.
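As referenced in (b) above, converting a point cloud into a voxel occupancy grid is essentially a quantization step. The sketch below (Python/NumPy) shows one simple way to do it; the 32³ resolution and the normalization scheme are arbitrary illustrative choices.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Quantize an (N, 3) point cloud into a binary occupancy grid.

    The cloud is shifted and isotropically scaled to fit the grid, then each
    point is binned into the voxel containing it; a voxel is 1 if at least
    one point falls inside it.
    """
    pts = np.asarray(points, dtype=np.float64)
    mins = pts.min(axis=0)
    extent = (pts.max(axis=0) - mins).max() + 1e-9   # preserve aspect ratio
    idx = np.floor((pts - mins) / extent * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# 1000 random points on a unit sphere -> 32^3 occupancy grid
pts = np.random.randn(1000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
occ = voxelize(pts, grid_size=32)
print(occ.shape, occ.sum())
```

Note how the quantization loses information: nearby points collapse into the same voxel, which is exactly the resolution trade-off discussed later for voxel-based networks.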
Now that you've turned your 3D data into a digestible format, you need to actually build out a computer vision pipeline to understand it. The problem here is that extending traditional deep learning techniques that work well on 2D images (like CNNs) to operate on 3D data can be tricky depending on the representation of that data, making conventional tasks like object detection or segmentation challenging.
Learning with multi-view inputs
Using a multi-view representation of 3D data is the simplest way to adapt 2D deep learning techniques to 3D. It's a clever way to transform the problem of 3D perception into one of 2D perception, but in a way that still allows you to reason about the 3D geometry of an object. One of the earliest deep learning-based works to use this idea was Su et al.'s multi-view CNN[2] (2015), a simple yet effective architecture that can learn a feature descriptor from multiple 2D views of a 3D object. This approach improved performance on the object classification task compared to using a single 2D image. It works by feeding individual images into a VGG network pre-trained on ImageNet to extract salient features, pooling the resulting activation maps, and passing this pooled information into additional convolutional layers for further feature learning.
Multi-view CNN architecture (Source: paper)
Still, multi-view image representations have numerous limitations. The main issue is that you aren't truly learning in 3D — a fixed number of 2D views is still just an imperfect approximation of an underlying 3D structure. As a result, tasks like semantic segmentation, especially across more complex objects and scenes, become challenging because of the limited feature information gained from 2D images. Moreover, this form of visualizing 3D data is not scalable for computationally constrained tasks like autonomous driving and virtual reality — keep in mind that generating a multi-view representation requires rendering a full 3D model and simulating several arbitrary viewpoints. Ultimately, multi-view learning is faced with many drawbacks that motivate research on methods that learn directly from 3D data.
Learning with volumetric representations
Learning with voxel grids solves the main drawbacks of multi-view representations. Voxel grids bridge the gap between 2D and 3D vision — they're the closest 3D representation to images, making it relatively easy to adapt 2D deep learning concepts (like the convolution operator) to 3D. Maturana and Scherer's VoxNet[3] (2015) was one of the first deep learning methods to achieve compelling results on the object classification task given a voxel grid input. VoxNet operates on probabilistic occupancy grids, in which each voxel contains the probability that that voxel is occupied in space. A benefit of this approach is that it allows the network to differentiate between voxels that are known to be free (e.g., voxels a LiDAR beam passed through) and voxels whose occupancy is unknown (e.g., voxels behind where a LiDAR beam hit).
VoxNet architecture (Source: paper)
VoxNet's architecture itself is fairly straightforward, consisting of two convolutional layers, a max pooling layer, and two fully-connected layers to compute an output class score vector. This network is much shallower and has far fewer parameters than most state-of-the-art image classification networks, but it was selected from a stochastic search over hundreds of possible CNN architectures. Since voxel grids are so similar to images, the actual strided convolution and pooling operators they employ are trivial adaptations of these operators from working on 2D pixels to 3D voxels; the convolution operator uses a $d \times d \times d \times c$ kernel rather than the $d \times d \times c$ kernel used in 2D CNNs, and the pooling operator considers non-overlapping 3D blocks of voxels rather than 2D blocks of pixels.
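For concreteness, here is a rough PyTorch sketch of a VoxNet-style classifier operating on 32³ occupancy grids. The layer sizes, strides, and activations are approximations chosen to match the description above (two 3D convolutions, one max-pooling layer, two fully-connected layers), not the exact published configuration.

```python
import torch
import torch.nn as nn

class VoxNetLike(nn.Module):
    """A minimal VoxNet-style classifier for 32^3 occupancy grids (illustrative)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2),  # 32^3 -> 14^3
            nn.LeakyReLU(0.1),
            nn.Conv3d(32, 32, kernel_size=3),           # 14^3 -> 12^3
            nn.LeakyReLU(0.1),
            nn.MaxPool3d(2),                             # 12^3 -> 6^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6 * 6, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):          # x: (batch, 1, 32, 32, 32) occupancy grid
        return self.classifier(self.features(x))

model = VoxNetLike(num_classes=10)
logits = model(torch.rand(4, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The only real change from a 2D image classifier is the switch from `Conv2d`/`MaxPool2d` to their 3D counterparts, which is exactly the point made above about voxel grids bridging 2D and 3D vision.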
One issue with VoxNet is that the architecture isn't inherently rotation-invariant. While the authors reasonably assume that the sensor is kept upright so that the $z$ axis of the voxel grid is aligned with the direction of gravity, no such assumptions can be made about rotation about the $z$ axis — an object from behind is still that same object, even though the geometry in the voxel grid would interact with the learned convolution kernel very differently. To solve this, they employ a simple data augmentation strategy. During training, they rotate each model several times and train on all copies; then, at test time, they pool the output of the final fully-connected layer across several rotations of the input. They note that this approach led to similar performance but faster convergence compared to pooling the output of intermediate convolutional layers like Su et al.'s multi-view CNNs do in their "view pooling" step. In this way, VoxNet learns rotation invariance by sharing the same learned convolutional kernel weights across different rotations of the input voxel grid.
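A simplified illustration of this augmentation-plus-pooling idea is sketched below (Python/NumPy). For simplicity it uses only 90° rotations about the z axis via `np.rot90` and a dummy scoring function standing in for a trained network; the actual VoxNet pipeline rotates the underlying 3D model by finer increments before voxelization.

```python
import numpy as np

def z_rotations(voxel_grid, n_rotations=4):
    """Return n_rotations copies of a voxel grid rotated about the z axis.

    Assumes axes are ordered (x, y, z), so rotation happens in the (x, y) plane.
    """
    return [np.rot90(voxel_grid, k, axes=(0, 1)) for k in range(n_rotations)]

def predict_with_rotation_pooling(score_fn, voxel_grid, n_rotations=4):
    """Average class scores over rotated copies of the input (test-time pooling)."""
    scores = [score_fn(g) for g in z_rotations(voxel_grid, n_rotations)]
    return np.mean(scores, axis=0)

# Dummy score function and random occupancy grid, just to show the mechanics.
rng = np.random.default_rng(0)
dummy_scores = lambda grid: rng.random(10)
grid = (rng.random((32, 32, 32)) > 0.9).astype(np.uint8)
print(predict_with_rotation_pooling(dummy_scores, grid).shape)  # (10,)
```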
VoxNet represents a huge step towards true 3D learning, but voxel grids still have a number of drawbacks. First, they lose resolution compared to point clouds, since several distinct points representing intricate structures will be binned into one voxel if they're close together. At the same time, voxel grids can lead to unnecessarily high memory usage compared to point clouds in sparse environments, since they actively consume memory to represent free and unknown space whereas point clouds contain only known points.
Learning with point clouds
PointNet
In light of these issues with voxel-based approaches, recent work has focused on architectures that operate directly on raw point clouds. Most notably, Qi et al.'s PointNet[4] (2016) was one of the first proposed methods for handling this form of irregular 3D data. As the authors note, however, point clouds are simply unordered sets of points represented in 3D by their xyz locations, which creates a unique challenge: given $N$ points in a point cloud, the network needs to learn features that are invariant to all $N!$ permutations of the input, since the ordering of points fed into a network doesn't affect the underlying geometry. In addition, the network should be robust to transformations of the point cloud — rotations, translations, and scaling should not impact prediction.
To ensure invariance across input ordering, the key insight behind PointNet is using a simple symmetric function that produces a consistent output for any ordering of the input (examples in this class of functions include addition and multiplication). Guided by this intuition, the basic module behind the PointNet architecture (called PointNet Vanilla) is defined as follows:
$$f(x_1, \ldots, x_n) = \gamma \odot g(h(x_1), \ldots, h(x_n))$$
where $f$ is a symmetric function that maps the input point set to a $k$-dimensional vector (for object classification). The idea is that $f$ can be approximated by composing a per-point transformation $h$ with another symmetric function $g$. Here, $h$ is a multi-layer perceptron (MLP) that maps individual input points (and their corresponding features such as xyz position, color, surface normals, etc.) into a higher-dimensional latent space, and a max-pooling operation serves as the symmetric function $g$, aggregating the learned per-point features into a global descriptor for the point cloud. This single feature vector is passed into $\gamma$, another MLP that outputs predictions for objects.
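The sketch below (PyTorch; the layer widths are illustrative, not the published ones) shows how little code the vanilla formulation requires: a shared per-point MLP $h$, a max-pool over the point dimension as the symmetric function $g$, and a final MLP $\gamma$.

```python
import torch
import torch.nn as nn

class PointNetVanilla(nn.Module):
    """Minimal 'vanilla' PointNet: shared per-point MLP h, max-pool g, MLP gamma.

    Input: (batch, N, 3) point clouds; output: (batch, num_classes) scores.
    Taking the max over the point dimension makes the output invariant to
    the ordering of the N input points.
    """
    def __init__(self, num_classes=10):
        super().__init__()
        self.h = nn.Sequential(        # applied independently to every point
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 1024), nn.ReLU(),
        )
        self.gamma = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):                      # points: (B, N, 3)
        per_point = self.h(points)                  # (B, N, 1024)
        global_feat = per_point.max(dim=1).values   # symmetric function g
        return self.gamma(global_feat)              # (B, num_classes)

net = PointNetVanilla(num_classes=10)
print(net(torch.rand(2, 500, 3)).shape)  # torch.Size([2, 10])
```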
To address the challenge of learning representations that are invariant to geometric transformations of the point cloud, PointNet employs a mini-network, called T-Net, which applies an affine transformation onto the input point cloud. The concept is similar to Jaderberg et al.'s spatial transformer networks[5] (2016) but is much simpler because there is no need for defining a new type of layer. T-Net consists of learnable parameters that enable PointNet to transform the input point cloud into a fixed, canonical space — ensuring that the overall network is robust to even the slightest of variations.
PointNet architecture (Source: paper)
The overall PointNet architecture integrates the vanilla approach and T-Net with multiple MLP layers that create feature representations for the point clouds. However, beyond just object classification, PointNet also enables semantic segmentation of both objects and scenes. To accomplish this, the architecture combines the global feature vector from the max pooling symmetric function with per-point features learned after the input data is passed through a few MLPs. By concatenating these two vectors, each point is aware of both its global semantics and its local features, enabling the network to learn additional, more meaningful features that help with segmentation.
Semantic segmentation results of indoor scenes using PointNet (Source: paper)
PointNet++
Despite the impressive results with PointNet, one of its primary drawbacks is that the architecture fails to capture the underlying local structure within neighborhoods of points — analogous to the way CNNs extract features from progressively larger receptive fields in images. To address this issue, Qi et al. developed PointNet++[6] (2017), a successor to the PointNet architecture that is also capable of learning features from local regions within a point cloud. The basis of this method is a hierarchical feature learning layer with three key steps: it (1) samples points to serve as centroids of local regions, (2) groups neighboring points in these local regions based on distance from the centroid, and (3) encodes features for these regions using a mini-PointNet.
These steps are progressively repeated so that features are learned across varying sizes of point groups within the point cloud. In doing so, the network can better understand the underlying relationships within local clusters of points across the entire point cloud — ultimately aiding generalization performance. The results of this work demonstrate that PointNet++ yields significant improvements over existing methods, including PointNet, and achieved state-of-the-art performance on 3D point cloud analysis benchmarks (ModelNet40 and ShapeNet).
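The sampling and grouping steps can be illustrated with a few lines of NumPy. The sketch below implements greedy farthest point sampling and a ball query; the radius, group size, number of centroids, and starting point are arbitrary illustrative choices, and real implementations use batched GPU kernels.

```python
import numpy as np

def farthest_point_sampling(points, n_centroids):
    """Greedy FPS: iteratively pick the point farthest from all chosen centroids."""
    pts = np.asarray(points)
    chosen = [0]                                    # start from an arbitrary point
    dists = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(n_centroids - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(pts - pts[nxt], axis=1))
    return np.array(chosen)

def ball_query(points, centroid, radius, max_pts=32):
    """Indices of up to max_pts points within `radius` of a centroid."""
    d = np.linalg.norm(points - centroid, axis=1)
    return np.where(d < radius)[0][:max_pts]

pts = np.random.rand(2048, 3)
centroid_idx = farthest_point_sampling(pts, n_centroids=128)
groups = [ball_query(pts, pts[i], radius=0.1) for i in centroid_idx]
print(len(groups), groups[0].shape)
```

Each group of points would then be passed through a mini-PointNet (like the vanilla module sketched earlier) to produce one feature per centroid, and the whole procedure repeats on the centroids themselves.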
Promising new research areas
Graph CNNs
Current research on deep learning architectures for processing 3D data has been focused on point cloud representations, with much of the recent work extending ideas from PointNet/PointNet++ and drawing inspiration from other fields to further improve performance. An example of one such paper is Wang et al.'s Dynamic Graph CNNs[7] (2018), which uses graph-based deep learning methods to improve feature extraction in point clouds. The idea is that PointNet and PointNet++ fail to capture the geometric relationships among individual points because these methods need to maintain invariance to different input permutations. However, by considering a point and its surrounding nearest neighbors as a directed graph, Wang et al. construct EdgeConv, an operator that generates unique features across points in the data. Read another Gradient overview if you are interested in learning more about learning on graphs.
SPLATNet
SPLATNet architecture (Source: paper)
On the other hand, some research has taken a step away from the classic feature extraction methods proposed in PointNet/PointNet++, opting to design a new method for processing point clouds altogether. Su et al.'s SPLATNet (2018)[8] architecture is a great example of this new focus in point cloud research — the authors design a novel architecture and convolution operator that can directly operate on point clouds. The key insight behind this paper is translating the concept of "receptive fields" to irregular point clouds, which enables spatial information to be preserved even across sparse regions (a key drawback with PointNet/PointNet++). What's especially fascinating is that SPLATNet can project features extracted from multi-view images into 3D space, fusing this 2D data with the original point cloud in an end-to-end learnable architecture. Using this 2D-3D joint learning, SPLATNet achieved a new state-of-the-art in semantic segmentation.
Frustum PointNets
Visualizing a 3D frustum generated from a 2D bounding box estimation (Source: paper)
A third promising research direction involves extending the basic architectural building blocks we described above to build more elaborate networks for useful tasks like object detection in 3D. Building on the idea of using both 2D and 3D data, Qi et al.'s Frustum PointNets[9] (2017) presents a new approach that fuses RGB images with point clouds to improve the efficiency of localizing objects in large 3D scenes. Conventional approaches for this task determine possible 3D bounding boxes for objects by performing classification on sliding windows directly over the entire point cloud, which is computationally expensive and makes real-time prediction difficult. Qi et al. make two key contributions.
First, they propose to initially use a standard CNN for object detection on 2D images, extrude a 3D frustum corresponding to the region of the point cloud in which the detected object could reside, and then perform the search process over only this "slice" of the point cloud. This significantly narrows down the search space for bounding box estimation, reducing the likelihood of false detections and considerably speeding up the processing pipeline, which is crucial for autonomous driving applications.
Second, instead of performing the typical sliding window classification during the bounding box search process, Qi et al. design a novel PointNet-based architecture that can directly perform instance segmentation (segmenting the point cloud into individual objects) and bounding box estimation over the entire 3D frustum in one pass, making their method both fast and robust to occlusions and sparsity. Ultimately, as a result of these improvements, this work outperformed all prior approaches at the time of publication on the KITTI and SUN RGB-D 3D detection benchmarks.
Over just the past 5 years, 3D deep learning methods have progressed from working with derived (multi-view) to raw (point cloud) representations of 3D data. Along the way, we've moved from methods that were simple adaptations of 2D CNNs to 3D data (multi-view CNNs and even VoxNet) to methods that were handcrafted for 3D (PointNet and other point cloud methods), greatly improving performance on tasks like object classification and semantic segmentation. These results are promising because they confirm that there truly is value in seeing and representing the world in 3D.
However, advancements in this field have just begun. Current work focuses not only on improving the accuracy and performance of these algorithms, but also on ensuring robustness and scalability. And although much of the current research is motivated by autonomous vehicle applications, new methods operating directly on point clouds will play a significant role in 3D medical imaging, virtual reality, and indoor mapping.
(Cover image: Waymo)
Mihir Garimella and Prathik Naidu are rising sophomores at Stanford University majoring in Computer Science. They've done research in robotics, computer vision, and machine learning and have industry experience at Facebook's Connectivity Lab and Amazon's CoreAI team, respectively. They're the co-founders of Firefly Autonomy, an early-stage startup building autonomous drones for indoor mapping and industrial inspection.
Kazhdan, Michael, and Hugues Hoppe. "Screened poisson surface reconstruction." ACM Transactions on Graphics (ToG) 32.3 (2013): 29. ↩︎
Su, Hang, et al. "Multi-view convolutional neural networks for 3d shape recognition." Proceedings of the IEEE international conference on computer vision. 2015. ↩︎
Maturana, Daniel, and Sebastian Scherer. "Voxnet: A 3d convolutional neural network for real-time object recognition." Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE, 2015. ↩︎
Qi, Charles R., et al. "Pointnet: Deep learning on point sets for 3d classification and segmentation." Proc. Computer Vision and Pattern Recognition (CVPR), IEEE 1.2 (2017): 4. ↩︎
Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial transformer networks." Advances in neural information processing systems. 2015. ↩︎
Qi, Charles Ruizhongtai, et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space." Advances in Neural Information Processing Systems. 2017. ↩︎
Wang, Yue, et al. "Dynamic graph CNN for learning on point clouds." arXiv preprint arXiv:1801.07829 (2018). ↩︎
Su, Hang, et al. "Splatnet: Sparse lattice networks for point cloud processing." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. ↩︎
Qi, Charles R., et al. "Frustum pointnets for 3d object detection from rgb-d data." arXiv preprint arXiv:1711.08488 (2017). ↩︎
Chinese Journal of Mechanical Engineering
Terahertz Spectroscopic Characterization and Thickness Evaluation of Internal Delamination Defects in GFRP Composites
Walter Nsengiyumva ORCID: orcid.org/0000-0002-0848-34791,2,
Shuncong Zhong ORCID: orcid.org/0000-0001-8999-27011,2,
Manting Luo ORCID: orcid.org/0000-0002-1464-30963 &
Bing Wang ORCID: orcid.org/0000-0002-1480-33011,2
Chinese Journal of Mechanical Engineering volume 36, Article number: 6 (2023) Cite this article
The use of terahertz time-domain spectroscopy (THz-TDS) for the nondestructive testing and evaluation (NDT&E) of materials and structural systems has attracted significant attention over the past two decades due to its superior spatial resolution and capabilities of detecting and characterizing defects and structural damage in non-conducting materials. In this study, the THz-TDS system is used to detect, localize and evaluate hidden multi-delamination defects (i.e., a three-level multi-delamination system) in multilayered GFRP composite laminates. To obtain accurate results, a wavelet shrinkage de-noising algorithm is used to remove the noise from the measured time-of-flight (TOF) signals. The thickness and location of each delamination defect in the z-direction (i.e., through-the-thickness direction) are calculated from the de-noised TOF signals considering the interaction between the pulsed THz waves and the different interfaces in the GFRP composite laminates. A comparison between the actual and the measured thickness values of the delamination defects before and after the wavelet shrinkage denoising process indicates that the latter provides better results with less than 3.712% relative error, while the relative error of the non-de-noised signals reaches 16.388%. Also, the power and absorbance levels of the THz waves at every interface with different refractive indices in the GFRP composite laminates are evaluated based on analytical and experimental approaches. The present study provides an adequate theoretical analysis that could help NDT&E specialists to estimate the maximum thickness of GFRP composite materials and/or structures with different interfaces that can be evaluated by the THz-TDS. Also, the accuracy of the obtained results highlights the capabilities of the THz-TDS for the NDT&E of multilayered GFRP composite laminates.
Glass fiber reinforced polymer-matrix (GFRP) composites have the advantages of combining several types of properties that are not usually found together in a single material (e.g. lightweight, high strength, low thermal conductivity, high resistance to corrosion, good thermal and electrical insulation performance, etc.), making them some of the most widely used materials for the manufacturing of structural parts in the aerospace, missile engine housing, electrical insulators, and construction industries [1,2,3]. However, both the design process and the material composition make GFRP composites vulnerable to several types of defects and structural damage such as cracks, fiber breakage, voids, and delamination which unavoidably affect their design performance during their in-service stage. These types of flaws are frequently induced by several factors including internal stress release, poor material handling during the manufacturing process, water vaporization, burns from high temperatures, impact damage, etc. In this context, structural components made of GFRP composites such as wind turbine aerofoils, and hydrofoils for marine propulsors, as well as helicopter rotor blades, and aircraft auxiliary fuel tanks [4, 5], must undergo extensive quality evaluation tests after the manufacturing process to ensure all defects are eliminated prior to being introduced into service and must continuously be monitored during operation to extend their service lives [2, 6]. To achieve this goal while equally preserving the structural integrity of the test structure, nondestructive testing and evaluation (NDT&E) techniques must be used.
To achieve the aforementioned goals, various NDT&E techniques capable of characterizing damage and defects in fiber-reinforced composites have been developed. In our recent study [2], we indicated that ultrasonic testing is one of the most widely used methods to evaluate defects and structural damage in fiber-reinforced composite materials, and it performs well in detecting both surface and deeply buried damage and defects. However, this method does have several limitations as it requires a coupling medium (except for airborne ultrasonic), lacks the sensitivity to shallow surface fracture defects, and is subjected to large attenuations of the sound waves when propagating in multilayered composite materials such as GFRP composites [5, 7]. Additional NDT&E techniques have also been used for the testing of GFRP composite materials, including X-ray radiography, infrared thermography (IRT), optical coherence tomography (OCT), optical interferometric techniques (e.g., moiré interferometry, holographic interferometry, speckle interferometry, electronic speckle pattern interferometry (ESPI), digital speckle correlation, and shearography and their subsequent variations), acoustic emission (AE), vibration testing, strain monitoring, and microwave, etc. [2, 3]. Each NDT&E technique has its characteristics, but most methods have some limitations in large-scale sensing, imaging, and corresponding safety considerations [2, 6]. To solve this challenge and provide nondestructive testing (NDT) engineers and practitioners with the most up-to-date NDT systems, some NDT&E techniques are currently being combined to improve the quality and accuracy of the test results [6] and guarantee the adequate performance of the GFRP composite materials.
In the field of materials and structural systems testing, terahertz (THz) technology has recently become one of the most promising NDT techniques, thanks to the adaptive physical modeling of the transmission process of the THz waves and their high-penetration capabilities in non-conducting materials [8,9,10]. The THz part of the electromagnetic spectrum extends from 0.1 to 10 THz (i.e., corresponding to the wavelength ranging from 3 mm down to 30 μm), and lies between the microwave and infrared regions of the electromagnetic spectrum. Figure 1 illustrates the region of the electromagnetic spectrum covered by the THz waves. Among the many advantages of THz waves is the fact that they pose little to no harm to biological tissues (i.e., they can also be used to inspect tissues in biomedical engineering) [11, 12], and they can provide high spatial resolution due to their relatively short wavelengths. These remarkable properties give THz waves the capability to detect and evaluate various types of defects in GFRP composites [13,14,15], measure their optical and dielectric properties [8, 16], and determine the fiber orientation in unidirectional GFRP composites [17] and the stacking sequence of angled GFRP composites [18]. THz-based NDT applications were delayed for many years by technical issues that limited the development of efficient THz emission and detection devices, and the THz region of the electromagnetic spectrum was formerly known as the "THz gap". However, this gap has since been closed following the rapid development of high-performance semiconductors, ultrafast electronics, and laser systems, as well as ultra-micro machining technologies. These developments have also contributed to the development of highly efficient THz time-domain spectroscopy (THz-TDS) that is now widely used in NDT applications in many different fields [2, 9, 19]. In all these applications, however, the configurations and sample scanning processes are often presented differently, and Table 1 summarizes the most common configurations and sample scanning processes available in the literature today.
Illustration of the region covered by the THz band in the electromagnetic spectrum, as well as the wavelength and major characteristics of the THz waves
Table 1 Presentation of the thickness of typical delamination defects and the visualization methods present in the literature
There are currently many different studies outlining the use of THz-TDS for the identification, visualization, and characterization of defects and structural damage in fiber-reinforced composite structures, and the fact that THz-based inspections do not require any couplant between the testing device and the material system under test (i.e., non-contact method) is another added advantage. In Ref. [20], for example, the authors used the THz-TDS to detect and characterize simulated delamination defects in a 12-layer carbon fiber-reinforced polymer-matrix (CFRP) composite plate and reported having achieved successful tests and accurate results. Although this study was successful and the test results were accurate, several authors have indicated that the conductivity of carbon fibers generally hinders the penetration of THz waves in CFRP composites, and THz waves are only capable of detecting the impact-induced matrix cracking on the surface and subsurface of CFRP composite materials [21,22,23]. In Ref. [13], the authors utilized a PCA-based THz-TDS system configured in reflection and transmission modes to visualize a delamination defect simulated in a GFRP composite plate consisting of eight glass fiber layers and reported successful results. In a similar study [24], a fiber-coupled THz-TDS based on PCA was used to visualize multi-delamination defects and their thickness in a GFRP composite plate. However, that study used an averaging method to remove the noise from the THz signals, which drastically reduced the amplitude of the measured THz signals, making it difficult to determine the absorbance of the THz wave at each interface within the GFRP composite sample.
Additional studies featuring the application of the THz-TDS system for the NDT of GFRP composites include but are not limited to Refs. [25, 26]. In Ref. [25], for example, the authors used a fiber-coupled pulsed THz-TDS system featuring a PCA-based THz wave emitter and receiver, and they were able to extract occluded textual content from a packed stack of paper pages down to nine pages without human supervision. In a recent study [26], the authors used a PCA-based THz-TDS system to detect and evaluate simulated delamination defects in a GFRP plate with the THz wave operating at an incident angle of 25° in the reflection mode and reported successful results. However, all these studies used the averaging method to remove the noise from the measured THz signals which, in some cases, resulted in the amplitude of the measured THz signals being reduced and subsequently limited the accuracy of the obtained results. In particular, the use of an averaging method to remove the noise from the measured THz signals may not provide accurate results when the THz-TDS is used to measure the thickness of internal delamination defects/damage in GFRP composites, because the accuracy of these types of measurements highly depends on accurate temporal localization of the different peaks in the measured signals.
In this study, we use a fiber-coupled PCA-based THz-TDS system to detect and evaluate the thicknesses of hidden multi-delamination defects in GFRP composite laminates. The optical parameters of the different materials involved in the sample makeup (i.e., compact GFRP composite and polytetrafluoroethylene, shortened to PTFE, for simulated delamination defects) are first determined. Then, the thickness and location of each delamination defect in the z-direction are successfully determined using analytical and experimental approaches. In contrast to previous studies that used the THz-TDS system to detect delamination defects in GFRP composites and relied on an averaging signal processing method to remove the noise from the measured THz signals [13, 20], the present study applies a wavelet shrinkage de-noising algorithm to remove the noise, improve the signal-to-noise ratio (SNR), and preserve the amplitude of the measured TOF signals, which enables accurate determination of the thickness and location of each delamination defect. The measured and calculated results are compared to determine the accuracy of the proposed method and the effect of the wavelet shrinkage de-noising algorithm on the evaluation of GFRP composite materials' delamination defects. Also, the attenuation of the THz waves at every interface in the through-the-thickness direction is calculated using Fresnel equations and compared with the experimental values. This information is important as it can help NDT engineers and practitioners to determine the effective thickness of the GFRP composite parts that can be evaluated by the THz-TDS system when considering different options or selecting adequate NDT tools for their tests.
The rest of the paper is organized as follows: Section 2 provides the theoretical formulation and THz signal analysis framework used to determine the materials' optical parameters, as well as the relevant signal processing techniques used to process all the measured THz signals. Section 3 outlines the experimental methods and system description. Section 4 presents and discusses the results of the study, and Section 5 outlines the conclusions.
Theoretical Formulation and Analysis of THz Signals
In this section of the study, the theoretical background and the mathematical formulations used to evaluate the material's optical properties and the thickness of the delamination defects are presented. The section also details the mathematical formulations pertinent to the development of the stationary wavelet transform (SWT) based algorithm that is used to enhance the measured TOF signals and obtain accurate test results. To be able to fully characterize the sample and evaluate the position and sizes of the different delamination defects, parameters such as the thickness of each delamination, their location in the sample, as well as the actual attenuation level of the THz radiation/waves in the GFRP composite sample are determined.
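Although the SWT formulation is detailed later in this section, the overall de-noising recipe can be sketched as follows (Python with the PyWavelets package). The wavelet family, decomposition level, and soft universal threshold below are assumptions chosen for illustration, not the exact settings used in this study.

```python
import numpy as np
import pywt

def swt_shrinkage_denoise(signal, wavelet="db4", level=3):
    """Stationary-wavelet-transform shrinkage denoising of a 1-D THz TOF trace.

    Detail coefficients at each level are soft-thresholded with a universal
    threshold estimated from the finest-level details; the approximation
    coefficients are left untouched so pulse amplitudes are preserved.
    The signal length must be a multiple of 2**level (pad or crop beforehand).
    """
    coeffs = pywt.swt(signal, wavelet, level=level)          # [(cA, cD), ...]
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745         # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))          # universal threshold
    denoised = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(denoised, wavelet)

# Synthetic example: two reflected THz pulses buried in noise.
t = np.linspace(0, 40e-12, 1024)                              # 40 ps window
pulse = lambda t0: np.exp(-((t - t0) / 0.4e-12) ** 2)
clean = pulse(8e-12) - 0.4 * pulse(21e-12)
noisy = clean + 0.05 * np.random.randn(t.size)
print(np.abs(swt_shrinkage_denoise(noisy) - clean).max())
```

Because the peak positions of the echoes carry the thickness information, a shrinkage scheme that suppresses noise without shifting or flattening the pulses is preferable to simple averaging, which is the motivation stated above.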
Extraction of the Sample's Optical Parameters
The optical parameters of the GFRP composite samples and the Teflon film used to simulate the delamination defects in the GFRP composite samples are determined using the reference and sample signals measured in reflection and transmission modes. The reference and sample time-domain signals are captured by placing the gold-painted mirror and the GFRP composite sample at the sample holder platform on the THz-TDS system, respectively. These time-domain signals are captured in both the transmission and reflection modes and their corresponding frequency-domain signals are calculated by applying the Fast Fourier transformation (FFT) using the following equation:
$$E(\omega ) = \int\limits_{ - \infty }^{ + \infty } {E(t)e^{ - j\omega t} {\text{d}}t,}$$
where \(E\left( t \right)\) and \(E\left( \omega \right)\) denote the THz time-domain signal and its corresponding frequency domain signal, respectively. After the transformation of the THz time-domain signals for both the samples \(E_{s} \left( t \right)\) and the reference \(E_{r} \left( t \right)\) into their frequency-domain counterparts, the optical properties of the GFRP composite samples, and the Teflon film used in the study were calculated. The optical parameters of the GFRP composite materials represent their macroscopic characteristics at THz frequencies. The representation of these optical parameters is generally subsumed in the following equation of the material's complex refractive index:
$$\tilde{n}\left( \omega \right) = n\left( \omega \right) + j\kappa \left( \omega \right),$$
where \(n\left( \omega \right)\) denotes the real part of the refractive index and \(\kappa \left( \omega \right)\) the extinction coefficient of the material's complex refractive index. The material's extinction coefficient \(\kappa \left( \omega \right)\) is directly proportional to the absorption coefficient expressed as \(\alpha \left( \omega \right) = 2\omega \kappa \left( \omega \right)/c\) with \(\omega\) being the angular frequency, and \(c\) the in-vacuo propagation speed of light (i.e., THz wave in this case) [13]. In the reflection mode, a difference in the optical path length occurs based on the different propagation velocities of the THz waves in the sample and air. Considering a sample of thickness \(d\), the time delay between the THz pulses reflected at the bottom and rear surfaces of the GFRP composite sample (i.e., one round trip through the sample thickness) can be expressed as:
$$\Delta t = t_{2} - t_{1} = \frac{2d}{{c/n}},$$
where the time delays \(t_{1}\) and \(t_{2}\) correspond to the peak time delays of the reflections at the bottom and rear surfaces of the GFRP composite sample, respectively. Therefore, rearranging Eq. (3) yields the refractive index of the GFRP composite sample:
$$n\left( \omega \right) = \frac{{{\Delta }tc}}{2d}.$$
The time-resolved detection scheme of THz-TDS can also be applied directly to measure depth features, resolve the multilayer structure, and determine the in-depth location of the multi-delamination defects in our GFRP composite samples using the following expression:
$$d = \frac{\Delta tc}{{2n}}\cos \theta ,$$
where \(d\) represents the thickness of the sample or of an inclusion in the sample, \(\Delta t\) the time elapsed between two consecutive peaks or two different reflections at the times \(t_{i}\) and \(t_{i + 1}\), \(\theta\) denotes the incident angle of the THz waves impinging the surface of the test GFRP composite, and \(n\) the refractive index of the sample or of the inclusion [13]. It is noted that the thickness \(d\) may also correspond to the through-the-thickness location of a defect or damage (i.e., delamination, inclusion, etc.) or the depth of an interface with a different refractive index. After measuring the THz time-domain signals for the sample \(E_{s} \left( t \right)\) and reference \(E_{r} \left( t \right)\) and calculating their corresponding frequency-domain signals \(E_{s} \left( \omega \right)\) and \(E_{r} \left( \omega \right)\), respectively, the absorption coefficients of the GFRP composite samples and the Teflon inclusions were calculated as follows:
$$\alpha \left( \omega \right) = - \ln \left| {\frac{{E_{s} \left( \omega \right)}}{{E_{r} \left( \omega \right)}}} \right|^{2} .$$
Subsequently, the transmission function \(H\left( \omega \right)\) is also calculated using the following expression:
$$H\left( \omega \right) = \frac{{E_{s} \left( \omega \right)}}{{E_{r} \left( \omega \right)}} = \left\{ {\begin{array}{*{20}c} {\frac{{4\tilde{n}\left( \omega \right)}}{{\left[ {1 + \tilde{n}\left( \omega \right)} \right]^{2} }}e^{ - j\beta } ,} \\ {\rho \left( \omega \right)e^{ - j\varphi \left( \omega \right)} ,} \\ \end{array} } \right.$$
where \(\beta = \omega \left[ {\tilde{n}\left( \omega \right) - 1} \right]d/c\) is a complex quantity representing the propagation coefficient of the electromagnetic wave travelling through the GFRP composite sample at an angular frequency \(\omega\) over the propagation distance \(d\), which is equal to the thickness of the GFRP composite sample. The parameters \(E_{r} \left( \omega \right)\) and \(E_{s} \left( \omega \right)\) are the reference and sample signals, respectively, \(\varphi \left( \omega \right)\) is the phase difference between the reference and the sample, and \(\rho \left( \omega \right)\) is the magnitude of the ratio of the sample to the reference THz signals. Using the expression of the transmission function given in Eq. (7), the equations for the optical parameters of the GFRP composite sample are obtained as follows [16]:
$$n\left( \omega \right) = \frac{\varphi \left( \omega \right)}{{\omega d}}c + 1,$$
$$\kappa \left( \omega \right) = \ln \left\{ {\frac{4n\left( \omega \right)}{{\rho \left( \omega \right)\left[ {n\left( \omega \right) + 1} \right]^{2} }}} \right\}\frac{c}{d\omega },$$
$$\alpha \left( \omega \right) = \frac{2\omega \kappa \left( \omega \right)}{c} = \frac{2}{d}\ln \left\{ {\frac{4n\left( \omega \right)}{{\rho \left( \omega \right)\left[ {n\left( \omega \right) + 1} \right]^{2} }}} \right\}.$$
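For readers who wish to reproduce this parameter-extraction step numerically, a minimal sketch of Eqs. (1) and (8)–(10) is given below. It assumes Python with NumPy; the variable names, the sampling interval dt, the thickness d, and the phase-sign convention are illustrative assumptions rather than details of the measurement software used in this study.

```python
# A minimal illustrative sketch (not the measurement software) of Eqs. (1) and (8)-(10).
import numpy as np

C = 2.998e8  # in-vacuo speed of light, m/s


def optical_parameters(e_ref, e_sam, dt, d):
    """Return frequency axis, n(w), kappa(w) and alpha(w) from two time traces.

    e_ref, e_sam : reference and sample THz time-domain signals (same length)
    dt           : sampling interval in seconds
    d            : sample thickness in metres
    """
    # Eq. (1): discrete Fourier transform of both traces
    E_ref = np.fft.rfft(e_ref)
    E_sam = np.fft.rfft(e_sam)
    freq = np.fft.rfftfreq(len(e_ref), dt)

    # Drop the DC bin to avoid division by zero in the formulas below
    omega = 2.0 * np.pi * freq[1:]
    H = E_sam[1:] / E_ref[1:]                  # transmission function, Eq. (7)
    rho = np.abs(H)                            # amplitude ratio rho(w)
    phi = -np.unwrap(np.angle(H))              # phase difference (sign convention assumed)

    n = phi * C / (omega * d) + 1.0                                      # Eq. (8)
    kappa = np.log(4.0 * n / (rho * (n + 1.0) ** 2)) * C / (d * omega)   # Eq. (9)
    alpha = 2.0 * omega * kappa / C                                      # Eq. (10)
    return freq[1:], n, kappa, alpha
```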
The Determination of the Thickness of the Sample and Defect
The time-resolved detection scheme of THz-TDS is directly applicable to measuring the depth information of a multi-layer sample. When pulsed THz waves are incident on an object, the reflected THz waveform consists of a series of pulses reflected from the interfaces [21]. The thickness of the sample or defect can be calculated in the THz reflection scan mode via the following equation:
$$T_{GFRP} = \frac{\Delta tc\cos \left( \theta \right)}{{2n_{GFRP} }},$$
where \(T_{GFRP}\) represents the thickness of the sample; \(\Delta t\) is the time between successive reflections; \(c\) is the speed of light in air; \(n_{GFRP}\) is the refractive index of the sample, and \(\theta\) is the incident angle of the THz waves. The factor of one-half arises because the THz waves are measured in the reflection mode. In this study, the angle \(\theta\) is assumed to be \(0^\circ\) because the incident THz wave is quasi-normal, so Eq. (11) reduces to \(T_{GFRP} = \Delta tc/2n_{GFRP}\) without the factor \(\cos \left( \theta \right)\) since \(\cos \left( 0^\circ \right) = 1\).
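As an illustration of Eq. (11) at quasi-normal incidence, the short sketch below (Python) converts a measured TOF into a layer thickness; the numerical example combines an illustrative TOF value with the Teflon refractive index reported later in this paper.

```python
# A minimal sketch of Eq. (11); the numerical values are illustrative only.
import math

C = 2.998e8  # speed of light in air/vacuum, m/s


def thickness_from_tof(delta_t, n, theta_deg=0.0):
    """Layer thickness (m) from the TOF between its two interface echoes."""
    return delta_t * C * math.cos(math.radians(theta_deg)) / (2.0 * n)


# Example: a TOF of ~1.27 ps inside a layer with n = 1.462 corresponds to ~0.13 mm
print(thickness_from_tof(1.27e-12, 1.462))
```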
The Fraction of Incident THz Power
The fraction of the incident THz power that is reflected from the interface is given by the reflectance. To evaluate the path of the THz waves, the reflection and transmission power of THz waves at the interface of two media (i.e., materials 1 and 2) can be calculated using the Fresnel equations as follows [24]:
$$R_{s} = \left| {\frac{{z_{2} \cos \left( {\theta_{i} } \right) - z_{1} \cos \left( {\theta_{t} } \right)}}{{z_{2} \cos \left( {\theta_{i} } \right) + z_{1} \cos \left( {\theta_{t} } \right)}}} \right|^{2} = \left| {\frac{{\sqrt {\frac{{\mu_{2} }}{{\varepsilon_{2} }}} \cos \left( {\theta_{i} } \right) - \sqrt {\frac{{\mu_{1} }}{{\varepsilon_{1} }}} \cos \left( {\theta_{t} } \right)}}{{\sqrt {\frac{{\mu_{2} }}{{\varepsilon_{2} }}} \cos \left( {\theta_{i} } \right) + \sqrt {\frac{{\mu_{1} }}{{\varepsilon_{1} }}} \cos \left( {\theta_{t} } \right)}}} \right|^{2} ,$$
$$R_{p} = \left| {\frac{{z_{2} \cos \left( {\theta_{t} } \right) - z_{1} \cos \left( {\theta_{i} } \right)}}{{z_{2} \cos \left( {\theta_{t} } \right) + z_{1} \cos \left( {\theta_{i} } \right)}}} \right|^{2} = \left| {\frac{{\sqrt {\frac{{\mu_{2} }}{{\varepsilon_{2} }}} \cos \left( {\theta_{t} } \right) - \sqrt {\frac{{\mu_{1} }}{{\varepsilon_{1} }}} \cos \left( {\theta_{i} } \right)}}{{\sqrt {\frac{{\mu_{2} }}{{\varepsilon_{2} }}} \cos \left( {\theta_{t} } \right) + \sqrt {\frac{{\mu_{1} }}{{\varepsilon_{1} }}} \cos \left( {\theta_{i} } \right)}}} \right|^{2} ,$$
where \(R_{s}\) represents the reflectance of s-polarized THz waves; \(R_{p}\) is the reflectance of p-polarized THz waves; \(z_{1}\) and \(z_{2}\) are the wave impedances of the media involved (i.e., materials 1 and 2); \(\mu_{1}\) and \(\mu_{2}\) are the magnetic permeabilities of materials 1 and 2, respectively; \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are the electric permittivities of materials 1 and 2, respectively, at the frequency of the THz waves; \(\theta_{i}\) is the incidence angle of the THz waves, and \(\theta_{t}\) is the refraction (transmission) angle. In the case of non-magnetic media (i.e., materials for which \(\mu_{2} = \mu_{1} = \mu_{0}\), where \(\mu_{0}\) denotes the permeability of free space), the impedances \(z_{1}\) and \(z_{2}\) can be expressed as \(z_{1} = z_{0} /n_{1}\) and \(z_{2} = z_{0} /n_{2}\), respectively, where \(z_{0}\) is the impedance of free space. Substituting these two expressions into Eqs. (12) and (13) and simplifying, the following equations are obtained for the \(s\)- and \(p\)-polarized reflectances of the THz waves [24]:
$$R_{s} = \left| {\frac{{n_{1} \cos \left( {\theta_{i} } \right) - n_{2} \cos \left( {\theta_{t} } \right)}}{{n_{1} \cos \left( {\theta_{i} } \right) + n_{2} \cos \left( {\theta_{t} } \right)}}} \right|^{2} = \left| {\frac{{n_{1} \cos \left( {\theta_{i} } \right) - n_{2} \sqrt {1 - \left( {\frac{{n_{1} }}{{n_{2} }}\sin \left( {\theta_{i} } \right)} \right)^{2} } }}{{n_{1} \cos \left( {\theta_{i} } \right) + n_{2} \sqrt {1 - \left( {\frac{{n_{1} }}{{n_{2} }}\sin \left( {\theta_{i} } \right)} \right)^{2} } }}} \right|^{2} ,$$
$$R_{p} = \left| {\frac{{n_{1} \cos \left( {\theta_{t} } \right) - n_{2} \cos \left( {\theta_{i} } \right)}}{{n_{1} \cos \left( {\theta_{t} } \right) + n_{2} \cos \left( {\theta_{i} } \right)}}} \right|^{2} = \left| {\frac{{n_{1} \sqrt {1 - \left( {\frac{{n_{1} }}{{n_{2} }}\sin \left( {\theta_{i} } \right)} \right)^{2} } - n_{2} \cos \left( {\theta_{i} } \right)}}{{n_{1} \sqrt {1 - \left( {\frac{{n_{1} }}{{n_{2} }}\sin \left( {\theta_{i} } \right)} \right)^{2} } + n_{2} \cos \left( {\theta_{i} } \right)}}} \right|^{2} .$$
The total reflectance \(R\) of the THz waves in the GFRP composite samples can now be calculated as:
$$R = \frac{{R_{s} + R_{p} }}{2}.$$
In this case, it follows that the total transmittance (\(T\)) of the THz waves can be expressed as:
$$T = 1 - R.$$
The transmitted THz-wave power at the surface of the GFRP composites can thus be calculated using Eq. (17).
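A compact numerical sketch of Eqs. (14)–(17) for non-magnetic media is given below (Python/NumPy); the interface values used in the example are illustrative only.

```python
# A minimal sketch of Eqs. (14)-(17).  At theta_i = 0 both polarizations reduce
# to ((n1 - n2)/(n1 + n2))^2, so R = Rs = Rp at normal incidence.
import numpy as np


def interface_reflectance(n1, n2, theta_i_deg=0.0):
    """Return (Rs, Rp, R, T) for a planar interface from medium n1 into n2."""
    ti = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(ti) / n2              # Snell's law
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    rs = (n1 * np.cos(ti) - n2 * cos_t) / (n1 * np.cos(ti) + n2 * cos_t)   # Eq. (14)
    rp = (n1 * cos_t - n2 * np.cos(ti)) / (n1 * cos_t + n2 * np.cos(ti))   # Eq. (15)
    Rs, Rp = rs ** 2, rp ** 2
    R = 0.5 * (Rs + Rp)                       # Eq. (16)
    return Rs, Rp, R, 1.0 - R                 # Eq. (17)


# Illustrative example: air (n1 = 1.0) onto a dielectric with n2 = 1.5
print(interface_reflectance(1.0, 1.5))
```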
The Absorbance of THz Waves in Refractive Materials
In refractive materials such as GFRP composites, the absorbance of the THz waves denotes the attenuation of the transmitted THz wave power in the material at the different interfaces. In this context, the THz waves impinging the top surface of the GFRP composite sample propagate through the sample until they reach the interface between the GFRP and the upper side of the top delamination defect. At this level, the incident THz beam is partly reflected at the upper side of the delamination defect while the remaining portion of the THz beam is transmitted into the remaining sections of the GFRP composite sample. The composition of our GFRP composite samples is such that the next section of the sample adjacent to the interface of the first reflection is the delamination defect. In this case, the remaining THz waves after that first reflection are transmitted into the delamination defect and the aforementioned process is repeated at every interface until the THz waves reach the rear surface of the sample. However, when the THz waves pass through the specimen, transmissions and reflections are repeated at every interface between two different media for every reflected portion of the THz beam, and only the first reflection and/or transmission at every interface reaches the detector to be measured. The transmitted/reflected THz waves that cannot reach the detector or that cannot be easily visualized are generally referred to as absorbed THz power. The absorbed power of the THz waves can be calculated using Eq. (18) as follows [26]:
$$A = I\left( {1 - e^{ - \alpha d} } \right),$$
where \(A\) denotes the absorbance of the THz waves in the GFRP composite material; \(I\) denotes the incident THz-wave power; \(\alpha\) denotes the absorption coefficient of the GFRP composite material, and \(d\) is the thickness of the GFRP composite sample. This equation is only applicable to a single homogeneous material and cannot be used when several materials are superposed one on top of the other (i.e., the indicated parameters refer to one type of material, meaning that the Teflon inclusions are not included in the equation and must be treated separately). As indicated in the previous paragraph, this scenario repeats at every interface between a delamination defect and the compact section of the GFRP composite sample (i.e., media of different refractive indices), where only two parts of \(E_{{in,{ }i}}\), namely \(E_{{in,{ }i + 1}}\) and \(R_{i + 1}\), are visualized and quantified; the absorbed THz power at each of these interfaces can be calculated using Eq. (18).
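The single-layer absorbance of Eq. (18) can be evaluated directly, as in the short sketch below; it assumes the standard Beer–Lambert form with a decaying exponential, and the parameter values are illustrative only.

```python
# A minimal sketch of Eq. (18), assuming the Beer-Lambert form A = I(1 - e^(-alpha*d)).
import math


def absorbed_power(incident_power, alpha, d):
    """Absorbed THz power in one homogeneous layer (alpha and d in consistent units)."""
    return incident_power * (1.0 - math.exp(-alpha * d))


# Illustrative values only: alpha = 100 m^-1, d = 3 mm
print(absorbed_power(1.0, 100.0, 3.0e-3))   # ~0.26 of the incident power
```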
The Stationary Wavelet Denoising Algorithm
To obtain good-quality signals and accurate measurement results, all measured THz signals are subjected to a stationary wavelet transform (SWT) based denoising algorithm before they are used to calculate the samples' features of interest. The SWT is a translation-invariant modification of the discrete wavelet transform (DWT) that does not decimate the coefficients at any transformation level; SWT-based de-noising relies on thresholding the wavelet coefficients. Unlike the general DWT, the SWT never sub-samples the signal at any level of decomposition (the filters are up-sampled instead). Inspired by the close similarity between the THz pulse and common wavelet basis functions, its application to THz signal processing was first presented in Ref. [28], whereby the authors used the SWT to extract tomographic results (i.e., the spectroscopic information about each reflecting layer of a sample). In most signal processing practices where the SWT is involved, it decomposes a one-dimensional signal \(x\left( n \right)\) into an approximation coefficient vector \(cA_{k,l}\) and detail coefficients \(cD_{k,l}\) by convolution with a low-pass filter \(\phi_{i} \left( n \right)\) and a high-pass filter \(\psi_{i} \left( n \right)\) (where \(i = 1,2,3...p\) and \(p\) denotes the maximum level of decomposition of the signal by the SWT) along the temporal axis [29]. It is noted that the low-frequency information is concentrated in the approximation coefficients while the high-frequency information is concentrated in the detail coefficients. Figure 2 illustrates the decomposition process of the SWT, whereby the one-dimensional signal passes through a filter bank for the convolution process.
Schematic representation of a level-3 decomposition of a THz time-domain signal with the SWT
It is noted that the wavelet coefficients with small absolute values are generally considered to be noise, while those with large absolute values are generally regarded as the main featured information of the signal. Applying the thresholding process removes the small-absolute-value coefficients, and depending on the decomposition level and the type of wavelet transform used, the reconstruction of the signal from the remaining coefficients is expected to generate a signal in which the contribution of noise has been greatly reduced [30]. In general, SWT de-noising with soft thresholding is performed through four main steps [29, 31]. First, the wavelet coefficients are determined by taking the SWT of the input signal:
$$\left[ {cA,cD} \right] = SWT\left[ {x\left( n \right)} \right].$$
Assuming the noise level in the THz time-domain signal is equal to the parameter \(\sigma\), and \(N\) is the number of sampling points in the THz time-domain signal, the value of the threshold parameter \(T\) is calculated using the following equation:
$$T = \sigma \sqrt {\ln N^{2} } .$$
In this particular study, the values of the wavelet coefficients \(cD_{k,l}\) are thresholded using the soft-thresholding process (i.e., one of the thresholding processes used in SWT operations), and the following results are obtained:
$$c\hat{D}_{k,l} = \left\{ {\begin{array}{*{20}c} {cD_{k,l} - T,\quad cD_{k,l} \ge T,} \\ {cD_{k,l} + T,\quad cD_{k,l} \le - T,} \\ {0,\quad \left| {cD_{k,l} } \right| < T.} \\ \end{array} } \right.$$
At this stage, the inverse stationary wavelet transform (ISWT) is performed to obtain the denoised THz time-domain signal \(\hat{x}\left( n \right)\) as follows:
$$\hat{x}\left( n \right) = ISWT\left( {\left[ {cA,c\hat{D}} \right]} \right).$$
In the present study, only the detail coefficients are thresholded (the approximation coefficients are left unchanged) so that all the essential information in the signal is retained and highly accurate results are obtained.
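To make the four de-noising steps of Eqs. (19)–(22) concrete, a minimal sketch is given below. It assumes Python with the PyWavelets package; the signal padding, the median-based noise estimator, and the default decomposition level are illustrative choices that are not specified in the procedure above.

```python
# A minimal sketch of SWT soft-threshold de-noising (Eqs. (19)-(22)), assuming PyWavelets.
import numpy as np
import pywt


def swt_denoise(x, wavelet="sym4", level=3):
    n = len(x)
    # pywt.swt requires the signal length to be divisible by 2**level
    pad = (-n) % (2 ** level)
    xp = np.pad(x, (0, pad), mode="edge")

    coeffs = pywt.swt(xp, wavelet, level=level)           # Eq. (19): [(cA, cD), ...]

    # Noise level estimated from the finest-scale detail coefficients
    # (a common choice; the estimator is an assumption of this sketch)
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    T = sigma * np.sqrt(np.log(len(xp) ** 2))             # Eq. (20)

    # Eq. (21): soft-threshold the detail coefficients only
    coeffs = [(cA, pywt.threshold(cD, T, mode="soft")) for cA, cD in coeffs]

    x_hat = pywt.iswt(coeffs, wavelet)                     # Eq. (22)
    return x_hat[:n]
```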
Experimental Methods and System Description
In this study, THz-TDS is used to evaluate and characterize a series of simulated delamination defects in layered GFRP composite samples. The THz-TDS system uses THz radiation for the spectroscopic characterization and imaging of the GFRP composite samples; this radiation is generally generated and detected using methods such as optical rectification, the photoconductive antenna (PCA), and the surface electric field of a semiconductor [32]. Detection may be coherent or incoherent depending on whether both the amplitude and phase of the field are measured or only the intensity of the THz radiation is measured [24]. In general, coherent detection is closely associated with the generation technique of the THz waves, as both share the underlying mechanisms and key components (i.e., they utilize the same light source). In this study, PCAs are used to coherently generate and detect the THz waves, and the following section provides a detailed description of the operation of our THz-TDS system.
THz-TDS and Imaging System
The THz-TDS (TeraView TPS 4000) is used to detect and characterize hidden delamination defects in GFRP composite samples. To measure the optical parameters and characterize the delamination defects of the GFRP samples, the system is configured to perform tests in reflection and transmission modes. These two modes are not simultaneously configured but rather one is configured after the other depending on which mode is needed at the different stages of the study. This THz-TDS system features a scanning range of up to 1200 ps and its resolution can reach 0.1 ps at a rapid scanning frequency of 50 Hz. In its effective range of frequencies (0.01–4.5 THz), the system's SNR is generally more than 90 dB at the peak and this remains at more than 70 dB during the entire sample measurement process. The system's main components include the femtosecond laser, the optical delay line, and the X-Y imaging stage as well as the two photoconductive antennas (PCA), i.e., the emitter and receiver, and several optical components (Figure 3). The sample housing unit is filled with Nitrogen (N2) during the measurement to eliminate the effect of moisture/humidity on the test results and the power of the THz radiation is kept below 1 mW to avoid any potential thermal strain on the sample.
Schematic representation of the internal configuration of the THz-TDS TeraPulse 4000 with the mirrors/reflectors and other optical modules
The operation of our THz-TDS is such that the PCA-based THz emitter is excited by an ultrafast femtosecond laser and produces single-cycle THz pulses that are coherently detected by the PCA-based THz detector in reflection or transmission mode. The sample is raster-scanned by a set of motorized stages moving in the X-Y directions; the generated THz pulses intercept the sample along the vertical plane and are either reflected by the sample (reflection mode) or transmitted through it (transmission mode) before being collected and sent to the laser-gated PCA-based detector. A current proportional to the THz electric field is produced and measured (i.e., amplitude and phase) by gating the photoconductive gap with the femtosecond pulse that is synchronized with the emission of the THz radiation. After amplification and processing of the THz signal obtained at each pixel, a final THz image is obtained. The existence of distinct refractive indices in the material (i.e., the compact GFRP composite sample and the PTFE discs/films) produces multiple reflections at the different interfaces, and these reflections appear in the form of several peaks at the corresponding time delays in the transmitted and/or reflected THz signals. The sequence and thicknesses of the various laminates of the test sample can easily be reconstructed by analyzing these peaks with knowledge of the refractive indices of the material constituents. The absorbance of the THz waves by the sample is also calculated by analyzing the difference between the incident power of the THz waves and the power at every interface in the GFRP composite sample. Finally, the THz time-domain signal can be converted to its frequency-domain version using the FFT for additional analysis.
Sample Preparation Procedure and Materials
The GFRP composite samples used in this study were manufactured using unidirectional (UD) glass fiber prepreg (G15000/6509/33%, GF content: 67 wt%, Weihai GuangWei Composites Co. Ltd, Shandong, China). All the samples were laminated by the hand layup process following a unidirectional pattern. A series of Teflon film disks of different diameters (i.e., 5 mm, 10 mm, and 15 mm) were embedded in each sample during the manufacturing process to simulate three different levels of delamination between the stacked layers of glass fiber prepreg (Figure 4). Each disk had a thickness of 130 µm and their centers were carefully aligned. Prior to deciding the thickness of the simulated delamination defects, we surveyed the literature to determine the typical thickness of simulated delaminations in GFRP composite materials. Our literature survey indicates that most simulated delamination defects have an estimated thickness between 20 µm and 250 µm. In this context, we decided to fabricate the simulated embedded delamination defects using Teflon film with a thickness of 130 µm. The thicknesses of the different simulated delamination defects found in the literature are also listed in Table 1, and readers are directed to this part of the work for more information. A total of 13 layers of unidirectional glass fiber prepreg with an estimated thickness of 120 µm each were used to manufacture a 220 mm × 220 mm GFRP composite plate. After the hand-layup laminating process, the GFRP composite plate was cured in a vacuum oven for 2 hours at 125 °C. The pressure in the vacuum oven was maintained at 0.6 MPa during the entire curing process. After curing, the sample was left to cool down, and a micrometer (DL321025S, Deli Group Co. Ltd., China, range: 0–25 ± 0.001 mm) was used to measure its thickness.
A schematic representation showing the sample preparation process of the multi-delaminated GFRP specimens. Note that the dummy sample is manufactured in the same way but does not have any inclusions
The thickness of the test sample was measured at 8 different locations and the standard error was calculated. The sample thickness was found to be 1.532 ± 0.045 mm. It is noted that no delamination was visible at the edge of the sample, nor was the delamination visually apparent or detectable through the thickness variation measured by the micrometer. After measuring the thickness, the 220 mm × 220 mm × 1.532 mm GFRP composite plate was cut into 25 pieces of 40 mm × 40 mm × 1.532 mm each to obtain the samples used in our experiments. It is noted that the PTFE or Teflon disks were randomly placed between the different layers of the composite plate, but in each case the disks were carefully arranged to form a three-level multi-delamination system. To obtain the best samples, a preliminary scan of all 25 samples obtained by cutting the manufactured GFRP composite plate was conducted using the THz-TDS system. This preliminary scan revealed that only 5 samples contained 3 delamination defects of different sizes at the intended depth locations (Figure 4(b)). As such, only these 5 samples were considered for further tests, and the rest were discarded. The reference GFRP composite sample (i.e., the GFRP composite sample manufactured following the aforementioned process but without any inclusions) and a piece of Teflon film were used to determine the optical properties of these two material systems prior to their use in our study.
Optical Parameters of the Samples
The optical parameters of the samples are determined first, using the time-domain THz signals of the reference and sample measured in reflection and transmission modes. As such, the reference and sample THz time-domain signals, i.e., \(E_{r} \left( t \right)\) and \(E_{s} \left( t \right)\), respectively, are captured by placing the gold-painted mirror and the GFRP composite sample at the sample holder unit (also known as the X-Y stage of the THz-TDS), respectively. The same procedure is used for the Teflon film, whereby the GFRP reference sample is replaced by a piece of PTFE (results not shown here for brevity). To remove the influence of noise and ensure that high-quality time-domain signals are used in our calculations, we first applied wavelet shrinkage de-noising to the THz time-domain signals before their transformation into the frequency domain. To do this, we chose the Symlet (Sym4) wavelet, which is a modified form of the Daubechies wavelets. A maximum level of 9 was used for the wavelet decomposition because no noticeable changes were observed at higher decomposition levels that would justify the extra computational expense. This level of decomposition was maintained for all wavelet shrinkage denoising operations in this study for consistency. The corresponding THz signals in the frequency domain, i.e., \(E_{r} \left( \omega \right)\) and \(E_{s} \left( \omega \right)\), are calculated from the aforementioned THz time-domain signals using Eq. (1). In the time domain, the noise is predominantly observed in the region with greater fluctuations just after the main THz pulse of the measured (un-denoised) signals. A typical example of the time-domain THz sample signal and its frequency-domain spectrum before and after the wavelet de-noising operation is shown in Figure 5. It is observed that the region with larger fluctuations/noise is located in the temporal region between 6.5 and 33 ps. Figure 5 also indicates that the SWT removes fluctuations in the signal without losing much energy in the main THz pulse in the time domain, while major noise dips are successfully suppressed in the frequency domain.
THz sample signal with and without wavelet de-noising in a time domain and b frequency domain. The inset in a shows the fluctuations in the signal due to environmental noise and their removal by the SWT
The aforementioned process was also used to denoise the signals and compute the FFT for the different signals used to evaluate the different features of the GFRP composite samples. Representative signals with a significant reduction of noise for different regions of the sample with and without delamination defects were obtained after the wavelet shrinkage denoising process. After evaluating Eqs. (8)–(10), the refractive indices of both the reference GFRP composite laminate and the Teflon film were obtained as presented in Figure 6. The values of the refractive indices and absorption coefficients for both the GFRP composite samples and the Teflon film are represented in the same figure, and labels and arrows are used to help readers identify the different variables.
Representation of the optical parameters of the GFRP composite and the Teflon film used to create delamination in its interior: a refractive indices and b absorption coefficients of both elements
The average values of the calculated real part of the refractive index \(n\) of the GFRP composite sample and the PTFE film are found to be 2.157 and 1.462, respectively, at a frequency of 0.45 THz. Figure 6 indicates that the refractive indices of both materials remain almost steady (i.e., they vary very little over the considered frequency range), while the absorption coefficients of the two materials (i.e., the GFRP composite sample and the Teflon film) continuously increase over the same range. The values of the refractive indices obtained for both materials are also consistent with those reported in Refs. [33,34,35] for GFRP composites and Teflon films, respectively.
Thickness Characterization of the Internal Delamination Defects
Figure 7 indicates that when the THz beam (\(E_{in}\)) goes through a multilayered GFRP composite sample with multi-delamination defects (i.e., materials with different layers where the individual layers are composed of different material systems), it will be partly reflected (\(R_{i}\)) and partly transmitted (\(E_{{in,{ }i}}\)) at each interface encountered. This is because the different interfaces present different refractive indices which cause the refraction angles of the THz wave propagation to change. In this context, the THz wave will also propagate through the GFRP composite sample passing through the top delamination defect, the middle delamination defect, and the bottom delamination defect (i.e., these delamination defects are considered the different layers) until it reaches the back surface or rear of the GFRP composite sample as illustrated in Figure 7. The signals reflected at each interface (\(R_{i}\) with \(i = 1,2, \ldots ,8\)) will travel back through the multilayer structure to the PCA detector, while the transmitted signal will continue through successive interfaces (\(E_{{in,{ }i}}\) with \(i = 1,2, \ldots ,8\)) until it reaches the interface between the back surface of the composite and the air (Figure 7). The reflected signals will appear in the THz time-domain waveform at different times of flight according to the sequence of mediums penetrated by the THz wave. This time-domain waveform will be used to detect the different delamination defects and to estimate their relative in-depth location based on the TOF between the different interfaces.
The propagation of the THz wave in multilayered GFRP composites with three levels of delamination defects. The denominations of the top, middle, and bottom delamination defects used to describe the different delamination levels are based on the propagation direction of the THz waves when passing through the GFRP composite sample as illustrated in this figure
To visualize the position of the delamination defects in the GFRP composite sample, the wavelet shrinkage de-noising was also applied to process the THz time-domain signals (i.e., the sample signals) in both the regions with and without delamination. To do this, we chose the Symlet (Sym4) type of wavelets, which are a modified form of the Daubechies wavelets. Likewise, a maximum level of 9 was used for the wavelet decomposition because there were still no noticeable changes that could be observed when using higher levels to justify the extra computational expenses. After the wavelet shrinkage denoising process, we obtained representative signals with a significant reduction of noise for different regions of the sample with and without delamination defects. Figure 8 presents a comparative representation of the different THz time-domain waveforms collected from the regions of the GFRP composite samples with and without delamination defects (i.e., see the individual graphs for the different labels). In Figure 8(a), the data obtained from the delamination-free or pristine region of the GFRP composite sample is presented for both clarity and comparison purposes. This signal has two peaks: the first peak appears at \(t_{1} = 6.271892\;{\text{ps}}\) and corresponds to the first interface, which is also the top surface of the GFRP composite sample, while the second peak appears at \(t_{2} = 28.33535\;{\text{ps}}\) and corresponds to the bottom surface or the rear of the GFRP composite sample. The TOF between these two peaks was found to be 21.663458 ps. This TOF between the two interfaces of the pristine region of the GFRP composite sample was particularly important because it was used later in our calculations to determine the thickness of the GFRP composite sample, which was then compared with the real thickness of the GFRP composite sample for the validation of the method.
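The peak times quoted above are read from the de-noised traces; one possible way of extracting them automatically is sketched below (Python/SciPy). The sketch uses scipy.signal.find_peaks purely for illustration and is not the local-extrema algorithm of Ref. [36]; the prominence criterion is an assumption of this sketch.

```python
# A minimal illustrative sketch of echo-time extraction from a de-noised reflection trace.
import numpy as np
from scipy.signal import find_peaks


def echo_times(t_ps, signal, rel_prominence=0.05):
    """Return the arrival times (ps) of the dominant echoes in a THz trace."""
    mag = np.abs(signal)                       # detect phase-reversed echoes too
    idx, _ = find_peaks(mag, prominence=rel_prominence * mag.max())
    return t_ps[idx]


# Usage (arrays are placeholders):
# times = echo_times(t_ps, x_denoised)
# tof_front_to_rear = times[-1] - times[0]
```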
Time traces of the detected reflected THz signals in the time domain. The different THz signals represent a the normal/pristine region of the GFRP composite sample, b the upper delamination, c both the upper and middle delamination, and d all the 3 levels of delamination in the sample
In Figure 8(b), the 21.44864–23.5741 ps interval in the THz signal highlighted in cyan represents the TOF covered by the THz pulse while passing through the bottom delamination defect (i.e., the delamination defect closest to the rear of the GFRP composite sample). Figure 8(b) also shows that the bottom delamination defect is the biggest in diameter, as it covers the whole area of the other two delamination defects (middle and top delamination defects). However, the fraction of the THz beam reaching this point is small, and that is why the amplitude of the THz signal coming out of this delamination defect is also smaller than the other two. In Figure 8(c), the 12.68007–14.40954 ps interval in the THz signal highlighted in green likewise represents the TOF covered by the THz pulse while passing through the middle delamination defect. At this position, the middle delamination overlaps with the bottom delamination defect because the former is smaller in diameter than the latter, and it is not possible to select any region within the middle delamination without also selecting the bottom delamination at the same time. In Figure 8(d), the 7.976565–9.609055 ps interval in the section of the waveform highlighted in blue represents the characteristics of the top delamination defect. Selecting the region with the top delamination entails that we also unintentionally select the regions of the other two delamination defects, because the top delamination defect is smaller in diameter (9 mm) than the other two (12 and 15 mm for the middle and bottom delamination defects). In this context, Figure 8(d) represents the data from the delaminated region (i.e., the region on the GFRP composite sample with multiple delamination defects). In summary, the various subsidiary peaks observed at optical delay times greater than 10 ps correspond to the different pulses that have bounced back off various interfaces between the compact GFRP composite and its delaminated regions. That is, Figure 8(d) represents the reflected THz pulse from a representative point in the delaminated region of the GFRP composite, as opposed to Figure 8(a), which represents the reflected THz pulse from a representative point in the pristine region of the GFRP composite sample.
In Figure 8(d), the most noticeable new features are the three enhanced peaks appearing between the reflection peaks of the front surface and the rear of the GFRP composite sample (peaks at optical delays of 8.946361 ps, 13.91656 ps, and 21.82847 ps). These three enhanced peaks indicate the locations of the PTFE films; they are visible because of the relatively large refractive-index mismatch between the resin matrix and the PTFE at each level. In Figure 8(b)–(d), the PTFE films inserted into the GFRP composite to create the artificial delamination defects caused the composite's laminas at the three different levels to deform, and the corresponding peaks in the time-domain THz signals are shifted to larger optical delays. In the pristine and delaminated signals (Figure 8(a) and (d)), the features outside the delamination defects and in between the individual delamination defects are similar. However, the plies in the regions between the top and bottom delamination defects are substantially compressed due to the fabrication process, which ensured a uniform overall thickness of the entire GFRP composite laminate even after adding the 3 levels of PTFE films. At this stage, a detailed investigation of the measured data was conducted to obtain quantitative information describing the internal features of the GFRP composite sample and the delamination thicknesses. The aforementioned distinguishable reflected signals, characterized by their optical delays, in conjunction with the refractive indices of the constituent materials calculated in the previous section, enabled us to determine the exact location and thickness of the individual delamination defects within the GFRP composite samples. In particular, a signal denoising process based on wavelet shrinkage was performed, which helped us eliminate overlapping echoes and unclear peaks in the THz time-domain signals before calculating this quantitative information.
Using the calculated refractive indices of the GFRP composite sample and the PTFE film (Figure 7) as well as the information extracted from the wavelet denoised signals, we were able to calculate the thickness of the portions of the GFRP composite sample situated between every two consecutive reflections be it for the delamination or the GFRP composite sample itself. In this context, it was also possible to determine the exact location of the individual delamination defects within the GFRP composite sample. As indicated earlier, the thickness of the entire GFRP composite sample is determined by first identifying the peaks of the time delays between the front and the rear surfaces of the GFRP composite and substituting the required values of the different parameters in Eq. (11) for subsequent calculations. However, since our THz system uses quasi-normal incident THz waves, the angle of incidence \(\theta\) is neglected and assumed to be equal to 0° in all the calculations in this study, suggesting that \(\cos \theta = 1\). As such, the overall thickness of the composite is calculated as follows:
$$\begin{aligned} T_{GFRP} & = \frac{\Delta tc}{{2n_{GFRP} }} = \frac{{\left( {22.604314 \times 10^{ - 12} } \right) \times \left( {2.998 \times 10^{8} } \right)}}{{2 \times \left( {2.157} \right)}}{\text{m}} \\ & \approx 1.571\;{\text{mm}}, \\ \end{aligned}$$
where \(c = 2.998 \times 10^{8} \;{\text{m}}/{\text{s}}\) is the in-vacuo speed of light, ∆t=22.604314 ps is the time between the reflections at the front surface and the rear of the GFRP composite, and the factor of one-half arises since the system is used in reflection mode (i.e., the THz pulse passes through the composite twice). The calculated thickness of the GFRP composite is equal to 1.571 mm, which is close to the measured value of 1.532 ± 0.045 mm. The difference between the thicknesses of the GFRP composite sample measured by the micrometer and the THz-TDS system originates from the difference in refractive indices in the GFRP composite. In this calculation, we did not consider the refractive index of the delamination defects as being different from that of the compact GFRP composite. Instead, we considered the entire composite to be a compact unit even though it contains delamination defects with a different refractive index. Indeed, the best practice would have been to consider the different TOF and the corresponding refractive index of each segment and add up all the different thicknesses to obtain the correct thickness of the GFRP composite (i.e., individual thicknesses of the delamination defects and the inter-delamination regions should be considered separately). Considering the different refractive indices, the newly obtained thickness of our GFRP composite is 1.536 mm, which agrees with the micrometer measurement within its standard error. Similarly, using the calculated refractive index for the PTFE film and the time delays between the reflections associated with the interfaces between the individual PTFE films and the adjoining composite layers, we can easily determine the thickness of the individual layers of PTFE films in the delaminated GFRP composite samples. In the case of the top delamination defect, for example, the thickness is calculated as follows:
$$\begin{aligned} T_{Teflon} & = \frac{\Delta tc}{{2n_{Teflon} }} = \frac{{\left( {1.272837 \times 10^{ - 12} } \right) \times \left( {2.998 \times 10^{8} } \right)}}{{2 \times \left( {1.462} \right)}}{\text{m}} \\ & \approx 0.130505\;{\text{mm}} \approx 130.505\;\upmu {\text{m}}, \\ \end{aligned}$$
where ∆t = 1.272837 ps and \(n_{Teflon} = 1.462\) correspond to the time delay between the reflections at the front and rear sides of the PTFE film and the refractive index of the Teflon film, respectively. Again, the calculated thickness of the Teflon film is quite close to the measured value of 130 µm. At this stage, we use the same formula to calculate the thickness of the individual delamination defects in the GFRP composite sample. Table 2 presents the calculated results for the individual delamination defects. As indicated earlier, all the PTFE discs inserted in the composite sample had the same thickness of 130 µm each; the only difference is their diameters, which were 9, 12, and 15 mm for the top, middle, and bottom delamination defects, respectively. It is observed that the experimental values of the thicknesses of the individual PTFE discs strongly agree with the real (nominal) values, further confirming the accuracy of our THz-TDS measurements.
Table 2 Presentation of the real and the experimental values of thickness for the different delamination defects for 5 samples with the standard error (SE) and relative error
To demonstrate the benefits of using the SWT to process the THz time-domain signals, we selected several regions on a single GFRP composite sample and calculated the average thicknesses of the individual delamination defects from the THz signals as measured directly by our THz-TDS system (i.e., before denoising) and after the denoising process by the SWT. Table 3 presents the results for the individual delamination defects calculated from the THz time-domain signals before and after the signal denoising by the SWT-based algorithm.
Table 3 Presentation of the THz-TDS measured results for the different delamination defects before and after the denoising operation by the SWT
The values of the thicknesses of the individual delamination defects calculated from the measured THz signals indicate that the measured and calculated values are very close and in some cases equal to the real values, which further confirms the accuracy of our measurements. Using the same logic, we can also calculate the exact position of the different delamination defects in the through-the-thickness direction of the GFRP composite samples. In our earlier discussion, we indicated that the different peaks in the time-domain signals indicate the different interfaces in the GFRP composite samples. To calculate the exact in-depth location of the top delamination defect, we evaluated the thickness of the composite between the first and second peaks in the time-domain signal (i.e., the thickness of the composite before the top delamination). The THz pulse considered in this case indicates that the first peak is located at the time \(t_{1} = 6.279424{\text{ ps}}\) while the second peak is located at \(t_{2} = 8.978687{\text{ ps}}\).
$$\begin{aligned} T_{loc - td} & = \frac{\Delta tc}{{2n_{GFRP} }} = \frac{{\left( {2.699263 \times 10^{ - 12} } \right) \times \left( {2.998 \times 10^{8} } \right)}}{{2 \times \left( {2.157} \right)}} \\ & \approx 187.584\;\upmu {\text{m}}, \\ \end{aligned}$$
where \(c\) and \(T_{loc - td}\) denote the in-vacuo speed of light and the in-depth location of the top delamination defect, respectively. The parameter \(\Delta t\) denotes the TOF between the front surface of the composite and the first interface of the top delamination defect and is equal to \(2.699263{\text{ ps}}\). Using the same equation, the middle and the bottom delamination defects are found to be situated at exactly \(530.74\;\upmu {\text{m}}\) and \(1.081{\text{ mm}}\) from the front surface of the GFRP composite, respectively. It is noted that the locations of the middle and the bottom delamination defects are calculated in different steps following the passage of the THz pulses as they go through the layers of different refractive indices in the GFRP composite, viz. the refractive index of the inter-delamination layers or compact GFRP composite (\(n_{GFRP}\)) and the refractive index of the delamination defects (\(n_{{{\text{Teflon}}}}\)). In a similar context, we can now determine the location of all the delamination defects in 5 different GFRP composite samples. Table 4 reports the values of the distances or thicknesses indicating the locations of the individual delamination in the through-thickness direction of the GFRP composite sample considered. It is indicated that these delamination defects were created at the same depth location when making the samples and this explains why the obtained values of the in-depth locations for all the 3 delamination defects in 5 different samples are almost the same.
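The segment-wise bookkeeping described above, in which each inter-echo interval is converted with the refractive index of the layer actually traversed and the contributions are summed, can be written compactly as in the sketch below (Python); the TOF entries are illustrative placeholders rather than measured values.

```python
# A minimal sketch of the segment-wise depth/thickness evaluation: every inter-echo
# TOF is converted with the refractive index of the traversed layer and the
# contributions are accumulated along the depth direction.
C = 2.998e8  # m/s


def cumulative_depth(segments):
    """segments: iterable of (delta_t_seconds, n_layer) pairs, front to rear."""
    return sum(dt * C / (2.0 * n) for dt, n in segments)


# Illustrative placeholders (not the measured TOF values):
segments = [
    (2.70e-12, 2.157),   # compact GFRP above the top delamination
    (1.27e-12, 1.462),   # top PTFE film
    # ... remaining inter-delamination and PTFE intervals
]
print(cumulative_depth(segments))  # depth of the interface reached so far, in metres
```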
Table 4 Estimated in-depth locations of the different delamination defects in the GFRP composite sample
Absorbance and Fraction of the THz Waves
To investigate the optical path of the THz wave passing through our multilayered GFRP composite sample with multiple delamination defects, the power of the reflected THz wave was calculated using the Fresnel equations presented in Eqs. (14)–(17). Recall that the THz-TDS was used in both transmission and reflection modes to obtain the signals for measuring the optical parameters of the materials, and in reflection mode to obtain the signals for the nondestructive evaluation of the delamination defects. In all these cases, the incident THz wave impinging the surface of the GFRP composite sample was considered quasi-normal (i.e., the incident angle is equal to 0°), so \(\sin \left( \theta \right) = 0\). Using the aforementioned Fresnel equations, we found that 72.61% of the THz-wave power was transmitted through the surface into the GFRP composite sample, while 27.39% was reflected. This reflected component was captured by the PCA receiver as the power of the reflected THz waves, as represented in Figure 9. The intensity of the first peak of the THz pulse from the PCA emitter is represented by the signal from the gold-painted mirror, while the first peak of the THz wave reflected from the surface of the GFRP composite sample is represented by the GFRP composite plate signal.
Comparison of pulsed THz waves reflected from the gold-painted mirror and the GFRP composite sample
In analyzing the absorbance of the THz wave by the GFRP composite plates and the different interfaces, we use the information outlined in Figure 7. The latter indicates that there are 8 reflections in total, in line with the number of interfaces created by the three delamination defects and the two sample surfaces. Out of these eight reflections, seven (i.e., from \(E_{in,1}\) to \(E_{in,7}\)) are subjected to THz power absorption at each interface and the remaining portion reaches the rear of the GFRP composite sample (i.e., the power of the transmitted THz wave is absorbed at every interface of its path through the GFRP composite sample). The absorbed and the remaining THz power at each interface can be easily calculated using Eq. (18). According to Ref. [14], up to 26.85% of the THz wave's total power is absorbed when it passes through a 3 mm thick GFRP composite with an absorption coefficient of 1 mm⁻¹, suggesting that only 73.15% of the emitted THz wave passes through a composite with this set of parameters. In this context, 53.114% (corresponding to 0.7261 × 0.7315) of the power of the transmitted THz wave reached the first interface between the compact GFRP composite and the upper side of the top delamination defect. At this level, the THz wave was reflected and its reflected power is estimated to be 14.548% (corresponding to 0.7261 × 0.7315 × 0.2739). Similarly, the reflected THz wave was again transmitted into the compact GFRP composite with some attenuation, and the transmitted power of the THz wave was evaluated to be 10.642% (corresponding to 0.7261 × 0.7315² × 0.2739). Using the same analogy, the power of the second peak is found to be 7.727% (corresponding to 0.7261² × 0.7315² × 0.2739) with transmittance at the interface between the GFRP composite sample and air. To calculate the power of the third peak, the power of the transmitted THz wave at the interface between the compact GFRP and the upper side of the top delamination was found to be 38.566% (corresponding to 0.7261² × 0.7315), and this is the power of the THz wave that reached the bottom side of the top delamination. The reflected THz wave came with a power equal to 10.563% (corresponding to 0.7261² × 0.7315 × 0.2739). The THz wave then penetrated the delamination and reached its upper side. The power of the THz wave reaching this point was calculated to be 7.670% (corresponding to 0.7261³ × 0.7315 × 0.2739) with transmittance at the upper side of the top delamination. The transmitted power of the THz wave was estimated to be 5.611% (corresponding to 0.7261³ × 0.7315² × 0.2739). After calculating all these fractions of power in transmissions and reflections, the power of the third peak was determined to be 4.074% (corresponding to 0.7261⁴ × 0.7315² × 0.2739) with transmittance at the surface of the GFRP composite sample. To compare the calculated values with the measured values, local extrema were identified based on the algorithm presented in Ref. [36]. The power of the THz wave measured at this level is estimated to be 3.824% for the third peak, which corresponds to a maximum error of about −6.54%. This error might have been caused by the multiple calculations performed to reach this stage.
In the case of the middle delamination defect, several transmissions and reflections must be taken into consideration. First, the first peak was obtained from the surface and the second and third peaks from the top delamination defect; the calculation details are provided in the previous paragraph. In the same context, the fourth and fifth peaks originated from the middle delamination defect. The second and fourth peaks share the same phase because they are both reflected from the upper sides of the top and middle delamination defects, respectively. However, the fifth peak presents a reverse phase with respect to the fourth peak because the former is reflected from the fixed end while the latter is reflected from the free end, corresponding to the bottom and upper sides of the second delamination, respectively. One of the main advantages of THz waves is that they can successfully penetrate overlapping delamination defects, allowing users to easily detect defects deeply buried beneath other defects/damage. Using the same calculation patterns, we can also calculate the powers of the THz wave at the interfaces associated with the middle delamination defect. In this context, the power of the THz wave penetrating the top delamination and reaching its bottom side is estimated to be 28.003% (corresponding to 0.7261³ × 0.7315). Considering the attenuation of the THz wave at the upper side of the middle delamination defect, the power of the transmitted THz wave penetrating the GFRP was determined to be 20.048% (corresponding to 0.7261³ × 0.7315²). In this case, the power of the transmitted THz wave when being reflected at the upper side of the middle delamination was estimated to be 5.611% (corresponding to 0.7261³ × 0.7315² × 0.2739). The reflected THz wave was transmitted again into the compact GFRP, and its power was determined to be 8.419% (corresponding to 0.9226³ × 0.7315³ × 0.2739). In this context, the intensity of the THz power in the fourth peak was found to be 1.149% (corresponding to 0.7261⁶ × 0.7315⁴ × 0.2739) with three times the transmittance and one time the attenuation. Similarly, the power of the THz wave in the fifth peak was estimated to be 0.606% (corresponding to 0.7261⁸ × 0.7315⁴ × 0.2739), with two more transmittance factors compared with the fourth peak. The power of the THz wave measured at this level is estimated to be 0.998% and 0.532% for the fourth and fifth peaks, respectively.
As indicated in Figure 7, the centers of all three delamination defects overlap. The reflections from \(R_{1}\) to \(R_{5}\) and the corresponding THz-wave powers at these interfaces have already been calculated. At this stage, we can now calculate the power of the THz wave for the sixth and seventh reflections. These two reflections/peaks originated from the bottom delamination defect. Since the THz wave can easily penetrate the compact GFRP composite sample and the three overlapping delamination defects, it is easy to evaluate the sizes of all three delamination defects. Similarly, the sixth and second peaks have the same phase because they are both reflected from the upper sides of the bottom and top delamination defects, respectively. However, the seventh peak presents a reverse phase because it was reflected from the bottom side of the bottom delamination defect. The THz wave went through the compact GFRP composite sample in the same way it did for the middle delamination defect. At the bottom side of the middle delamination defect, the power of the transmitted THz wave was found to be 10.800% (corresponding to 0.7261⁵ × 0.7315²). Subsequently, the THz wave penetrated the compact GFRP composite with a power of 8% (corresponding to 0.7261⁵ × 0.7315³) following the attenuation in the compact section of the GFRP composite sample. Similarly, the transmitted THz wave with attenuation was reflected at the upper side of the bottom delamination defect, and the power at this level was estimated to be 2.164% (corresponding to 0.7261⁵ × 0.7315³ × 0.2739). The power of the sixth peak was found to be 0.171% (corresponding to 0.7261¹⁰ × 0.7315⁶ × 0.2739); this power is calculated using the same process as above, considering the transmittance and attenuation five times and three times, respectively. The power of the seventh peak was found to be 0.090% (corresponding to 0.7261¹² × 0.7315⁶ × 0.2739), considering two more transmittance factors compared with the sixth peak.
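The walk-through above multiplies one interface reflectance by as many surface-transmittance and bulk-attenuation factors as the pulse accumulates on its path; a compact sketch of this bookkeeping is given below (Python), reproducing, for example, the 4.074% value quoted for the third peak.

```python
# A minimal sketch of the power bookkeeping used above: each detected echo is a
# product of the surface transmittance (0.7261), the per-pass attenuation factor
# (0.7315) and one interface reflectance (0.2739), with the exponents counting
# the number of surface crossings and bulk passes.
def peak_power(n_transmissions, n_attenuations,
               t_surface=0.7261, a_bulk=0.7315, r_interface=0.2739):
    return (t_surface ** n_transmissions) * (a_bulk ** n_attenuations) * r_interface


# Third peak of the top delamination: 0.7261^4 * 0.7315^2 * 0.2739 ~= 4.07 %
print(100.0 * peak_power(4, 2))
```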
Conclusions
In this work, the THz-TDS system was used to characterize multilayered GFRP composite laminates and to evaluate the thickness and location of simulated internal delamination defects. The study also presents a method to calculate the power of the THz waves at every interface in the through-the-thickness direction to help NDT&E engineers and practitioners estimate the maximum thickness of multilayered GFRP composites that can be evaluated by the THz-TDS system. The specific results and conclusions of the study are summarized as follows:
The multi-delamination defects were successfully simulated by inserting layers of PTFE in the GFRP composite samples and the optical parameters for the different materials in the samples (i.e., compact GFRP composite and PTFE) were calculated.
The SWT-based algorithm was used to denoise the measured THz signals and both the thickness and in-depth location of each delamination defect were successfully calculated using analytical and experimental approaches.
The wavelet shrinkage de-noising algorithm was proven to remove the fluctuations in the THz signals effectively while equally preserving the features associated with the multi-delamination defects. The denoised THz signals are clearer than the measured THz signals and both the in-depth locations and thicknesses of the different delamination defects were accurately calculated.
A comparison between the actual and the measured thickness values of the delamination defects before and after the wavelet shrinkage denoising process indicates that the de-noised results are highly accurate, with a relative error of less than 3.712%, while the relative error of the non-de-noised signals reaches 16.388%.
This method could be of great interest to NDT&E engineers and practitioners using THz-TDS, as it removes undesired features associated with atmospheric (water-vapor) resonances that would otherwise obscure the sample features to be detected. That is, in many laboratory and field-based THz-TDS NDT&E applications, the difficulty and expense of purging the sample housing unit with dry nitrogen to physically remove the humidity features from the THz signals can be replaced by adequate signal processing such as this.
Finally, the theoretical analysis provided in the present study would help NDT&E engineers and practitioners to estimate the maximum thickness of the GFRP composite materials and structural systems that can be evaluated by the THz-TDS.
The authors acknowledge the funding received from the National Natural Science Foundation of China (Grant Nos. 52275096, 52005108, 52275523), the Fuzhou-Xiamen-Quanzhou National Independent Innovation Demonstration Zone High-end Equipment Vibration and Noise Detection and Fault Diagnosis Collaborative Innovation Platform Project, and the Fujian Provincial Major Research Project (Grant No. 2022HZ024005).
Fujian Provincial Key Laboratory of Terahertz Functional Devices and Intelligent Sensing, School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
Walter Nsengiyumva, Shuncong Zhong & Bing Wang
Institute of Precision Instrument and Intelligent Measurement & Control, Fuzhou University, Fuzhou, 350108, P. R. China
School of Mechanical and Electrical Engineering, Putian University, Putian, 351100, China
Manting Luo
WN: conceptualization, preparation of the manuscript, data curation, and writing of the original draft. SZ and BW: funding acquisition. SZ: conceptualization, supervision of the work, revision, proofreading, and editing of the manuscript. WN, ML, and BW: preparation of the samples, revision, proofreading, and graphic design. All authors read and approved the final manuscript.
Walter Nsengiyumva received his Ph.D. degree in Mechatronic Engineering from School of Mechanical Engineering and Automation, Fuzhou University, China, in 2022. He is currently an Associate Professor at Fuzhou University, China, and a reviewer of more than 15 international journals. His research interests include structural health monitoring, terahertz time-domain spectroscopy and imaging, composite structures, mechanics, and nondestructive testing and evaluation of materials and structural systems.
Shuncong Zhong is a Professor of Mechatronic Engineering at Fuzhou University, China. He received his Ph.D. degree from the University of Manchester, U.K., in 2007. He has had a long industrial and academic career at Imperial College London, the University of Liverpool, the University of Strathclyde, Shanghai Jiao Tong University, and Mindray Company, Ltd., China. He is currently a Chair Professor at Fuzhou University, China. He has published more than 200 papers, holds more than 30 Chinese, US, and UK patents and 13 software copyrights, and has also published 3 books/chapters and 5 ISO and Chinese standards. His research interests include intelligent sensing and diagnosis, optics and terahertz systems, structural health monitoring, nondestructive testing and evaluation, signal and image processing, and pattern recognition for diagnosis and prognostics. Prof. Zhong is currently the vice-chairman of the Fujian Society of Mechanical Engineering, the vice-chairman of the Fujian Society of Mechanics, chairman of the Nondestructive Testing Branch of the Fujian Mechanical Engineering Society, executive member of the Fault Diagnosis Committee of the Chinese Society of Vibration Engineering, and executive member of the Equipment Intelligent Operation and Maintenance Branch of the Chinese Society of Mechanical Engineering. He is also an associate editor, guest editor, and editorial board member of 8 international journals. He received 2 Fujian Provincial Science and Technology Progress Awards (1st Prize, ranking first) and 1 Fujian Provincial Higher Education Teaching Achievement Award (Top Prize, ranking first). He was elected a Fellow of the Institution of Engineering and Technology (IET), a Fellow of the International Institute of Acoustics and Vibration (IIAV), and a Fellow of the International Society for Condition Monitoring (ISCM).
Manting Luo received her Ph.D. degree from the School of Mechanical Engineering and Automation, Fuzhou University, China, in 2021. She is a lecturer and a master's tutor at Putian University, China. Her research interests include optics, terahertz technology, and non-destructive testing and evaluation of structural systems. She has led 5 teaching and scientific research projects, participated in 8 projects including National Natural Science Foundation projects and various provincial and municipal scientific research projects, and participated in a number of enterprise horizontal projects. She has published 7 first-author papers and holds 1 invention patent and 5 utility model patents.
Bing Wang is a Professor of Mechanical Engineering at Fuzhou University, Fuzhou, China. He received his Ph.D. at the University of Hull in Jan 2017 and then became a Research Fellow. He moved to the University of Cambridge as a Research Associate in bistable composite technologies from July 2017 to April 2020. His research interests include smart materials, composite structures, mechanics, non-destructive testing & evaluation, multiscale analysis, elasticity, and viscoelasticity.
Correspondence to Shuncong Zhong.
Nsengiyumva, W., Zhong, S., Luo, M. et al. Terahertz Spectroscopic Characterization and Thickness Evaluation of Internal Delamination Defects in GFRP Composites. Chin. J. Mech. Eng. 36, 6 (2023). https://doi.org/10.1186/s10033-022-00829-7
Revised: 08 August 2022
Keywords: Glass-fiber-reinforced polymer-matrix (GFRP) composites; Terahertz time-domain spectroscopy (THz-TDS); Nondestructive testing and evaluation (NDT&E); Stationary wavelet transform (SWT); Thickness evaluation; Delamination defects
Dynamic pathway of the photoinduced phase transition of ${\mathrm{TbMnO}}_{3}$
Bothschafter, Elisabeth M. Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Abreu, Elsa Institute for Quantum Electronics, ETH Zurich, Switzerland
Rettig, Laurenz Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Kubacka, Teresa Institute for Quantum Electronics, ETH Zurich, Switzerland
Parchenko, Sergii Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Porer, Michael Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Dornes, Christian Institute for Quantum Electronics, ETH Zurich, Switzerland
Windsor, Yoav William Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Ramakrishnan, Mahesh Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Alberca, Aurora Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Manz, Sebastian Department of Materials, ETH Zurich, Switzerland
Saari, Jonathan Institute for Quantum Electronics, ETH Zurich, Switzerland
Koohpayeh, Seyed M. Institute for Quantum Matter (IQM), Johns Hopkins University, Baltimore, USA
Fiebig, Manfred Department of Materials, ETH Zurich, Switzerland
Forrest, Thomas Diamond Light Source, Harwell Science and Innovation Campus, Didcot, United Kingdom
Werner, Philipp Department of Physics, University of Fribourg, Switzerland
Dhesi, Sarnjeet S. Diamond Light Source, Harwell Science and Innovation Campus, Didcot, United Kingdom
Johnson, Steven L. Institute for Quantum Electronics, ETH Zurich, Switzerland
Staub, Urs Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland
Physical Review B. - 2017, vol. 96, no. 18, p. 184414
Département de Physique
DOI 10.1103/PhysRevB.96.184414
DOI:10.2140/apde.2021.14.1873
Quantitative comparisons of multiscale geometric properties
@article{Azzam2019QuantitativeCO,
  title={Quantitative comparisons of multiscale geometric properties},
  author={Jonas Azzam and Michele Villa},
  journal={Analysis \& PDE},
  volume={14},
  year={2021}
}
Jonas Azzam, Michele Villa
Analysis & PDE
We generalize some characterizations of uniformly rectifiable (UR) sets to sets whose Hausdorff content is lower regular (and in particular, do not need to be Ahlfors regular). For example, David and Semmes showed that, given an Ahlfors $d$-regular set $E$, if we consider the set $\mathscr{B}$ of surface cubes (in the sense of Christ and David) near which $E$ does not look approximately like a union of planes, then $E$ is UR if and only if $\mathscr{B}$ satisfies a Carleson packing condition…
The weak lower density condition and uniform rectifiability
Jonas Azzam, M. Hyde
We show that an Ahlfors $d$-regular set $E$ in $\mathbb{R}^{n}$ is uniformly rectifiable if the set of pairs $(x,r)\in E\times (0,\infty)$ for which there exists $y \in B(x,r)$ and $0 0$. To prove…
Higher dimensional Jordan curves.
Michele Villa
We address the question of what is the correct higher dimensional analogue of Jordan curves from the point of view of quantitative rectifiability. More precisely, we show that 'topologically stable'…
Harmonic Measure and the Analyst's Traveling Salesman Theorem
Jonas Azzam
We study how generalized Jones $\beta$-numbers relate to harmonic measure. Firstly, we generalize a result of Garnett, Mourgoglou and Tolsa by showing that domains in $\mathbb{R}^{d+1}$ whose…
A $d$-dimensional Analyst's Travelling Salesman Theorem for general sets in $\mathbb{R}^n$.
M. Hyde
In his 1990 paper, Jones proved the following: given $E \subseteq \mathbb{R}^2$, there exists a curve $\Gamma$ such that $E \subseteq \Gamma$ and \[ \mathscr{H}^1(\Gamma) \sim \text{diam}\, E +…
An Analyst's Travelling Salesman Theorem for general sets in $\mathbb{R}^n$
Almost sharp descriptions of traces of Sobolev $W_{p}^{1}(\mathbb{R}^{n})$-spaces to arbitrary compact subsets of $\mathbb{R}^{n}$. The case $p \in (1,n]$
A. Tyulenev
Let $S \subset \mathbb{R}^{n}$ be an arbitrary nonempty compact set such that the $d$-Hausdorff content $\mathcal{H}^{d}_{\infty}(S) > 0$ for some $d \in (0, n]$. For each $p \in (\max\{1, n-d\}, n]$ an almost sharp intrinsic description of the trace space…
Subsets of rectifiable curves in Banach spaces: sharp exponents in Schul-type theorems
Matthew Badger, Sean McCurdy
The Analyst's Traveling Salesman Problem is to find a characterization of subsets of rectifiable curves in a metric space. This problem was introduced and solved in the plane by Jones in 1990 and…
Some porosity-type properties of sets related to the $d$-Hausdorff content
Let $S \subset \mathbb{R}^{n}$ be a nonempty set. Given $d \in [0, n)$ and a cube $Q \subset \mathbb{R}^{n}$ with $l = l(Q) \in (0, 1]$, we show that if the $d$-Hausdorff content $\mathcal{H}^{d}_{\infty}(Q \cap S) < \lambda l^{d}$ for some $\lambda \in (0, 1)$, then the set $Q$…
A d-dimensional analyst's travelling salesman theorem for subsets of Hilbert space
Mathematische Annalen
We are interested in quantitative rectifiability results for subsets of infinite dimensional Hilbert space H. We prove a version of Azzam and Schul's d-dimensional Analyst's Travelling Salesman…
Sets with topology, the Analyst's TST, and applications
This paper was motivated by three questions. First: in a recent paper, Azzam and Schul asked what sort of sets could play the role of curves in the context of the higher dimensional analyst's…
Uniform rectifiability and harmonic measure I: Uniform rectifiability implies Poisson kernels in $L^p$
S. Hofmann, J. M. Martell
We present a higher dimensional, scale-invariant version of a classical theorem of F. and M. Riesz. More precisely, we establish scale invariant absolute continuity of harmonic measure with respect…
Subsets of rectifiable curves in Hilbert space-the analyst's TSP
R. Schul
We study one dimensional sets (Hausdorff dimension) lying in a Hilbert space. The aim is to classify subsets of Hilbert spaces that are contained in a connected set of finite Hausdorff length. We do…
Rectifiable sets and the Traveling Salesman Problem
Peter W. Jones
Let K c C be a bounded set. In this paper we shall give a simple necessary and sufficient condit ion for K to lie in a rectifiable curve. We say that a set is a rectifiable curve if it is the image…
An analyst's traveling salesman theorem for sets of dimension larger than one
Jonas Azzam, R. Schul
In his 1990 Inventiones paper, P. Jones characterized subsets of rectifiable curves in the plane via a multiscale sum of $\beta$-numbers. These $\beta$-numbers are geometric quantities…
A sharp necessary condition for rectifiable curves in metric spaces
Guy C. David, R. Schul
In his 1990 Inventiones paper, P. Jones characterized subsets of rectifiable curves in the plane, using a multiscale sum of what is now known as Jones $\beta$-numbers, numbers measuring flatness in a…
The weak-A∞ property of harmonic and p-harmonic measures implies uniform rectifiability
S. Hofmann, Phi Le, J. M. Martell, K. Nystrom
Let $E\subset \mathbb{R}^{n+1}$, $n\ge 2$, be an Ahlfors-David regular set of dimension $n$. We show that the weak-$A_\infty$ property of harmonic measure, for the open set $\Omega:= \mathbb{R}^{n+1}\setminus E$, implies u…
Analysis of and on uniformly rectifiable sets
G. David, S. Semmes
The notion of uniform rectifiability of sets (in a Euclidean space), which emerged only recently, can be viewed in several different ways. It can be viewed as a quantitative and scale-invariant…
On the uniform rectifiability of AD-regular measures with bounded Riesz transform operator: the case of codimension 1
F. Nazarov, A. Volberg, X. Tolsa
We prove that if $\mu$ is a $d$-dimensional Ahlfors-David regular measure in $\mathbb{R}^{d+1}$, then the boundedness of the $d$-dimensional Riesz transform in $L^{2}(\mu)$ implies that the non-BAUP…
Reifenberg Parameterizations for Sets with Holes
G. David, T. Toro
We extend the proof of Reifenberg's Topological Disk Theorem to allow the case of sets with holes, and give sufficient conditions on a set $E$ for the existence of a bi-Lipschitz parameterization of…
Single-target networks
Gheorghe Craciun 1, Jiaxin Jin 2 and Polly Y. Yu 2
1. Department of Mathematics and Department of Biomolecular Chemistry, University of Wisconsin-Madison, 53706
2. Department of Mathematics, University of Wisconsin-Madison, 53706
* Corresponding author: Jiaxin Jin
Received: June 2020; Revised: December 2020; Early access: March 2021; Published: February 2022
Fund Project: The authors are supported by NSF grant DMS-1816238
Reaction networks can be regarded as finite oriented graphs embedded in Euclidean space. Single-target networks are reaction networks with an arbitrary set of source vertices but only one sink vertex. We completely characterize the dynamics of all mass-action systems generated by single-target networks, as follows: either (i) the system is globally stable for all choices of rate constants (in fact, it is dynamically equivalent to a detailed-balanced system with a single linkage class), or (ii) the system has no positive steady states for any choice of rate constants and all trajectories must converge to the boundary of the positive orthant or to infinity. Moreover, we show that global stability occurs if and only if the target vertex of the network is in the relative interior of the convex hull of the source vertices.
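The geometric criterion in the last sentence can be checked numerically. The sketch below is not from the paper; it only illustrates the stated condition, using the standard fact that a point lies in the relative interior of the convex hull of finitely many points exactly when it is a strictly positive convex combination of them, and it assumes NumPy and SciPy.

```python
# Check whether a target vertex lies in the relative interior of the convex hull
# of the source vertices by maximizing the smallest convex-combination weight.
import numpy as np
from scipy.optimize import linprog

def in_relative_interior(sources, target, tol=1e-9):
    """sources: (k, d) array of source vertices; target: (d,) target vertex."""
    V = np.asarray(sources, dtype=float)
    t = np.asarray(target, dtype=float)
    k, d = V.shape
    # Variables: weights lam_1..lam_k and a slack eps; maximize eps.
    c = np.zeros(k + 1); c[-1] = -1.0                   # minimize -eps
    A_eq = np.zeros((d + 1, k + 1))
    A_eq[:d, :k] = V.T                                  # sum_i lam_i * v_i = t
    A_eq[d, :k] = 1.0                                   # sum_i lam_i = 1
    b_eq = np.concatenate([t, [1.0]])
    A_ub = np.hstack([-np.eye(k), np.ones((k, 1))])     # eps <= lam_i
    b_ub = np.zeros(k)
    bounds = [(0.0, 1.0)] * (k + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return bool(res.success and -res.fun > tol)

# 1-D example: a target strictly between two sources vs. a target on a source.
print(in_relative_interior([[0.0], [2.0]], [1.0]))   # True  (globally stable case (i))
print(in_relative_interior([[0.0], [2.0]], [2.0]))   # False (boundary of the hull)
```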
Keywords: Dynamical systems in biology, kinetics in biochemical problems, systems biology, networks, chemical kinetics in thermodynamics and heat transfer.
Mathematics Subject Classification: Primary: 37N25, 92C42; Secondary: 80A30.
Citation: Gheorghe Craciun, Jiaxin Jin, Polly Y. Yu. Single-target networks. Discrete & Continuous Dynamical Systems - B, 2022, 27 (2) : 799-819. doi: 10.3934/dcdsb.2021065
Figure 1. (a) A single-target network that is globally stable under mass-action kinetics. (b)–(c) Single-target networks with no positive steady states. (d) Not a single-target network
Figure 2. Consider subnetworks of (a) under mass-action kinetics, whose associated dynamics is given by (9). If the coefficient of $ {\boldsymbol{x}}^{ {\boldsymbol{y}}_1} $ in $ \dot{x} $ is positive and the coefficient of $ {\boldsymbol{x}}^{ {\boldsymbol{y}}_3} $ in $ \dot{x} $ is negative, then the system (9) can be realized by a single-target network, determined by the sign of $ {\boldsymbol{x}}^{ {\boldsymbol{y}}_2} $ in $ \dot{x} $. If the net directions are as shown in (b), then (9) can be realized by the single-target network in (c). Similarly, if the net directions appear as in (d), then (9) can be realized by the network in (e)
Figure 3. Reversible systems in (a) Example 3.12 and (b) Example 3.13 that are dynamically equivalent to detailed-balanced systems. Each undirected edge represents a pair of reversible edges
Figure 4. Geometric argument for dynamically equivalence to single-target network in Example 3.12. Shown are the edges with $ \mathrm{{X}}_1 + \mathrm{{X}}_2 $ as their source. The centre $ \left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2} \right)^\top $ of the tetrahedron is marked in blue. With rate constants given in the example, the weighted sum of reaction vectors points from the source to the centre
Figure 5. (a) The mass-action system with two target vertices from Example 4.2, which is dynamically equivalent to a complex-balanced system using a subnetwork of (b) if and only if $ \frac{1}{25} \leq \frac{ \kappa_1 \kappa_3}{ \kappa_2 \kappa_4} \leq 25 $
Figure 6. The system in Figure 5(a) is dynamically equivalent to a complex-balanced system if and only if $ \frac{1}{25} \leq \frac{ \kappa_1 \kappa_3}{ \kappa_2 \kappa_4} \leq 25 $. The system is equivalent to (a) when $ \kappa_2 \kappa_4 = 25 \kappa_1 \kappa_3 $ and (b) when $ 25 \kappa_2 \kappa_4 = \kappa_1 \kappa_3 $. For $ \frac{1}{25} \leq \frac{ \kappa_1 \kappa_3}{ \kappa_2 \kappa_4} \leq 25 $, the dynamically equivalent system is an appropriate convex combination of (a) and (b)
Effect of the entomopathogenic fungus, Lecanicillium lecanii, on the biology and predation rate of the anthocorid predatory bug, Blaptostethus pallescens, feeding on the flower thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae)
K. Sundaravalli 1,2,
Richa Varshney (ORCID: orcid.org/0000-0003-1660-2981) 1,
A. Kandan 1 &
K. Revathi 2,3
Egyptian Journal of Biological Pest Control, volume 32, Article number: 142 (2022)
The flower thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae), is a notorious pest that attacks plants of economic importance. The anthocorid bug Blaptostethus pallescens Poppius (Heteroptera: Anthocoridae) is a predator of thrips in field crops and greenhouses. Another biocontrol agent, the entomopathogenic fungus (EPF) Lecanicillium lecanii (Zimm.) (Hypocreales: Cordycipitaceae), has been used effectively in the management of various insect pests. With the aim of developing an effective combination of biocontrol agents, such as a microbial agent and an insect predator, for the management of the serious pest F. schultzei, the present study examined the development, predation rate and prey preference of the predator B. pallescens provided with EPF (ICAR-NBAIR Vl-8)-treated thrips under laboratory conditions.
The predator B. pallescens could complete its life cycle on treated thrips. The nymphal duration of bugs fed on treated thrips was longer (25.25 ± 0.13 days). The Vl-8 strain did not show any negative effect on egg hatchability (83% hatchability in the treatment; 86% in the control). However, longevity was shorter in bugs fed on treated thrips than in the control group. The fecundity of the treated group was reduced to one half of that of the control group. The daily feeding rate of bugs on treated thrips was lower (7.29 ± 0.15) than on untreated thrips (12.54 ± 0.1) throughout their lifetime. Moreover, when the F1 generations from both parental lines (fed on treated and untreated thrips) were allowed to feed on Corcyra cephalonica eggs, no difference in nymphal duration was observed, which revealed that the fungus did not affect the F1 generation. When different instars of this predatory bug were given a choice between treated and untreated thrips, all instars, including adults, significantly preferred the untreated thrips. However, nymphs and adult bugs were found to encounter both the treated and the untreated thrips. No mortality was observed in any stage of the predator.
This study shows that L. lecanii (ICAR-NBAIR Vl-8) is not harmful to B. pallescens. However, further field studies are required to evaluate their combined effect against this pest.
The flower thrips, Frankliniella schultzei (Trybom), is a polyphagous insect pest with the ability to feed on alternative food sources. It is a severe pest of cucumber, bean, snap bean, pepper and corn (Pereira et al. 2017). It is also called the common blossom thrips and is distributed in the subtropical as well as the tropical regions around the world. In addition to being a serious pest that causes major losses through feeding damage, it also acts as a vector transmitting Tospovirus (Tamíris et al. 2020).
Minute pirate bugs as well as entomopathogenic fungi (EPF) could be alternatives to chemicals for controlling thrips. The anthocorid predatory bug, Blaptostethus pallescens Poppius (Heteroptera: Anthocoridae), has been found feeding on thrips, lepidopteran insects and mites (Ballal et al. 2018). This bug is suitable for laboratory rearing, and its biological characteristics have already been studied (Ballal et al. 2003). B. pallescens has been found preying upon several pests, including eggs of lepidopteran pests, spider mites, the woolly aphid infesting cotton and young caterpillars (Ballal et al. 2018). EPFs offer numerous benefits for the management of various insect pests; they are safer for the environment and could be a replacement for chemical pesticides (Bamisile et al. 2021).
To manage serious pests like thrips, it is desirable to use more than one management practice in a compatible manner. Integration strategies that combine insect natural enemies and microbials have potential against insect pests. It is therefore necessary to test the lethal and sublethal effects of such microbial agents on natural predators (Croft 1990) in order to understand the compatibility of the two bioagents for better pest management (Najme et al. 2011). These microbials could affect the biology, life table parameters and predatory potential of both the parental and the F1 generation. Thus, the present study aimed to assess the effect of L. lecanii (strain NBAIR Vl-8) on the biology, predation rate and prey preference of the predatory bug B. pallescens when fed on F. schultzei. This study will enhance our understanding and provide information on the combined application of these two bioagents to manage the thrips.
Entomopathogenic fungi culture
The EPF, L. lecanii Vl-8 strain, was grown and maintained on Sabouraud Dextrose Yeast Agar (SDYA) medium at 25 ± 1 °C for 8–10 days. After the growth of a pure culture on the plates, the fungal spores were inoculated into Sabouraud Dextrose Yeast Broth (SDYB) liquid medium. The broth was then placed in a shaker at 25 ± 1 °C and 150 rpm for 8 days. The 8-day-old culture was then mixed with pre-autoclaved talc in the ratio 1:2 (500 ml of fungal broth : 1 kg of talc). The mixed talc was covered with a black cloth and kept for drying for 3 days under aseptic conditions. Quality analysis of the EPF was done by checking the spore count with a hemocytometer and by the serial dilution method. Briefly, 1 g of talc was mixed with 9 ml of water and vortexed, and 1 ml of this suspension was transferred to another vial to obtain a 10^-2 dilution. The same procedure was repeated up to the 10^-12 dilution. Further, 1 ml taken from the 10^-6, 10^-8, 10^-10 and 10^-12 dilutions was transferred to Petri plates containing SDYA medium. The colony count observed on the 10^-8 plates confirmed that the talc contained about 10^8 spores per gram. After the spore count was quantified, the suspensions were finally prepared by serial dilution in distilled water with 0.1% Tween-80 added and stored in a refrigerator at 4 °C until further use.
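For reference, the arithmetic behind such a plate-count estimate can be sketched as follows; the colony count is hypothetical, and the sketch assumes 1 ml is plated per dilution and that the initial 1 g of talc in 9 ml of water is treated as the first tenfold dilution.

```python
# Standard plate-count arithmetic: spores per gram of talc from a dilution plate.
colonies = 120          # hypothetical colonies counted on the 10^-6 plate
dilution = 1e-6         # grams of talc represented per ml of the plated suspension
volume_plated_ml = 1.0
spores_per_gram = colonies / (dilution * volume_plated_ml)
print(f"estimated concentration: {spores_per_gram:.1e} spores per gram")   # 1.2e+08
```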
Insect culture
Frankliniella schultzei
Adults of F. schultzei were reared on chemical-free beans in a plastic container (1 L capacity) with a lid. The lid of the container was cut open and fitted with a fine mesh for aeration. Bean pods were collected daily from the adult container and kept for hatching to obtain the desired stage.
Blaptostethus pallescens
Adults of B. pallescens were obtained from the stock culture reared at the ICAR-NBAIR laboratory, Bengaluru, and subcultured in plastic pearl pet containers of 500 ml capacity. Corcyra cephalonica eggs were given ad libitum as food, and 4–5 pieces of bean pod, each 5–6 cm long, were provided as ovipositional substrate in each container, which can hold up to 20 adult B. pallescens. Bean pods were collected daily to harvest the eggs of B. pallescens and kept for hatching. The newly hatched nymphs were reared in plastic containers following the same procedure as for the adults. Corcyra eggs were replenished daily as food. A small piece of cotton soaked in water was introduced into each container to maintain the moisture.
Biology and feeding potential on Vl-8 treated and untreated thrips
Laboratory studies were carried out at 26 ± 2 °C and 65 ± 2% R.H. Second-instar larvae of thrips were sprayed with 1 × 10^8 spores/ml of the Vl-8 fungal strain such that the entire body surface was covered. Twenty-four hours after spraying, the treated thrips were counted and provided as feed for the predator. The predator B. pallescens was acclimatized to laboratory conditions for two generations on untreated F. schultzei before commencing the experiment. Biology was also studied on untreated thrips.
Thirty newly hatched nymphs of the predator were taken for the experiment. Each nymph was placed individually in a plastic container (90 mm diameter) with tissue paper at the bottom and Vl-8-treated thrips as feed. Two bean pods were introduced into each plastic container as food for the thrips. Ten treated or untreated second-instar thrips larvae were provided per nymph for the first two instars of the predatory bug, increasing to about 30 for the late instars and adults because of the higher requirement of the advanced stages. The number of thrips consumed and the time of molting were recorded daily in both treatments. Thrips were replenished daily in all the containers. The experimental setup was maintained until adult emergence. The nymphal duration of each individual was recorded. After adult emergence, the sex ratios were recorded, and the adults were left to mate in order to record fecundity and percent hatching. The longevity of male and female predators was also recorded. Morphometrics of each nymphal instar and the adults were recorded using a LEICA M205C microscope.
F1 generation
The eggs laid by the adult bugs reared on Vl-8 treated or untreated thrips were collected and kept for hatching. Twenty nymphs were taken and reared on UV-treated C. cephalonica eggs to check if there was any adverse or sublethal effect of Vl-8 on nymphal duration of F1.
Prey preference of different instars of predatory bugs
Second, third, fourth and fifth nymphal instars of B. pallescens along with female adults were taken and provided with Vl-8 treated and untreated second instar thrips larvae at the same time. Fifteen treated and fifteen untreated thrips were provided for second and third instar, and the number of thrips was increased to 20 each for fourth instar to adult stage. Experiment setup of each bug was maintained in a large Petri plate and kept for 24 h. The thrips consumed by the bugs were recorded.
Adult preference at different time intervals of thrips infection
Three-day-old adult bugs were offered 20 untreated and 20 Vl-8-treated second-instar thrips larvae at different intervals, i.e., 24, 48, 72 and 96 h post-inoculation, and were checked for their preference. The number of thrips consumed by the bugs was recorded after 24 h.
Prey preference of the different instars of B. pallescens was analyzed using an independent t test comparing the mean numbers of Vl-8-treated and untreated thrips consumed. Manly's preference index for prey type was calculated using the formula given below:
$$\beta_1 = \frac{\log\left(\frac{e_1}{A_1}\right)}{\log\left(\frac{e_1}{A_1}\right) + \log\left(\frac{e_2}{A_2}\right)}$$
where $\beta_1$ is the preference for prey type 1, $e_1$ is the number of prey of type 1 remaining after the experiment, and $A_1$ is the number of prey of type 1 provided initially ($e_2$ and $A_2$ are defined analogously for prey type 2). A value near 0.5 indicates no preference for either prey; a value approaching 1 indicates preference for prey type 1, and a value approaching 0 indicates preference for prey type 2 (Varshney and Ballal 2018). In this study, untreated thrips were chosen as prey type 1. A one-sample t test was used to compare Manly's preference index for untreated thrips against the value 0.5, testing the null hypothesis that the predator selects prey at random.
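A minimal sketch of this analysis is given below (Python with NumPy and SciPy assumed); the replicate counts are purely hypothetical and only illustrate how the index and the one-sample t test against 0.5 are computed.

```python
# Manly's preference index per replicate, then a one-sample t test against 0.5.
import numpy as np
from scipy import stats

def manly_beta1(e1, A1, e2, A2):
    """Preference for prey type 1 from remaining (e) and initially offered (A) prey."""
    return np.log(e1 / A1) / (np.log(e1 / A1) + np.log(e2 / A2))

# Hypothetical replicates: (remaining untreated, offered untreated,
#                           remaining treated,   offered treated)
replicates = [(6, 20, 14, 20), (5, 20, 15, 20), (7, 20, 13, 20), (6, 20, 12, 20)]
betas = [manly_beta1(*r) for r in replicates]

res = stats.ttest_1samp(betas, popmean=0.5)   # H0: no preference (beta1 = 0.5)
print(f"mean beta1 = {np.mean(betas):.3f}, t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```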
Biology on Vl-8 treated and untreated thrips
Blaptostethus pallescens was able to complete its life cycle feeding on Vl-8-treated thrips. The morphometric data on mean body length, width, labium, hind femur and tibia of adult B. pallescens fed on treated thrips did not differ from those of adults fed on untreated thrips; hence, these data are not shown in this paper.
The comparative data on the biological parameters of B. pallescens fed on Vl-8-treated and untreated thrips are given in Table 1. The data revealed that the nymphal duration was longer (25.25 ± 0.13 days) for B. pallescens reared on Vl-8-treated thrips than on untreated thrips (19.56 ± 0.29 days). The longevity of both males and females reared on Vl-8-treated thrips was reduced (male: 13.5 ± 2.22; female: 14.2 ± 2.20 days) compared with bugs reared on untreated thrips (male: 38.67 ± 1.86; female: 40.25 ± 0.85 days). The bugs reared on untreated thrips survived around 24 days longer than those fed on treated thrips, which clearly indicates a reduction in the predation capacity of bugs reared on treated thrips. The fecundity and the number of nymphs per female of insects reared on treated thrips were reduced to one half of those of the control insects.
Table 1 Comparison between biological parameters of Blaptostethus pallescens reared on Vl-8 treated and untreated thrips Frankliniella schultzei
However, no significant difference was observed in percent hatchability between the bugs fed on treated thrips (83%) and those fed on untreated thrips (86%). The sex ratio revealed that female emergence was higher in both treatments.
Feeding potential
Feeding potential data of B. pallescens revealed no significant difference in per-day or total feeding for the first instar of the predatory bug when allowed to feed on Vl-8-treated and untreated thrips. A difference in feeding potential was observed from the second to the fifth instar, resulting in a total nymphal feeding of 103.5 ± 1.42 thrips over a nymphal duration of 25 days for Vl-8-treated thrips and 124.33 ± 1.94 thrips over 19 days for untreated thrips. The average daily feeding rate of the adult bug on treated thrips was 7.29 ± 0.15, significantly lower than in the control (12.54 ± 0.1). The total adult feeding over one week was 51 ± 1.03 on treated thrips and 100.33 ± 0.91 on untreated thrips (Fig. 1).
Feeding potential of different instars of Blaptostethus pallescens fed on Lecanicillium lecanii (Vl-8) treated and untreated Frankliniella schultzei
The nymphal duration of the F1 generation was observed for both treatments, i.e., for offspring of B. pallescens reared on Vl-8-treated and on untreated thrips. There was no significant difference in nymphal duration between the two treatments; nymphs from both were able to complete their nymphal development in 18 days (Table 2).
Table 2 Nymphal duration of F1 generation of Blaptostethus pallescens obtained from the parent bugs fed on Vl-8 treated and untreated thrips
Prey preference of different instars
Data in Table 3 show the prey preference of B. pallescens for Vl-8-treated and untreated thrips recorded 24 h after inoculation. The predator was found to encounter both groups, but it preferred the untreated thrips. In the choice test, the second (t = 3.130, df = 4.927, p = 0.026), third (t = − 5.892, df = 5.880, p = 0.001), fourth (t = − 3.667, df = 5.146, p = 0.014) and fifth (t = − 12.247, df = 6.00, p < 0.0001) instar nymphs and the adult females (t = − 7.155, df = 5.534, p < 0.001) preferred untreated thrips. This was further confirmed by Manly's preference index $\beta_1$ for each nymphal instar and for adult B. pallescens (Table 3).
Table 3 Prey preference of different nymphal instars and adult of B. pallescens fed on Vl-8 treated and untreated thrips recorded after 24 h
Prey preference at different time interval of infection
Comparative data on the prey preference of adult B. pallescens for Vl-8-treated thrips at different post-inoculation periods (24–96 h) versus untreated thrips showed that at each post-inoculation period a significant preference was observed for untreated thrips (24 h: t = 5.73, P < 0.0001; 48 h: t = 9.86, P < 0.0001; 72 h: t = 11.77, P < 0.0001; 96 h: t = 19.5, P < 0.0001). As the post-inoculation time increased, the adult predator consumed more untreated thrips. Very little feeding on treated thrips was observed after 48 h post-inoculation.
The present study revealed no mortality in any stage of B. pallescens fed on Vl-8-treated thrips, and the predatory bugs were able to complete their life cycle on treated thrips. This finding is similar to that of Broza et al. (2001), who found that collembolans were not susceptible to Metarhizium anisopliae, Verticillium lecanii, Beauveria bassiana, Hirsutella spp., and their endotoxins. The longevity of B. pallescens was reduced when fed on treated prey compared with the control group. Similar observations were recorded by Liu et al. (2019), who found a reduction in the longevity and predation rate of the predatory mite Amblydromalus limonicus Garman when treated with the EPF Beauveria bassiana (Balsamo). The feeding potential of B. pallescens on Vl-8-treated thrips was reduced compared with the control group. This finding is corroborated by the study of Pourian et al. (2011), who observed reduced feeding, searching and predation by Orius albidipennis (Reuter) when fed Metarhizium anisopliae-treated prey. Trizelia et al. (2017) found a reduction in the predation ability of Menochilus sexmaculatus when EPF-sprayed aphids were provided. Such a reduction in the feeding potential of B. pallescens might be the cause of the prolonged nymphal duration in bugs fed on infected prey, owing to insufficient nutrition for growth. Reduced feeding on infected prey might hamper pest control and needs field study. However, it is not uncommon in field situations for a generalist predator like B. pallescens to switch prey or to prefer uninfected prey.
The present study clearly showed that the EPF (Vl-8) had no influence on the percent egg hatch of the bugs, but a reduction in fecundity was observed. Sayed et al. (2021) also found that the egg hatching of the coccinellids Coccinella undecimpunctata and Hippodamia variegata was not affected by Beauveria bassiana. Similar to the present study, the fecundity of Cryptolaemus montrouzieri Mulsant was affected by both B. bassiana and M. anisopliae (Mohamed 2019). On the contrary, Nielsen et al. (2005) reported no significant difference in the fecundity of Spalangia cameroni Perkins, a pupal parasitoid, when infected with Metarhizium anisopliae. The nymphal duration of the F1 generation of B. pallescens obtained from parents fed on Vl-8-treated thrips was similar to that of the F1 generation obtained from control parents fed on untreated thrips. Similarly, Liu et al. (2019) observed no influence of B. bassiana on the fecundity of the F1 generation compared with the uninfected control group. It is therefore evident that the trans-generational effect of the EPF was not carried forward to the progeny of infected parents (Midthassel et al. 2016). When a choice was given between Vl-8-treated and untreated thrips, all instars of B. pallescens significantly preferred the untreated prey. Similar results were documented by Meyling and Pell (2006), who found that males and females of Anthocoris nemorum L. detected and avoided contact with B. bassiana-inoculated leaves. In the present study, the adult bugs preferred untreated thrips over Vl-8-treated thrips in all tests, and almost no feeding on treated thrips was observed at 96 h post-inoculation. This may be due to the bug's ability to detect infected prey on contact, particularly once mycelial growth develops on the prey. Similar observations were made for the bug Dicyphus hesperus, which did not prefer whitefly nymphs treated with Isaria fumosorosea five days earlier (Alma et al. 2010).
The study showed no mortality in any stage of the predator when Vl-8-treated thrips were provided as feed. However, fecundity and longevity were affected. No adverse effect was observed on the developmental period or feeding potential of nymphs in the F1 generation, which shows that the trans-generational effect of this fungus (Vl-8) was not carried over to the F1 generation. Furthermore, B. pallescens always preferred untreated thrips over Vl-8-treated thrips, so in the field there is little chance of B. pallescens encountering treated prey. Greenhouse and field experiments need to be conducted to ascertain the compatibility between B. pallescens and L. lecanii (Vl-8) and their role in the management of thrips.
The datasets used and/or analyzed during the current study are available from the first author/corresponding author on reasonable request.
EPF: Entomopathogenic fungi
ICAR: Indian Council of Agricultural Research
NBAIR: National Bureau of Agricultural Insect Resources, Bengaluru
Alma CR, Gillespie DR, Roitberg BD, Goettel MS (2010) Threat of infection and threat-avoidance behavior in the predator Dicyphus hesperus feeding on whitefly nymphs infected with an entomopathogen. J Insect Behav 23:90–99
Ballal CR, Singh SP, Poorani J, Gupta T (2003) Biology and rearing requirements of an anthocorid predator, Blaptostethus pallescens Poppius (Heteroptera: Anthocoridae). J Biol Control 17(1):29–33
Ballal CR, Akbar S, Yamada K, Wachkoo A, Varshney R (2018) Annotated catalogue of the flower bugs from India (Heteroptera: Anthocoridae; Lasiochilidae). Acta Entomol Musei Nationalis Pragae 58:207–226
Bamisile BS, Akutse KS, Siddiqui A, Junaid Ali A, Yijuan X (2021) Model application of entomopathogenic fungi as alternatives to chemical pesticides: prospects, challenges, and insights for next-generation sustainable agriculture. J Front Plant Sci. https://doi.org/10.3389/fpls.2021.741804
Broza M, Pereira RM, Stimac JL (2001) The non-susceptibility of soil Collembola to insect pathogens and their potential as scavengers of microbial pesticides. Pedobiologia 45:523
Croft AB (1990) Arthropod biological control agents and pesticides. Wiley, New York, p 723
Liu J, Zhang Z-Q, Beggs JR, Wei X-Y (2019) Influence of pathogenic fungi on the life history and predation rate of mites attacking a psyllid pest. J Ecotoxicol Environ Saf 183:109585
Meyling NV, Pell JK (2006) Detection and avoidance of an entomopathogenic fungus by a generalist insect predator. Ecol Entomol 31(2):162–171. https://doi.org/10.1111/j.0307-6946.2006.00781.x
Midthassel A, Leather SR, Wright DJ, Baxter IH (2016) Compatibility of Amblyseius swirskii with Beauveria bassiana: two potentially complimentary biocontrol agents. Biocontrol 61:437–447
Mohamed GS (2019) The virulence of the entomopathogenic fungi on the predatory species, Cryptolaemus montrouzieri Mulsant (Coleoptera: Coccinellidae) under laboratory conditions. Egypt J Biol Pest Control 29:42
Najme A, Kamal A, Imani S, Takalloozadeh H (2011) Effects of some pesticides on biological parameters and predation of the predaceous plant bug Deraeocoris lutescens (Hemiptera: Miridae). Adv Environ Biol 5(10):3219
Nielsen C, Skovgård H, Steenberg T (2005) Effect of Metarhizium anisopliae (Deuteromycotina: Hyphomycetes) on survival and reproduction of the filth fly parasitoid, Spalangia cameroni (Hymenoptera: Pteromalidae). Environ Entomol 34(1):133–139. https://doi.org/10.1603/0046-225X-34.1.133
Pereira PS, Sarmento RA, Galdino TV, Lima CH, Dos Santos FA, Silva J, dos Santos GR, Picanço MC (2017) Economic injury levels and sequential sampling plans for Frankliniella schultzei in watermelon crops. Pest Manage Sci 73:1438–1445
Pourian HR, Talaei-Hassanloui R, Kosari AA, Ashouri A (2011) Effects of Metarhizium anisopliae on searching, feeding and predation by Orius albidipennis (Hem., Anthocoridae) on Thrips tabaci (Thy., Thripidae) larvae. Biocontrol Sci Technol 21:15–21
Sayed S, Elarrnaouty SA, AlOtaibi S, Salah M (2021) Pathogenicity and side effect of indigenous Beauveria bassiana on Coccinella undecimpunctata and Hippodamia variegata (Coleoptera: Coccinellidae). Insects 12(1):42
Tamíris AA, Daniela TP, Rodrigo RS, Marcelo CP, Cristina SB, Thomas EH, William DH (2020) Development and validation of sampling plans for Frankliniella schultzei on tomato. Crop Prot 134:105163
Trizelia T, Busniah M, Permadi A (2017) Pathogenicity of entomopathogenic fungus Metarhizium spp. against predators Menochilus sexmaculatus Fabricius (Coleoptera: Coccinellidae). Asian J Agric 1:1–5
Varshney R, Ballal CR (2018) Intraguild predation on Trichogramma chilonis Ishii (Hymenoptera: Trichogrammatidae) by the generalist predator Geocoris ochropterus Fieber (Hemiptera: Geocoridae). Egypt J Biol Pest Control. 28:5
The authors are grateful to Indian Council of Agricultural Research, New Delhi, India, and Director, ICAR−NBAIR, Bengaluru, India, for providing research facilities and encouragement. We are also thankful to Mrs. Usha Ravikumar for technical assistance.
Division of Germplasm Conservation and Utilization, ICAR-National Bureau of Agricultural Insect Resources, P. Bag No. 2491, H A Farm Post, Bellary Road, Bengaluru, Karnataka, 560024, India
K. Sundaravalli, Richa Varshney & A. Kandan
Meenakshi Academy of Higher Education and Research, 12, Vembuliamman Koil Street, West K. K Nagar, Chennai, 600078, India
K. Sundaravalli & K. Revathi
Department of Zoology, Ethiraj College for Women, Chennai, India
K. Revathi
RV conceived the study and planned experiment. KS performed experiments and analyzed the data under the guidance of RV. AK formulated the entomopathogenic fungi. KS wrote the manuscript. RV and KR reviewed manuscript. All the authors read and approved the manuscript.
Correspondence to Richa Varshney or K. Revathi.
This manuscript does not contain any studies with human participants or animals performed by any of the authors.
The authors declare that they have no potential conflicts of interest and that all ethical aspects have been considered.
Sundaravalli, K., Varshney, R., Kandan, A. et al. Effect of the entomopathogenic fungus, Lecanicillium lecanii, on the biology and predation rate of the anthocorid predatory bug, Blaptostethus pallescens, feeding on the flower thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae). Egypt J Biol Pest Control 32, 142 (2022). https://doi.org/10.1186/s41938-022-00634-3
Accepted: 29 November 2022
From trivariate cdf to the distribution of differences of random variables
Consider a trivariate cumulative distribution function (cdf) $G$.
Is there a collection of necessary conditions on $G$ ensuring that $$ \exists \text{ a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has cdf $G$} $$ ?
Is there a collection of necessary and sufficient conditions on $G$ ensuring that $$ \exists \text{ a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has cdf $G$} $$ ?
Update I: Let $P$ be the probability distribution associated with $G$. We can claim that: if there exists a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$, then $$ \int_{(a,b,c)\in \mathbb{R}^3 \text{ s.t. } c=a-b} dP=1 $$
Is this condition also sufficient? I.e., can we claim that if $$ \int_{(a,b,c)\in \mathbb{R}^3 \text{ s.t. } c=a-b} dP=1 $$ then there exists a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$?
Can we write $$ \int_{(a,b,c)\in \mathbb{R}^3 \text{ s.t. } c=a-b} dP=1 $$ by using the cdf $G$ ?
Update II: If there exists a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$, then $P$ should satisfy: for every $\begin{pmatrix} a_1\\ b_1\\ c_1 \end{pmatrix}\leq \begin{pmatrix} a_2\\ b_2\\ c_2 \end{pmatrix}$
If $a_2\geq b_2+c_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1, b_2+c_2], [b_1, b_2], [c_1, c_2])\\ P([a_2, a_3], [b_1, b_2], [c_1, c_2])= 0 & \forall a_3\geq a_2\\ \end{cases} $$
If $b_1\leq a_1-c_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [a_1-c_2, b_2], [c_1, c_2])\\ P([a_1,a_2], [b_3, b_1], [c_1, c_2])=0 & \forall b_3\leq b_1\\ \end{cases} $$
If $a_1 \leq b_1+c_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([b_1+c_1,a_2],[b_1,b_2],[c_1,c_2])\\ P([a_3,a_1], [b_1, b_2], [c_1, c_2])=0 & \forall a_3 \leq a_1 \end{cases} $$
If $b_2\geq a_2-c_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, a_2-c_1], [c_1, c_2])\\ P([a_1,a_2], [b_2, b_3], [c_1, c_2])=0 & \forall b_3\geq b_2 \end{cases} $$
If $c_2 \geq a_2-b_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, b_2], [c_1, a_2-b_1])\\ P([a_1,a_2], [b_1, b_2], [c_2, c_3])=0 & \forall c_3\geq c_2 \end{cases} $$
If $c_1\leq a_1-b_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, b_2], [a_1-b_2, c_2])\\ P([a_1,a_2], [b_1, b_2], [c_3, c_1])=0 & \forall c_3\leq c_1 \end{cases} $$ These implications can be written using $G$ (as I want!). However: are these implications also sufficient? I don't know how to prove or disprove it.
probability distributions random-variable
$\begingroup$ You have supplied such conditions: to wit, when $(X_1,X_2,X_3)$ is a random variable with distribution $G,$ then almost surely $X_3=X_1-X_2.$ That's dead simple. What other form are you hoping to express these conditions in that would be any simpler or more useful? $\endgroup$ – whuber♦ Nov 29 '18 at 15:17
$\begingroup$ Thanks. I'm hoping for conditions directly imposed on $G$. $\endgroup$ – user3285148 Nov 29 '18 at 15:20
The condition in your first update is sufficient, because it implies that $X_3=X_1-X_2$ almost surely, so in particular $(X_1,X_2,X_3)=^d(X_1,X_2,X_1-X_2)$.
Edit: To be completely explicit about the dependence of $G$, this can be restated by requiring that $P(B)=0$ for any 3-dimensional box $B$ such that $B\cap \{(x,y,z):z=x-y\}=\emptyset$.
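For a concrete feel of this box condition, here is a minimal Monte Carlo sketch (assuming NumPy; the standard normal choice for $(X_1,X_2)$ is only an illustrative example, not part of the question): boxes meeting the plane $\{(a,b,c): c=a-b\}$ receive positive mass, while boxes disjoint from it receive none.

```python
# Sketch: sample from the law P of (X1, X2, X1 - X2) and estimate box probabilities.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
sample = np.column_stack([x1, x2, x1 - x2])   # draws from P

def box_probability(sample, lo, hi):
    """Estimate P([lo1,hi1] x [lo2,hi2] x [lo3,hi3]) from the sample."""
    inside = np.all((sample >= lo) & (sample <= hi), axis=1)
    return inside.mean()

# A box intersecting the plane c = a - b carries positive mass ...
print(box_probability(sample, lo=[-1, -1, -1], hi=[1, 1, 1]))
# ... while a box disjoint from the plane (c >= 5 but a - b <= 2 there) carries none.
print(box_probability(sample, lo=[-1, -1, 5], hi=[1, 1, 6]))
```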
Mike Hawk
$\begingroup$ Thanks. What about update II? Remember that I'm looking for conditions that can expressed using $G$. $\endgroup$ – user3285148 Dec 11 '18 at 18:36
$\begingroup$ The measure $dP$ is defined in terms of $G$, so I'm not sure what else you want. $\endgroup$ – Mike Hawk Dec 11 '18 at 18:44
$\begingroup$ I want necessary and sufficient conditions explicitly involving "boxes" in $\mathbb{R}^3$ $\endgroup$ – user3285148 Dec 11 '18 at 18:46
$\begingroup$ Thanks. I think that "$P(B)=0$ for any 3-dimensional box $B$ such that $B\cap ...=\emptyset$" holds IF AND ONLY IF the bullet points in my update II hold. Is this correct? $\endgroup$ – user3285148 Dec 12 '18 at 11:19
An extension of assumed stress finite elements to a general hyperelastic framework
Nils Viebahn (ORCID: orcid.org/0000-0002-4021-5589), Jörg Schröder & Peter Wriggers
Advanced Modeling and Simulation in Engineering Sciences, volume 6, Article number: 9 (2019)
Assumed stress finite elements are known for their extraordinary good performance in the framework of linear elasticity. In this contribution we propose a mixed variational formulation of the Hellinger–Reissner type for hyperelasticity. A family of hexahedral shaped elements is considered with a classical trilinear interpolation of the displacements and different piecewise discontinuous interpolation schemes for the stresses. The performance and stability of the new elements are investigated and demonstrated by the analysis of several benchmark problems. In addition the results are compared to well known enhanced assumed strain elements.
An enormous effort was invested in the development of finite element methods based on the variational approach going back to Galerkin [28], which in general consider the approximation of one basic variable. It is well known that low-order elements based on this variational approach yield poor results in the framework of near incompressibility and bending-dominated problems. The reason for this is locking effects, see [12]. Developments considering the approximation of additional fields in the variational setup are for example given in Reissner [44] (compare also [29] and [43]); here an independent stress approximation, which acts as a Lagrange multiplier, is applied in addition to the displacements. We refer to this type of formulation, which is based on a complementary stored energy function, as Hellinger–Reissner (HR) formulation. A few years later, [30] and [57] proposed independently a variational principle related to displacements, stresses and strains, based on the so-called Hu-Washizu functional. In the framework of finite element analysis, mixed formulations lead to saddle-point problems and therefore a major constraint of these approaches is the restriction imposed by the so-called LBB conditions, see [11, 19, 20, 35] and [10]. Mathematical aspects concerning the mixed finite element formulation for elasticity based on the Hellinger–Reissner (HR) principle are discussed in Arnold and Winther [4], Auricchio et al. [10], Lonsing and Verfürth [37], Arnold et al. [3], Boffi et al. [18] and Cockburn et al. [25]. In contrast to the Stokes problem, where the enrichment of the discrete space of the primal variable always leads to stable elements in the linear range, the situation is more delicate in the framework of the HR formulation. Here, the discrete spaces have to be carefully balanced.
Within the principle of HR two methods can be distinguished. A shift of the derivatives from the displacements to the stresses, using integration by parts, leads to a formulation with the displacements in the \(L^2\) and the stresses in the \(H(\mathrm{Div})\) Sobolev spaces. In the literature, this is often denoted as the dual HR formulation. Stable elements in the linear range based on the dual HR formulation with a strong enforcement of the symmetry can only be achieved at high polynomial order, see e.g. [6, 31] in 2D and [1, 7] in 3D. A reduction of the symmetry constraint leads to some flexibility in the construction of the finite elements, see [5, 18, 53, 54] and [32].
In the primal HR formulation the derivatives are related to the displacements, such that the displacements are in the \(H^1\) and the stresses in \(L^2\) Sobolev spaces. Elements based on this formulation are called assumed stress elements, going back to the pioneering work of Pian [39]. In 2D, often a discontinuous stress approximation using a 5-parameter ansatz proposed by Pian and Sumihara [40] is used. The advantages of this approach are characterized by a remarkable insensitivity to mesh distortion, locking-free behavior for plane strain quasi-incompressible elasticity and superconvergent results for bending dominated problems, see e.g. [40, 52] and [23]. Its stability with regard to the LBB conditions and an a posteriori error estimation have been shown by Yu et al. [61] and Li et al. [36]. A well known extension to the 3D case is the element by Pian and Tong [41]. The family of assumed stress elements is closely related to the family of enhanced assumed strain elements, see e.g. [2, 60] and [16].
The "direct" extension of the HR principle, which requires a complementary energy function in terms of stress measures, is in general not possible. Some effort has been pursued in the extension of the dual version of the HR principle considering small-strain elasto-plasticity, [46, 48]. Considering a large-strain setup the workgroup of Atluri (see [8, 9, 49]), has been proposed an incremental variational formulation, involving the discretization of the displacement, rotations and the hydrostatic pressure considering the primal HR formulation. In addition [42] extended the underlying variational formulation to a Hu-Whashizu type form, closely related to the family of enhanced assumed strains. Another approach has been discussed in Wriggers [58], where the complementary constitutive relation has been derived in an explicit form for a special Neo-Hookean type.
In the present work, a general framework is introduced which extends the family of assumed stress elements to hyperelasticity. It is based on an iterative solution procedure of the constitutive law at the level of the element integration points. In this work, the framework is applied to a family of 3D hexahedral elements with a varying interpolation scheme for the stresses. It should be noted that in the framework of hyperelasticity, the corresponding solution spaces are shifted from the classical Hilbert spaces \(H^1\) and \(L^2\) to the more general framework of Sobolev spaces, see [24] for a detailed discussion. Thus, the related solution spaces for the proposed formulations are represented for the displacements and stresses by \(W^{1,p}\) and \(W^{0,p}\), where \(p\ge 2\) depends on the constitutive relation. The stress interpolation is discussed with respect to volumetric locking, shear locking and hourglassing. In addition the related EAS elements are mentioned and the properties and characteristics of the various elements are discussed by means of different numerical benchmarks.
Kinematics and variational formulation
Table 1 Kinematics, constitutive quantities and stresses
Let \(\mathcal{B} \subset \mathrm{IR}^3\) be the body of interest in the reference placement, parametrized in \({{\varvec{X}}}\). Its boundary \(\partial \mathcal{B}\) is decomposed into a nontrivial Dirichlet part \(\partial {\mathcal{B}_u}\) and a Neumann part \(\partial {\mathcal{B}_t}\) with \(\partial {\mathcal{B}_u} \cup \partial {\mathcal{B}_t} = \partial \mathcal{B}\) and \(\partial {\mathcal{B}_u} \cap \partial {\mathcal{B}_t} = \emptyset \). The nonlinear deformation map \({\varvec{\varphi }}\), which maps points \({{\varvec{X}}}\in {\mathcal {B}}\) onto points \({{\varvec{x}}}\) of the actual placement, is given by \( {{\varvec{x}}}= {{\varvec{\varphi }}} ({{\varvec{X}}})\). As basic kinematical quantities we define the deformation gradient, the right Cauchy-Green tensor, and the Green-Lagrange strain tensor
$$\begin{aligned} {{\varvec{F}}}= {{\varvec{I}}}+ {\mathrm{Grad}}\, {{\varvec{u}}},\qquad {{\varvec{C}}}={{\varvec{F}}}^T{{\varvec{F}}}\qquad \text{ and }\qquad {{\varvec{E}}}=\frac{1}{2}({{\varvec{C}}}-{{\varvec{I}}}), \end{aligned}$$
respectively. Here, \({{\varvec{I}}}\) denotes the second-order identity tensor. The Jacobian of the deformation gradient has to satisfy \(J:= \det {{\varvec{F}}}> 0\). In addition we introduce the symmetric second Piola-Kirchhoff stresses \({{\varvec{S}}}\) as an additional independent variable. The relevant continuum-mechanical quantities are listed in Table 1. Before we extend the discrete formulation to a general hyperelastic framework, we first assume for simplicity the existence of a complementary stored energy function \(\chi ({{\varvec{S}}})\) which describes the constitutive equation by
$$\begin{aligned} \partial _S \chi ({{\varvec{S}}}):={{\varvec{E}}}. \end{aligned}$$
E.g. in the case of St. Venant type nonlinear elasticity, we simply obtain the explicit expression
$$\begin{aligned} \chi ({{\varvec{S}}}) = \frac{1}{2} {{\varvec{S}}}: \mathbb {C}^{-1} : {{\varvec{S}}}\end{aligned}$$
where the compliance tensor \(\mathbb {C}^{-1}\) is defined as the inverse of the fourth-order elasticity tensor. In case of isotropy the compliance tensor is given by
$$\begin{aligned} \mathbb {C}^{-1} := \frac{1}{2\mu } \mathrm{II}- \frac{\Lambda }{2\mu (2\mu + 3\Lambda )} {{\varvec{I}}}\otimes {{\varvec{I}}}, \end{aligned}$$
with the dyadic product defined as \(({{\varvec{I}}}\otimes {{\varvec{I}}})_{ijkl} = \delta _{ij}\delta _{kl}\), the fourth-order identity tensor \(\mathrm{II}_{ijkl} = \frac{1}{2}(\delta _{il}\delta _{jk} + \delta _{ik}\delta _{jl}) \) and the Lamé constants \(\Lambda \) and \(\mu \). It should be mentioned that in general an explicit complementary stored energy does not exist for arbitrary hyperelastic constitutive laws. This issue will be discussed in the next chapter. The balance of momentum closes, under the assumption of suitable boundary conditions, the set of equations for the boundary value problem in hyperelasticity
$$\begin{aligned} \mathrm{Div}\, {{\varvec{P}}}+ {{\varvec{f}}}= {\varvec{0}}\,\quad \text {on}\quad {\mathcal {B}}, \end{aligned}$$
where \({{\varvec{f}}}\) denotes the body force vector and \(\mathrm{Div}\) the divergence operator with respect to \({{\varvec{X}}}\). For a more detailed overview on the underlying continuum mechanics, the reader is referred e.g. to Ciarlet [24]. The solution of Eqs. (2) and (5) for the displacements \({{\varvec{u}}}\) and the second Piola-Kirchhoff stresses \({{\varvec{S}}}\) are equivalent to the stationary point of the HR functional
$$\begin{aligned} \Pi ^{HR} ({{\varvec{S}}},{{\varvec{u}}}) = \int \limits _{\mathcal{B}} ({{\varvec{S}}}: {{\varvec{E}}}- \chi ({{\varvec{S}}})) \, \text{ d }V + \Pi ^{ext} ({{\varvec{u}}}) , \end{aligned}$$
with the external potential \(\Pi ^{ext}({{\varvec{u}}})\) given by
$$\begin{aligned} \Pi ^{ext}({{\varvec{u}}}) = \displaystyle - \int \limits _{\mathcal{B}} {{\varvec{f}}}\cdot {{\varvec{u}}}\, \text{ d }V - \int \limits _{\partial \mathcal{B}_{t}} \overline{{{\varvec{t}}}} \cdot {{\varvec{u}}}\, \text{ d }A, \end{aligned}$$
where \(\overline{{{\varvec{t}}}}\) denotes the prescribed traction vector on the Neumann boundary and \({{\varvec{u}}}\) satisfies a priori the Dirichlet boundary conditions. In order to find a stationary point of the functional, the roots of the first variations with respect to the unknown fields \({{\varvec{u}}}\) and \({{\varvec{S}}}\) have to be calculated. In detail, we obtain
$$\begin{aligned} \begin{array}{rcl} G_u := \delta _{u} \Pi &{} = &{} \displaystyle \int \limits _{\mathcal{B}} \delta {{\varvec{E}}}: {{\varvec{S}}}\, \text{ d }V - \int \limits _{\mathcal{B}} \delta {{\varvec{u}}}\cdot {{\varvec{f}}}\, \text{ d }V - \int \limits _{\partial \mathcal{B}_{t}} \delta {{\varvec{u}}}\cdot \overline{{{\varvec{t}}}} \, \text{ d }A =0\, ,\\ G_{S} := \delta _{S} \Pi &{} = &{} \displaystyle \int \limits _{\mathcal{B}} (\delta {{\varvec{S}}}: ({{\varvec{E}}}- \partial _{{{\varvec{S}}}} \chi ({{\varvec{S}}})) \, \text{ d }V=0 \, , \end{array} \end{aligned}$$
with the virtual deformation \(\delta {{\varvec{u}}}\) and the virtual stress field \(\delta {{\varvec{S}}}\). Furthermore, the virtual strains are defined by \(\delta {{\varvec{E}}}=\frac{1}{2} (\delta {{\varvec{F}}}^T {{\varvec{F}}}+{{\varvec{F}}}^T\, \delta {{\varvec{F}}})\) with \(\delta {{\varvec{F}}}=\nabla \delta {{\varvec{u}}}\).
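As an illustration of the complementary relation (2) for the St. Venant case of Eqs. (3) and (4), the following is a minimal Python sketch (our own auxiliary code with illustrative names, not taken from the formulation above); it evaluates \({{\varvec{E}}}=\partial _S \chi ({{\varvec{S}}})=\mathbb {C}^{-1}:{{\varvec{S}}}\) directly with \(3\times 3\) tensors and verifies that Hooke's law recovers the stress.

```python
# Sketch of the complementary St. Venant relation E = C^{-1} : S (isotropic case).
import numpy as np

def complementary_strain(S, lam, mu):
    """Green-Lagrange strain E = C^{-1} : S for isotropic St. Venant elasticity, cf. Eq. (4)."""
    I = np.eye(3)
    return S / (2.0 * mu) - lam / (2.0 * mu * (2.0 * mu + 3.0 * lam)) * np.trace(S) * I

def complementary_energy(S, lam, mu):
    """chi(S) = 1/2 S : C^{-1} : S, cf. Eq. (3)."""
    return 0.5 * np.tensordot(S, complementary_strain(S, lam, mu))

# Consistency check: applying Hooke's law S = lam tr(E) I + 2 mu E to E recovers S.
lam, mu = 2.88462, 1.92308                    # example Lame constants
S = np.array([[1.0, 0.2, 0.0], [0.2, -0.5, 0.1], [0.0, 0.1, 0.3]])
E = complementary_strain(S, lam, mu)
S_back = lam * np.trace(E) * np.eye(3) + 2.0 * mu * E
print(complementary_energy(S, lam, mu), np.allclose(S, S_back))   # ... True
```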
Discrete formulation and interpolation
We consider a standard decomposition of the reference body \({\mathcal {B}}\) into an assemblage of hexahedral elements \({\mathcal {B}}_e\) such that \( {\mathcal {B}}\approx {\mathcal {B}}^h=\bigcup _{e=1}^{{n}_{\text {ele}}}{\mathcal {B}}_e\), where \({n}_{\text {ele}}\) is the number of elements. In the following, underlined quantities denote the application of a matrix-notation, which reduces second- and fourth-order tensors to suitable matrices using the Voigt notation. The displacements and its variations are accordingly approximated by
$$\begin{aligned} {{\varvec{u}}}=\underline{\mathrm{IN}}\, \underline{{{\varvec{d}}}} \quad \text { and } \quad \delta {{{\varvec{u}}}}= \underline{\mathrm{IN}} \,\delta {\underline{{{\varvec{d}}}}} \end{aligned}$$
where \(\underline{\mathrm{IN}}\) denotes a matrix containing the Lagrangian trilinear shape functions and \(\underline{{{\varvec{d}}}}\) the vector of elementwise nodal displacements. This leads to a displacement interpolation which is \(C^0\)-continuous on \({\mathcal {B}}^h\). The interpolation of the assumed stress field on the isoparametric reference element is given in vector notation by
$$\begin{aligned} {\underline{{{\varvec{S}}}}_{\xi }}=({S}_{\xi \xi }, {S}_{\eta \eta }, {S}_{\zeta \zeta }, {S}_{\xi \eta }, {S}_{\eta \zeta }, {S}_{\xi \zeta })^T= \underline{\mathbb {L}}_\xi \, \underline{{\varvec{\beta }}}\,, \end{aligned}$$
where \(\underline{{\varvec{\beta }}}\) is the vector of element-wise unknowns and \(\underline{\mathbb {L}}_\xi \) the matrix with the corresponding interpolation functions, with the general structure
$$\begin{aligned} \underline{\mathbb {L}}_{\xi }= \text {diag}\left( \underline{\mathbb {L}}_{\xi \xi }, \underline{\mathbb {L}}_{\eta \eta }, \underline{\mathbb {L}}_{\zeta \zeta }, \underline{\mathbb {L}}_{\xi \eta }, \underline{\mathbb {L}}_{\eta \zeta }, \underline{\mathbb {L}}_{\xi \zeta }\right) \,. \end{aligned}$$
A variety of suitable interpolation matrices are discussed in the following section. The transformation from the isoparametric domain to the reference configuration for the second Piola-Kirchhoff stresses is described by
$$\begin{aligned} {{\varvec{S}}}= \widehat{{{\varvec{J}}}} \, {{{\varvec{S}}}_\xi } \, \widehat{{{\varvec{J}}}}^T\,, \end{aligned}$$
where the Jacobian matrix, mapping between the isoparametric coordinates \({\varvec{\xi }}\) and the reference coordinates \({{\varvec{X}}}\) follows as
$$\begin{aligned} {{\varvec{J}}}=\frac{\partial {{\varvec{X}}}({\varvec{\xi }})}{\partial {\varvec{\xi }}}\, \quad \text {and} \quad \widehat{{{\varvec{J}}}}= \frac{\partial {{\varvec{X}}}({\varvec{\xi }})}{\partial {\varvec{\xi }}}\bigg \vert _{{\varvec{\xi }}=0}\,. \end{aligned}$$
In order to pass the patch test, see Figs. 2 and 3, it is necessary to use the values of the Jacobian at the origin \(\{\xi ,\eta ,\zeta \}=\{0,0,0\}\) as it is discussed in Pian and Sumihara [40] and Pian and Tong [41]. Therefore, the second Piola-Kirchhoff stress in the physical space is given by \( \underline{{{\varvec{S}}}}= \underline{\mathbb {L}} \, \underline{{\varvec{\beta }}}\,, \) with \(\underline{\mathbb {L}}=\underline{{{\varvec{T}}}}\, \underline{\mathbb {L}}_{\xi }\) where the transformation matrix \(\underline{{{\varvec{T}}}}\) is given by
$$\begin{aligned} \underline{{{\varvec{T}}}}=\left[ \begin{array}{cccc} \widehat{J}_{11}^2 &{}\quad \widehat{J}_{12}^2 &{}\quad \widehat{J}_{13}^2 &{}\quad 2 \widehat{J}_{11} \widehat{J}_{12} \\ \widehat{J}_{21}^2 &{}\quad \widehat{J}_{22}^2 &{}\quad \widehat{J}_{23}^2 &{}\quad 2 \widehat{J}_{21} \widehat{J}_{22} \\ \widehat{J}_{31}^2 &{}\quad \widehat{J}_{32}^2 &{}\quad \widehat{J}_{33}^2 &{}\quad 2 \widehat{J}_{31} \widehat{J}_{32} \\ \widehat{J}_{11} \widehat{J}_{21} &{}\quad \widehat{J}_{12} \widehat{J}_{22}&{} \quad \widehat{J}_{13} \widehat{J}_{23}&{} \quad \widehat{J}_{12} \widehat{J}_{21} + \widehat{J}_{11} \widehat{J}_{22} \\ \widehat{J}_{21} \widehat{J}_{31}&{} \quad \widehat{J}_{22} \widehat{J}_{32} &{}\quad \widehat{J}_{23} \widehat{J}_{33} &{}\quad \widehat{J}_{22} \widehat{J}_{31} + \widehat{J}_{21} \widehat{J}_{32} \\ \widehat{J}_{11} \widehat{J}_{31} &{}\quad \widehat{J}_{12} \widehat{J}_{32} &{}\quad \widehat{J}_{13} \widehat{J}_{33} &{}\quad \widehat{J}_{12} \widehat{J}_{31} + \widehat{J}_{11} \widehat{J}_{32} \end{array} \right. \dots \qquad \nonumber \\ \qquad \dots \left. \begin{array}{cc} 2 \widehat{J}_{12} \widehat{J}_{13} &{}\quad 2 \widehat{J}_{11} \widehat{J}_{13} \\ 2 \widehat{J}_{22} \widehat{J}_{23} &{}\quad 2 \widehat{J}_{21} \widehat{J}_{23} \\ 2 \widehat{J}_{32} \widehat{J}_{33} &{}\quad 2 \widehat{J}_{31} \widehat{J}_{33} \\ \widehat{J}_{13} \widehat{J}_{22} + \widehat{J}_{12} \widehat{J}_{23} &{}\quad \widehat{J}_{13} \widehat{J}_{21} + \widehat{J}_{11} \widehat{J}_{23} \\ \widehat{J}_{23} \widehat{J}_{32} + \widehat{J}_{22} \widehat{J}_{33} &{}\quad \widehat{J}_{23} \widehat{J}_{31} + \widehat{J}_{21} \widehat{J}_{33} \\ \widehat{J}_{13} \widehat{J}_{32} + \widehat{J}_{12} \widehat{J}_{33} &{}\quad \widehat{J}_{13} \widehat{J}_{31} + \widehat{J}_{11} \widehat{J}_{33} \end{array} \right] \,. \end{aligned}$$
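For implementation purposes, the matrix \(\underline{{{\varvec{T}}}}\) of Eq. (14) need not be typed entry by entry; it can be generated column-wise from \(\widehat{{{\varvec{J}}}}\). The following sketch (function names and data layout are our own assumptions, with the Voigt ordering \((11,22,33,12,23,13)\) used above) builds \(\underline{{{\varvec{T}}}}\) and checks it against the tensor push-forward of Eq. (12).

```python
# Sketch: build the 6x6 Voigt transformation matrix T so that voigt(J S J^T) = T @ voigt(S).
import numpy as np

VOIGT = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)]   # ordering (11,22,33,12,23,13)

def voigt(S):
    return np.array([S[i, j] for i, j in VOIGT])

def transformation_matrix(J):
    """Column k of T is voigt(J B_k J^T) for the k-th symmetric basis tensor B_k."""
    T = np.zeros((6, 6))
    for k, (i, j) in enumerate(VOIGT):
        B = np.zeros((3, 3))
        B[i, j] = B[j, i] = 1.0            # symmetric basis tensor for the k-th Voigt entry
        T[:, k] = voigt(J @ B @ J.T)
    return T

# Consistency check against the direct tensor push-forward S = J S_xi J^T.
rng = np.random.default_rng(1)
J_hat = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
S_xi = rng.normal(size=(3, 3)); S_xi = 0.5 * (S_xi + S_xi.T)
assert np.allclose(transformation_matrix(J_hat) @ voigt(S_xi), voigt(J_hat @ S_xi @ J_hat.T))
```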
The discretized weak forms appear for a typical element as
$$\begin{aligned} \begin{array}{rcl} G_u^e &{}=&{} \delta \underline{{{\varvec{d}}}}^T \displaystyle \int \limits _{\mathcal{B}^e} \underline{\mathrm{IB}}^T \, \underline{{{\varvec{S}}}} \, \text{ d }V - \delta \underline{{{\varvec{d}}}}^T \displaystyle \int \limits _{\mathcal{B}^e} \underline{\mathrm{IN}}^T \underline{{{\varvec{f}}}}\, \text{ d }V - \delta \underline{{{\varvec{d}}}}^T \displaystyle \int \limits _{\partial \mathcal{B}^e_t} \underline{\mathrm{IN}}^T \overline{\underline{{{\varvec{t}}}}} \, \text{ d }A \, , \\ G_{S}^e &{}=&{} \delta \underline{{\varvec{\beta }}^T} \displaystyle \int \limits _{\mathcal{B}^e} \underline{\mathbb {L}}^{T} (\underline{{{\varvec{E}}}} - \partial _{\underline{{{\varvec{S}}}}} \chi (\underline{{{\varvec{S}}}})) \, \text{ d }V \, , \end{array} \end{aligned}$$
where we have utilized \(\delta {\underline{{{\varvec{E}}}}}=\underline{{{\varvec{B}}}}\,\delta \underline{{{\varvec{d}}}}\) with \(\underline{{{\varvec{B}}}}\) being a suitable matrix containing the derivatives of the shape functions. In general cases, where the complementary stored energy function is not known, the partial derivative \(\partial _{\underline{{{\varvec{S}}}}} \chi (\underline{{{\varvec{S}}}})\) has to be computed iteratively. The Green-Lagrange strain tensor is given by the approximation of the geometry whereas \(\partial _{\underline{{{\varvec{S}}}}} \chi (\underline{{{\varvec{S}}}}) =: \underline{{{\varvec{E}}}}^{cons}\) is implicitly evaluated from the constitutive equation. Therefore, we compute at each integration point \(\underline{{{\varvec{E}}}}^{cons}\) from the residual
$$\begin{aligned} {{\varvec{r}}}(\underline{{{\varvec{E}}}}^{cons}; \underline{{{\varvec{S}}}}) := \underline{{{\varvec{S}}}} - \partial _{\underline{{{\varvec{E}}}}} \psi (\underline{{{\varvec{E}}}}) \Big |_{\underline{{{\varvec{E}}}}^{cons}} \end{aligned}$$
at fixed \(\underline{{{\varvec{S}}}}\). Using a Newton scheme we have to update
$$\begin{aligned} \underline{{{\varvec{E}}}}^{cons} \Leftarrow \underline{{{\varvec{E}}}}^{cons} + \underbrace{[\partial ^2_{\underline{{{\varvec{E}}}}\underline{{{\varvec{E}}}}} \psi (\underline{{{\varvec{E}}}}) \Big |_{\underline{{{\varvec{E}}}}^{cons}}]^{-1}}_{\displaystyle =:\underline{\mathbb {D}}} {{\varvec{r}}}(\underline{{{\varvec{E}}}}^{cons}; \underline{{{\varvec{S}}}}) \end{aligned}$$
until the norm of the residual \({{\varvec{r}}}\) drops below a prescribed tolerance. Table 2 sketches the nested algorithmic treatment for a typical element for the case that the complementary stored energy cannot be computed analytically. The linearization, \(\hbox {Lin}G^e = G^e (\underline{{{\varvec{d}}}}, \underline{{\varvec{\beta }}}) + \Delta G^e (\Delta \underline{{{\varvec{d}}}}, \Delta \underline{{\varvec{\beta }}})\), yields the increments
$$\begin{aligned} \begin{array}{rcl} \Delta G_u^e &{}=&{} \delta \underline{{{\varvec{d}}}}^T \displaystyle \int \limits _{\mathcal{B}^e} \underline{{\varvec{\Xi }}}\, \underline{{{\varvec{S}}}}\, \text{ d }V \Delta \underline{{{\varvec{d}}}} +\delta \underline{{{\varvec{d}}}}^T \displaystyle \int \limits _{\mathcal{B}^e} \underline{{{\varvec{B}}}}^T \underline{\mathbb {L}} \, \text{ d }V \Delta \underline{{\varvec{\beta }}}, \\ \Delta G_{S}^e &{}=&{} \delta \underline{{\varvec{\beta }}}^T \displaystyle \int \limits _{\mathcal{B}^e} \underline{\mathbb {L}}^{T} \underline{{{\varvec{B}}}} \, \text{ d }V \Delta \underline{{{\varvec{d}}}} - \delta \underline{{\varvec{\beta }}}^T \int \limits _{\mathcal{B}^e} \underline{\mathbb {L}}^{T} \underline{\mathbb {D}} \, \underline{\mathbb {L}} \, \text{ d }V \Delta \underline{{\varvec{\beta }}} \, , \end{array} \end{aligned}$$
Table 2 Nested algorithmic treatment for a single element
where \(\underline{{\varvec{\Xi }}}\) is defined as \(\Delta \underline{{{\varvec{B}}}}=\underline{{\varvec{\Xi }}}\Delta \underline{{{\varvec{d}}}}\). We introduce for convenience the element matrices and right hand side vectors
$$\begin{aligned} \begin{array}{l} \displaystyle \underline{{{\varvec{K}}}}_{uu}^e := \int \limits _{{\mathcal {B}}^e} \underline{{\varvec{\Xi }}}\,\underline{{{\varvec{S}}}}\, \text{ d }V\,,\quad \underline{{{\varvec{K}}}}_{uS}^e := \int \limits _{{\mathcal {B}}^e} \underline{\mathbb {L}}^{T} {{{\varvec{B}}}} \, \text{ d }V \,, \quad \underline{{{\varvec{K}}}}_{SS}^e := \int \limits _{{\mathcal {B}}^e} \underline{\mathbb {L}}^{T} \underline{\mathbb {D}} \, \underline{\mathbb {L}} \, \text{ d }V \, , \\ \displaystyle \underline{{{\varvec{r}}}}^e_u := \int \limits _{\mathcal{B}^e} \underline{\mathrm{IB}}^T \underline{{{\varvec{S}}}} \, \text{ d }V - \int \limits _{\mathcal{B}^e} \underline{\mathrm{IN}}^T \underline{{{\varvec{f}}}} \, \text{ d }V - \int \limits _{\partial {\mathcal {B}}^e_t} \underline{\mathrm{IN}}^T \underline{{{\varvec{t}}}} \, \text{ d }A \quad \text {and} \\ \displaystyle {{{\varvec{r}}}}^e_{S} := \int \limits _{\mathcal{B}^e} \underline{\mathbb {L}}^{T} (\underline{{{\varvec{E}}}} - \underline{{{\varvec{E}}}}^{cons}) \, \text{ d }V \, . \end{array} \end{aligned}$$
This leads to the system of equations
$$\begin{aligned} \text {Lin}G^e = \left[ \begin{array}{r} \delta \underline{{{\varvec{d}}}}^{T}\\ \delta \underline{{\varvec{\beta }}}^{T} \end{array} \right] \left( \left[ \begin{array}{ll} \underline{{{\varvec{K}}}}_{uu}^e \quad {\underline{{{\varvec{K}}}}_{uS}^e}^T \\ {\underline{{{\varvec{K}}}}_{uS}^e} \quad \underline{{{\varvec{K}}}}_{SS}^e \end{array} \right] \left[ \begin{array}{r} \Delta \underline{{{\varvec{d}}}} \\ \Delta \underline{{\varvec{\beta }}} \end{array} \right] + \left[ \begin{array}{r} \underline{{{\varvec{r}}}}_u^e \\ \underline{{{\varvec{r}}}}_S^e \end{array} \right] \right) \end{aligned}$$
Assembling over the number of elements \(n_{{ele}}\) leads to the global system of equations
$$\begin{aligned} \underline{{{\varvec{K}}}} \, \Delta \underline{{{\varvec{D}}}} + \underline{{{\varvec{R}}}} = {\varvec{0}}\,, \end{aligned}$$
and therefore the nodal unknowns are computed via
$$\begin{aligned} \Delta \underline{{{\varvec{D}}}} = - \underline{{{\varvec{K}}}}^{-1} \underline{{{\varvec{R}}}} \, . \end{aligned}$$
Due to the elementwise discontinuous interpolation of the stresses, the unknowns \(\Delta \underline{{\varvec{\beta }}}\) in (20) can already be eliminated at element level. This leads to a global system of equations with the same number of unknowns, and almost the same computational cost, as a displacement based trilinear element.
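To make the nested constitutive solve of Eqs. (15)–(16) and Table 2 concrete, the following minimal Python sketch (our own illustration: a St. Venant energy serves as a stand-in for a general \(\psi ({{\varvec{E}}})\), and a finite-difference tangent replaces the analytic \(\partial ^2_{{{\varvec{E}}}{{\varvec{E}}}}\psi \)) iterates on \(\underline{{{\varvec{E}}}}^{cons}\) at a fixed integration-point stress until the constitutive residual vanishes.

```python
# Sketch of the integration-point Newton iteration: find E with d(psi)/dE = S at fixed S.
import numpy as np

def stress(E, lam, mu):
    """S = d(psi)/dE for the St. Venant stand-in: S = lam tr(E) I + 2 mu E."""
    return lam * np.trace(E) * np.eye(3) + 2.0 * mu * E

def tangent_fd(E, lam, mu, h=1e-7):
    """Finite-difference approximation of the 3x3x3x3 tangent d^2 psi / dE dE."""
    D = np.zeros((3, 3, 3, 3))
    for k in range(3):
        for l in range(3):
            dE = np.zeros((3, 3)); dE[k, l] = h
            D[:, :, k, l] = (stress(E + dE, lam, mu) - stress(E, lam, mu)) / h
    return D

def constitutive_newton(S, lam, mu, tol=1e-10, maxit=20):
    """Solve r(E; S) = S - d(psi)/dE = 0 for E at fixed S, cf. Eq. (16)."""
    E = np.zeros((3, 3))
    for _ in range(maxit):
        r = S - stress(E, lam, mu)
        if np.linalg.norm(r) < tol:
            break
        D = tangent_fd(E, lam, mu).reshape(9, 9)
        E = E + np.linalg.solve(D, r.reshape(9)).reshape(3, 3)
    return E

lam, mu = 3.91814, 1.60037                     # example Lame constants
S = np.array([[0.4, 0.1, 0.0], [0.1, -0.2, 0.0], [0.0, 0.0, 0.1]])
E_cons = constitutive_newton(S, lam, mu)
print(np.allclose(stress(E_cons, lam, mu), S))  # True: constitutive residual vanishes
```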
Stress interpolation
Following the limitation principle by Fraeijs de Veubeke [27], it can be shown that the following 39 parameter based interpolation is in the linear elastic framework equivalent to a primal displacement formulation with a trilinear interpolation for the displacements, because the resulting stress spaces are equivalent in both elements. For this case the interpolation vectors for the stresses are given by
$$\begin{aligned} \begin{array}{c} \boxed { \begin{array}{ccl} \underline{\mathbb {L}}_{\xi \xi }^{39}&{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\eta \eta }^{39} &{}=&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\zeta \zeta }^{39} &{}=&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\xi \eta }^{39}&{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\eta \zeta }^{39}&{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\,\xi \zeta \right) \\ \underline{\mathbb {L}}_{\xi \zeta }^{39}&{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \eta \zeta \right) \end{array}}\\ \text {39 stress modes} \end{array} \end{aligned}$$
It will be shown in the numerical examples that this formulation suffers from volumetric locking, which is caused by the choice of parameters corresponding to \(\underline{\mathbb {L}}_{\xi \xi },\,\underline{\mathbb {L}}_{\eta \eta }\) and \(\underline{\mathbb {L}}_{\zeta \zeta }\). Here, a meaningful reduction will suppress the artificial stiffness. In addition, it will be shown that this interpolation also leads to shear locking, which can be attributed to the choice of \(\underline{\mathbb {L}}_{\xi \eta },\,\underline{\mathbb {L}}_{\eta \zeta }\) and \(\underline{\mathbb {L}}_{\xi \zeta }\). An expedient reduction of the introduced stress modes leads to a softening of the formulation. However, care must be taken not to relax the formulation too much, since this could lead to artificial deformation states such as hourglassing. A well known and very efficient stress discretization for the linear elastic counterpart of the proposed HR formulation is the 18 parameter based interpolation scheme proposed by Pian and Tong [41], which is a 3D extension of the element by Pian and Sumihara [40]. It has been shown by Andelfinger and Ramm [2] that this interpolation leads in the small deformation framework to an equivalent EAS formulation with 21 additional modes. Here the individual interpolation vectors are given by
$$\begin{aligned} \begin{array}{c} \boxed { \begin{array}{ccl} \underline{\mathbb {L}}_{\xi \xi }^{18}&{} =&{} \left( 1,\, \eta ,\, \zeta ,\, \eta \zeta \right) \\ \underline{\mathbb {L}}_{\eta \eta }^{18} &{}=&{} \left( 1,\, \xi ,\, \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\zeta \zeta }^{18} &{}=&{} \left( 1,\, \xi ,\, \eta ,\, \xi \eta \right) \\ \underline{\mathbb {L}}_{\xi \eta }^{18}&{} =&{} \left( 1,\, \zeta \right) \\ \underline{\mathbb {L}}_{\eta \zeta }^{18}&{} =&{} \left( 1,\, \xi \right) \\ \underline{\mathbb {L}}_{\xi \zeta }^{18}&{} =&{} \left( 1,\, \eta \right) \end{array}}\\ \text {18 stress modes} \end{array} \end{aligned}$$
Note that this stress interpolation considers a minimal number of stress unknowns, since any further reduction would lead to singular system matrices due to a violation of the count condition proposed by Zienkiewicz et al. [62]. In the framework of the primal HR formulation this count condition requires that the number of unknowns related to the stresses is always larger than or equal to the number of unknowns related to the displacements. Note that in this case a single element constitutes the most critical patch, due to the assembling procedure and the related tying of the displacement related degrees of freedom. The numerical examples will show that this formulation does not suffer from volumetric and shear locking. But in case of large uniaxial homogeneous stress states hourglassing modes can be detected. It is known (see e.g. [34] and the references therein) that the hourglass modes are caused by a violation of the kinematics related to the shear deformation terms. The EAS formulation with 21 enhanced modes depicts the same characteristics in the hyperelastic framework. In a straightforward manner, further discretization schemes can be adopted from EAS formulations. A promising approach has been published recently by Krischok and Linder [34], where a 9 parameter based EAS element was proposed which does not suffer from hourglassing modes and is free from volumetric locking. The related stress interpolation in the framework of the HR formulation needs the introduction of a 30-parameter based discretization
$$\begin{aligned} \begin{array}{c} \boxed { \begin{array}{ccl} \underline{\mathbb {L}}_{\xi \xi }^{30}&{} =&{} \left( 1,\, \eta ,\, \zeta ,\, \eta \zeta \right) \\ \underline{\mathbb {L}}_{\eta \eta }^{30} &{}=&{} \left( 1,\, \xi ,\, \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\zeta \zeta }^{30} &{}=&{} \left( 1,\, \xi ,\, \eta ,\, \xi \eta \right) \\ \underline{\mathbb {L}}_{\xi \eta }^{30} &{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\,\eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\eta \zeta }^{30}&{} =&{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\xi \zeta }^{30}&{} = &{} \left( 1,\, \xi ,\, \eta ,\, \zeta ,\, \xi \eta ,\, \eta \zeta \right) \end{array}}\\ \text {30 stress modes} \end{array} \end{aligned}$$
It can be recognized that the only difference between the interpolation schemes of Eqs. (24) and (25) is the additional terms for the interpolation of the shear stresses. These additional stress modes lead to a stabilization such that hourglassing modes have not been detected in the numerical examples. Unfortunately they also lead to shear locking phenomena, as can be expected from the comparison of the shear terms \(\underline{\mathbb {L}}_{\xi \eta }\)–\(\underline{\mathbb {L}}_{\xi \zeta }\) of Eqs. (25) and (23).
With this knowledge, it makes sense to investigate one further interpolation scheme. Since the volumetric locking problem seems to be solved by a suitable interpolation of \(S_{11}\), \(S_{22}\) and \(S_{33}\), only the interpolation matrices corresponding to the shear stresses are modified. In particular we consider the interpolation scheme which can be nested in between the interpolations from Eqs. (24) and (25). The corresponding matrices follow by
$$\begin{aligned} \begin{array}{c} \boxed { \begin{array}{ccl} \underline{\mathbb {L}}_{\xi \xi }^{24}&{} =&{} \left( 1,\, \eta ,\, \zeta ,\, \eta \zeta \right) \\ \underline{\mathbb {L}}_{\eta \eta }^{24} &{}= &{} \left( 1,\, \xi ,\, \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\zeta \zeta }^{24} &{}=&{} \left( 1,\, \xi ,\, \eta ,\, \xi \eta \right) \\ \underline{\mathbb {L}}_{\xi \eta }^{24}&{} =&{} \left( 1,\, \zeta ,\, \eta \zeta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\eta \zeta }^{24}&{} = &{} \left( 1,\, \xi ,\, \xi \eta ,\, \xi \zeta \right) \\ \underline{\mathbb {L}}_{\xi \zeta }^{24}&{} = &{} \left( 1,\, \eta ,\, \xi \eta ,\, \eta \zeta \right) \end{array}}\\ \text {24 stress modes} \end{array} \end{aligned}$$
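For illustration, the 18-parameter interpolation of Eq. (24) can be assembled as a block-diagonal \(6\times 18\) matrix. The sketch below (names and data layout are our own assumptions, not code from the formulation) evaluates \(\underline{\mathbb {L}}_\xi ({\varvec{\xi }})\) and applies it to a vector of element-wise stress parameters \(\underline{{\varvec{\beta }}}\); the other interpolation schemes differ only in the row blocks.

```python
# Sketch: assemble the 18-mode stress interpolation matrix L_xi(xi, eta, zeta) of Eq. (24).
import numpy as np
from scipy.linalg import block_diag

def L_xi_18(xi, eta, zeta):
    rows = [
        [1.0, eta, zeta, eta * zeta],   # S_xixi
        [1.0, xi, zeta, xi * zeta],     # S_etaeta
        [1.0, xi, eta, xi * eta],       # S_zetazeta
        [1.0, zeta],                    # S_xieta
        [1.0, xi],                      # S_etazeta
        [1.0, eta],                     # S_xizeta
    ]
    return block_diag(*[np.atleast_2d(r) for r in rows])    # shape (6, 18)

beta = np.arange(1.0, 19.0)                   # element-wise stress unknowns (dummy values)
S_voigt_xi = L_xi_18(0.3, -0.2, 0.5) @ beta   # stress components on the reference element
print(L_xi_18(0.0, 0.0, 0.0).shape, S_voigt_xi.shape)       # (6, 18) (6,)
```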
Numerical simulations
In the following numerical examples the proposed family of finite elements will be investigated with respect to locking phenomena, stability, robustness and efficiency. Each considered benchmark focuses on one of these aspects in order to be able to classify each element. In addition to the proposed assumed stress elements, the numerical studies will also be discussed for a non-mixed lowest-order displacement element (H1), the well known H1P0 element (see [50]) and a couple of enhanced assumed strain element formulations. In particular we consider exactly those elements which are, in the framework of linear elasticity and a constant Jacobian, equivalent to the considered assumed stress elements. A list of the considered elements is given in Table 3. Please note that the *EAS-9 element is not the popular enhanced assumed strain element by Simo and Rifai [51], which also comprises 9 enhanced modes.
Table 3 Overview of considered elements
Inhomogeneous compression Block; geometry, representative mesh, deformed configuration and the boundary conditions
Equivalence of the formulations in linear elasticity
All considered assumed stress element formulations have an equivalent counterpart in the linear elastic framework if the Jacobian mapping from the isoparametric to the physical domain is piecewise constant. This equivalency has been already discussed and proven in a variety of publications, see e.g. [16]. Here, a numerical test is used in order to verify the implementation of the finite element code. Therefore we consider a Neo Hookean type free energy function of the form
$$\begin{aligned} \psi =\frac{\mu }{2} (\mathrm{tr}{{\varvec{C}}}-3)-\left( \frac{\Lambda }{2}+\mu \right) \ln (\det {{\varvec{F}}})+ \frac{\Lambda }{4} (\det {{\varvec{C}}}-1)\, \end{aligned}$$
which fulfills \(\partial ^2_{{{\varvec{E}}}{{\varvec{E}}}} \psi \big \vert _{{{\varvec{E}}}={\varvec{0}}} = \mathbb {C}\), where \(\mathbb {C}\) is the elasticity tensor related to Hooke's law. In order to obtain the result referring to the linear elastic framework we consider the result after the first Newton iteration. The considered boundary value problem is a block with the dimensions \(100\times 100\times 50\) which is subjected to a constant surface load at the top central quarter \(\overline{{{\varvec{t}}}}=(0,0,-3)^T\). Due to the axial symmetry of the problem, only a quarter of the block is discretized. The geometry, a representative mesh and the boundary conditions are given in Fig. 1. The material parameters are assumed to be compressible, with a Young's modulus \(E=5\) and a Poisson's ratio of \(\nu =0.3\). In terms of the Lamé parameters this corresponds to \(\mu =1.92308\) and \(\Lambda =2.88462\). A regular hexahedral mesh with uniform refinement is considered, such that the Jacobian is always constant in each element. Table 4 depicts the convergence of the nodal \(u_3\) displacements at point \({{\varvec{x}}}=(0,0,50)\). The equivalences between the different element formulations can be recognized. In fact all nodal values of the related equivalent elements (AS-39 and H1; AS-18 and EAS-21; AS-30 and *EAS-9; AS-24 and EAS-15) are identical up to machine precision.
Table 4 Inhomogeneous compression block in linear elasticity
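The consistency statement above, namely that the linearization of the Neo-Hookean energy (27) at the stress-free state \({{\varvec{C}}}={{\varvec{I}}}\) reproduces Hooke's tensor, can also be checked numerically. The following is a minimal Python sketch (our own auxiliary check; the closed-form \({{\varvec{S}}}({{\varvec{C}}})\) below follows from differentiating Eq. (27) and is not quoted from the text):

```python
# Sketch: verify dS/dE [H] = Lambda tr(H) I + 2 mu H at C = I for the Neo-Hookean energy (27).
import numpy as np

def second_pk(C, lam, mu):
    """S = 2 d(psi)/dC for psi of Eq. (27)."""
    Cinv = np.linalg.inv(C)
    return mu * np.eye(3) - (lam / 2.0 + mu) * Cinv + lam / 2.0 * np.linalg.det(C) * Cinv

lam, mu = 2.88462, 1.92308
rng = np.random.default_rng(2)
H = rng.normal(size=(3, 3)); H = 0.5 * (H + H.T)        # symmetric strain direction

t = 1e-6                                                 # finite-difference step in E = C/2 - I/2
dS_dE_H = (second_pk(np.eye(3) + 2 * t * H, lam, mu) - second_pk(np.eye(3), lam, mu)) / t
hooke_H = lam * np.trace(H) * np.eye(3) + 2 * mu * H
print(np.allclose(dS_dE_H, hooke_H, atol=1e-4))          # True at the linearisation point
```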
Patch test 1; reference mesh and deformed body
The patch test is a necessary condition for the convergence of finite elements. It demands that an arbitrary patch of assembled elements is able to reproduce a constant state of stress and strain if subjected to boundary displacements consistent with constant straining. This condition is necessary since with respect to mesh refinement, where \(h\rightarrow 0\), all boundary value problems tend to constant stress and strains in each element. This test is mainly attributed to the work of Bruce Irons, first presented in Bazeley et al. [15]. A summary on its theory, practice and possible conclusions on its satisfaction can be found in Taylor et al. [55]. Following Korelc et al. [33], two different load scenarios are considered, described in Figs. 2 and 3. Load case (A) prescribes a pure rigid body motion by a rotation around the z-axis. All proposed elements are free of resulting stresses and strains and thus fulfill the first patch test. Load case (B) prescribes a combined deformation of shear and uniaxial strain, whose analytical solution is a constant strain and stress over the whole domain. All proposed elements result in the expected constant stress and strain field and therefore fulfill the second patch test. Note that the patch tests verify only the consistency of the finite element and thus represent only a necessary but not a sufficient condition for the stability of the formulation.
Inf-sup test
The theorems of Babuška [11] and Brezzi [20] ensure stability, existence and uniqueness of the solution of mixed finite elements in the case of linear elasticity. The crucial aspect in the validation of these theorems for particular formulations is mainly encountered in the proof of the discrete inf-sup condition. In many cases its analytical proof is cumbersome and a direct numerical evaluation is impossible because an infinite number of problems must be taken into account. For the engineering praxis, a numerical inf-sup test has been proposed in Chapelle and Bathe [22] and Bathe [13, 14]. The objective of this test is to estimate the inf-sup constant numerically on a set of refined meshes. In the following this inf-sup test is adopted for the proposed Hellinger–Reissner formulations considering a stress free and undeformed (\({{\varvec{F}}}={{\varvec{I}}}\)) configuration which represents the framework of linear elasticity. A simply supported unit cube with the dimension \(1\times 1\times 1\) is considered with a consecutive regular mesh refinement. It has been shown by Brezzi and Fortin [21] that the square root of the smallest eigenvalue, denoted by \(\lambda _p\), of the problem
$$\begin{aligned} \underline{{{\varvec{K}}}}_{uS}^T \, \underline{{{\varvec{T}}}}^{-1} \, \underline{{{\varvec{K}}}}_{uS} \, \Delta \underline{{{\varvec{d}}}}=\lambda _p \, \underline{{{\varvec{A}}}} \, \Delta \underline{{{\varvec{d}}}} \end{aligned}$$
is equivalent to the inf-sup constant, where the matrices \(\underline{{{\varvec{T}}}}\) and \(\underline{{{\varvec{A}}}}\) are defined as
$$\begin{aligned} \begin{array}{l} \displaystyle \int \limits _\mathcal{B}{{\varvec{S}}}: {{\varvec{S}}}\, \text{ d }V = \Delta \underline{{\varvec{\beta }}}^T \, \underline{{{\varvec{T}}}} \, \Delta \underline{{\varvec{\beta }}} \\ \displaystyle \int \limits _\mathcal{B}{{\varvec{E}}}: {{\varvec{E}}}\, \text{ d }V = \Delta \underline{{{\varvec{d}}}}^T \, \underline{{{\varvec{A}}}} \, \Delta \underline{{{\varvec{d}}}}\,. \\ \end{array} \end{aligned}$$
In addition to the evaluation of the inf-sup values for the proposed assumed stress elements, the corresponding test is also carried out for the H1P0 element. This element is a well known textbook example which does not satisfy the inf-sup condition. A detailed instruction on the accomplishment of the inf-sup test for displacement-pressure based elements is given in Chapelle and Bathe [22].
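Schematically, the numerical inf-sup test of Eq. (28) amounts to a generalized symmetric eigenvalue problem. The following sketch (assuming the assembled matrices \(\underline{{{\varvec{K}}}}_{uS}\), \(\underline{{{\varvec{T}}}}\) and \(\underline{{{\varvec{A}}}}\) of Eqs. (28)–(29) are available with Dirichlet degrees of freedom removed; the random matrices at the end are only a size-compatible stand-in for such data) returns the inf-sup value for one mesh, and repeating it on a sequence of refined meshes produces curves as in Fig. 4.

```python
# Sketch of the numerical inf-sup test: sqrt of the smallest (nonzero) eigenvalue of Eq. (28).
import numpy as np
from scipy.linalg import eigh, solve

def inf_sup_value(K_uS, T, A, tol=1e-10):
    lhs = K_uS.T @ solve(T, K_uS)              # K_uS^T T^{-1} K_uS
    eigvals = eigh(lhs, A, eigvals_only=True)  # generalized eigenproblem (A assumed SPD)
    eigvals = eigvals[eigvals > tol]           # discard numerically zero modes, if any
    return np.sqrt(eigvals.min())

# Size-compatible random stand-in data (in practice these come from the FE assembly).
rng = np.random.default_rng(3)
n_d, n_beta = 12, 18
K_uS = rng.normal(size=(n_beta, n_d))
M = rng.normal(size=(n_beta, n_beta)); T = M @ M.T + n_beta * np.eye(n_beta)
N = rng.normal(size=(n_d, n_d));       A = N @ N.T + n_d * np.eye(n_d)
print(inf_sup_value(K_uS, T, A))
```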
Figure 4 shows the development of the inf-sup value for the chosen boundary value problem over the number of elements considering a regular structured mesh refinement. The value seems to be bounded from below for the AS-39, AS-30 and AS-24. In case of the AS-18 the inf-sup value shows asymptotic convergence, which also indicates a distinct lower bound, and thus the element passes the inf-sup test. Both described behaviors are in sharp contrast to the results of the H1P0. Here, the inf-sup value decreases with respect to mesh refinement at an almost constant rate, clearly indicating the failure of the inf-sup test. It should be mentioned that such a numerical verification does not replace the need for an analytical investigation in order to ensure the statements on stability. Nonetheless, to the best knowledge of the authors, not a single example has been found where the inf-sup test does not give similar results to the analytical proofs.
Eigenvalue analysis of initial element matrix
The eigenvalue spectrum of a single element matrix in the initial state is inspected for the case of near incompressibility and in addition for a slender domain. Note that in the proposed setup the initial state represents the case of linear elasticity. For the nearly incompressible example, we follow the example of Andelfinger and Ramm [2] and consider a single square element with a side length of \(l=1\), a Young's modulus of \(E=1\) and a Poisson's ratio of \(\nu =0.49999\). Table 5 shows the eigenvalue spectrum of the different elements, whereas eigenvalues larger than \(10^3\) are denoted by \(\infty \) and the six zero eigenvalues are omitted. The incompressibility should affect only a single eigenvalue, which tends to infinity and is related to the mode of volumetric dilation. It can be recognized that in case of the H1 and the AS-39 a total number of seven eigenvalues tend to infinity, which results in volumetric locking. All other proposed elements depict the correct eigenvalue spectrum.
Inf-sup test: numerical evaluated inf-sup constant over the number of elements
Table 5 Eigenvalue spectrum for incompressible square element
A second study on the eigenvalue spectrum is related to an increasing slenderness of the domain of interest. In Fig. 5 the development of the smallest eigenvalue of a single element cube with the dimensions \(2\times 2\times t\) is investigated. In addition the domain is assumed to be clamped at \(x=0\). The domain on the left of Fig. 5 depicts the related eigenmode, which is clearly the mode related to bending deformation. The evolution of the smallest eigenvalue of the element matrix shows two distinct characteristics. In case of the EAS-15, EAS-21, AS-24 and AS-18 the eigenvalue decreases rapidly with shrinking thickness t. In contrast this decrease is significantly slower for the elements H1, AS-39, *EAS-9 and H1P0. Based on this observation, it can be recognized that bending deformation states are energetically preferable for the first group of elements, whereas the latter group exhibits locking behavior. The same analysis is carried out for the skew-shaped domain depicted in Fig. 6. It is interesting to mention that for this domain the elements EAS-15 and AS-24 show a reduced rate of decrease beyond a critical value of t. In contrast, the behavior of the remaining elements is almost unaffected.
Investigation of eigenvalue spectrum; domain and related eigenmode depicted on the left. Development of smallest eigenvalue over 1 / t on the right
Hyperelastic nearly incompressible Cook's membrane
A tapered cantilever beam known as the Cook's membrane problem is considered, representing a bulk-related boundary value problem with nearly incompressible material. The geometry, boundary conditions and material parameters are summarized in Fig. 7. The cantilever, with a thickness of \(t=10\), is clamped on the left and a constant shear stress is applied on the right face. In addition the displacements in the out-of-plane direction are restricted, imitating a plane strain setting. The material parameters are chosen to be nearly incompressible with a Young's modulus of \(E=200\) and a Poisson's ratio of \(\nu =0.499\). This corresponds to the Lamé parameters given by \(\Lambda =33288.9\) and \(\mu =66.711\). The convergence of the tip displacement for a regular mesh refinement in x and y direction is shown in Fig. 8a. Note that only a single element in z direction is considered. The suffering from volumetric locking can be recognized for the displacement-based element H1. Unfortunately, the AS-39 does not converge at all for this numerical example. In contrast, all other mixed finite elements achieve a comparably good convergence behavior of the tip displacements, since they do not suffer from volumetric locking, as expected from the eigenvalue analysis of Table 5. Note that the elements which are equivalent in the linear elastic framework do not yield exactly the identical solution for the displacements in the finite deformation case. However their results are still close to each other.
Cook's membrane problem; exemplary reference mesh and deformed body on the left and boundary conditions and parameters on the right
Cook's membrane problem: convergence of tip displacement (a) and number of necessary load steps (b) over the number of elements
Consideration of the necessary load steps, depicted in Fig. 8b, indicates that the assumed stress elements are able to deal with large load steps in case of near incompressibility. For this boundary value problem these elements require only a single load step, independent of the mesh size. In contrast, the enhanced assumed strain elements and the H1P0 element require a significantly higher number of load increments, leading to a substantially larger computation time. The number of necessary load steps in case of the H1 element is moderate, but it should be kept in mind that its performance in terms of displacement accuracy is insufficient.
Hyperelastic bending of a compressible clamped plate
We consider a thin rectangular plate with the dimensions \(10\times 10\times 0.1\) which is clamped at one end and loaded by a bending force at the opposing end. The geometry, boundary conditions and a representative mesh are depicted in Fig. 9. Two different meshing strategies are considered in the following; note that both are restricted to a single element in thickness direction. On the one hand a regular and structured meshing procedure is adopted, where the numbers of elements in x and y direction coincide. A detailed visualization of the corresponding meshes is omitted for the sake of brevity. In addition an unstructured in-plane mesh is adopted, where the considered refinement steps are explicitly depicted in Fig. 10. In a first step a perfectly compressible material, characterized by the Neo Hookean energy, see (27), a Young's modulus of \(E=200\) and a Poisson's ratio of \(\nu =0\), is taken into account. In terms of the Lamé parameters this corresponds to \(\mu =100\) and \(\Lambda =0\). Due to the material parameters and the boundary conditions, this boundary value problem represents a pure bending mode. The convergence of the displacements with respect to both mesh refinement strategies is depicted in Fig. 11.
Clamped plate: geometry, coarsest unstructured mesh, deformed configuration and the boundary conditions
Clamped plate: refined unstructured meshes
Bending of a compressible clamped plate; convergence of tip displacement and number of necessary load steps over the number of elements for a structured (a) and an unstructured (b) mesh refinement strategy
The elements can be mainly distinguished into three groups, whereby the close relationship between the assumed stress and enhanced assumed strain elements can be recognized again. In case of the structured meshing the AS-18, EAS-21, AS-24 and EAS-15 yield optimal results for the tip displacement already for the coarsest mesh. However, taking into account the unstructured meshes the quality of the results is weakened, especially for the coarsest level. Nevertheless, the AS-18 and EAS-21 depict the best mesh convergence of all considered elements, independent of the discretization strategy. The AS-24 and EAS-15 are slightly weaker in case of unstructured coarse meshes. In contrast to these four elements the H1P0, H1, AS-39, AS-30 and *EAS-9 demonstrate a distinct locking behavior in this numerical example. Note that these observations are consistent with the results of the eigenvalue study related to Fig. 6, where locking behavior has been predicted for the latter elements in case of boundary value problems with slender domains. Interestingly, their tip-displacement convergence is slightly better for the unstructured case, which is in contrast to the behavior of the remaining elements.
In addition the number of necessary load steps is depicted on the right of Fig. 11. It can be recognized that also in this bending dominated problem, the number of necessary load steps is significantly smaller for the proposed family of assumed stress elements. Especially the enhanced assumed strain elements which do not show locking effects (EAS-15, EAS-21) suffer from the need for smaller load increments compared to their AS counterparts (AS-18, AS-24).
Hyperelastic bending of a clamped plate—nearly incompressible
We investigate the boundary value problem with the same geometry, boundary conditions and constitutive relation as in the prior example. The only difference is that the material is now assumed to be nearly incompressible, which is achieved by increasing the Poisson's ratio to \(\nu =0.495\). In terms of the Lamé parameters this corresponds to \(\mu =66.8896\) and \(\Lambda =6622.07\).
Bending of a nearly incompressible plate; convergence of tip displacement and number of necessary load steps over the number of elements for a structured (a) and an unstructured (b) mesh refinement strategy
Considering the displacement convergence, shown on the left of Fig. 12, it can be noted that the qualitative response of the elements is equivalent to the compressible case, except for the AS-39 and H1. These elements clearly exhibit additional volumetric locking. A similar picture as for the compressible case is obtained when considering the number of necessary load steps, shown in Fig. 12b. Even though the displacement results for the EAS-21 and EAS-15 seem to be satisfactory, they suffer from the need for smaller load steps (by a factor of 20) compared to their assumed stress counterparts.
Hyperelastic hourglassing test
The susceptibility of some enhanced assumed strain elements to artificial modes, often denoted as hourglassing, was first detected for homogeneous stress states by Wriggers and Reese [59]. The investigation of these unphysical free energy modes has been the subject of many subsequent publications, see e.g. [26] and [56]. In order to investigate the assumed stress elements with regard to hourglassing modes we consider the boundary value problem depicted in Fig. 13.
Hourglassing test: geometry, representative mesh, deformed configuration and the boundary conditions
The numerical test considers a cube with a side length of 50 where a compression is applied by displacement boundary conditions. In addition displacement boundary conditions are imposed such that the edges are constrained to remain straight. It is easy to see that for this problem surface buckling is excluded due to the choice of boundary conditions and therefore the solution for the displacements has to be unique. Consequently, every point of instability is introduced by the numerical discretization scheme and can be considered artificial. The same constitutive law as in the prior examples is used, where the material parameters are given by \(E=4.337\) and \(\nu =0.355\). In terms of the Lamé parameters this corresponds to \(\mu =1.60037\) and \(\Lambda =3.91814\). Figure 14 shows the development of the smallest eigenvalue of the global condensed stiffness matrix with respect to the applied displacement at the top edge. It can be recognized that the AS-18, EAS-21, AS-24 and EAS-15 exhibit points of instability at different load stages, whereas the remaining element formulations show stable behavior. Note that the unstable elements are exactly the shear-locking-free elements. Interestingly, the load levels at which the instability occurs are not close to each other for the formulations which are equivalent in the linear elastic framework. In the case of the assumed stress discretization the point of hourglassing occurs at a higher stress level compared to its enhanced assumed strain counterpart. The relationship between these elements appears again in the related eigenforms, see Fig. 15. It can be recognized that the hourglassing modes themselves are equivalent for the AS-18 and EAS-21 and also for the AS-24 and EAS-15.
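The detection of the instability points in Fig. 14 amounts to tracking the smallest eigenvalue of the condensed tangent stiffness while the prescribed compression is increased. The sketch below only illustrates this monitoring idea; `solve_equilibrium` and `condensed_tangent` are hypothetical placeholders for the solver and assembly routines of the respective element formulation.

```python
import numpy as np

def track_smallest_eigenvalue(solve_equilibrium, condensed_tangent, displacement_levels):
    """Record the smallest eigenvalue of the condensed tangent stiffness for a
    sequence of prescribed top-edge displacements; a sign change marks an
    (artificial) point of instability, i.e. the onset of hourglassing."""
    history = []
    for d in displacement_levels:
        u = solve_equilibrium(d)                 # converged state at this load level
        K = condensed_tangent(u, d)              # global condensed stiffness matrix
        lam_min = np.linalg.eigvalsh(0.5 * (K + K.T)).min()
        history.append((d, lam_min))
        if lam_min < 0.0:
            print(f"instability detected at prescribed displacement {d:.4f}")
    return history
```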
Hourglassing test: development of the smallest eigenvalue of the stiffness matrix over the loading
Hourglassing test: Hourglassing modes at points of instability
Hyperelastic fiber reinforced Cook's membrane problem
In order to show that the proposed algorithm is also applicable to more complex constitutive equations, the Cook's membrane with a fiber-reinforced material is considered. To this end, the underlying strain energy function is split into an isotropic and an anisotropic part
$$\begin{aligned} \psi =\psi _{\text {iso}}+\psi _{\text {aniso}}\,. \end{aligned}$$
The isotropic part is represented by the following strain energy of Mooney-Rivlin type
$$\begin{aligned} \psi _{\text {iso}}=\frac{\alpha }{2} \mathrm{tr}[{{\varvec{C}}}]^2+\frac{\beta }{2} \mathrm{tr}[\text {Cof}{{\varvec{C}}}]^2-\gamma \ln J + \epsilon _1 (\det [{{\varvec{C}}}]^{\epsilon _2}+\det [{{\varvec{C}}}]^{-\epsilon _2}-2)\,, \end{aligned}$$
where \(\alpha \), \(\beta \), \(\gamma \), \(\epsilon _1\) and \(\epsilon _2\) are material parameters and the cofactor of a second-order tensor is defined by \(\text {Cof}{{\varvec{A}}}=\det [{{\varvec{A}}}]\, {{\varvec{A}}}^{-T}\). For the formulation of anisotropic free energies as isotropic tensor functions we apply the concept of structural tensors, see e.g. [17].
Cooks membrane problem: geometry, representative mesh, deformed configuration and the boundary conditions
Considering here the case of transverse isotropy we introduce a preferred direction vector \({{\varvec{a}}}\) of unit length and the structural tensor \({{\varvec{M}}}={{\varvec{a}}}\otimes {{\varvec{a}}}\). The anisotropic part of the strain energy, originally introduced in Schröder et al. [47], is given by
$$\begin{aligned} \psi _{\text {aniso}}= g_0\left( \frac{1}{g_c+1}\mathrm{tr}[{{\varvec{C}}}{{\varvec{M}}}]^{g_c+1}+\frac{1}{g_h+1}\mathrm{tr}[\mathrm{Cof}\!{{{\varvec{C}}}} {{\varvec{M}}}]^{g_h+1}+\frac{1}{g_\theta }I_3^{-g_\theta } \right) \, \end{aligned}$$
with the material parameters \(g_0,\, g_c,\,g_h,\,g_\theta \), see in this context also [45]. The geometry, boundary conditions and the material parameters are depicted in Fig. 16. The convergence of the displacements at node \({{\varvec{x}}}=(48,60,0)^T\) and the corresponding necessary load steps are depicted in Fig. 17. First, it should be mentioned that for the AS-39 we were not able to obtain a solution for the considered boundary value problem. Apart from this, the previously mentioned statements are confirmed. The AS-30, *EAS-9 and the H1 suffer from the bending-dominated character of the boundary value problem. In addition, the H1 suffers from the constraint on the volumetric deformation, corresponding to the material parameter \(\epsilon _1\). The remaining elements seem to be free of locking behavior and converge quickly. Furthermore, the number of necessary load steps is again lower in the case of the assumed stress elements.
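To make the constitutive relations (the Mooney-Rivlin type isotropic part and the transversely isotropic part) concrete, the sketch below evaluates \(\psi =\psi _{\text {iso}}+\psi _{\text {aniso}}\) for a given right Cauchy-Green tensor and fiber direction. The numerical parameter values are placeholders chosen only for illustration; the values actually used in the computation are those given in Fig. 16.

```python
import numpy as np

def cof(A):
    """Cofactor of a second-order tensor: Cof A = det(A) * A^{-T}."""
    return np.linalg.det(A) * np.linalg.inv(A).T

def psi_iso(C, alpha, beta, gamma, eps1, eps2):
    """Isotropic Mooney-Rivlin type part of the strain energy."""
    detC = np.linalg.det(C)
    J = np.sqrt(detC)
    return (0.5 * alpha * np.trace(C) ** 2
            + 0.5 * beta * np.trace(cof(C)) ** 2
            - gamma * np.log(J)
            + eps1 * (detC ** eps2 + detC ** (-eps2) - 2.0))

def psi_aniso(C, a, g0, gc, gh, gtheta):
    """Transversely isotropic part based on the structural tensor M = a (x) a."""
    M = np.outer(a, a)
    I3 = np.linalg.det(C)
    return g0 * (np.trace(C @ M) ** (gc + 1) / (gc + 1)
                 + np.trace(cof(C) @ M) ** (gh + 1) / (gh + 1)
                 + I3 ** (-gtheta) / gtheta)

# illustrative evaluation with placeholder parameters and a simple stretch state
F = np.diag([1.1, 0.95, 1.0])
C = F.T @ F
a = np.array([1.0, 0.0, 0.0])          # preferred (fiber) direction of unit length
psi = (psi_iso(C, alpha=1.0, beta=1.0, gamma=2.0, eps1=10.0, eps2=2.0)
       + psi_aniso(C, a, g0=1.0, gc=1.0, gh=1.0, gtheta=1.0))
print(psi)
```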
Cook's membrane problem, displacement convergence (left) and the number of necessary load steps (right)
We proposed an extension of a mixed finite element formulation based on the Hellinger–Reissner principle to the framework of hyperelasticity and investigated it numerically. This principle requires an independent interpolation of the stresses and displacements. The displacements are interpolated by classical trilinear shape functions, whereas for the stresses four different interpolation strategies are discussed. The numerical results for each interpolation can be summarized as follows. The AS-39 corresponds to the H1 element also in the nonlinear regime and shows volumetric as well as shear locking. The AS-18 represents the hyperelastic extension of the element proposed by Pian and Tong [41]. A close relationship can be drawn to the EAS-21. The AS-18 performs very well in bending-dominated and nearly incompressible problems. Unfortunately, it suffers from hourglassing modes, which could lead to instabilities also in more complex boundary value problems. The AS-30 element is inspired by the *EAS-9 enhanced formulation proposed in Krischok and Linder [34]. It is free of volumetric locking and does not show hourglass instabilities in the discussed numerical examples. Unfortunately, it is not free of shear locking and therefore performs poorly in bending-dominated problems. The AS-24 element is closely related to the EAS-15 proposed by Pantuso and Bathe [38]. It is free of volumetric and shear locking. However, it suffers from hourglass instabilities.
Even though none of the investigated interpolation schemes is free of all drawbacks, the proposed procedure of Hellinger–Reissner formulations for large deformations has emerged as a promising approach. The proposed elements are significantly more robust and allow for large load increments. This leads to an enormous gain in terms of computational cost compared to the widely used EAS formulations.
Adams S, Cockburn B. A mixed finite element method for elasticity in three dimensions. J Sci Comput. 2005;25(3):515–21.
Andelfinger U, Ramm E. EAS-elements for two-dimensional, three-dimensional, plate and shell structures and their equivalence to HR-elements. Int J Numer Methods Eng. 1993;36:1311–37.
Arnold DN, Falk RS, Winther R. Mixed finite element methods for linear elasticity with weakly imposed symmetry. Math Comp. 2007;76:1699–723.
Arnold DN, Winther R. Nonconforming mixed elements for elasticity. Math Models Methods Appl Sci. 2003;13:295–307.
Arnold DN, Brezzi F, Douglas J. PEERS: a new mixed finite element for plane elasticity. Jpn J Appl Math. 1984a;1:347–67.
Arnold DN, Douglas J, Gupta C. A family of higher order mixed finite element methods for plane elasticity. Numerische Mathematik. 1984b;45:1–22.
Arnold DN, Awanou G, Winther R. Finite elements for symmetric tensors in three dimensions. Math. Comp. 2008;77:1229–51.
Atluri SN. On the hybrid stress finite element model in incremental analysis of large deflection problems. Int J Solids Struct. 1973;9:1188–91.
Atluri SN, Murakawa H. On hybrid finite element models in nonlinear solid mechanics. Finite Elements Nonlinear Mech. 1977;1:3–41.
Auricchio F, Brezzi F, Lovadina C. Mixed finite element methods. In: Stein E, de Borst R, Hughes TJR, editors. Encyclopedia of computational mechanics. Hoboken: Wiley; 2004. p. 238–77.
Babuška I. The finite element method with Lagrangian multipliers. Numerische Mathematik. 1973;20(3):179–92.
Babuška I, Suri M. Locking effects in the finite element approximation of elasticity problems. Numerische Mathematik. 1992;62(1):439–63.
Bathe KJ. Finite element procedures. New Jersey: Prentice Hall; 1996.
Bathe KJ. The inf-sup condition and its evaluation for mixed finite element methods. Comput Struct. 2001;79:243–52.
Bazeley GP, Cheung YK, Irons BM, Zienkiewicz OC. Triangular elements in plate bending: conforming and nonconforming solutions. In: Proceedings of the 1st conference on matrix methods in structural mechanics. 1966; p. 547–576.
Bischoff M, Ramm E, Braess D. A class of equivalent enhanced assumed strain and hybrid stress finite elements. Comput Mech. 1999;22:443–9.
Boehler JP. A simple derivation of respresentations for non-polynomial constitutive equations in some cases of anisotropy. Zeitschrift für angewandte Mathematik und Mechanik. 1979;59:157–67.
Boffi D, Brezzi F, Fortin M. Reduced symmetry elements in linear elasticity. Commun Pure Appl Anal. 2009;8:95–121.
Boffi D, Brezzi F, Fortin M. Mixed finite element methods and applications. Heidelberg: Springer; 2013.
Brezzi F. On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. Revue française d'automatique informatique recherche opérationnelle. Analyse numérique. 1974;8(2):129–51.
Brezzi F, Fortin M. Mixed and hybrid finite element methods. New York: Springer-Verlag; 1991.
Chapelle D, Bathe K. The inf-sup test. Comput Struct. 1993;47:537–45.
Chun KS, Kassenge SK, Park WT. Static assessment of quadratic hybrid plane stress element using non-conforming displacement modes and modified shape functions. Struct Eng Mech. 2008;29:643–58.
Ciarlet PG. Mathematical elasticity: three dimensional elasticity, vol. 1. North Holland: Elsevier Science Publishers B.V.; 1988.
Cockburn B, Gopalakrishnan J, Guzmán J. A new elasticity element made for enforcing weak stress symmetry. Math Comp. 2010;79:1331–49.
de Souza Neto EA, Peric D, Huang GC, Owen DRJ. Remarks on the stability of enhanced strain elements in finite elasticity and elastoplasticity. Commun Numer Methods Eng. 1995;11(11):951–61.
Fraeijs de Veubeke B. Displacement and equilibrium models in the finite element method. In: Zienkiewicz OC, Holister GC, editors. Stress analysis. Hoboken: Wiley; 1965.
Galerkin BG. Series solution of some problems in elastic equilibrium of rods and plates. Vestn Inzh Tech. 1915;19:897–908.
Hellinger E. Encyklopädie der mathematischen Wissenschaften mit Einschluss ihrer Anwendungen, vol 4, chapter Die allgemeine Ansätze der Mechanik der Kontinua. 1914.
Hu H-C. On some variational methods on the theory of elasticity and the theory of plasticity. Scientia Sinica. 1955;4:33–54.
Johnson C, Mercier B. Some equilibrium finite element methods for two-dimensional elasticity problems. Numerische Mathematik. 1978;30:85–99.
Klaas O, Schröder J, Stein E, Miehe C. A regularized dual mixed element for plane elasticity—implementation and performance of the BDM element. Comput Methods Appl Mech Eng. 1995;121:201–9.
Korelc J, Solinc U, Wriggers P. An improved EAS brick element for finite deformation. Comput Mech. 2010;46:641–59.
Krischok A, Linder C. On the enhancement of low-order mixed finite element methods for the large deformation analysis of diffusion in solids. Int J Numer Methods Eng. 2016;106:278–97.
Ladyzhenskaya O. The mathematical theory of viscous incompressible flow, vol. 76. New York: Gordon and Breach; 1969.
Li B, Xie X, Zhang S. New convergence analysis for assumed stress hybrid quadrilateral finite element method. Dis Continuous Dyn Syst Ser B. 2017;22(7):2831–56.
Lonsing M, Verfürth R. On the stability of BDMS and PEERS elements. Numer Math. 2004;99:131–40.
Pantuso D, Bathe KJ. A four-node quadrilateral mixed-interpolated element for solids and fluids. Math Models Methods Appl Sci (M3AS). 1995;5(8):1113–28.
Pian THH. Derivation of element stiffness matrices by assumed stress distribution. AIAA J. 1964;20:1333–6.
Pian THH, Sumihara K. A rational approach for assumed stress finite elements. Int J Num Methods Eng. 1984;20:1685–95.
Pian THH, Tong P. Relations between incompatible displacement model and hybrid stress model. Int J Numer Methods Eng. 1986;22:173–81.
Piltner R. An alternative version of the Pian-Sumihara element with a simple extension to non-linear problems. Comput Mech. 2000;26:483–9.
Prange G. Das Extremum der Formänderungsarbeit Habilitationsschrift. Hannover: Technische Hochschule Hannover; 1916.
Reissner E. On a variational theorem in elasticity. J Math Phys. 1950;29:90–5.
Schröder J. Poly-, quasi- and rank-one convexity in applied mechanics, CISM course and lectures 516, chapter anisotropic polyconvex energies. Berlin: Springer; 2010.
Schröder J, Klaas O, Stein E, Miehe C. A physically nonlinear dual mixed finite element formulation. Comput Methods Appl Mech Eng. 1997;144:77–92.
Schröder J, Neff P, Ebbing V. Anisotropic polyconvex energies on the basis of crystallographic motivated structural tensors. J Mech Phys Solids. 2008;56(12):3486–506.
Schröder J, Igelbüscher M, Schwarz A, Starke G. A Prange-Hellinger-Reissner type finite element formulation for small strain elasto-plasticity. Comput Methods Appl Mech Eng. 2017;317:400–18.
Seki W, Atluri SN. On newly developed assumed stress finite element formulations for geometrically and materially nonlinear problems. Finite Elements Anal Des. 1995;21:75–110.
Simo JC, Taylor RL, Pister KS. Variational and projection methods for the volume constraint in finite deformation elasto-plasticity. Comput Methods Appl Mech Eng. 1985;51:177–208.
Simo JC, Rifai MS. A class of mixed assumed strain methods and the method of incompatible modes. Int J Numer Methods Eng. 1990;29:1595–638.
Simo JC, Kennedy JG, Taylor RL. Complementary mixed finite element formulations for elastoplasticity. Comput Methods Appl Mech Eng. 1989;74:177–206.
Stenberg R. On the construction of optimal mixed finite element methods for the linear elasticity problem. Numerische Mathematik. 1986;42:447–62.
Stenberg R. A family of mixed finite elements for the elastic problem. Numerische Mathematik. 1988;53:513–38.
Taylor RL, Simo JC, Zienkiewicz OC, Chan ACH. The patch test—a condition for assessing FEM convergence. Int J Numer Methods Eng. 1986;22:39–62.
Wall WA, Bischoff M, Ramm E. A deformation dependent stabilization technique, exemplified by eas elements at large strains. Comput Methods Appl Mech Eng. 2000;188:859–71.
Washizu K. On the variational principles of elasticity and plasticity. Aeroelastic and Structure Research Laboratory. Technical Report 25–18. Cambridge: MIT; 1955.
Wriggers P. Mixed finite element methods—theory and discretization. In: Carstensen C, Wriggers P, editors. Mixed finite element technologies, volume 509 of CISM International Centre for Mechanical Sciences. Berlin: Springer; 2009.
Wriggers P, Reese S. A note on enhanced strain methods for large deformations. Comput Methods Appl Mech Eng. 1996;135:201–9.
Yeo ST, Lee BC. Equivalence between enhanced assumed strain method and assumed stress hybrid method based on the Hellinger-Reissner principle. Int J Numer Methods Eng. 1996;39:3083–99.
Yu G, Xie X, Carstensen C. Uniform convergence and a posteriori error estimation for assumed stress hybrid finite element methods. Comput Methods Appl Mech Eng. 2011;200:2421–33.
Zienkiewicz OC, Qu S, Taylor RL, Nakazawa S. The patch test for mixed formulations. Int J Numer Methods Eng. 1986;23:1873–83.
Author's contributions
All authors contributed in the derivation and development of the idea for the proposed mixed finite element method. The implementation, numerical studies and the draft of the manuscript have been mainly performed by NV. JS and PW supervised the study and corrected the manuscript. All authors read and approved the final manuscript.
The authors appreciate the support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 "Novel finite elements - Mixed, Hybrid and Virtual Element formulations at finite strains for 3D applications" under the project "Reliable Simulation Techniques in Solid Mechanics, Development of Non-standard Discretization Methods, Mechanical and Mathematical Analysis" (SCHR 570/23-2) (WR 19/50-2). Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 255432295.
We acknowledge support by the Open Access Publication Fund of the University of Duisburg-Essen.
Institute of Mechanics, Faculty of Engineering, University Duisburg-Essen, Universitätsstr. 15, 45141, Essen, Germany
Nils Viebahn
& Jörg Schröder
Institute of Continuum Mechanics, Leibniz Universität Hannover, Appelstr. 11, 30167, Hannover, Germany
Peter Wriggers
Correspondence to Nils Viebahn.
Viebahn, N., Schröder, J. & Wriggers, P. An extension of assumed stress finite elements to a general hyperelastic framework. Adv. Model. and Simul. in Eng. Sci. 6, 9 (2019) doi:10.1186/s40323-019-0133-z
Mixed FEM
Hellinger–Reissner formulation
Hyperelasticity
Hourglassing
Locking free
Assumed stress element
Nearly incompressibility
Nonconvex penalized quantile regression: A review of methods, theory and algorithms
Lan Wang
Quantile regression is now a widely recognized useful alternative to the classical least-squares regression. It was introduced in the seminal paper of Koenker and Bassett (1978b). Given a response variable $Y$ and a vector of covariates $\mathbf{x}$, quantile regression estimates the effects of $\mathbf{x}$ on the conditional quantile of $Y$. Formally, the $\tau$th ($0<\tau<1$) conditional quantile of $Y$ given $\mathbf{x}$ is defined as $Q_{Y}(\tau|\mathbf{x}) = \inf\{t : F_{Y|\mathbf{x}}(t) \ge \tau\}$, where $F_{Y|\mathbf{x}}$ is the conditional cumulative distribution function of $Y$ given $\mathbf{x}$. An important special case of quantile regression is the least absolute deviation (LAD) regression (Koenker and Bassett, 1978a), which estimates the conditional median $Q_{Y}(0.5|\mathbf{x})$.
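As a numerical aside (not part of the chapter), the $\tau$th quantile defined above can equivalently be characterized as a minimizer of the expected check loss $\rho_\tau(u)=u(\tau-\mathbf{1}\{u<0\})$, which is the basis of quantile regression estimation. A minimal sketch:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
tau = 0.75

# minimizer of the average check loss over a grid of candidate values
grid = np.linspace(y.min(), y.max(), 2_001)
losses = [check_loss(y - t, tau).mean() for t in grid]
q_check = grid[int(np.argmin(losses))]

print(q_check, np.quantile(y, tau))   # the two estimates nearly coincide
```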
Handbook of Quantile Regression
Wang, L. (2017). Nonconvex penalized quantile regression: A review of methods, theory and algorithms. In Handbook of Quantile Regression (pp. 273-292). CRC Press. https://doi.org/10.1201/9781315120256
Chain Rule for Derivative — The Theory
Calculus, College Math
In calculus, the Chain Rule is a powerful differentiation rule for handling the derivative of composite functions. While its mechanics appear relatively straightforward, its derivation, and the intuition behind it, remain obscure to most of its users.
In what follows though, we will attempt to take a look at both of these. We'll begin by exploring a quasi-proof that is intuitive but falls short of a full-fledged proof, and slowly find ways to patch it up so that the modern standard of rigor is upheld.
Chain Rule — A Review
Deriving the Chain Rule — Preliminary Attempt
Deriving the Chain Rule — Second Attempt
Other Calculus-Related Guides You Might Be Interested In
Given a function $g$ defined on $I$, and another function $f$ defined on $g(I)$, we can define a composite function $f \circ g$ (i.e., $f$ composed with $g$) as follows:
\begin{align*} [f \circ g ](x) & \stackrel{df}{=} f[g(x)] \qquad (\forall x \in I) \end{align*}
In which case, we can refer to $f$ as the outer function, and $g$ as the inner function. Under this setup, the function $f \circ g$ maps $I$ first to $g(I)$, and then to $f[g(I)]$.
In addition, if $c$ is a point on $I$ such that:
The inner function $g$ is differentiable at $c$ (with the derivative denoted by $g'(c)$).
The outer function $f$ is differentiable at $g(c)$ (with the derivative denoted by $f'[g(c)]$).
then it would transpire that the function $f \circ g$ is also differentiable at $c$, where:
\begin{align*} (f \circ g)'(c) & = f'[g(c)] \, g'(c) \end{align*}
giving rise to the famous derivative formula commonly known as the Chain Rule.
Theorem 1 — The Chain Rule for Derivative
Given an inner function $g$ defined on $I$ and an outer function $f$ defined on $g(I)$, if $c$ is a point on $I$ such that $g$ is differentiable at $c$ and $f$ differentiable at $g(c)$ (i.e., the image of $c$), then we have that:
\begin{align*} (f \circ g)'(c) & = f'[g(c)] \, g'(c) \end{align*}
Or in Leibniz's notation:
\begin{align*} \frac{df}{dx} = \frac{df}{dg} \frac{dg}{dx} \end{align*}
as if we're going from $f$ to $g$ to $x$.
In English, the Chain Rule reads:
The derivative of a composite function at a point is equal to the derivative of the inner function at that point, times the derivative of the outer function at its image.
As simple as it might be, the fact that the derivative of a composite function can be evaluated in terms of that of its constituent functions was hailed as a tremendous breakthrough back in the old days, since it allows for the differentiation of a wide variety of elementary functions — ranging from $\displaystyle (x^2+2x+3)^4$ and $\displaystyle e^{\cos x + \sin x}$ to $\ln \left(\frac{3+x}{2^x} \right)$ and $\operatorname{arcsec} (2^x)$.
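As a quick sanity check (not from the original post), the formula can be verified numerically on one of the functions above, say $(x^2+2x+3)^4$ with $f(y)=y^4$ and $g(x)=x^2+2x+3$: the finite-difference derivative of the composite should match $f'[g(x)]\,g'(x)$.

```python
def g(x):  return x**2 + 2*x + 3
def dg(x): return 2*x + 2
def f(y):  return y**4
def df(y): return 4*y**3

x, h = 1.7, 1e-6
finite_difference = (f(g(x + h)) - f(g(x - h))) / (2 * h)  # numerical derivative of f(g(x))
chain_rule        = df(g(x)) * dg(x)                       # f'[g(x)] * g'(x)

print(finite_difference, chain_rule)  # the two values agree to several decimal places
```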
More importantly, for a composite function involving three functions (say, $f$, $g$ and $h$), applying the Chain Rule twice yields that:
\begin{align*} f(g[h(c)])' & = f'(g[h(c)]) \, \left[ g[h(c)] \right]' \\ & = f'(g[h(c)]) \, g'[h(c)] \, h'(c) \end{align*}
(assuming that $h$ is differentiable at $c$, $g$ differentiable at $h(c)$, and $f$ at $g[h(c)]$ of course!)
In fact, extending this same reasoning to a $n$-layer composite function of the form $f_1 \circ (f_2 \circ \cdots (f_{n-1} \circ f_n) )$ gives rise to the so-called Generalized Chain Rule:
\begin{align*}\frac{d f_1}{dx} = \frac{d f_1}{d f_2} \, \frac{d f_2}{d f_3} \dots \frac{d f_n}{dx} \end{align*}
thereby showing that any composite function involving any number of functions — if differentiable — can have its derivative evaluated in terms of the derivatives of its constituent functions in a chain-like manner. Hence the Chain Rule.
All right. Let's see if we can derive the Chain Rule from first principles then: given an inner function $g$ defined on $I$ and an outer function $f$ defined on $g(I)$, we are told that $g$ is differentiable at a point $c \in I$ and that $f$ is differentiable at $g(c)$. That is:
\begin{align*} \lim_{x \to c} \frac{g(x) - g(c)}{x - c} & = g'(c) & \lim_{x \to g(c)} \frac{f(x) - f[g(c)]}{x - g(c)} & = f'[g(c)] \end{align*}
Here, the goal is to show that the composite function $f \circ g$ indeed differentiates to $f'[g(c)] \, g'(c)$ at $c$. That is:
\begin{align*} \lim_{x \to c} \frac{f[g(x)] - f[g(c)]}{x - c} = f'[g(c)] \, g'(c) \end{align*}
As a thought experiment, we can kind of see that if we start on the left hand side by multiplying the fraction by $\dfrac{g(x) - g(c)}{g(x) - g(c)}$, then we would have that:
\begin{align*} \lim_{x \to c} \frac{f[g(x)] - f[g(c)]}{x - c} & = \lim_{x \to c} \left[ \frac{f[g(x)]-f[g(c)]}{g(x) - g(c)} \, \frac{g(x)-g(c)}{x-c} \right] \end{align*}
So that if for simplicity, we denote the difference quotient $\dfrac{f(x) - f[g(c)]}{x - g(c)}$ by $Q(x)$, then we should have that:
\begin{align*} \lim_{x \to c} \frac{f[g(x)] - f[g(c)]}{x - c} & = \lim_{x \to c} \left[ Q[g(x)] \, \frac{g(x)-g(c)}{x-c} \right] \\ & = \lim_{x \to c} Q[g(x)] \lim_{x \to c} \frac{g(x)-g(c)}{x-c} \\ & = f'[g(c)] \, g'(c) \end{align*}
Great! Seems like a home-run right? Well, not so fast, for there exists two fatal flaws with this line of reasoning…
First, we can only divide by $g(x)-g(c)$ if $g(x) \ne g(c)$. In fact, forcing this division now means that the quotient $\dfrac{f[g(x)]-f[g(c)]}{g(x) - g(c)}$ is no longer necessarily well-defined in a punctured neighborhood of $c$ (i.e., the set $(c-\epsilon, c+\epsilon) \setminus \{c\}$, where $\epsilon>0$). As a result, it no longer makes sense to talk about its limit as $x$ tends to $c$.
And then there's the second flaw, which is embedded in the reasoning that as $x \to c$, $Q[g(x)] \to f'[g(c)]$. To be sure, while it is true that:
As $x \to c$, $g(x) \to g(c)$ (since differentiability implies continuity).
As $x \to g(c)$, $Q(x) \to f'[g(c)]$ (remember, $Q$ is the difference quotient of $f$ at $g(c)$).
It still doesn't follow that as $x \to c$, $Q[g(x)] \to f'[g(c)]$. In fact, it is in general false that:
If $x \to c$ implies that $g(x) \to G$, and $x \to G$ implies that $f(x) \to F$, then $x \to c$ implies that $(f \circ g)(x) \to F$.
Here, what is true instead is this:
Theorem 2 — Composition Law for Limits
Given an inner function $g$ defined on $I$ (with $c \in I$) and an outer function $f$ defined on $g(I)$, if the following two conditions are both met:
As $x \to c$, $g(x) \to G$.
$f(x)$ is continuous at $G$.
then as $x \to c $, $(f \circ g)(x) \to f(G)$.
In any case, the point is that we have identified the two serious flaws that prevent our sketchy proof from working. Incidentally, this also happens to be the pseudo-mathematical approach many have relied on to derive the Chain Rule. Not good.
In which case, begging seems like an appropriate future course of action…
Lord Sal @khanacademy, mind reshooting the Chain Rule proof video with a non-pseudo-math approach?
Actually, jokes aside, the important point to be made here is that this faulty proof nevertheless embodies the intuition behind the Chain Rule, which loosely speaking can be summarized as follows:
\begin{align*} \lim_{x \to c} \frac{\Delta f}{\Delta x} & = \lim_{x \to c} \frac{\Delta f}{\Delta g} \, \lim_{x \to c} \frac{\Delta g}{\Delta x} \end{align*}
Now, if you still recall, this is where we got stuck in the proof:
\begin{align*} \lim_{x \to c} \frac{f[g(x)] - f[g(c)]}{x - c} & = \lim_{x \to c} \left[ \frac{f[g(x)]-f[g(c)]}{g(x) - g(c)} \, \frac{g(x)-g(c)}{x-c} \right] \quad (\text{kind of}) \\ & = \lim_{x \to c} Q[g(x)] \, \lim_{x \to c} \frac{g(x)-g(c)}{x-c} \quad (\text{kind of})\\ & = \text{(ill-defined)} \, g'(c) \end{align*}
So that if only we can:
Patch up the difference quotient $Q(x)$ to make $Q[g(x)]$ well-defined on a punctured neighborhood of $c$ — so that it now makes sense to define the limit of $Q[g(x)]$ as $x \to c$.
Tweak $Q(x)$ a bit to make it continuous at $g(c)$ — so that the Composition Law for Limits would ensure that $\displaystyle \lim_{x \to c} Q[g(x)] = f'[g(c)]$.
then there might be a chance that we can turn our failed attempt into something fruitful.
Let's see… How do we go about amending $Q(x)$, the difference quotient of $f$ at $g(c)$? Well, we'll first have to make $Q(x)$ continuous at $g(c)$, and we do know that by definition:
\begin{align*} \lim_{x \to g(c)} Q(x) = \lim_{x \to g(c)} \frac{f(x) - f[g(c)]}{x - g(c)} = f'[g(c)] \end{align*}
Here, being merely a difference quotient, $Q(x)$ is of course left intentionally undefined at $g(c)$. However, if we upgrade our $Q(x)$ to $\mathbf{Q} (x)$ so that:
\begin{align*} \mathbf{Q}(x) \stackrel{df}{=} \begin{cases} Q(x) & x \ne g(c) \\ f'[g(c)] & x = g(c) \end{cases} \end{align*}
then $\mathbf{Q}(x)$ would be the patched version of $Q(x)$ which is actually continuous at $g(c)$. One puzzle solved!
All right. Moving on, let's turn our attention now to another problem, which is the fact that the function $Q[g(x)]$, that is:
\begin{align*} \frac{f[g(x)] - f[g(c)]}{g(x) - g(c)} \end{align*}
is not necessarily well-defined on a punctured neighborhood of $c$. But then you see, this problem has already been dealt with when we define $\mathbf{Q}(x)$! In particular, it can be verified that the definition of $\mathbf{Q}(x)$ entails that:
\begin{align*} \mathbf{Q}[g(x)] = \begin{cases} Q[g(x)] & \text{if $x$ is such that $g(x) \ne g(c)$ } \\ f'[g(c)] & \text{if $x$ is such that $g(x)=g(c)$} \end{cases} \end{align*}
Translation? The upgraded $\mathbf{Q}(x)$ ensures that $\mathbf{Q}[g(x)]$ has the enviable property of being pretty much identical to the plain old $Q[g(x)]$ — with the added bonus that it is actually defined on a neighborhood of $c$!
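To see the patch in action in the problematic case, here is a tiny sketch (with $f$ and $g$ made up purely for illustration) where the inner function is constant, so that $g(x)=g(c)$ for every $x$ and the plain $Q[g(x)]$ is never defined, while $\mathbf{Q}[g(x)]$ is:

```python
def f(y):  return y**3
def df(y): return 3*y**2

c = 0.0
def g(x):  return 5.0          # constant inner function: g(x) = g(c) for all x
gc = g(c)

def Q(y):
    """Plain difference quotient of f at g(c); undefined when y == g(c)."""
    return (f(y) - f(gc)) / (y - gc)

def Q_bold(y):
    """Patched difference quotient: takes the value f'(g(c)) at y = g(c)."""
    return df(gc) if y == gc else Q(y)

x = 0.3
# Q(g(x)) would raise ZeroDivisionError here, but the patched version is fine:
print(Q_bold(g(x)))            # prints 75.0, i.e. f'(g(c))
```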
And with the two issues settled, we can now go back to square one — to the difference quotient of $f \circ g$ at $c$ that is — and verify that while the equality:
\begin{align*} \frac{f[g(x)] - f[g(c)]}{x - c} = \frac{f[g(x)]-f[g(c)]}{g(x) - g(c)} \, \frac{g(x)-g(c)}{x-c} \end{align*}
only holds for the $x$s in a punctured neighborhood of $c$ such that $g(x) \ne g(c)$, we now have that:
\begin{align*} \frac{f[g(x)] - f[g(c)]}{x - c} = \mathbf{Q}[g(x)] \, \frac{g(x)-g(c)}{x-c} \end{align*}
for all the $x$s in a punctured neighborhood of $c$. With this new-found realisation, we can now quickly finish the proof of Chain Rule as follows:
\begin{align*} \lim_{x \to c} \frac{f[g(x)] - f[g(c)]}{x - c} & = \lim_{x \to c} \left[ \mathbf{Q}[g(x)] \, \frac{g(x)-g(c)}{x-c} \right] \\ & = \lim_{x \to c} \mathbf{Q}[g(x)] \, \lim_{x \to c} \frac{g(x)-g(c)}{x-c} \\ & = f'[g(c)] \, g'(c) \end{align*}
where $\displaystyle \lim_{x \to c} \mathbf{Q}[g(x)] = f'[g(c)]$ as a result of the Composition Law for Limits.
Wow! That was a bit of a detour, wasn't it? You see, while the Chain Rule might appear intuitive to understand and apply, it is actually one of the first theorems in differential calculus that requires a bit of ingenuity and knowledge beyond calculus to derive.
And if the derivation seems to mess around with the head a bit, then it's certainly not hard to appreciate the creative and deductive greatness among the forefathers of modern calculus — those who've worked hard to establish a solid, rigorous foundation for calculus, thereby paving the way for its proliferation into various branches of applied sciences all around the world.
And as for you, kudos for having made it this far! As a token of appreciation, here's a table summarizing what we have discovered up to now:

Chain Rule Derivation — Take One. Since the equality
\begin{align*} \frac{f[g(x)] - f[g(c)]}{x - c} & = \left[ \frac{f[g(x)]-f[g(c)]}{g(x) - g(c)} \, \frac{g(x)-g(c)}{x-c} \right] \\ & = Q[g(x)] \, \frac{g(x)-g(c)}{x-c} \end{align*}
only holds for the $x$s where $g(x) \ne g(c)$, and since it is not guaranteed that $Q[g(x)] \to f'[g(c)]$ as $x \to c$, the argument falls apart.

Chain Rule Derivation — Take Two. Once we upgrade the difference quotient $Q(x)$ to $\mathbf{Q}(x)$ as above, we have that
\begin{align*} \frac{f[g(x)] - f[g(c)]}{x - c} = \mathbf{Q}[g(x)] \, \frac{g(x)-g(c)}{x-c} \end{align*}
for all $x$ in a punctured neighborhood of $c$, in which case the proof of the Chain Rule can be finalized in a few steps through the use of limit laws and the Composition Law for Limits.
And with that, we'll close our little discussion on the theory of the Chain Rule for now. By the way, are you aware of an alternate proof that works equally well? If so, you have good reason to be grateful for the Chain Rule the next time you invoke it to advance your work!
Derivative of Inverse Functions: Theory & Applications
Derivatives of $x^x$, $x^{x^x}$…
Exponent Rule for Derivative: Theory & Applications
Algebra of Infinite Limits and Polynomial's End-Behaviors
Integration Series: The Overshooting Method
Anitej Banerjee says:
Wow, that really was mind blowing!
I did come across a few hitches in the logic — perhaps due to my own misunderstandings of the topic.
Firstly, why define g'(c) to be the lim (x->c) of [g(x) – g(c)]/[x-c].
If you were to follow the definition from most textbooks:
f'(x) = lim (h->0) of [f(x+h) – f(x)]/[h]
Then, for g'(c), you would come up with:
g'(c) = lim (h->0) of [g(c+h) – g(c)]/[h]
Perhaps the two are the same, and maybe it's just my loosey-goosey way of thinking about the limits that is causing this confusion…
Secondly, I don't understand how bold Q(x) works. I understand the law of composite functions limits part, but it just seems too easy — just defining Q(x) to be f'(x) when g(x) = g(c)… I can't pin-point why, but it feels a little bit like cheating :P.
Lastly, I just came up with a geometric interpretation of the chain rule — maybe not so fancy :P.
f(g(x)) is simply f(x) with a shifted x-axis [Seems like a big assumption right now, but the derivative of g takes care of instantaneous non-linearity]. g'(x) is simply the transformation scalar — which takes in an x value on the g(x) axis and returns the transformation scalar which, when multiplied with f'(x) gives you the actual value of the derivative of f(g(x)). I like to think of g(x) as an elongated x axis/input domain to visualize it, but since the derivative of g'(x) is instantaneous, it takes care of the fact that g(x) may not be as linear as that — so g(x) could also be an odd-powered polynomial (covering every real value — loved that article, by the way!) but the analogy would still hold (I think).
Once again, thank you very much! 😀
Math Vault says:
Hi Anitej. For the first question, the derivative of a function at a point can be defined using both the x-c notation and the h notation. In fact, using a stronger form of limit comparison law, it can be shown that if the derivative exists, then the derivative as defined by both definitions are equivalent.
For the second question, the bold Q(x) basically attempts to patch up Q(x) so that it is actually continuous at g(c). Now, if we define the bold Q(x) to be f'(x) when g(x)=g(c), then not only will it not take care of the case where the input x is actually equal to g(c), but the desired continuity won't be achieved either.
And as for the geometric interpretation of the Chain Rule, that's definitely a neat way to think of it!
Well that sorts it out then… err, mostly.
But why resort to f'(c) instead of f'(g(c)), wouldn't that lead to a very different value of f'(x) at x=c, compared to the rest of the values [That does sort of make sense as the limit as x->c of that derivative doesn't exist]?
Either way, thank you very much — I certainly didn't expect such a quick reply! 🙂
Oh. It is f'[g(c)]. Remember, g being the inner function is evaluated at c, whereas f being the outer function is evaluated at g(c). In particular, the focus is not on the derivative of f at c. You might want to go through the Second Attempt Section by now and see if it helps.
Pranjal says:
Thank you. This is awesome . This is one of the most used topic of calculus . You have explained every thing very clearly but I also expected more practice problems on derivative chain rule.
Hi Pranjal. For calculus practice problems, you might find the book "Calculus" by James Stewart helpful. It's under the tag "Applied College Mathematics" in our resource page.
S.M. Zakaria Laskar says:
Well Done, nice article, thanks for the post
helene says:
thank you very good article
Thank you. Chain rule is a bit tricky to explain at the theory level, so hopefully the message comes across safe and sound!
June 2015, Volume 20, Issue 4
An energy-consistent depth-averaged Euler system: Derivation and properties
Marie-Odile Bristeau, Anne Mangeney, Jacques Sainte-Marie and Nicolas Seguin
In this paper, we present an original derivation process of a non-hydrostatic shallow water-type model which aims at approximating the incompressible Euler and Navier-Stokes systems with free surface. The closure relations are obtained by a minimal energy constraint instead of an asymptotic expansion. The model slightly differs from the well-known Green-Naghdi model and is confronted with stationary and analytical solutions of the Euler system corresponding to rotational flows. At the end of the paper, we give time-dependent analytical solutions for the Euler system that are also analytical solutions for the proposed model but that are not solutions of the Green-Naghdi model. We also give and compare analytical solutions of the two non-hydrostatic shallow water models.
Marie-Odile Bristeau, Anne Mangeney, Jacques Sainte-Marie, Nicolas Seguin. An energy-consistent depth-averaged Euler system: Derivation and properties. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 961-988. doi: 10.3934/dcdsb.2015.20.961.
Dynamics of a parasite-host epidemiological model in spatial heterogeneous environment
Yongli Cai and Weiming Wang
2015, 20(4): 989-1013. doi: 10.3934/dcdsb.2015.20.989
In this paper, we explore a parasite-host epidemiological model incorporating demographic and epidemiological processes in a spatially heterogeneous environment in which the individuals are subject to a random movement. We show the global stability of the extinction equilibrium in three different cases, and prove the existence, uniqueness and the global stability of the disease--free equilibrium. When the death rate in the model becomes a constant, we give the existence of the endemic equilibrium and the global stability of the endemic equilibrium in a special case. Furthermore, we perform a series of numerical simulations to display the effects of the movement of hosts and the heterogeneous environment on the disease dynamics. Our analytical and numerical results reveal that the disease extinction/outbreak can be ignited by both individual mobility and the environmental heterogeneity.
Yongli Cai, Weiming Wang. Dynamics of a parasite-host epidemiological model in spatial heterogeneous environment. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 989-1013. doi: 10.3934/dcdsb.2015.20.989.
Multidimensional stability of disturbed pyramidal traveling fronts in the Allen-Cahn equation
Hongmei Cheng and Rong Yuan
2015, 20(4): 1015-1029. doi: 10.3934/dcdsb.2015.20.1015
This paper is concerned with the asymptotic stability of pyramidal traveling fronts in the Allen-Cahn equation on $\mathbb{R}^n$, $n\geq 4$. Our first result states that pyramidal traveling fronts are asymptotically stable under the initial perturbations that decay at space infinity. Then we further show the existence of a solution that oscillates permanently between two pyramidal traveling fronts, which implies that pyramidal traveling fronts are not asymptotically stable under more general perturbations. Our main technique is the supersolution and subsolution method coupled with the comparison principle.
Hongmei Cheng, Rong Yuan. Multidimensional stability of disturbed pyramidal traveling fronts in the Allen-Cahn equation. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1015-1029. doi: 10.3934/dcdsb.2015.20.1015.
On an ODE-PDE coupling model of the mitochondrial swelling process
Sabine Eisenhofer, Messoud A. Efendiev, Mitsuharu Ôtani, Sabine Schulz and Hans Zischka
2015, 20(4): 1031-1057. doi: 10.3934/dcdsb.2015.20.1031
Mitochondrial swelling has a huge impact on multicellular organisms since it triggers apoptosis, the programmed cell death. In this paper we present a new mathematical model of this phenomenon. As a novelty it includes spatial effects, which are of great importance for the in vivo process. Our model considers three mitochondrial subpopulations varying in the degree of swelling. The evolution of these groups depends on the present calcium concentration and is described by a system of ODEs, whereas the calcium propagation is modeled by a reaction-diffusion equation taking spatial effects into account. We analyze the derived model with respect to existence and long-time behavior of solutions and obtain a complete mathematical classification of the swelling process.
Sabine Eisenhofer, Messoud A. Efendiev, Mitsuharu Ôtani, Sabine Schulz, Hans Zischka. On an ODE-PDE coupling model of the mitochondrial swelling process. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1031-1057. doi: 10.3934/dcdsb.2015.20.1031.
Migration and orientation of endothelial cells on micropatterned polymers: A simple model based on classical mechanics
Julie Joie, Yifeng Lei, Marie-Christine Durrieu, Thierry Colin, Clair Poignard and Olivier Saut
Understanding the endothelial cell migration on micropatterned polymers, as well as the cell orientation, is a critical issue in tissue engineering, since it is the preliminary step towards cell polarization and possibly leads to blood vessel formation. In this paper, we derive a simple agent-based model to describe the migration and the orientation of endothelial cells seeded on bioactive micropatterned polymers. The aim of the modeling is to provide a simple model that corroborates the experiments quantitatively, without considering the complex phenomena inherent to cell migration. Our model is obtained through a classical mechanics approach based on experimental observations. Despite its simplicity, it provides numerical results that are quantitatively in accordance with the experimental data, and thus our approach can be seen as a preliminary step towards a simple modeling of cell migration.
Julie Joie, Yifeng Lei, Marie-Christine Durrieu, Thierry Colin, Clair Poignard, Olivier Saut. Migration and orientation of endothelial cells on micropatterned polymers: A simple model based on classical mechanics. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1059-1076. doi: 10.3934/dcdsb.2015.20.1059.
A detailed balanced reaction network is sufficient but not necessary for its Markov chain to be detailed balanced
Badal Joshi
Certain chemical reaction networks (CRNs) when modeled as a deterministic dynamical system taken with mass-action kinetics have the property of reaction network detailed balance (RNDB) which is achieved by imposing network-related constraints on the reaction rate constants. Markov chains (whether arising as models of CRNs or otherwise) have their own notion of detailed balance, imposed by the network structure of the graph of the transition matrix of the Markov chain. When considering Markov chains arising from chemical reaction networks with mass-action kinetics, we will refer to this property as Markov chain detailed balance (MCDB). Finally, we refer to the stochastic analog of RNDB as Whittle stochastic detailed balance (WSDB). It is known that RNDB and WSDB are equivalent. We prove that WSDB and MCDB are also intimately related but are not equivalent. While RNDB implies MCDB, the converse is not true. The conditions on rate constants that result in networks with MCDB but without RNDB are stringent, and thus examples of this phenomenon are rare, a notable exception is a network whose Markov chain is a birth and death process. We give a new algorithm to find conditions on the rate constants that are required for MCDB.
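As a small numerical illustration of MCDB (independent of the paper's CRN setting), Markov chain detailed balance can be checked directly for a birth-and-death generator by computing the stationary distribution $\pi$ and verifying $\pi_i q_{ij} = \pi_j q_{ji}$. The rates below are arbitrary placeholder values.

```python
import numpy as np

# generator of a birth-and-death chain on {0,...,4}; rates are illustrative only
birth = np.array([1.0, 2.0, 0.5, 1.5])   # q_{i,i+1}
death = np.array([1.0, 3.0, 2.0, 0.7])   # q_{i+1,i}
n = birth.size + 1
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = birth[i]
    Q[i + 1, i] = death[i]
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary distribution pi: solve pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Markov chain detailed balance: pi_i q_{ij} = pi_j q_{ji} for all i, j
flux = pi[:, None] * Q
print(np.allclose(flux, flux.T))   # True: birth-and-death chains are detailed balanced
```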
Badal Joshi. A detailed balanced reaction network is sufficient but not necessary for its Markov chain to be detailed balanced. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1077-1105. doi: 10.3934/dcdsb.2015.20.1077.
Codimension 3 B-T bifurcations in an epidemic model with a nonlinear incidence
Chengzhi Li, Jianquan Li and Zhien Ma
It was shown in [11] that in an epidemic model with a nonlinear incidence and two compartments some complex dynamics can appear, such as the backward bifurcation, codimension 1 Hopf bifurcation and codimension 2 Bogdanov-Takens bifurcation. In this paper we prove that for the same model the codimension of the Bogdanov-Takens bifurcation can be 3 and is at most 3. Hence, more complex new phenomena appear, such as codimension 2 Hopf bifurcation, codimension 2 homoclinic bifurcation and semi-stable limit cycle bifurcation. In particular, the system can have, and has at most, 2 limit cycles near the positive singularity.
Chengzhi Li, Jianquan Li, Zhien Ma. Codimension 3 B-T bifurcations in an epidemic model with a nonlinear incidence. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1107-1116. doi: 10.3934/dcdsb.2015.20.1107.
Dynamics of the density dependent and nonautonomous predator-prey system with Beddington-DeAngelis functional response
Haiyin Li and Yasuhiro Takeuchi
We investigate the dynamics of a non-autonomous and density dependent predator-prey system with Beddington-DeAngelis functional response, where not only the prey density dependence but also the predator density dependence are considered. First, we derive a sufficient condition for permanence by the comparison theorem, and at the same time we propose a weaker condition ensuring some positive bounded set to be positively invariant. Next, we obtain two existence conditions for a positive periodic solution by the Brouwer fixed-point theorem and by the continuation theorem, where the second condition is weaker than the first and gives the existence range of the periodic solution. Further we show the global attractivity of the bounded positive solution by constructing a Lyapunov function. Similarly, we obtain a sufficient condition for the global attractivity of the boundary periodic solution.
Haiyin Li, Yasuhiro Takeuchi. Dynamics of the density dependent and nonautonomous predator-prey system with Beddington-DeAngelis functional response. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1117-1134. doi: 10.3934/dcdsb.2015.20.1117.
Global dynamics and traveling wave solutions of two predators-one prey models
Jian-Jhong Lin, Weiming Wang, Caidi Zhao and Ting-Hui Yang
In this work, we consider an ecological system of three species of two predators-one prey type, without or with diffusion. For the system without diffusion, i.e. a system of three ODEs, we clarify the global dynamics of all equilibria and find an exact condition to guarantee the existence and global asymptotic stability of the positive equilibrium. When the corresponding condition does not hold, however, the prey becomes extinct due to over-exploitation. On the other hand, for the system with diffusion, using the cross iteration method we find the minimum speed $c_*$. The existence of traveling wave fronts connecting the trivial solution and the coexistence state is verified under some sufficient conditions if the wave speed is larger than $c_*$, and we also prove the nonexistence of such solutions if the wave speed is less than $c_*$. Finally, numerical simulations of the system without or with diffusion are implemented and the biological meanings are discussed.
Jian-Jhong Lin, Weiming Wang, Caidi Zhao, Ting-Hui Yang. Global dynamics and traveling wave solutions of two predators-one prey models. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1135-1154. doi: 10.3934/dcdsb.2015.20.1135.
Quasi-periodic motions in a special class of dynamical equations with dissipative effects: A pair of detection methods
Ugo Locatelli and Letizia Stefanelli
We consider a particular class of equations of motion, generalizing to $n$ degrees of freedom the ``dissipative spin--orbit problem'', commonly studied in Celestial Mechanics. Those equations are formulated in a pseudo-Hamiltonian framework with action-angle coordinates; they contain a quasi-integrable conservative part and friction terms, assumed to be linear and isotropic with respect to the action variables. In such a context, we transfer two methods determining quasi-periodic solutions, which were originally designed to analyze purely Hamiltonian quasi-integrable problems.
First, we show how the frequency map analysis can be adapted to this kind of dissipative models. Our approach is based on a key remark: the method can work as usual, by studying the behavior of the angular velocities of the motions as a function of the so called ``external frequencies'', instead of the actions.
Moreover, we explicitly implement the Kolmogorov's normalization algorithm for the dissipative systems considered here. In a previous article, we proved a theoretical result: such a constructing procedure is convergent under the hypotheses usually assumed in KAM theory. In the present work, we show that it can be translated to a code making algebraic manipulations on a computer, so to calculate effectively quasi-periodic solutions on invariant tori (and the attracting dynamics in their neighborhoods).
Both the methods are carefully tested, by checking that their predictions are in agreement, in the case of the so called ``dissipative forced pendulum''. Furthermore, the results obtained by applying our adaptation of the frequency analysis method to the dissipative standard map are compared with some existing ones in the literature.
Ugo Locatelli, Letizia Stefanelli. Quasi-periodic motions in a special class of dynamical equations with dissipative effects: A pair of detection methods. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1155-1187. doi: 10.3934/dcdsb.2015.20.1155.
A reduced-order SMFVE extrapolation algorithm based on POD technique and CN method for the non-stationary Navier-Stokes equations
Zhendong Luo
In this article, we employ proper orthogonal decomposition (POD) technique to establish a POD-based reduced-order stabilized mixed finite volume element (SMFVE) extrapolation algorithm based on two local Gaussian integrals, parameter-free, and Crank-Nicolson (CN) method with fewer degrees of freedom for the non-stationary Navier-Stokes equations. The error estimates between the POD-based reduced-order SMFVE solutions and the classical SMFVE solutions and the implementation for the POD-based reduced-order SMFVE extrapolation algorithm are provided. A numerical example is used to illustrate that the numerical results are consistent with theoretical conclusions. Moreover, it is shown that the POD-based reduced-order SMFVE extrapolation algorithm is feasible and efficient for finding numerical solutions for the non-stationary Navier-Stokes equations.
Zhendong Luo. A reduced-order SMFVE extrapolation algorithm based on POD technique and CN method for the non-stationary Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1189-1212. doi: 10.3934/dcdsb.2015.20.1189.
Pullback attractors for a class of nonlinear lattices with delays
Yejuan Wang and Kuang Bai
We consider a class of nonlinear delay lattices $$ \ddot{u}_i(t)+(-1)^p\triangle^pu_i(t)+\lambda u_i(t)+\dot{u}_i(t)=h_i(u_i(t-\rho(t)))+f_i(t),~~~i \in \mathbb{Z}, $$ where $\lambda$ is a real positive constant, $p$ is any positive integer and $\triangle$ is the discrete one-dimensional Laplace operator. Under suitable conditions on $h$ and $f$ we prove the existence of pullback attractors for the multi-valued process associated with the system for which the uniqueness of solutions need not hold.
Yejuan Wang, Kuang Bai. Pullback attractors for a class of nonlinear lattices with delays. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1213-1230. doi: 10.3934/dcdsb.2015.20.1213.
Boundary spikes of a Keller-Segel chemotaxis system with saturated logarithmic sensitivity
Qi Wang
In this paper, we study the nonconstant positive steady states of a Keller-Segel chemotaxis system over a bounded domain $\Omega\subset \mathbb{R}^N$, $N\geq 1$. The sensitivity function is chosen to be $\phi(v)=\ln (v+c)$, where $c$ is a positive constant. When the chemical diffusion rate is small, we construct positive solutions with a boundary spike supported on a platform. Moreover, this spike approaches the most curved part of the boundary of the domain as the chemical diffusion rate shrinks to zero. We also conduct extensive numerical simulations to illustrate the formation of stable boundary and interior spikes of the system. These spiky solutions can be used to model the self-organized cell aggregation phenomenon in chemotaxis.
Qi Wang. Boundary spikes of a Keller-Segel chemotaxis system with saturated logarithmic sensitivity. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1231-1250. doi: 10.3934/dcdsb.2015.20.1231.
Navier--Stokes equations on a rapidly rotating sphere
D. Wirosoetisno
We extend our earlier $\beta$-plane results [al-Jaboori and Wirosoetisno, 2011, DCDS-B 16:687--701] to a rotating sphere. Specifically, we show that the solution of the Navier--Stokes equations on a sphere rotating with angular velocity $1/\epsilon$ becomes zonal in the long time limit, in the sense that the non-zonal component of the energy becomes bounded by $\epsilon M$. Central to our proof is controlling the behaviour of the nonlinear term near resonances. We also show that the global attractor reduces to a single stable steady state when the rotation is fast enough.
D. Wirosoetisno. Navier--Stokes equations on a rapidly rotating sphere. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1251-1259. doi: 10.3934/dcdsb.2015.20.1251.
New results of the ultimate bound on the trajectories of the family of the Lorenz systems
Fuchen Zhang, Chunlai Mu, Shouming Zhou and Pan Zheng
In this paper, the global exponential attractive sets of a class of continuous-time dynamical systems defined by $\dot x = f(x)$, $x \in \mathbb{R}^3$, are studied. The elements of the main diagonal of the matrix $A$ are either negative or zero, where $A$ is the Jacobian matrix $\frac{df}{dx}$ of the system evaluated at the origin $x_0 = (0,0,0)$. The systems considered previously [1-6], for which a globally bounded region was sought, share a common characteristic: the elements of the main diagonal of the matrix $A$, the Jacobian $\frac{df}{dx}$ of a continuous-time dynamical system $\dot x = f(x)$, $x \in \mathbb{R}^n$, evaluated at the origin $x_0 = (0,0,\cdots,0)_{1\times n}$, are all negative. Because the diagonal elements of $A$ are both negative and zero for the class of dynamical systems studied here, the method for constructing Lyapunov functions that applied to the former systems does not work. We overcome this difficulty by adding a cross term $xy$ to the Lyapunov functions of this class of dynamical systems and obtain the desired bounds through a number of integral inequalities and generalized Lyapunov functions.
Fuchen Zhang, Chunlai Mu, Shouming Zhou, Pan Zheng. New results of the ultimate bound on the trajectories of the family of the Lorenz systems. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1261-1276. doi: 10.3934/dcdsb.2015.20.1261.
The threshold of a stochastic SIRS epidemic model in a population with varying size
Yanan Zhao, Daqing Jiang, Xuerong Mao and Alison Gray
In this paper, a stochastic susceptible-infected-removed-susceptible (SIRS) epidemic model in a population with varying size is discussed. A new threshold $\tilde{R}_0$ is identified which determines the outcome of the disease. When the noise is small, if $\tilde{R}_0<1$, the infected proportion of the population disappears, so the disease dies out, whereas if $\tilde{R}_0>1$, the infected proportion persists in the mean and we derive that the disease is endemic. Furthermore, when ${R}_0 > 1$ and subject to a condition on some of the model parameters, we show that the solution of the stochastic model oscillates around the endemic equilibrium of the corresponding deterministic system with threshold ${R}_0$, and the intensity of fluctuation is proportional to that of the white noise. On the other hand, when the noise is large, we find that a large noise intensity has the effect of suppressing the epidemic, so that it dies out. These results are illustrated by computer simulations.
Yanan Zhao, Daqing Jiang, Xuerong Mao, Alison Gray. The threshold of a stochastic SIRS epidemic model in a population with varying size. Discrete & Continuous Dynamical Systems - B, 2015, 20(4): 1277-1295. doi: 10.3934/dcdsb.2015.20.1277.
What is the reduced row echelon form of $A$?
Let $$A = \left( \begin{array}{cccc} 7 & 7 & 9 & -17\\ 6 & 6 & 1 & -2 \\ -12 & -12 & -27 & 1 \\ 7& 7 & 17 & -15\end{array} \right)$$
What is the reduced row echelon form of $A$?
What is the rank of $A$?
linear-algebra abstract-algebra matrices matrix-rank gaussian-elimination
$\begingroup$ click edit to see how to write matrix in a clear way $\endgroup$ – Petite Etincelle Aug 16 '14 at 8:38
$\begingroup$ Surely $rank A \leq 3$ because the first two columns are equal $\endgroup$ – Ella Smith Aug 16 '14 at 8:43
$\begingroup$ To help us help you: Do you know what reduced row echelon form and rank mean? Do you know what an elementary row operation is? $\endgroup$ – Rebecca J. Stones Aug 16 '14 at 8:59
$1.$ Multiply the first row by $\frac{1}{7}:$ $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 6 & 6 & 1 & -2 \\ -12 & -12 & -27 & 1 \\ 7 & 7 & 17 & -15 \end{pmatrix} $$
$2.$ Multiply the first row by $-6$ and add to the second row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & -\frac{47}{7} & \frac{88}{7} \\ -12 & -12 & -27 & 1 \\ 7 & 7 & 17 & -15 \end{pmatrix} $$
$3.$ Multiply the first row by $12$ and add to the third row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & -\frac{47}{7} & \frac{88}{7} \\ 0 & 0 & -\frac{81}{7} & -\frac{197}{7} \\ 7 & 7 & 17 & -15 \end{pmatrix} $$
$4.$ Multiply the first row by $-7$ and add to the fourth row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & -\frac{47}{7} & \frac{88}{7} \\ 0 & 0 & -\frac{81}{7} & -\frac{197}{7} \\ 0 & 0 & 8 & 2 \end{pmatrix} $$
$5.$ Multiply the second row by $-\frac{7}{47}$: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & -\frac{88}{47} \\ 0 & 0 & -\frac{81}{7} & -\frac{197}{7} \\ 0 & 0 & 8 & 2 \end{pmatrix} $$
$6.$ Multiply the second row by $\frac{81}{7}$ and add to the third row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & -\frac{88}{47} \\ 0 & 0 & 0 & -\frac{2341}{47} \\ 0 & 0 & 8 & 2 \end{pmatrix} $$
$7.$ Multiply the second row by $-8$ and add to the fourth row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & -\frac{88}{47} \\ 0 & 0 & 0 & -\frac{2341}{47} \\ 0 & 0 & 0 & \frac{798}{47} \end{pmatrix} $$
$8.$ Multiply the third row by $-\frac{47}{2341}$: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & -\frac{88}{47} \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & \frac{798}{47} \end{pmatrix} $$
$9.$ Multiply the third row by $-\frac{798}{47}$ and add to the fourth row: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & -\frac{88}{47} \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$
This is a row echelon form. For the reduced row echelon form:
$10.$ Multiply the third row by $\frac{88}{47}$ and add to the second row to get: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & -\frac{17}{7} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$
$11.$ Multiply the third row by $\frac{17}{7}$ and add it to the first row to get: $$ \begin{pmatrix} 1 & 1 & \frac{9}{7} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$
$12.$ Multiply the second row by $-\frac{9}{7}$ and add it to the first row to get: $$ \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$
This has rank $3$ (i.e., the number of non-zero rows).
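For readers who want to check the arithmetic, the reduction is easy to verify with a computer algebra system; the short sketch below (using sympy, with arbitrary variable names) reproduces the reduced row echelon form and the rank of the matrix from the question.

```python
from sympy import Matrix

# The matrix A from the question
A = Matrix([
    [  7,   7,   9, -17],
    [  6,   6,   1,  -2],
    [-12, -12, -27,   1],
    [  7,   7,  17, -15],
])

# rref() returns the reduced row echelon form together with the pivot columns
R, pivots = A.rref()
print(R)         # Matrix([[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(pivots)    # (0, 2, 3)
print(A.rank())  # 3
```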
THEORY DEPARTMENT
Research Codes
A peeling-ballooning eigenfunction calculated using M3D-C$^1$. The red and blue show the perturbed pressure, and the magenta curve the location of the last closed flux surface of the plasma equilibrium. The green triangles show the finite element mesh, which has resolution packed near the edge of the plasma to efficiently resolve the eigenfunction. The peeling-ballooning instability leads to Edge Localized Modes (ELMs) in tokamaks.
The following research codes are actively maintained:
XGC
M3D-C$^1$
COILOPT
STELLOPT
Gkeyll
HMHD
RBQ1D
DEGAS 2
XGC: the X-point Gyrokinetic Code for Transport in Tokamaks
Near the edge of tokamak plasmas, strong particle and energy sources and sinks (e.g. radiation, contact with a material wall, ...) drive the plasma away from thermal equilibrium, thus invalidating the assumptions underpinning fluid theories. The XGC gyrokinetic particle-in-cell (PIC) suite of codes (XGC1, XGCa and XGC0) was developed to provide a comprehensive description of kinetic transport phenomena in this complicated region, including heating and cooling, radiation losses, neutral particle recycling and impurity transport.
A particle's "state" is described by the position, ${\bf x}$, and velocity, ${\bf v}$; constituting a a six-dimensional phase space. Gyrokinetic codes average over the very-fast, "gyromotion" of charged particles in strong magnetic fields, ${\bf B}$, and phase space becomes five-dimensional, ${\bf X} \equiv ({\bf x},v_\parallel,\mu)$, where ${\bf x}$ is the position of the "guiding center", $v_\parallel$ is the velocity parallel to ${\bf B}$, and $\mu$ is the "magnetic moment". The density of particles is given by a distribution function, $f({\bf X},t)$; the evolution of which, including collisons, is formally described by the "Vlasov-Maxwell" equation: $f$ evolves along "characteristics", which are the dynamical trajectories of the guiding centers [1], \begin{eqnarray} \dot {\bf x} & = & \left[ v_\parallel {\bf b} + v_\parallel^2 \nabla B \times {\bf b} + {\bf B} \times ( \mu \nabla B - {\bf E}) / B^2 \right] / D,\\ \dot v_\parallel & = & - ( {\bf B} + v_\parallel \nabla B \times {\bf b} ) \cdot ( \mu \nabla B - {\bf E}), \end{eqnarray} where ${\bf E}$ is the electric field, ${\bf b}={\bf B}/|B|$, and $D \equiv 1 + v_\parallel {\bf b} \cdot \nabla \times {\bf b}/B$ ensures the conservation of phase-space volume (Liouville theorem). The non-thermal-equilibrium demands that gyrokinetic codes must evolve the full distribution function, by applying classical "full-f" [2,3] and noise-reducing, "total-f" techniques [4]. (This is in contrast to so-called "$\delta f$" methods, which evolve only a small perturbation to an assumed-static, usually-Maxwellian distribution.) As full-f codes, XGC can include heat and torque input, radiation cooling; and neutral particle recycling [5].
Multiple particle species (e.g. ions and electrons, ions and impurities) are included; and XGC uses a field-aligned, unstructured mesh in cylindrical coordinates, and so can easily accommodate the irregular magnetic fields in the plasma edge (e.g. the "X-point", "separatrix"). XGC calculates transport in the entire plasma volume; from the "closed-flux-surface", good-confinement region (near the magnetic axis) to the "scrape-off layer" (where magnetic fieldlines intersect the wall and confinement is lost). Collisions between ions, electrons and impurities are evaluated using either (i) a linear, Monte-Carlo operator [6] for test-particles, and the Hirshman-Sigmar operator [7] for field-particle collisions; or (ii) a fully-nonlinear, Fokker-Planck-Landau collision operator [8,9]. The XGC codes efficiently exploit massively-parallel computing architectures.
[1] Robert G. Littlejohn, Phys. Fluids 28, 2015 (1985)
[2] C.S. Chang, Seunghoe Ku & H. Weitzner, Phys. Plasmas 11, 2649 (2004)
[3] S. Ku, C.S. Chang & P.H. Diamond, Nucl. Fusion 49, 115021 (2009)
[4] S. Ku, R. Hager et al., J. Comp. Phys. 315, 467 (2016)
[5] D.P. Stotler, C.S. Chang et al., J. Nucl. Mater. 438 S1275 (2013)
[6] Allen H. Boozer & Gioietta Kuo‐Petravic, Phys. Fluids 24, 851 (1981)
[7] S.P. Hirshman & D.J. Sigmar, Nucl. Fusion 21, 1079 (1981)
[8] E. S. Yoon & C.S. Chang, Phys. Plasmas 21, 032503 (2014)
[9] Robert Hager, E.S. Yoon et al., J. Comp. Phys. 315 644 (2016)
[10] Robert Hager & C.S. Chang, Phys. Plasmas 23, 042503 (2016)
[11] D.J. Battaglia, K.H. Burrell et al., Phys. Plasmas 21, 072508 (2014)
[#h34: C-S. Chang, R. Hager, S-H. Ku, 2016-08-05]
Gkeyll: Energy-conserving, discontinuous, high-order discretizations for gyro-kinetic simulations
Discontinuities at cell boundaries are allowed and used to compute a "numerical flux" needed to update the solution. Shown is a DG fit (in the least-square sense) of $x^4+\sin(5x)$ onto constant (left), linear (middle) and quadratic (right) basis functions.
Fusion energy gain in tokamaks depends sensitively on the plasma edge [1,2]; but, because of open and closed fieldlines, interaction with divertor plates, neutral particles, and large electromagnetic fluctuations, the edge region is, understandably, difficult to treat analytically. Large-scale, kinetic, numerical simulations are required. The "Gkeyll" code [3] is a flexible, robust and powerful code that provides numerical calculations of gyrokinetic turbulence and, importantly, preserves the conservation laws of gyrokinetics.
Gyrokinetics [4] describes how a distribution of particles, described by a density-distribution function, $f(t,{\bf z})$, evolves in time, $t$; where ${\bf z}=({\bf x},{\bf v})$ describes the position and velocity, i.e. a point in "phase space", of the guiding-center. Elegant theoretical and numerical descriptions of this dynamical system exploit the "Hamiltonian properties", i.e. $\partial f / \partial t + \{f,H\} = 0$, where $H({\bf z})$ is the Hamiltonian (i.e. "energy", e.g. for a Vlasov system, $H(x, v, t)$ $=$ $mv^2/2$ $+$ $q \, \phi(x,t)$, where $\phi$ is the electro-static potential) and $\{g,f\}$ is the Poisson bracket operator [5]. For reliable simulations, numerical discretizations must preserve so-called "quadratic invariants", $\int H \{ f,H\} \, d{\bf z} $ $=$ $ \int f \{ f,H\} \, d{\bf z} = 0$.
Discontinuous Galerkin (DG) algorithms represent the state-of-the-art discretization of hyperbolic, partial-differential equations [6]. DG combines the key advantages of finite elements (e.g. low phase error, high accuracy, flexible geometries) with those of finite volumes (e.g. up-winding, guaranteed positivity/monotonicity, . . . ), and makes efficient use of parallel computing architectures. DG is inherently super-convergent: e.g., whereas finite-volume methods interpolate $p$ points to get $p$-th order accuracy, DG methods in contrast interpolate $p$ points to get $(2p-1)$-th order accuracy, a significant advantage for $p>1$! Use of DG schemes may lead to significant advances towards production-quality, edge gyrokinetic simulation software with reasonable computational costs.
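As a small illustration of the DG building block shown in the figure above, the sketch below performs the cell-by-cell least-squares (L2) projection of $x^4+\sin(5x)$ onto Legendre polynomials. The function names, the number of cells and the polynomial order are arbitrary choices, and a real DG code couples the cells through numerical fluxes rather than stopping at the projection.

```python
import numpy as np
from numpy.polynomial import legendre

def dg_project(f, a, b, n_cells, order):
    """L2-project f onto piecewise Legendre polynomials of the given order
    (0 = piecewise constant, 1 = linear, 2 = quadratic, ...).
    Returns the cell edges and per-cell coefficients in the reference variable xi in [-1, 1]."""
    edges = np.linspace(a, b, n_cells + 1)
    xi, w = legendre.leggauss(order + 2)               # Gauss-Legendre quadrature nodes/weights
    coeffs = np.zeros((n_cells, order + 1))
    for c in range(n_cells):
        xl, xr = edges[c], edges[c + 1]
        x = 0.5 * (xl + xr) + 0.5 * (xr - xl) * xi     # map quadrature nodes into the cell
        for k in range(order + 1):
            Pk = legendre.Legendre.basis(k)(xi)
            # Orthogonality of Legendre polynomials: integral of Pk^2 over [-1,1] is 2/(2k+1)
            coeffs[c, k] = np.sum(w * f(x) * Pk) * (2 * k + 1) / 2.0
    return edges, coeffs

f = lambda x: x**4 + np.sin(5 * x)
edges, coeffs = dg_project(f, -1.0, 1.0, n_cells=4, order=2)

# Evaluate the (discontinuous) DG representation at each cell midpoint.
for c in range(len(coeffs)):
    xm = 0.5 * (edges[c] + edges[c + 1])
    val = sum(coeffs[c, k] * legendre.Legendre.basis(k)(0.0) for k in range(coeffs.shape[1]))
    print(f"cell {c}: DG value {val:.4f}, exact {f(xm):.4f}")
```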
[1] A. Loarte, et al. Nucl. Fusion 47, S203 (2007)
[2] P.C. Stangeby, The Plasma Boundary of Magnetic Fusion Devices, Institute of Physics Publishing (2000).
[3] A. Hakim, 26th IAEA Fusion Energy Conference (2016)
[4] E.A. Frieman & Liu Chen, Phys. Fluids 25, 502 (1982)
[5] John R. Cary & Alain J. Brizard, Rev. Mod. Phys. 81, 693 (2009)
[6] Bernardo Cockburn & Chi-Wang Shu, J. Comput. Phys. 141, 199 (1998)
[#h44: A. Hakim, 2016-08-05]
NOVA and NOVA-K
Hot beam ion orbits in the Tokamak Fusion Test Reactor (TFTR) shot #103101 [2,4]. Shown (click to enlarge) are passing (a) and trapped (b) orbits represented in both $\psi,R$ and $Z,R$ planes at a time when TAEs were observed.
NOVA is a suite of codes including a linear ideal eigenmode solver that finds solutions of the ideal magnetohydrodynamic (MHD) system of equations [1], including such effects as plasma compressibility and realistic tokamak geometry. The kinetic post-processor of the suite, NOVA-K [2], analyses those solutions to find their stability properties. NOVA-K evaluates the wave-particle interaction of the eigenmodes, such as Toroidal Alfvén Eigenmodes (TAEs) or Reversed Shear Alfvén Eigenmodes (RSAEs), by employing the quadratic form with the perturbed distribution function coming from the drift kinetic equations [2,4]. The hot particle growth rate of an ideal MHD eigenmode is expressed via the equation \begin{eqnarray} \frac{\gamma_{h}}{\omega_{AE}}=\frac{\Im\delta W_{k}}{2\delta K}, \end{eqnarray} where $\omega_{AE}$ is the Alfvén eigenmode frequency, $\delta W_{k}$ is the potential energy of the mode, and $\delta K$ is the inertial energy. In computing the hot ion contribution to the growth rate, NOVA-K makes use of the constants of motion to represent the hot particle drift orbits. An example of such a representation is shown in the figure.
NOVA is routinely used for AE structure computations and comparisons with experimentally observed instabilities [5,6]. The main limitations of the NOVA code are caused by neglecting thermal ion finite Larmor radius (FLR) effects, toroidal rotation, and drift effects in the eigenmode computations; therefore it cannot, for example, accurately describe radiative damping. Finite element methods are used in the radial direction and Fourier harmonics are used in the poloidal and toroidal directions.
NOVA-K is able to predict various kinetic growth and damping rates perturbatively, such as the phase space gradient drive from energetic particles, continuum damping, radiative damping, ion/electron Landau damping and trapped electron collisional damping.
More information can be found on Dr. N. N. Gorelenkov's NOVA page.
[1] C. Z. Cheng, M. C. Chance, Phys. Fluids 29, 3695 (1986)
[2] C. Z. Cheng, Phys. Reports 211, 1 (1992)
[3] G. Y. Fu, C. Z. Cheng, K. L. Wong, Phys. Fluids B 5, 4040 (1993)
[4] N. N. Gorelenkov, C. Z. Cheng, G. Y. Fu, Phys. Plasmas 6, 2802 (1999)
[5] M. A. Van Zeeland, G. J. Kramer et al., Phys. Rev. Lett. 97, 135001 (2006)
[6] G. J. Kramer, R. Nazikian et al., Phys. Plasmas 13, 056104 (2006)
[#h45: N. N. Gorelenkov, 2018-07-05]
The guiding center code Orbit, using guiding center equations first derived in White and Chance [1] and more completely in The Theory of Toroidally Confined Plasmas [2], can use equal arc, PEST, or any other straight field line coordinates $\psi_p, \theta, \zeta$, which along with the parallel velocity and energy completely specify the particle position and velocity. The guiding center equations depend only on the magnitude of the $B$ field and functions $I$ and $g$ with $\vec{B} = g\nabla \zeta + I\nabla \theta + \delta \nabla \psi_p$. In simplest form, in an axisymmetric configuration, in straight field line coordinates, and without field perturbation or magnetic field ripple, the equations are \begin{eqnarray} \dot{\theta} = \frac{ \rho_{\parallel}B^2}{D} (1 - \rho_{\parallel}g^{\prime}) + \frac{g}{D}\left[ (\mu + \rho_{\parallel}^2B)\frac{\partial B}{\partial\psi_p} + \frac{\partial \Phi}{\partial\psi_p}\right], \label{tdot2} \end{eqnarray} \begin{eqnarray} \dot{\psi_p} = -\frac{g}{D}\left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\theta} + \frac{\partial \Phi}{\partial\theta}\right], \label{psdot2} \end{eqnarray} \begin{eqnarray} \dot{\rho_{\parallel}} = -\frac{(1 -\rho_{\parallel}g^{\prime})}{D} \left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\theta} + \frac{\partial\Phi}{\partial\theta}\right], \label{rdot2} \end{eqnarray} \begin{eqnarray} \dot{\zeta} = \frac{ \rho_{\parallel}B^2}{D} (q +\rho_{\parallel}I^{\prime}_{\psi_p}) - \frac{I}{D} \left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\psi_p} + \frac{\partial \Phi}{\partial\psi_p}\right], \label{zdot2} \end{eqnarray} with $ D = gq + I + \rho_\parallel(g I'_{\psi_p} - I g'_{\psi_p})$. The terms in $\partial \Phi/\partial\psi_p$, $\partial \Phi/\partial\theta$, $\partial \Phi/\partial\zeta$ are easily recognized as describing $\vec{E}\times \vec{B}$ drift.
Orbit can read numerical equilibria developed by TRANSP or other routines, particle distributions produced by TRANSP, and mode eigenfunctions produced by NOVA.
The code uses a fourth-order Runge-Kutta integration routine. It is divided into a main program, Orbit.F, which is essentially a heavily commented namelist and a set of switches for choosing the type of run, the diagnostics, the data storage, and the output.
The code has been used extensively since its inception at PPPL, General Atomics, RFX (Padova) and Kiev for the analysis of mode-induced particle loss for fishbones and TAE modes, induced ripple loss, the modification of particle distributions, local particle transport, etc.
To obtain a copy take all files from /u/ftp/pub/white/Orbit. To submit a job modify the script batch, modify Orbit.F to choose the run desired, type make, and type qsub -q sque batch
[1] R. B. White & M. S. Chance, Phys. Fluids 27, 2455 (1984)
[2] R. B. White, The Theory of Toroidally Confined Plasmas, Imperial College Press, 3rd ed. (2014)
[#h46: R. B. White, 2018-07-05]
GTS: Gyrokinetic Tokamak Simulation Code
Snapshot of a GTS ion-temperature-gradient instability simulation showing the field-line-following mesh and the quasi-2D structure of the electrostatic potential in the presence of microturbulence. Notice the fine structure on the two poloidal planes perpendicular to the toroidal direction, while the potential along the field lines changes very little as you go around the torus. (Image generated from a GTS simulation by Kwan-Liu Ma and his group at the University of California, Davis.)
The Gyrokinetic Tokamak Simulation (GTS) code is a full-geometry, massively parallel $\delta f$ particle-in-cell code [1,2] developed at PPPL to simulate plasma turbulence and transport in practical fusion experiments. The GTS code solves the modern gyrokinetic equation in conservative form [3]: \begin{equation} \frac{\partial f_a}{\partial t}+\frac{1}{B^{*}}\nabla_{\bf Z}\cdot (\dot{{\bf Z}}B^{*}f_a)=\sum\limits_{b} C[f_a, f_b]. \end{equation} for a gyro-center distribution function $f ({\bf Z}, t)$ in the 5-dimensional phase space ${\bf Z}$, along with the gyrokinetic Poisson equation and Ampere's law for the potentials, using a $\delta f$ method.
GTS robustly treats globally consistent, shaped cross-section tokamaks, in particular the highly challenging spherical tokamak geometries of NSTX and its upgrade NSTX-U. GTS simulations directly import plasma profiles of temperature, density and toroidal rotation from the TRANSP experimental database, along with the related numerical MHD equilibria, including perturbed equilibria. General magnetic coordinates and a field-line-following mesh are employed [1]. The particle gyro-center motion is calculated by Lagrangian equations in the flux coordinates, which allows for accurate particle orbit integration thanks to the separation between fast parallel motion and slow perpendicular drifts. The field-line-following mesh accounts for the nature of the quasi-2D mode structure of drift-wave turbulence in toroidal systems, and hence offers a highly efficient spatial resolution for strongly anisotropic fluctuations in fusion plasmas. Fully kinetic electron physics is included; in particular, both trapped and untrapped electrons are included in the non-adiabatic response. GTS solves the field equations in configuration space for the turbulence potentials using a finite element method on an unstructured mesh, which is carried out by PETSc. The real-space, global field solver, in principle, retains all toroidal modes from $(m,n) = (0,0)$ all the way up to a limit set by the grid resolution, and therefore retains full-channel nonlinear energy couplings.
One remarkable feature in GTS, which distinguishes it from the other $\delta f$ particle simulations, is that the weight equations satisfy the incompressibility condition in extended phase space $({\bf Z}, w)$ [4]. Satisfying the incompressibility is actually required in order to correctly solve the $\delta f$ kinetic equation using simulation markers whose distribution function $F({\bf Z}, w, t)$ is advanced along the marker trajectory in the extended phase space according to $F({\bf Z},w,t)=const.$.
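As a schematic illustration of the $\delta f$ idea (and not of GTS's actual deposition, which uses the field-line-following, unstructured mesh described above), the sketch below accumulates marker weights onto a simple 1D grid to estimate the perturbed density that would feed a field solve; the linear interpolation, the grid and the seeded weights are arbitrary choices.

```python
import numpy as np

def deposit_delta_n(x_markers, w_markers, grid, markers_per_cell):
    """Accumulate delta-f marker weights onto a 1D grid with linear
    (cloud-in-cell) interpolation to estimate the perturbed density delta-n."""
    dx = grid[1] - grid[0]
    delta_n = np.zeros_like(grid)
    for x, w in zip(x_markers, w_markers):
        i = int((x - grid[0]) // dx)              # index of the grid point to the left
        if 0 <= i < len(grid) - 1:
            frac = (x - grid[i]) / dx
            delta_n[i]     += w * (1.0 - frac)
            delta_n[i + 1] += w * frac
    return delta_n / markers_per_cell

# Toy usage: markers loaded uniformly, weights seeded with a long-wavelength perturbation.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 33)
x_m = rng.uniform(0.0, 1.0, 100_000)
w_m = 1e-3 * np.sin(2 * np.pi * x_m)              # stands in for evolved delta-f weights
dn = deposit_delta_n(x_m, w_m, grid, markers_per_cell=100_000 / 32)
print(dn)
```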
In GTS, Coulomb collisions between like particles are implemented via a linearized Fokker-Planck operator with particle, momentum and energy conservation. Electron-ion collisions are simulated by the Lorentz operator. By modeling the same gyrokinetic-Poisson system, GTS has been extended to performing global neoclassical simulations in addition to traditional turbulence simulations. More importantly, GTS is now able to perform global gyrokinetic simulations with self-consistent turbulence and neoclassical dynamics coupled together. This remarkable capability is shown to lead to significant new features regarding nonlinear turbulence dynamics, impacting a number of important transport issues in tokamak plasmas. In particular, this capability is critical for the proposed study of nonlinear NTM physics. For example, it allows one to calculate the bootstrap current in the presence of turbulence [5,6], which plays a key role in NTM evolution. Currently, GTS has been extended to simulating finite-beta physics. This includes low-n shear-Alfvén modes, current driven tearing modes, kinetic ballooning modes and micro-tearing modes.
A state-of-the-art electromagnetic algorithm has been developed and implemented into GTS [7,8] with the goal to achieve a robust, global electromagnetic simulation capability to attack the highly challenging electron transport problem in high-$\beta$ NSTX-U plasmas and be used as a first-principles-based module for integrated whole device modeling of turbulence/neoclassical/MHD physics.
On the physics front, GTS simulations have been applied to a wide range of experiments and problems. Recent applications include discovering new turbulence sources responsible for plasma transport and understanding the underlying physics behind the confinement scaling in spherical tokamak experiments [4,9], validating the physics of turbulence-driven plasma flow and first-principles-based model predictions of the intrinsic rotation profile against experiments [10], and plasma self-generated non-inductive current in turbulent fusion plasmas [5].
GTS has three levels of parallelism: a one-dimensional domain decomposition in the toroidal direction, dividing both the grid and the particles, a particle distribution within each domain, which further divides the particles between processors, and a loop-level multi-threading method. The domain decomposition and the particle distribution are implemented with MPI, while the loop-level multi-threading is implemented with OpenMP directives.
[1] W. X. Wang, Z. Lin et al., Phys. Plasmas 13, 092505 (2006)
[2] W. X. Wang, P. H. Diamond et al., Phys. Plasmas 17, 072511 (2010)
[3] A. J. Brizard & T. S. Hahm, Rev. Mod. Phys. 79, 421 (2007)
[4] W. X. Wang, S. Ethier et al., Phys. Plasmas 22, 102509 (2015)
[5] W. X. Wang et al., Proc. 24th IAEA Fusion Energy Conference, (2012), TH/P7-14
[6] T. S. Hahm, Nucl. Fusion 53 104026 (2013)
[7] E. A. Startsev & W. W. Lee, Phys. Plasmas 21, 022505 (2014)
[8] E. A. Startsev et al., Paper BM9.00002, APS-DPP Conference, San Jose, CA (2016)
[9] W. X. Wang, S. Ethier et al., Nucl. Fusion 55, 122001 (2015)
[10] B. A. Grierson, W. X. Wang et al., Phys. Rev. Lett. 118, 015002 (2017)
[#h47: W. Wang, 2018-07-11]
HMHD: A 3D Extended MHD Code
A snapshot of plasmoid-mediated turbulent reconnection simulation showing the parallel current density and samples of magnetic field lines.
HMHD is a massively parallel, general purpose 3D extended MHD code that solves the fluid equations for the particle density $n$ and momentum density $n\boldsymbol{u}$: \[ \partial_{t}n+\nabla\cdot\left(n\boldsymbol{u}\right)=0, \] \[ \partial_{t}\left(n\boldsymbol{u}\right)+\nabla\cdot\left(n\boldsymbol{uu}\right)=-\nabla\left(p_{i}+p_{e}\right)+\boldsymbol{J}\times\boldsymbol{B}-\nabla\cdot\boldsymbol{\Pi}+\boldsymbol{F}, \] where $\boldsymbol{J}=\nabla\times\boldsymbol{B}$ is the electric current, $p_{e}$ and $p_{i}$ are the electron and ion pressures, $\boldsymbol{\Pi}$ is the viscous stress tensor, and $\boldsymbol{F}$ is an external force. The magnetic field ${\bf B}$ is advanced via Faraday's law \[ \partial_{t}\boldsymbol{B}=-\nabla\times\boldsymbol{E}, \] where the electric field $\boldsymbol{E}$ is determined by a generalized Ohm's law that incorporates the Hall term and the electron pressure term in the following form: \[ \boldsymbol{E}=-\boldsymbol{u}\times\boldsymbol{B}+d_{i}\frac{\boldsymbol{J}\times\boldsymbol{B}-\nabla p_{e}}{n}+\eta\boldsymbol{J}, \] with $\eta$ the resistivity and $d_{i}$ the ion skin depth. The set of equations is completed by additional equations for the electron and ion pressures $p_{e}$ and $p_{i}$, for which several options are available. At the simplest level an isothermal equation of state $p_{i}=p_{e}=nT$ is assumed; at a more sophisticated level, the ion and electron pressures can be evolved individually with viscous and resistive heating, anisotropic thermal conduction, and thermal exchange between the two species; various additional options are available between these two levels. HMHD can flexibly switch on and off various effects in the governing equations.
HMHD uses a single Cartesian grid, with the capability of variable grid spacing. The numerical algorithm [1] approximates spatial derivatives by finite differences with a five-point stencil in each direction. The time-stepping scheme has several options, including a second-order accurate trapezoidal leapfrog method as well as three-stage or four-stage strong-stability-preserving Runge-Kutta methods [2,3]. HMHD is written in Fortran 90 and parallelized with MPI for domain decomposition, augmented with OpenMP for multi-threading in each domain. HMHD has been employed to carry out large-scale 2D simulations of plasmoid-mediated reconnection in resistive MHD [4,5,6] and Hall MHD [7,8]. It has also been used to carry out 3D self-generated turbulent reconnection simulations [9,10].
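To make the time-stepping option concrete, the fragment below applies a three-stage strong-stability-preserving Runge-Kutta step (Shu-Osher form, of the type described in [2,3]) to a generic semi-discrete system $du/dt = L(u)$; the toy operator here is a periodic advection term discretized with a five-point central stencil, echoing but not reproducing HMHD's actual right-hand sides.

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """Three-stage, third-order strong-stability-preserving Runge-Kutta step
    (Shu-Osher form) for the semi-discrete system du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

def advection_rhs(u, dx=1.0 / 128, c=1.0):
    """Toy spatial operator: du/dt = -c du/dx on a periodic grid, with a
    centered five-point (fourth-order) finite-difference stencil."""
    dudx = (-np.roll(u, -2) + 8 * np.roll(u, -1) - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)
    return -c * dudx

x = np.linspace(0.0, 1.0, 128, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(200):
    u = ssp_rk3_step(u, dt=2e-3, L=advection_rhs)
print(u[:4])
```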
[1] P. N. Guzdar, J. F. Drake et al., Phys. Fluids B 5, 3712 (1993)
[2] S. Gottlieb, C.-W. Shu & E. Tadmor, SIAM Review 43, 89 (2001)
[3] R. J. Spiteri & S. J. Ruuth, SIAM J. Numer. Anal. 40, 469 (2002)
[4] Y.-M. Huang & A. Bhattacharjee, Phys. Plasmas 17, 062104 (2010)
[5] Y.-M. Huang & A. Bhattacharjee, Phys. Rev. Lett. 109, 265002 (2012)
[6] Y.-M. Huang, L. Comisso & A. Bhattacharjee, Astrophys. J. 849, 75 (2017)
[7] Y.-M. Huang, A. Bhattacharjee & B. P. Sullivan, Phys. Plasmas 18, 072109 (2011)
[8] J. Ng, Y.-M. Huang et al., Phys. Plasmas 22, 112104 (2015)
[9] Y.-M. Huang & A. Bhattacharjee, Astrophys. J. 818, 20 (2016)
[10] D. Hu, A. Bhattacharjee & Y.-M. Huang, Phys. Plasmas 25, 062305 (2018)
[#h48: Y-M. Huang, 2018-07-11]
FOCUS: Flexible Optimized Coils Using Space curves
Modular coils for a conventional rotating elliptical stellarator. The color on the internal plasma boundary indicates the strength of mean curvature.
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. The construction of the coils is only one component of modern fusion experiments; but, realizing that it is the currents in the coils that produce the "magnetic bottle" that confines the plasma, it is easy to understand that designing and accurately constructing suitable coils is paramount.
Conventional approaches simplify this problem by assuming that coils are lying on a defined toroidal "winding" surface [1]. The Flexible Optimized Coils using Space Curves (FOCUS) code [2] represents coils as one-dimensional curves embedded in three-dimensional space. A curve is described directly, and completely generally, in Cartesian coordinates as ${\bf x}(t) = x(t) \, {\bf i} + y(t) \, {\bf j} + z(t) \, {\bf k}$. The coil parameters, ${\bf X}$, are then varied to minimize a target function consisting of multiple objective functions, \begin{equation} \chi^2({\bf X}) \equiv w_{B}\int_S \frac{1}{2} \left ( { \bf B} \cdot {\bf n} \right )^2 ds \ + \ w_{\Psi}\int_0^{2\pi} \frac{1}{2} \left( \Psi_\zeta \; - \; \Psi_o \right)^2 d\zeta \ + \ w_{L} \; \sum_{i=1}^{N_C} L_i + \cdots \end{equation} These objective functions cover both "physics" requirements and "engineering" constraints, such as the normal magnetic field, the toroidal flux, the resonant magnetic harmonics, coil length, coil-to-coil separation and coil-to-plasma separation.
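The first term of the target function requires the coil field ${\bf B}_C$ produced by each space curve, which follows from the Biot-Savart law. A minimal sketch of that building block is shown below for a coil represented as a closed sequence of points in Cartesian coordinates; the straight-segment discretization and the sample circular loop are illustrative assumptions, not FOCUS's internal representation.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart(points, current, x):
    """Magnetic field at position x from a closed coil discretized as a polygon
    of `points` (shape (N, 3)) carrying `current` amperes.  Each straight segment
    contributes mu0*I/(4*pi) * dl x r / |r|^3, with r taken from the segment midpoint."""
    B = np.zeros(3)
    for p0, p1 in zip(points, np.roll(points, -1, axis=0)):
        dl = p1 - p0
        r = x - 0.5 * (p0 + p1)
        B += np.cross(dl, r) / np.linalg.norm(r)**3
    return MU0 * current / (4.0 * np.pi) * B

# Sample coil: a unit-radius circular loop in the z = 0 plane.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
coil = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])

# Field at the loop center; compare with the textbook value mu0*I/(2*R).
print(biot_savart(coil, current=1.0e6, x=np.array([0.0, 0.0, 0.0])))
print(MU0 * 1.0e6 / 2.0)
```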
With analytically calculated derivatives, FOCUS computes the gradient and Hessian with respect to the coil parameters quickly and accurately. This allows FOCUS to employ numerous powerful optimization algorithms, such as the Newton method [3]. The figure (click to enlarge) shows FOCUS being used to design modular coils for a rotating elliptical stellarator. FOCUS has also been applied to analyzing the error-field sensitivity to coil deviations [4], varying the shape of the plasma surface in order to simplify the coil geometry [5], and designing non-axisymmetric RMP coils for tokamaks.
[1] P. Merkel, Nucl. Fusion, 27, 867 (1987)
[2] Caoxiang Zhu, Stuart R. Hudson et al., Nucl. Fusion, 58, 016008 (2017)
[3] Caoxiang Zhu, Stuart R. Hudson et al., Plasma Phys. Control. Fusion 60, 065008 (2018)
[5] S. R. Hudson, C. Zhu et al., Phys. Lett. A, in press
[#h49: C. Zhu, 祝曹祥, 2016-08-05]
(a) Mode evolution for different levels of collisionality, featuring intermittency and steady saturation within RBQ1D depending on the effective collisionality, and (b) scaling of the saturation amplitude as a function of collisionality within RBQ1D. Calculations are done for the global TAE shown in panel (c).
The interaction between fast ions and Alfvénic eigenmodes has proved numerically expensive to model in realistic large tokamak configurations such as ITER. This motivates exploring reduced, numerically efficient models such as the Resonance Broadened Quasilinear model, which is used to build the code in its one-dimensional version (RBQ1D). The code is capable of modeling the fast ion distribution function in the direction of the canonical toroidal momentum while self-consistently evolving the amplitudes of the interacting Alfvénic modes. The theoretical approach is based on the model proposed by Berk et al. [1] to address resonant energetic particle interaction in both the isolated and overlapping mode regimes. RBQ1D is written using the same structure as the conventional quasilinear equations for the fast ion distribution function, but with the resonant delta function broadened primarily in the radial direction. The model was designed to reproduce the saturation level expected from analytic theory for non-overlapping modes. In the model, the nonlinear trapping (bounce) frequency is the fundamental variable for the dynamics in the vicinity of a resonance. The diffusion equation of RBQ1D, derived using action and angle variables, is [2,3] \begin{eqnarray} \frac{\partial f}{\partial t}=\pi\sum_{l,M}\frac{\partial}{\partial P_{\varphi}}C_{M}^{2}\mathcal{E}^{2}\frac{G_{m^{\prime}l}^{*}G_{ml}}{\left|\partial\Omega_{l}/\partial P_{\varphi}\right|_{res}}\mathcal{F}_{lM}\frac{\partial}{\partial P_{\varphi}}f_{lM}+\nu_{eff}^{3}\sum_{l,M}\frac{\partial^{2}}{\partial P_{\varphi}^{2}}\left(f_{lM}-f_{0}\right), \end{eqnarray} and the amplitude evolution satisfies \begin{eqnarray} C_{M}\left(t\right)\sim e^{\left(\gamma_{L}+\gamma_{d}\right)t}\Rightarrow\frac{dC_{M}^{2}}{dt}=2\left(\gamma_{L}+\gamma_{d}\right)C_{M}^{2}. \end{eqnarray} RBQ1D employs a finite-difference scheme for the numerical integration of the distribution function. It recovers several scenarios of amplitude evolution, such as pulsations, intermittency and quasi-steady saturation; see figure (a). The code is interfaced with the linear ideal MHD code NOVA, which provides eigenstructures, and the stability code NOVA-K, which provides damping rates and wave-particle interaction matrices for resonances in the 3D constant-of-motion space. RBQ1D employs an iterative procedure to account for mode structure non-uniformities within the resonant island [4]. RBQ1D has been subject to rigorous verification exercises [4]. Both the wave-particle resonant diffusion operator and the Coulomb collisional scattering diffusion operator are thoroughly verified against analytical expressions in limiting cases. In addition, the dependence of the mode saturation level on the collisional scattering frequency is similar to that expected from analytical theory, as shown in figure (b).
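The structure of these two coupled equations can be illustrated with a deliberately crude, single-resonance toy (not the RBQ1D discretization, and with all coefficients chosen arbitrarily): the distribution diffuses near the resonance at a rate proportional to the mode intensity, while the mode intensity grows or decays according to the local gradient plus a fixed damping.

```python
import numpy as np

# Crude single-resonance quasilinear toy mirroring the structure of the equations
# above: df/dt = d/dP [ D(P) df/dP ] with D proportional to C^2 and localized around
# the resonance, and dC^2/dt = 2*(gamma_L + gamma_d)*C^2 with gamma_L set by the
# local gradient of f.  All numbers are arbitrary, non-physical choices.
P = np.linspace(0.0, 1.0, 101)
dP = P[1] - P[0]
f = 1.0 - P                                   # initial distribution with a driving gradient
C2 = 1.0e-6                                   # mode intensity |C|^2
P_res, width = 0.5, 0.05                      # resonance position and broadening width
window = np.exp(-0.5 * ((P - P_res) / width)**2)
gamma_d, dt = -0.5, 5.0e-4                    # fixed damping rate and time step

for _ in range(10_000):
    dfdP = np.gradient(f, dP)
    gamma_L = -2.0 * dfdP[np.argmin(np.abs(P - P_res))]   # drive from the local gradient
    C2 += dt * 2.0 * (gamma_L + gamma_d) * C2             # amplitude evolution
    D = C2 * window                                       # resonance-broadened diffusivity
    f += dt * np.gradient(D * dfdP, dP)                   # quasilinear diffusion step
    f[0], f[-1] = 1.0, 0.0                                # hold the boundary values

print("mode intensity after relaxation:", C2)
```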
[1] H. L. Berk, B. N. Breizman et al., Nucl. Fusion 35, 1661 (1995)
[2] V. N. Duarte, Ph.D. Thesis, U. São Paulo (2017)
[3] N. Gorelenkov, V. Duarte et al., Nucl. Fusion 58, 082016 (2018)
[4] N. N. Gorelenkov, Invited Talk, APS-DPP (2018)
[#h50: V. Duarte, 2018-09-25]
HYbrid and MHD simulation code (HYM)
(a-b) Representative co-helicity spheromak merging simulation results, magnetic field lines are shown; final configuration corresponds to a 3D Taylor eigenstate. (c-d) Hybrid simulations of FRC spin-up and instability for non-symmetric end-shortening boundary conditions, magnetic field lines and plasma density are shown.
The nonlinear 3-D simulation code (HYM) has been originally developed at PPPL to carry out investigations of the macroscopic stability properties of FRCs [1,2]. The HYM code has also been used to study spheromak merging [3], and the excitation of sub-cyclotron frequency waves by the beam ions in the National Spherical Torus Experiment (NSTX) [4,5]. In the HYM code, three different physical models have been implemented: (a) a 3-D nonlinear resistive MHD or Hall-MHD model; (b) a 3-D nonlinear hybrid scheme with fluid electrons and particle ions; and (c) a 3-D nonlinear hybrid MHD/particle model where a fluid description is used to represent the thermal background plasma, and a kinetic (particle) description is used for the low-density energetic beam ions. The nonlinear delta-f method has been implemented in order to reduce numerical noise in the particle simulations. The capabilities of the HYM code also include an option to switch from the delta-f method to the regular particle-in-cell (PIC) method in the highly nonlinear regime. An MPI-based, parallel version of the HYM code has been developed to run on distributed-memory parallel computers. For production-size MHD runs, very good parallel scaling has been obtained for up to 1000 processors at the NERSC Computer Center. The HYM code has been validated against FRC experiments [6], SSX spheromak merging experiments [3], and NSTX and NSTX-U experimental results related to stability of sub-cyclotron frequency Alfven eigenmodes [4,5].
The HYM code is unique in that it employs the delta-f particle simulation method and a full-ion-orbit description in a toroidal geometry. A second-order, time-centered, explicit scheme is used for time stepping, with smaller time steps for the field equations (subcycling). Fourth-order finite differences in cylindrical geometry are used to advance the fields and apply boundary conditions, while a 3D Cartesian grid is used for the particle pushing and gathering of fast ion density and current density. Typically, the total energy is conserved within 10% of the wave energy, provided that the numerical resolution is sufficient for the mode of interest. Both 3D hybrid simulations of spheromak merging and 3D simulations of the FRC compression require use of a full-f simulation scheme, and therefore a large number of simulation particles. The HYM code has been modified in order to allow simulations with up to several billion simulation particles.
The initial equilibrium used in the HYM code is calculated using a Grad-Shafranov solver. The equilibrium solver allows the computation of MHD equilibria including the effects of toroidal flows [1]. In addition, the MPI version of the Grad-Shafranov solver has been developed for kinetic equilibria with a non-Maxwellian and anisotropic ion distribution function [7].
The ability to choose between different physical models implemented in the HYM code facilitates the study of a variety of physical effects for a wide range of magnetic configurations. Thus, numerical simulations have been performed for both oblate and prolate field-reversed configurations, with elongation in the range E=0.5-12, in both kinetic and MHD-like regimes; in support of co-helicity and counter-helicity spheromak merging experiments; for rotating magnetic field (RMF) FRC studies; and for investigation of the effects of neutral beam injection (NBI) ions on FRC stability. In addition, hybrid simulations using the HYM code predicted, for the first time, the destabilization of Global Alfven Eigenmodes (GAEs) by energetic beam ions in the National Spherical Torus Experiment (NSTX) [8], subsequently confirmed both by analytical calculations and by experimental observations, and suggested a new energy channeling mechanism explaining the flattening of electron temperature profiles at high beam power in NSTX [4].
[1] E. V. Belova, S. C. Jardin et al., Phys. Plasmas 7, 4996 (2000)
[2] E. V. Belova, R. C. Davidson et al., Phys. Plasmas 11, 2523 (2004)
[3] C. Myers, E. V. Belova et al., Phys. Plasmas 18, 112512 (2011)
[4] E. V. Belova, N. N. Gorelenkov et al., Phys. Rev. Lett. 115, 015001 (2015)
[5] E.D. Fredrickson, E. Belova et al., Phys. Rev. Lett. 118, 265001 (2017)
[6] S. P. Gerhardt, E. Belova et al., Phys. Plasmas 13, 112508 (2006)
[7] E. V. Belova, N. N. Gorelenkov & C.Z. Cheng, Phys. Plasmas 10, 3240 (2003)
[8] E. V. Belova, N. N. Gorelenkov et al., "Numerical Study of Instabilities Driven by Energetic Neutral Beam Ions in NSTX", Proceedings of the 30th EPS Conference on Contr. Fusion and Plasma Phys., (2003) ECA Vol. 27A, P-3.102.
[#h51: E. Belova, 2016-08-05]
Slices through the three-dimensional atomic (vertical) and molecular (horizontal) deuterium profiles in a simulation of data from the NSTX Edge Neutral Density Diagnostic.
Neutral atoms and molecules in fusion plasmas are of interest for multiple reasons. First, neutral particles are produced via the interactions of the plasma as it flows along open field lines to surrounding material surfaces. Unconfined by the magnetic field, the atoms and molecules provide a channel for heat transport across the field lines and also serve as a source of plasma particles via ionization. Second, atoms that penetrate well past the last closed flux surface can charge exchange with plasma ions to generate high energy neutrals that can strike the vacuum vessel wall, sputtering impurities into the plasma and possibly damaging the wall. Third, the most common means of fueling plasmas is with an external puff of gas. Finally, the light emitted by the neutral atoms and molecules in all of the above processes can be monitored and used as the basis for diagnostics.
Kinetic models of neutral particle transport are based on the Boltzmann equation. For the simple case of a single ``background'' species and a single binary collision process, this is: \begin{eqnarray*} \frac{\partial f({\bf r}, {\bf v}, t)}{\partial t} & + & {\bf v} \cdot \nabla_{\bf r} f({\bf r}, {\bf v}, t) \\ & = & \int d{\bf v}^{\prime} \, d{\bf V}^{\prime} \, d{\bf V} \sigma( {\bf v}^{\prime}, {\bf V}^{\prime}; {\bf v}, {\bf V}) | {\bf v}^{\prime} - {\bf V}^{\prime} | f({\bf v}^{\prime}) f_{b}({\bf V}^{\prime}) \\ & - & \int d{\bf v}^{\prime} \, d{\bf V}^{\prime} \, d{\bf V} \sigma( {\bf v}, {\bf V}; {\bf v}^{\prime}, {\bf V^{\prime}}) | {\bf v} - {\bf V} | f({\bf v}) f_{b}({\bf V}), \label{BE} \end{eqnarray*} where $f({\bf r}, {\bf v}, t)$ and $f_{b}({\bf r}, {\bf V}, t)$ are the neutral and background distribution functions, respectively, and $\sigma$ is the differential cross section for the collision process. The first (second) integral on the right-hand side represents scattering into (out of) the velocity $v$.
DEGAS 2 [1], like its predecessor, DEGAS [2], uses the Monte Carlo approach to integrating the Boltzmann equation, allowing the treatment of complex geometries, atomic physics, and wall interactions. DEGAS 2 is written in a ``macro-enhanced'' version of FORTRAN via the FWEB library, providing an object-oriented capability and simplifying tedious tasks, such as dynamic memory allocation and the reading and writing of self-describing binary files. As a result, the code is extremely flexible and can be readily adapted to problems seemingly far removed from tokamak divertor physics, e.g., its use in simulating the diffusive evaporation of lithium in NSTX [3] and LTX [4]. DEGAS 2 has been extensively verified, as is documented in its User's Manual [5]. Experimental validation has been largely centered on the Gas Puff Imaging (GPI) technique for visualizing plasma turbulence in the tokamak edge. The validation against deuterium gas puff data from NSTX is described in the paper by B. Cao et al. [6]. Analogous work with both deuterium and helium has been carried out on Alcator C-Mod. A related application of DEGAS 2 is in the interpretation of data from the Edge Neutral Density Diagnostic on NSTX [7] and NSTX-U. DEGAS 2 has been applied to many other devices, including JT-60U [8], ADITYA [9], and FRC experiments at Tri-Alpha Energy [10]. Neutral transport codes are frequently coupled to plasma simulation codes to allow a self-consistent plasma-neutral solution to be computed. Initially, DEGAS 2 was coupled to UEDGE [11]. More recently, DEGAS 2 has been coupled to the drift-kinetic XGC0 [12], and has been used in the development and testing of the simplified built-in neutral transport module in XGC1 [13]. A related project is a DEGAS 2-based synthetic GPI diagnostic for XGC1 [14].
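To give a flavor of the Monte Carlo approach (this is a pedagogical sketch, not DEGAS 2's geometry, sampling or collision handling), the fragment below follows neutral atoms launched from a wall into a 1D slab plasma and terminates each history at ionization or escape; the density profile, the constant rate coefficient, the mono-energetic neutral speed and the sub-step method are all illustrative assumptions, and processes such as charge exchange and wall reflection are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_e(x):
    """Toy electron density profile [m^-3], rising from the wall (x = 0) into the plasma."""
    return 1e19 * (0.1 + 0.9 * np.clip(x / 0.05, 0.0, 1.0))

RATE_COEFF = 3e-14      # assumed constant ionization rate coefficient <sigma*v> [m^3/s]
V_NEUTRAL = 2.0e4       # assumed mono-energetic neutral speed [m/s]
L = 0.10                # slab depth [m]
N_HIST = 20_000         # number of Monte Carlo histories
DT = 1.0e-7             # sub-step, small compared with 1/(n_e * RATE_COEFF)
depths = []

for _ in range(N_HIST):
    x = 0.0
    mu = rng.uniform(0.0, 1.0)                       # direction cosine into the plasma
    while 0.0 <= x <= L:                             # follow the atom until ionization or escape
        if rng.random() < n_e(x) * RATE_COEFF * DT:  # local ionization test on this sub-step
            depths.append(x)                         # ionized here: a plasma particle source point
            break
        x += mu * V_NEUTRAL * DT                     # free flight during the sub-step

print("ionized fraction:", len(depths) / N_HIST)
print("mean ionization depth [m]:", np.mean(depths))
```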
[1] D. P. Stotler & C. F. F. Karney, Contrib. Plasma Phys. 34, 392 (1994)
[2] D. Heifetz, D. Post et al., J. Comp. Phys. 46, 309 (1982)
[3] D. P. Stotler et al., J. Nucl. Mater. 415, S1058 (2011)
[4] J. C. Schmitt et al., J. Nucl. Mater. 438, S1096 (2013)
[5] DEGAS 2 Home Page
[6] B. Cao et al., Fusion Sci. Tech. 64, 29 (2013)
[7] D. P. Stotler et al., Phys. Plasmas 22, 082506 (2015)
[8] H. Takenaga et al., Nucl. Fusion 41, 1777 (2001)
[9] R. Dey et al., Nucl. Fusion 57, 086003 (2017)
[10] E. M. Granstedt et al., Presented at 60th Annual Meeting of the APS Division of Plasma Physics
[11] D. P. Stotler et al., Contrib. Plasma Phys. 40, 221 (2000)
[12] D. P. Stotler et al., Comput. Sci. Disc. 6, 015006 (2013)
[13] D. P. Stotler et al., Nucl. Fusion 57, 086028 (2017)
[14] D. P. Stotler et al., Nucl. Mater. Energy 19, 113 (2019)
[#h52: D. Stotler, 2016-08-05]
M3D-C$^1$ Extended MHD code
The accurate calculation of the equilibrium, stability and dynamical evolution of magnetically-confined plasma is fundamental for fusion research. The most suitable, macroscopic model to address some of the most critical challenges confronting tokamak plasmas is given by the extended-magnetohydrodynamic (MHD) equations, which describe plasmas as electrically conducting fluids of ions and electrons. The M3D-C$^1$ code [?] solves the fluid equations: for example, the "single-fluid" model, in which the ions and electrons are considered to have the same fluid velocity and temperature, the dynamical equations for the particle number density $n$, the fluid velocity $\vec{u}$, the total pressure $p$ are \begin{eqnarray} \frac{\partial n}{\partial t} + \nabla \cdot (n \vec{u}) & = & 0 \\ n m_i \left( \frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot \nabla \vec{u} \right) & = & \vec{J} \times \vec{B} - \nabla p \color{blue}{ - \nabla \cdot \Pi + \vec{F}} \\ \frac{\partial p}{\partial t} + \vec{u} \cdot \nabla p + \Gamma p \nabla \cdot \vec{u} & = & \color{blue}{(\Gamma - 1) \left[Q - \nabla \cdot \vec{q} + \eta J^2 - \vec{u} \cdot \vec{F} - \Pi : \nabla u \right]} \end{eqnarray} together with a "generalized Ohm's law", $\vec{E} = - \vec{u} \times \vec{B} \color{blue}{ + \eta \vec{J}}$; and with a reduced set of Maxwell's equations for the electrical-current density, $\vec{J} = \nabla \times \vec{B} / \mu_0$, and for the time evolution of the magnetic field, $\partial_t \vec{B} = - \nabla \times \vec{E}$. The manifestly divergence-free magnetic field, $\vec{B}$, and the fluid velocity, $\vec{u}$, are represented using stream functions and potentials, $\vec{B} = \nabla \psi \times \nabla \varphi + F \nabla \varphi$ and $\vec{u} = R^2 \nabla U \times \nabla \varphi + R^2 \omega \nabla \varphi + R^{-2} \nabla_\perp \chi$.
The "ideal-MHD" model is described by the above equations with the terms in blue set to zero. M3D-C$^1$ is also capable of computing the "two-fluid model", which accommodates differences in the ion and electron fluid velocities: the generalized Ohm's law is augmented with $\color{red}{( \vec{J} \times \vec{B} - \nabla p_e - \nabla \cdot \Pi_e + \vec{F}_e ) / n_e}$, additional terms must be added to $\partial_t p$, and an equation for the "electron pressure", $p_e$, must be included.
Physically-meaningful, "reduced" models provide accurate solutions in certain physical limits and are obtained, at a fraction of the computational cost, by restricting the scalar fields that are evolved: e.g., the two-field, reduced model is obtained by only evolving $\psi$ and $U$, and the "four-field", reduced model by only evolving $\psi$, $U$, $F$, and $\omega$.
To obtain accurate solutions efficiently, over a broad range of temporal and spatial scales, M3D-C$^1$ employs advanced numerical methods, such as: high-order, $C^1$, finite elements, an unstructured geometrical mesh, fully-implicit and semi-implicit time integration, physics-based preconditioning, domain-decomposition parallelization, and the use of scalable, parallel, sparse, linear algebra solvers.
M3D-C$^1$ is used to model numerous tokamak phenomena, including: edge localized modes (ELMs) [1]; sawtooth cycles [2]; and vertical displacement events (VDEs), resistive wall modes (RWMs), and perturbed (i.e. three-dimensional, 3D) equilibria [3].
[1] N.M. Ferraro, S.C. Jardin & P.B. Snyder, Phys. Plasmas 17, 102508 (2010)
[2] S.C. Jardin, N. Ferraro et al., Com. Sci. Disc. 5, 014002 (2012)
[3] N.M. Ferraro, T.E. Evans et al., Nucl. Fusion 53, 073042 (2013)
[#h4: N. Ferraro, 2016-05-23]
Stepped Pressure Equilibrium Code (SPEC)
Building on the theoretical foundations of Bruno & Laurence [1], that three-dimensional, (3D) magnetohydrodynamic (MHD) equilibria with "stepped"-pressure profiles are well-defined and guaranteed to exist, whereas 3D equilibria with integrable magnetic-fields and smooth pressure (or with non-integrable fields and continuous-but-fractal pressure) are pathological [2], Dr. S.R. Hudson wrote the "Stepped Pressure Equilibrium Code", (SPEC) [3]. SPEC finds minimal-plasma-energy states, subject to the constraints of conserved helicity and fluxes in a collection, $i=1, N_V$, of nested sub-volumes, ${\cal R}_i$, by extremizing the multi-region, relaxed-MHD (MRxMHD), energy functional, ${\cal F}$, introduced by Dr. M.J. Hole, Dr. S.R. Hudson & Prof. R.L. Dewar [4,5], \begin{eqnarray} {\cal F} \equiv \sum_{i=1}^{N_V} \left\{ \int_{{\cal R}_i} \! \left( \frac{p}{\gamma-1} + \frac{B^2}{2} \right) dv - \frac{\mu_i}{2} \left( \int_{{\cal R}_i} \!\! {\bf A}\cdot{\bf B} \, dv - H_i \right) \right\}. \end{eqnarray} Relaxation is allowed in each ${\cal R}_i$, so unphysical, parallel currents are avoided and magnetic reconnection is allowed; and the ideal-MHD constraints are enforced at selected "ideal interfaces", ${\cal I}_i$, on which ${\bf B}\cdot{\bf n}=0$. The Euler-Lagrange equations derived from $\delta {\cal F}=0$ are: $\nabla \times {\bf B} = \mu_i {\bf B}$ in each ${\cal R}_i$; and continuity of total pressure, $[[p+B^2/2]]=0$, across each ${\cal I}_i$, so that non-trivial, stepped pressure profiles may be sustained. If $N_V=1$, MRxMHD equilibria reduce to so-called "Taylor states" [6]; and as $N_V \rightarrow \infty$, MRxMHD equilibria approach ideal-MHD equilibria [7]. Discontinuous solutions are admitted. The figure (click to enlarge) shows an $N_V=4$ equilibrium with magnetic islands, chaotic fieldlines and a non-trivial pressure.
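The Euler-Lagrange condition $\nabla \times {\bf B} = \mu_i {\bf B}$ states that each sub-volume relaxes to a linear force-free (Beltrami) field. As a quick, self-contained sanity check of what such a field looks like, the sympy fragment below verifies that the standard ABC field in Cartesian coordinates satisfies the Beltrami condition with $\mu = 1$ and is divergence-free; this is a textbook example, unrelated to SPEC's toroidal geometry or numerics.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A, B, C = sp.symbols('A B C')                 # arbitrary constant amplitudes

# Arnold-Beltrami-Childress (ABC) field: a classic solution of curl(B) = B.
Bfield = sp.Matrix([A*sp.sin(z) + C*sp.cos(y),
                    B*sp.sin(x) + A*sp.cos(z),
                    C*sp.sin(y) + B*sp.cos(x)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

divergence = sp.diff(Bfield[0], x) + sp.diff(Bfield[1], y) + sp.diff(Bfield[2], z)

print((curl(Bfield) - Bfield).applyfunc(sp.simplify))   # zero vector: Beltrami with mu = 1
print(sp.simplify(divergence))                          # 0: manifestly divergence-free
```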
[1] Oscar P. Bruno & Peter Laurence, Commun. Pure Appl. Math. 49, 717 (1996)
[2] H. Grad, Phys. Fluids 10, 137 (1967)
[3] S.R. Hudson, R.L. Dewar et al., Phys. Plasmas 19, 112502 (2012)
[4] M.J. Hole, S.R. Hudson & R.L. Dewar, J. Plas. Physics 72, 1167 (2006)
[5] S.R. Hudson, M.J. Hole & R.L. Dewar, Phys. Plasmas 14, 052505 (2007)
[6] J.B. Taylor, Rev. Modern Phys. 58, 741 (1986)
[7] G.R. Dennis, S.R. Hudson et al., Phys. Plasmas 20, 032509 (2013)
[#h5: S.R. Hudson, 2016-05-23]
Designing Stellarator Coils with the COILOPT++, Coil-Optimization Code
Concept design for a quasi-axisymmetric stellarator fusion reactor with the modular coils straightened and spaced for ease of access. (Figure courtesy of Tom Brown.)
Electrical currents flowing inside magnetically-confined plasmas cannot produce the magnetic field required for the confinement of the plasma itself. Quite aside from understanding the physics of plasmas, the task of designing external, current-carrying coils that produce the confining magnetic field, ${\bf B}_C$, remains a fundamental problem; particularly for the geometrically-complicated, non-axisymmetric, "stellarator" class of confinement device [1]. The coils are subject to severe, engineering constraints: the coils must be "buildable", and at a reasonable cost; and the coils must be supported against the forces they exert upon each other. For reactor maintenance, the coils must allow access to internal structures, such as the vacuum vessel, and must allow room for diagnostics; many, closely-packed coils that give precise control over the external magnetic field might not be satisfactory.
The COILOPT code [2] and its successor, COILOPT++ [3], vary the geometrical degrees-of-freedom, ${\bf x}$, of a discrete set of coils to minimize a physics+engineering, "cost-function", $\chi({\bf x})$, defined as the surface integral over a given, "target", plasma boundary, ${\cal S}$, of the squared normal component of the "error" field, \begin{eqnarray} \chi^2 \equiv \oint_{\cal S} \left( \delta {\bf B} \cdot {\bf n} \right)^2 da + \mbox{engineering constraints}, \end{eqnarray} where $\delta {\bf B} \equiv {\bf B}_C - {\bf B}_P$ is the difference between the externally-produced magnetic field (as computed using the Biot-Savart law [5]) and the magnetic field, ${\bf B}_P$, produced by the plasma currents (determined by an equilibrium calculation). Using mathematical optimization algorithms, by finding an ${\bf x}$ that minimizes $\chi^2$ we find a coil configuration that minimizes the total, normal magnetic field at the boundary; thereby producing a "good flux surface".
A new design methodology [3] has been developed for "modular" coils for so-called "quasi-axisymmetric" stellarators (stellarators for which the field strength appears axisymmetric in appropriate coordinates), leading to coils that are both simpler to construct and that allow easier access [4]. By straightening the coils on the outboard side of the device, additional space is created for insertion and removal of toroidal vessel segments, blanket modules, and so forth. COILOPT++ employs a "spline+straight" representation, with fast, parallel optimization algorithms (e.g. the Levenberg-Marquardt algorithm [6], differential evolution [7], ...), to quickly generate coil designs that produce the target equilibrium. COILOPT++ has allowed design of modular coils for a moderate-aspect-ratio, quasi-axisymmetric, stellarator reactor configuration, an example of which is shown in the figure (click to enlarge).
[1] Lyman Spitzer Jr., Phys. Fluids 1, 253 (1958)
[2] Dennis J. Strickler, Lee A. Berry & Steven P. Hirshman, Fusion Sci. & Technol. 41, 107 (2002)
[3] J.A. Breslau, N. Pomphrey et al., in preparation (2016)
[4] George H. Neilson, David A. Gates et al., IEEE Trans. Plasma Sci. 42, 489 (2014)
[5] Wikipedia, Biot-Savart Law
[6] Wikipedia, Levenberg–Marquardt Algorithm
[7] Wikipedia, Differential Evolution
[#h12: J. Breslau, 2016-05-23]
STELLOPT: Stellarator Optimization and Equilibrium Reconstruction Code
Design of the National Compact Stellarator eXperiment (NCSX) optimized by STELLOPT. The shape of three-dimensional nested flux surfaces is optimized to have good MHD stability, plasma confinement and turbulence transport. External non-planar coils with relatively simple geometries, large coil-plasma space and coil-coil separation are also obtained.
One of the defining characteristics of the "stellarator" class of magnetic confinement device [1] is that the confining magnetic field is, for the most part, generated by external, current-carrying coils; stellarators are consequently more stable than their axisymmetric, "tokamak" cousins, for which an essential component of the confining field is produced by internal plasma currents. Stellarators have historically had degraded confinement, as compared to tokamaks, and require more-complicated, "three-dimensional" geometry; however, by exploiting the three-dimensional shaping of the plasma boundary (which in turn determines the global, plasma equilibrium), stellarators may be designed to provide optimized plasma confinement. This is certainly easier said-than-done: the plasma equilibrium is a nonlinear function of the boundary, and the particle and heat transport are nonlinear functions of the plasma equilibrium!
STELLOPT [2,3] is a versatile optimization code that constructs suitable magnetohydrodynamic equilibrium states via minimization of a "cost function", $\chi^2$, that quantifies how "attractive" an equilibrium is; for example, $\chi^2$ quantifies the stability of the plasma to small perturbations (including "ballooning" stability and "kink" stability) and the particle and heat transport (including "neoclassical" transport, "turbulent" transport and "energetic particle" confinement). The independent degrees-of-freedom, ${\bf x}$, describe the plasma boundary, which is input for the VMEC equilibrium code [4]. The boundary shape is varied (using a Levenberg-Marquardt [5], Differential-Evolution [6] or Particle-Swarm [7] algorithm) to find minima of $\chi^2$, thus constructing an optimal plasma equilibrium with both satisfactory stability and confinement properties.
A recent extension of STELLOPT enables "equilibrium reconstruction" [8,9]. By minimizing $\chi^2({\bf x})$ $\equiv$ $\sum_{i} [ y_i - f_i({\bf x}) ]^2 / \sigma_i^2$, where the $y_i$ and the $f_i({\bf x})$ are, respectively, the experimental measurements and calculated "synthetic diagnostics" (i.e. numerical calculations that mimic signals measured by magnetic sensors) of Thomson scattering, charge exchange, interferometry, Faraday rotation, motional Stark effect, and electron-cyclotron emission reflectometry, to name just a few, STELLOPT solves the "inverse" problem of inferring the plasma state given the experimental measurements. (The $\sigma_i$ are user-adjustable "weights", which may reflect experimental uncertainties.) Equilibrium reconstruction is invaluable for understanding the properties of confined plasmas in present-day experimental devices, such as the DIII-D device [10].
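As a rough illustration of this weighted least-squares formulation, the sketch below fits a toy "synthetic diagnostic" forward model with a Levenberg-Marquardt solver; the forward model and parameter names are invented stand-ins, not STELLOPT's.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative stand-in for the synthetic diagnostics: a forward model f(x)
# mapping two "equilibrium parameters" x to predicted measurements.
def synthetic_diagnostics(x, diag_positions):
    amplitude, width = x
    return amplitude * np.exp(-(diag_positions / width) ** 2)

def weighted_residuals(x, y_obs, sigma, diag_positions):
    # chi^2(x) = sum_i [y_i - f_i(x)]^2 / sigma_i^2 is the sum of these squared
    return (y_obs - synthetic_diagnostics(x, diag_positions)) / sigma

rng = np.random.default_rng(0)
positions = np.linspace(-1.0, 1.0, 25)
true_x = np.array([2.0, 0.5])
sigma = 0.05 * np.ones_like(positions)
y_obs = synthetic_diagnostics(true_x, positions) + sigma * rng.standard_normal(positions.size)

fit = least_squares(weighted_residuals, x0=[1.0, 1.0], method="lm",
                    args=(y_obs, sigma, positions))
print("reconstructed parameters:", fit.x)
```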
[2] S.P. Hirshman, D.A. Spong et al., Phys. Rev. Lett. 80, 528 (1998)
[3] D.A. Spong, S.P. Hirshman et al., Nucl. Fusion 40, 563 (2000)
[4] S.P. Hirshman, J.C. Whitson, Phys. Fluids 26, 3553 (1983)
[5] Donald W. Marquardt, SIAM J. Appl. Math. 11, 431 (1963)
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Reading, Addison-Wesley, 1989)
[7] Riccardo Poli, James Kennedy & Tim Blackwell, Swarm Intell. 1, 33 (2007)
[8] S. Lazerson & I.T. Chapman, Plasma Phys. Control. Fusion 55, 084004 (2013)
[9] J.C. Schmitt, J. Bialek et al., Rev. Sci. Instrum. 85, 11E817 (2014)
[10] S. Lazerson and the DIII-D Team, Nucl. Fusion 55, 1 (2015)
[#h35: S. Lazerson, 2016-08-18]
Calculate Tangent Points to Circle
Given a circle with radius $r = 2$ and center $C = (4,2)$ and a point $P = (-1,2)$ outside the circle.
How do I calculate the coordinates of two tangent points to the circle, given that the tangents both have to go through $P$?
My (sad) try
All I can think of is finding the equation for the circle, which is
\begin{equation} (x-4)^{2} + (y-2)^{2} = 4. \end{equation}
I have no idea what to do next. (I don't even know if finding the circle's equation is relevant.)
After using Dhanush Krishna's answer, I can (easily) find the two intersection points:
\begin{equation} (x_{1,2}, y_{1,2}) = \frac{2}{5}(8, 5\pm\sqrt{21}). \end{equation}
Svend Tveskæg
$\begingroup$ Do you know calculus and are you allowed to use it on this problem? $\endgroup$ – lordoftheshadows Mar 25 '16 at 2:43
$\begingroup$ Yes and yes. :-) It's just a problem for myself -- not a school assignment or anything like that. P.S. (I'll go to sleep now and come back tomorrow.) $\endgroup$ – Svend Tveskæg Mar 25 '16 at 2:46
$\begingroup$ I don't know whether this can be done by calculus, as it gets pretty ugly pretty fast. Nevertheless I'd like to see someone give a simple analytic approach to this problem. $\endgroup$ – Airdish Mar 25 '16 at 3:09
Take the equation of the tangent to be $(y-2)=m(x+1)$
This touches the circle. Therefore the distance of this line from the centre of the circle is equal to the radius of the circle.
$${|5m|\over \sqrt{1+m^2}} = 2$$
Square and rearrange: $21m^2 = 4$, so $|m| = \dfrac{2}{\sqrt{21}}$.
Now you know the line. Find the point of intersection of this line with the circle.
Another solution uses the geometry of these curves. The point $P$ lies on the horizontal line through the centre of the circle. You know the distance between the point and the centre: it is $5$. You know the radius ($2$). The angle between the radius to the tangent point and the line joining the centre and the point $P$ can then be found. Use this, together with the centre of the circle, to find the coordinates.
The upper tangent point is $(4-2\cos\theta,\, 2+2\sin\theta)$, where $\theta$ is the angle mentioned above. By the way, $\cos\theta = 2/5$.
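A quick numeric check of this construction (an illustrative sketch, not part of the original answer):

```python
import math

C = (4.0, 2.0); P = (-1.0, 2.0); r = 2.0
d = math.dist(C, P)                  # distance centre-to-point: 5
cos_t = r / d                        # cos(theta) = 2/5
sin_t = math.sqrt(1.0 - cos_t**2)    # sqrt(21)/5

for sign in (+1, -1):
    x = C[0] - r * cos_t
    y = C[1] + sign * r * sin_t
    # both points lie on the circle and PT is perpendicular to the radius CT
    on_circle = math.isclose((x - C[0])**2 + (y - C[1])**2, r**2)
    perp = math.isclose((x - P[0])*(x - C[0]) + (y - P[1])*(y - C[1]), 0.0, abs_tol=1e-9)
    print((x, y), on_circle, perp)
```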
Dhanush Krishna
$\begingroup$ Couldn't sleep ... If I use $\cos(\theta) = 2/5$ I get $(4-2\cos(\theta),2+2\sin(\theta)) \approx (3.2, 2.9)$. That doesn't lie on the circle $\endgroup$ – Svend Tveskæg Mar 25 '16 at 3:50
$\begingroup$ @SvendTveskæg: Do not substitute the value, you may make calculation error. Substitute it directly in your circle equation. $4\cos^2(\theta)+4\sin^2(\theta)=4$ $\endgroup$ – Dhanush Krishna Mar 25 '16 at 3:54
There is another way to find the answer that may strike you as funny. Use homogeneous coordinates for points in the plane, so the exterior point is located at $P = (-1,2,1)$.
The homogeneous coordinates of a circle located at $(c_x,c_y)$ with radius $r$ are
$$ C = \begin{bmatrix} 1 & 0 & -c_x \\ 0 & 1 & -c_y \\ -c_x & -c_y & c_x^2+c_y^2-r^2 \end{bmatrix} $$
such that the expression $$\begin{pmatrix} x & y & 1 \end{pmatrix} \begin{bmatrix} 1 & 0 & -c_x \\ 0 & 1 & -c_y \\ -c_x & -c_y & c_x^2+c_y^2-r^2 \end{bmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = 0 $$ simplifies to the standard equation for a circle
$$ (x-c_x)^2 + (y-c_y)^2 - r^2 = 0 $$
Ok, back to the problem. If you take the product $L=C P$, then $L$ is a line with coordinates $L=[a,b,c]$ such as $a x+b y + c =0$ is the equation of the line. This line connects the two tangent points.
In your case
$$ C = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & -2 \\ -4 & -2 & 16 \end{bmatrix} $$
$$ L = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & -2 \\ -4 & -2 & 16 \end{bmatrix} \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix} = \begin{bmatrix} -5 \\ 0 \\ 16 \end{bmatrix} $$
or $$ (x,\,y,\,1) \cdot [-5,\,0,\,16] = 0 \;\Longrightarrow\; 16-5x = 0 \;\Longrightarrow\; x = \frac{16}{5} $$
Plug the line equation in the circle equation to get the other coordinate
$$ \left( \frac{16}{5} \right)^2 - 8 \left( \frac{16}{5} \right) + y^2 - 4y + 16 = 0 \;\Longrightarrow\; y = 2 \pm \frac{2\sqrt{21}}{5} $$
The coordinates are thus
$$ \begin{pmatrix} \frac{16}{5} \\ 2 - \frac{2\sqrt{21}}{5} \end{pmatrix} \mbox{ or } \begin{pmatrix} \frac{16}{5} \\ 2 + \frac{2\sqrt{21}}{5} \end{pmatrix}$$
NOTE: The relationship between the point $P$ and the line $L$ is that of pole and polar
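For completeness, a small numpy transcription of the pole-polar computation above (illustrative only):

```python
import numpy as np

cx, cy, r = 4.0, 2.0, 2.0
C = np.array([[1.0, 0.0, -cx],
              [0.0, 1.0, -cy],
              [-cx, -cy, cx**2 + cy**2 - r**2]])
P = np.array([-1.0, 2.0, 1.0])     # exterior point in homogeneous coordinates

L = C @ P                          # polar line [a, b, c]: a x + b y + c = 0
print("polar line:", L)            # -> [-5, 0, 16], i.e. x = 16/5

# Intersect x = 16/5 with the circle to recover the two tangent points.
x = -L[2] / L[0]
disc = np.sqrt(r**2 - (x - cx)**2)
for y in (cy - disc, cy + disc):
    print((x, y))
```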
ja72
Hint: Simple geometry tells us that the distance of those points from $P$ would be $\sqrt{5^2-2^2}=\sqrt{21}$ units. So, just build a circle with radius of that length from with $P$ at the centre and calculate the intersection points of the two circles. And you're done.
SinTan1729
Why is the tension on both sides of an Atwood machine identical?
The field forces $F_{g1}$ and $F_{g2}$ push down on Block 1 and Block 2, respectively, where $$F_{g1}=m_1g$$$$F_{g2}=m_2g$$ Since the pulley system reverses the direction of each force, wouldn't the following be true? $$T_1 = F_{g2} = m_2g$$$$T_2 = F_{g1} = m_1g$$ And since $m_1 \neq m_2$, wouldn't $T_1 \neq T_2$?
My textbook states that tension is the same throughout the whole string, but I can't wrap my head around why this is so. If $m_1 \neq m_2$, how could the same force $T$ accelerate both of them an equal amount? Wouldn't Block 2 require a greater force?
newtonian-mechanics forces string
John Hippisley
Tension in a rope is another word for the force the rope exerts.
Forces always occur in equal and opposite pairs. This is true for a rope. There are two ends. The ends pull two objects toward each other with equal force.
Instead of masses hanging from a pulley, suppose two massive people named 1 and 2, had a tug of war. Suppose 1 pulled harder than 2.
So if 1 is pulling harder on the rope than 2, why is the rope pulling equally hard on 1 and 2? Why doesn't the rope pull back with the same force as each person pulls?
That would happen if 1 and 2 were on opposite sides of a wall and pulled on separate ropes tied to the wall. The wall would pull back just hard enough to keep them both still. It would pull back just as hard as each pulled on the rope.
But in a tug of war, both 1 and 2 will move if 1 pulls harder. They will have the same velocity and acceleration in the direction 1 is pulling, and they will stay a constant distance apart.
Suppose 2 wasn't pulling. He was just standing on frictionless ice and letting 1 pull him with force $F_1$. What is the force in the rope? You can see the force is between $F_1$ and $0$.
It isn't $0$ on 2's end. The rope is pulling him, making him accelerate.
It isn't $0$ on 1's end. He is accelerating forward, but the rope is pulling him back. He would accelerate faster if not for the rope.
Likewise, you can see it isn't $F_1$ on 1's end. It isn't enough to hold 1 still.
It isn't $F_1$ on 2's end. 1 is accelerating a total mass of $m_1 + m_2$. The acceleration for both is $a_1 = a_2 = F_1/(m_1+m_2)$. This is smaller than if 1 stood still and pulled 2 toward him with force $F_1$. 2 would accelerate with $a_2 = F_1/m_2$.
So how big is $T_2 $, the force from the rope that accelerates 2?
$$T_2 = m_2a_2 = F_1 \frac{m_2}{m_1+m_2}$$
How big is $T_1$, the force from the rope that holds 1 back?
$$F_1 - T_1 = m_1a_1 = F_1\frac{m_1}{m_1+m_2}$$
$$T_1 = F_1 - F_1\frac{m_1}{m_1+m_2} = F_1\frac{m_2}{m_1+m_2} = T_2$$
You can do similar calculations when both 1 and 2 pull.
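For the Atwood machine itself, the same reasoning gives a single tension that lies between $m_2g$ and $m_1g$; a quick numeric check (illustrative masses only):

```python
# Ideal Atwood machine (massless string, massless frictionless pulley).
# Newton's second law for each block:
#   m1*g - T = m1*a   (heavier block, accelerating down)
#   T - m2*g = m2*a   (lighter block, accelerating up)
# Solving: a = (m1 - m2)*g / (m1 + m2) and T = 2*m1*m2*g / (m1 + m2).

g = 9.81
m1, m2 = 3.0, 2.0          # arbitrary illustrative masses, m1 > m2

a = (m1 - m2) * g / (m1 + m2)
T = 2.0 * m1 * m2 * g / (m1 + m2)

print(f"a = {a:.3f} m/s^2, T = {T:.3f} N")
print(f"m2*g = {m2*g:.3f} N  <  T  <  m1*g = {m1*g:.3f} N: {m2*g < T < m1*g}")
```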
mmesser314
Equality of the two tensions only holds in the special case of a massless wheel in the pulley. If you read forward in your textbook to the section which deals with massive pulleys, you will find that indeed these tensions should not generally be the same and it's only in the special case where the wheel is massless that they work out to be the same.
So in the end, I think you're right to be confused about this...the textbook hasn't yet given you all the tools needed to actually justify the assumption that the tensions work out to be equal.
(By the way, the section with massive pulleys is likely titled "torque" or something similar. If not, check for "torque" in the index at the back. The massive pulleys will be somewhere around there if the text is at all standard.)
Richard Myers
$\begingroup$ I guess I'm even more confused with the concept of tension itself. Even if I disregard the pulley and consider a horizontal string where one side is pulled to the left with $m_1g$ Newtons and the other side to the right with $m_2g$ Newtons, how is any point along the string "not moving", as I've heard in other explanations? The whole string is moving to the right, as that force is greater. $\endgroup$
– John Hippisley
$\begingroup$ Please see my answer regarding the assumption of a massless, perfectly rigid string. $\endgroup$
– John Darby
$\begingroup$ @JohnHippisley It is not that no point along the string is moving; it's that no point on the string is moving relative to any other point. This is the rigidity assumption John mentions. $\endgroup$
– Richard Myers
how could the same force T accelerate both of them an equal amount?
This part of your question holds the key - specifically the word acceleration.
Lets start with your second question first: Why are the accelerations the same? This is a result of the constraint that the string is a fixed length. As long as this is true then for each metre that the larger mass moves the smaller mass moves one metre in the same time. The distances travelled are the same in the same time so the speeds are the same and likewise the (magnitudes of the) accelerations.
What your question assumes, without explicitly stating, is that $T_1=m_1g$ and $T_2=m_2g$. There is no reason that this has to be true and if it were true then the forces on each mass would be balanced and they would not accelerate.
The tension will have a magnitude somewhere between the weights of the two masses so for the heavier mass the net force is down and for the lighter mass the net force is up.
M. Enns
$\begingroup$ Why is $T_1$ not equal to $m_2g$? Isn't the first mass pulled up by the force due to gravity of the second mass, since the pully reverses its direction? $\endgroup$
$\begingroup$ The size of the tension force on either side of the pulley is the same (ideally), but the size of the tension force pulling up on m1 is not equal to the force of gravity on m1 - if it were, m1 wouldn't accelerate. So if the amount of tension on the left side isn't equal to the weight of m1, and the tension is the same on both sides of the pulley, then it's not going to be equal to m1g on the side pulling up m2. $\endgroup$
– M. Enns
$\begingroup$ Is the force of the weight of the second mass pulling up on the first mass via the pulley not tension at all? $\endgroup$
In addition to the assumption of a massless wheel as Myers points out in his earlier answer, the analysis you refer to also assumes the string is massless and perfectly rigid (no stretching/compression). With this assumption, the tension at both ends of the string is the same. Specifically, T1 - T2 = ma where T1 and T2 are the tensions at each end of the string, m is the mass of the string, and a is the acceleration of the string. As m approaches zero, T2 approaches T1 even though the string is accelerating (a is not zero).
Also, when you later consider the mass of the pulley - in which case the tensions at the ends of the string are not the same - you will see the implicit assumption that the tensions in the $\textit{string}$ are the same as the forces of the string on the $\textit{pulley}$. This is true, but not at all obvious, as discussed in detail in the following reference. https://www.researchgate.net/publication/318107848_Force_and_torque_of_a_string_on_a_pulley.
You can search this site for detailed discussions of the case considering the mass of the pulley; e.g., search for "Atwood friction".
John Darby
$\begingroup$ So in this example, the forces pulling on both ends of the string are $m_1g$ ($T_2$) and $m_2g$ ($T_1$) in the opposite direction? $\endgroup$
$\begingroup$ The forces on the ends of the string are T1 and T2 in opposite directions; for a massless pulley these forces are equal, for a pulley with mass these forces are not equal. You can search this site and/or a good physics book to help you see how to calculate T1 and T2 in terms of $m_1g$, $m_2g$, and the mass of the pulley and its inertia. $\endgroup$
$\begingroup$ If the string is perfectly rigid every point has both $T_1$ and $T_2$ applied to it, but since it is also massless, wouldn't $T_1$ and $T_2$ cancel out, giving a net force of 0 for every point in the string? I don't understand how it can be accelerating if both the tensions are equal and opposite $\endgroup$
$\begingroup$ Furthermore, wouldn't the acceleration of the string be $\frac{T_1 - T_2}{0}$? $\endgroup$
$\begingroup$ Consider this simple case; string laid out horizontally, no pulley, no gravity. Let T1 be the tension on the left end and T2 the tension on the right end. From a force balance, the acceleration of the string to the right is ma = (T2 - T1) (a is negative if T1 exceeds T2) where m is the mass of the string. Since m is very small, T2 -T1 is very small no matter what a is. That is, $a = \lim_{m \to 0}[(T_2 - T_1)/m] = \lim_{m \to 0}[ma/m] = a$ $\endgroup$
Since the pully system reverses the direction of each force, wouldn't the following be true?
T1=Fg2=m2g
Technically you are correct. But a few changes have to be made to your statement.
The blocks are not at rest and both travel with acceleration $a$. We can assume that $M_1$ goes up and $M_2$ goes down. Since $M_1$ accelerates upward, its apparent weight increases ($W_{M_1}=M_1(g+a)$), while $M_2$ accelerates downward, so its apparent weight decreases ($W_{M_2}=M_2(g-a)$).
$\therefore T_1=W_{M_2}=M_2(g-a)$ and $T_2=W_{M_1}=M_1(g+a)$(modified version of your quoted equations).
Now, $T_1$ supports $W_{M_{1}}$,
$\therefore T_1=W_{M_{1}}=M_1(g+a)=T_2$
Alpha Delta
It's because the pulley in an Atwood machine is an ideal pulley: it has no mass and is frictionless. This means that the rope simply slips over the pulley freely without rotating it at all. In that case the rope is completely decoupled from the pulley and the tension is uniform throughout.
Note that frictionless here means no friction between the rope and the pulley.
Had it been a rolling pulley (one with friction along the rim so that the rope does not slip), the tensions on the two ends would generally differ, and it is that difference that helps it turn.
Everywhere in mechanics a frictionless pulley implies that it's a "slipping" pulley and not a rolling pulley (as we see in our daily life). These frictionless pulleys are kept only to change the direction of pull, keeping the tension in the string the same.
Rishab Navaneet
Probabilistic modelling of effects of antibiotics and calendar time on transmission of healthcare-associated infection
Mirjam Laager1,
Ben S. Cooper1,
David W. Eyre2 &
the CDC Modeling Infectious Diseases in Healthcare Program (MInD-Healthcare)
Scientific Reports volume 11, Article number: 21417 (2021) Cite this article
Healthcare-associated infection and antimicrobial resistance are major concerns. However, the extent to which antibiotic exposure affects transmission and detection of infections such as MRSA is unclear. Additionally, temporal trends are typically reported in terms of changes in incidence, rather than analysing underlying transmission processes. We present a data-augmented Markov chain Monte Carlo approach for inferring changing transmission parameters over time, screening test sensitivity, and the effect of antibiotics on detection and transmission. We expand a basic model to allow use of typing information when inferring sources of infections. Using simulated data, we show that the algorithms are accurate, well-calibrated and able to identify antibiotic effects in sufficiently large datasets. We apply the models to study MRSA transmission in an intensive care unit in Oxford, UK, with 7924 admissions over 10 years. We find that falls in MRSA incidence over time were associated with decreases in both the number of patients admitted to the ICU colonised with MRSA and in transmission rates. In our inference model, the data were not informative about the effect of antibiotics on risk of transmission or acquisition of MRSA, a consequence of the limited number of possible transmission events in the data. Our approach has potential to be applied to a range of healthcare-associated infections and settings and could be applied to study the impact of other potential risk factors for transmission. Evidence generated could be used to direct infection control interventions.
There is widespread concern that the rise of antimicrobial resistance (AMR) threatens the delivery of safe healthcare. This is particularly disconcerting as many healthcare associated infections are themselves antimicrobial resistant. For example, methicillin-resistant Staphylococcus aureus (MRSA) can be thought of as one of the original "superbugs", but despite marked falls in invasive infections in some settings, including the United Kingdom (UK)1, it remains a serious threat.
Efforts to control the spread and impact of AMR depend on understanding the transmission of resistant pathogens and the impact of antimicrobial use. The relationship between human antimicrobial use and population-level risk of AMR is well established, e.g.2. Similarly, increased individual use of antimicrobials is associated with increased personal risk of AMR including, e.g., AMR in nasally carried S. aureus3. Time series approaches have been used to show temporal relationships between several antimicrobial classes and MRSA incidence4,5. However, to date the specific impact of antimicrobial exposures on individual patient transmission dynamics within healthcare settings has not been explored.
Mathematical modelling can provide powerful tools for understanding the transmission of healthcare-associated infections and other drug-resistant pathogens, and in a number of cases has directly informed national control policies6,7,8, but wider application of models to inform AMR control policies requires an improved biological understanding of the key epidemiological and evolutionary processes9. Amongst the most important needs is for a quantitative understanding of how patient antimicrobial exposure selects for resistant organisms. Mechanistically, this involves understanding at an individual patient level the extent to which antimicrobials affect susceptibility to infection and whether they promote or inhibit onward transmission. Antimicrobials may also change detection of pathogens in screening or clinical testing. If this is not accounted for it can lead to erroneous conclusions; for example, pathogen colonisation temporarily suppressed at hospital admission by antimicrobials may falsely be attributed to healthcare-associated transmission if it becomes apparent later in a hospital admission when antimicrobials are stopped10.
Temporal trends in healthcare-associated infections are typically reported in terms of changes in incidence, rather than analysing underlying transmission processes. The marked fall in MRSA incidence in the UK followed the introduction of a bundle of infection control interventions and mandatory reporting of MRSA blood stream infections, and it is plausible that these measures contributed to the decline of MRSA1. However, it is also possible that at least part of the decline may have occurred in the absence of such interventions, perhaps driven by long-term ecological interactions between components of the nasopharyngeal flora11. Mathematical modelling offers the opportunity to study how specific processes have changed over time, for example how infectious a given patient on a hospital ward is. Better understanding the relative contribution of decreased transmission in hospitals versus decreased MRSA importation from the community may yield insights into the relative contribution of hospital infection control or ecological changes.
Here we present a statistical inference approach that allows transmission events to be reconstructed and the impact of antimicrobial exposures on acquisition, onward transmission and detection to be estimated. Our approach also accounts for changes over time in transmission and importation. We use a data-augmented Markov chain Monte Carlo (MCMC) method to allow for the fact we do not directly observe transmission, but instead a series of imperfectly sensitive screening results, alongside patients' hospital admission records. We apply our approach to study the transmission of MRSA in an adult intensive care unit (ICU) in Oxford, UK, between 2008 and 2017, allowing insights to be gained into the mechanisms behind the successful control of MRSA in the UK during this period. We additionally extend our models to allow discrete subtypes of MRSA to be modelled, e.g. based on molecular subtyping schemes such as spa typing or on the basis of antimicrobial susceptibility patterns. This additional information can be used to inform the likely sources of infection; we show antimicrobial susceptibility data can be used to refine transmission estimates.
We developed an approach to model transmission of MRSA in hospitals and the impact of antimicrobial exposures. Firstly, we conduct simulations to demonstrate our method is accurate and estimates of uncertainty are well calibrated. We then apply our model to an observed dataset from Oxford, UK. We model the transmission of MRSA within a hospital intensive care unit (ICU), regarding patients as either colonised (with probability \(\varphi\)) or susceptible at the point of admission to the ICU. Once admitted susceptible patients can become infected at a rate proportional to the number of colonised patients present in the ICU (\(\beta C\)). Patients are screened for MRSA at admission and then at regular intervals; the screening test is assumed to be imperfectly sensitive (with sensitivity \(\rho\)), i.e. false negative results can occur, but perfectly specific, i.e. there are no false positive results. The analysis is performed in discrete time using daily time steps reflecting the precision of some of the underlying data items. Exposure to antibiotics alters the rate of onward transmission from each colonised patient by a factor, \(\uptau\), susceptibility to colonisation in susceptible patients by a factor, \(\mathrm{\alpha }\), and test sensitivity by a factor, \(\updelta\).
Identifying the effect of antibiotics in simulated data
We use simulated data to show that given a large enough dataset our approach was able to independently and simultaneously identify all three effects of antibiotics on transmission, acquisition and detection (Fig. 1). We generated 10 datasets with 10,000 admissions during 2000 days and assumed the true effect of antibiotics on onward transmission, acquisition and detection to be relatively modest, i.e. \(\uptau = 1.2\), \(\mathrm{\alpha }= 1.3\) and \(\updelta = 1.1\) respectively. We randomly assigned antibiotic prescriptions to patients and as a simplifying assumption did not include any effects of antibiotics beyond the prescribed time window.
Posterior estimates of the effect of antibiotics on onward transmission, acquisition and detection in 10 simulated datasets. The true values are indicated by the solid horizontal lines. Each dataset was analysed with the model using positive and negative swabs only (blue circles) and using typing information (red squares, assuming 4 types were present at equal frequency). The dashed horizontal lines indicate 1 (i.e. no effect). HPDI, highest posterior density interval.
The models were also able to successfully recover the main model parameters (\(\varphi , \beta , \rho\)) (Supplementary Fig. S2) and the statuses of the patients (Supplementary Fig. S3). We also generated 40 datasets with time dependent transmission and importation and were able to recover increases as well as decreases in both transmission and importation (Supplementary Fig. S4).
The circulating MRSA was assumed to be from one of four sub-types present at equal frequency, and model inferences were generated with and without accounting for this typing data. Accounting for typing improves estimates of the importation parameter (Supplementary Fig. S2), but does not yield more precise estimates of the antibiotic effects (Fig. 1).
Estimates of the effect of antibiotics on detection had the narrowest credibility intervals, reflecting the fact that data from all patients with positive swabs, who may be tested multiple times, are informative here. There was also less uncertainty about the impact of antibiotics on acquisition than onward transmission, likely reflecting the increased uncertainty introduced by the multiple possible sources for many acquisition events.
We also analysed simulations including re-admissions of a subset of the same patients to the ICU. In the baseline model described above these re-admissions were artificially treated as new patients. This simplifying assumption avoided the need to track the colonisation status of patients between ICU admissions. We also developed an alternative formulation that tracked the probability of previously admitted patients being re-admitted colonised, described in the Supplementary Information. Model inferences were similar for both models (Supplementary Fig. S1), and as the baseline model was computationally simpler this was used for subsequent analyses. The proportion of patients readmitted to the ICU in our study dataset, described below, was relatively low (around 10%); using this proportion in simulations produced similar parameter estimates using both approaches, however in settings with higher re-admission rates the more complex model may be required.
MRSA transmission in Oxfordshire, UK
Having shown our model performed well on simulated data, we applied our model to analyse data from the Oxford University Hospitals NHS Foundation Trust, a 1200-bed teaching hospital group providing secondary care to the population of Oxfordshire, UK (approximately 600,000 people), and tertiary care services to the surrounding region. Data were available for all patients admitted to the combined medical and surgical adult ICU between June 2008 and November 2017 inclusive. Data items included ICU admission and discharge dates, MRSA screening swab results including antimicrobial susceptibility testing results for positive cultures, antimicrobial prescriptions, and patient factors including age, sex, Charlson comorbidity score, and admission specialty (Supplementary Table S1).
There were 7924 admissions to the ICU from 7138 patients. A total of 12,047 MRSA screens were performed in 6757 patients; 271 (2.2%) were MRSA-positive, in 179 different patients. An overview of the antimicrobial prescription data is shown in Supplementary Fig. S5; the most commonly prescribed antibiotics were co-amoxiclav, vancomycin, metronidazole, piperacillin-tazobactam and meropenem. Molecular typing or whole-genome sequencing was not routinely undertaken; however, routine antimicrobial susceptibility testing provides a proxy for isolate relatedness. In addition to confirming methicillin resistance, seven antibiotics (gentamicin, erythromycin, tetracycline, fusidic acid, ciprofloxacin, rifampicin and mupirocin) were consistently tested for during the whole study period and therefore used to establish resistance profiles. The number of tests per resistance profile and the distance between profiles over time for individual patients are shown in Supplementary Fig. S6. We found 31 different combinations of presence and absence of resistance with respect to these seven antibiotics. The maximum number of profiles in a single patient was 5. However, no patient had multiple different profiles during the same admission. Therefore, as repeat admissions were considered separately, when using typing data we considered transmission from one patient to another to be plausible only when both patients were colonized with strains with identical resistance profiles. We also performed analyses without typing data, which provides a sensitivity analysis if the assumption requiring matching antibiograms is overly restrictive. For 50 admissions with a positive test there was no profile, as recorded susceptibility data were not present for all seven antibiotics. These profiles were augmented in the same way that profiles for patients with no positive test were augmented, which is described in more detail in the Methods section.
Changes over time in MRSA transmission and importation to hospital
The number of positive tests per year in the ICU studied decreased over time from 48 in 2009 to 9 in 2016, while the diversity of the resistance profiles observed increased with time (Supplementary Fig. S7). While in the earlier years of the study, a majority of colonized patients had one of two resistance profiles (ciprofloxacin resistance only or ciprofloxacin and erythromycin resistance) all 8 colonized patients in 2017 had a different resistance profile. Provided transmission events associated with changes in antibiograms are uncommon, as suggested by the stability of antibiograms over time in individual patients (Supplementary Fig. S6), the diversity of resistance profiles seen at the end of the study suggests that towards the end of the study transmission was uncommon and importation was the most important driver of positive tests. We formally model this by allowing the transmission and importation parameters to vary as logistic functions of time. We find a strong decrease in both the transmission parameter and the importation probability over time (Fig. 2). In the model using typing data, the estimate of \(\beta\) falls from 0.0074 (95% highest posterior density interval, HPDI 0.0035, 0.0113) in 2008 to 0.0004 (1.9e–7, 0.0023) by 2017 (Fig. 2A). This is reflected in the number of acquisitions from early 2013 onwards being estimated as zero (Fig. 2C). The importation probability falls from 0.044 (0.034, 0.060) in 2008 to 0.012 (0.007, 0.018) in 2017 (Fig. 2B, D). The signal for these declines is captured by the slope of the function we use to model changes over time, which is − 0.0014 (− 0.0032, − 0.0002) and − 0.0007 (− 0.0009, − 0.0005) for transmission and importation respectively (Supplementary Fig. S8). Temporal trends in transmission and importation from the model without typing data were similar (Fig. 2, which also shows estimates from models without time-dependent transmission and importation).
Change in transmission and importation of MRSA in an Oxford ICU, 2008–2017: posterior estimates from four different models. The top panels show point estimates (solid lines) and highest posterior density intervals (ribbons) of the transmission (A) and importation (B) parameters from the models with constant transmission and importation without (grey) and with (orange) considering antibiograms as typing data, and the models with time dependent transmission and importation without (blue) and with (red) antibiograms. The bottom panels show the posterior mean estimates of the number of patients estimated to have been colonised due to acquisition (C) and importation (D). The models without typing data (grey and blue) estimated very similar numbers of acquisitions and importations.
Inferring effects of antibiotics on MRSA transmission and detection
We analysed the effects of antibiotics on MRSA transmission and detection in the ICU for 3 antibiotics/antibiotic groups: co-amoxiclav (the most commonly used), ciprofloxacin, and broad-spectrum beta-lactams without activity against MRSA (including co-amoxiclav, piperacillin/tazobactam, cefuroxime, ceftriaxone, ceftazidime, ceftolozane and meropenem). As a simplifying assumption, the effects were considered to last only while the patient was taking the antibiotics. The posterior estimates for the effect of antibiotics on transmission, acquisition and detection are shown in Fig. 3, using the basic model without time-dependent transmission or typing data. (The very similar posterior estimates from the model with typing data and the model with time dependent transmission and importation are shown in Supplementary Figs. S9–S11).
Effects of antibiotics on acquisition, onward transmission and MRSA detection in an Oxford ICU, 2008–2017. Posterior estimates are shown in black and prior distributions in grey. The plot is based on a model without time-dependent transmission or importation parameters or typing data. Alternative plots for these models are very similar and shown in Supplementary Figs. S7–S9.
There was a moderate effect of antibiotics on detection. Co-amoxiclav was associated with a 1.3-fold (95% HPDI 1.01, 1.57) increase in the relative probability of detection; however, there was no strong evidence that this was also the case for broad-spectrum beta-lactams as a group, 1.03-fold (95% HPDI 0.87, 1.26). The proportion of patients who were given an MRSA-active drug simultaneously with any broad-spectrum beta-lactam antibiotic or co-amoxiclav on the day of the test was 0.25 and 0.25 respectively. Conversely, ciprofloxacin was estimated to be associated with a 0.29-fold (95% HPDI 0.03, 1.04) reduction in test sensitivity. However, only a handful of tests were conducted while patients were exposed to ciprofloxacin, one positive test and an estimated four false negative tests, and the uncertainty was large. The true positive test was a result of colonisation with ciprofloxacin-resistant MRSA, and the inferred resistance profiles of the four false negative results also included ciprofloxacin resistance, so a reduction in sensitivity due to ciprofloxacin being active against MRSA is unlikely.
Posterior distributions of the effects of antibiotics on acquisition or onward transmission of MRSA were very similar to the priors, and for all three groups of antibiotics considered we are unable to rule out substantial effects in both directions (Fig. 3). For example, co-amoxiclav was estimated to be associated with a 0.88-fold reduction in acquisition (95% HPDI 0.22, 1.66) and a 1.20-fold increase in onward transmission (95% HPDI 0.22, 1.87) respectively. This lack of information about the effects of antibiotics on acquisition and transmission rates reflects the low number of possible transmission events in the data. Of the 179 patients who were tested positive, only 50 were not tested positive on admission, which is many fewer than the transmission events used to identify the antibiotic effects in simulated data.
Patient factors associated with acquisition and onward transmission
For each ICU patient we used the most probable MRSA colonisation status (susceptible, imported, acquired), i.e. the status with the highest posterior density, and performed logistic regression to investigate for any relationship between importation, acquisition or onward transmission and patient age, sex, Charlson comorbidity score, admission speciality group (acute medicine, specialist medicine, general surgery, trauma and orthopaedics [T&O], other). Controlling for all other factors, patients admitted under T&O were less likely to be admitted colonised than the acute medicine reference group (OR 0.52 [95%CI 0.28, 0.97]) and importation was associated with increased age (OR per 10 year increase 1.11 [1.01, 1.22]). There was no strong statistical evidence that any of the other variables studied were associated with a change in importation, acquisition and onward transmission. The results for all variables studied are reported in Supplementary Table S2.
We describe an individual-patient probabilistic method for simultaneously reconstructing transmission events, estimating parameters and quantifying important co-variates. We apply this to study healthcare-associated MRSA transmission, the impact of antibiotics and changes over time.
We make several key observations. Firstly, it is possible to independently recover the effects of antibiotics on acquisition, onward transmission and detection, even if these changes are relatively modest. However, this requires large datasets, e.g. in our simulations several thousand MRSA cases were needed, of which up to 80% were acquisitions, which facilitated estimation of acquisition/onward transmission effects. In many settings this is likely to exceed the number of cases present within a dataset, unless potential heterogeneity is introduced by pooling data across multiple institutions or over prolonged periods of time (which would need to be accounted for). In the dataset we study, there are an order of magnitude fewer cases. This suggests that to make reliable estimates of the effect of antibiotics on transmission we are likely to need to pool data from multiple wards or hospitals, taking care to appropriately account for heterogeneity. Here, we can only rule out very large effects, e.g. increases in acquisition or onward transmission of > 1.9-fold and > 1.7-fold respectively.
We find evidence that antibiotics can have contrasting associations on MRSA detection, with co-amoxiclav associated with enhanced detection and ciprofloxacin associated with reducing detection. These findings may reflect co-amoxiclav, without activity against MRSA, clearing other organisms present in the nose and allowing MRSA to proliferate. However, why the same effect was not seen with all broad-spectrum beta-lactams is unclear. Additionally, with much of the MRSA isolated resistant to ciprofloxacin it seems unlikely that the ciprofloxacin effect is causal unless in vitro test results are not reflecting in vivo activity. Antibiotic exposure transiently masking pre-existing carriage has been hypothesized to account for some apparent hospital acquisition on the basis of whole genome sequencing10, and these findings underscore the need to better understand the role of antibiotics on detection in transmission studies.
We find that importation of MRSA decreased from around 5% in 2008 to below 2% in 2018. This is consistent with estimates of importation found in other studies. A study of an ICU in London between 1995 and 1997 found an importation probability of 4.6%12 and another from a neonatal ward in Cambridge estimated importation at 1.2% in 201113. We explore the relative contribution of falls in transmission and importation to the decline of MRSA seen on our ICU, mirroring the decline in MRSA in the UK nationally. We find evidence of transmission rates falling from 2009 onwards. In fact, it is likely that transmission rates had been falling prior to our study, given the timing of key interventions locally and nationally earlier in the decade1. In the ICU studied, most infection control enhancements made in response to MRSA were already in place at the start of the study, including isolation and contact precautions for colonised patients, screening of all patients at ICU admission and at regular intervals, universal skin decolonisation with 2% chlorhexidine, antimicrobial-impregnated central lines with dedicated insertion packs and care standards, and root-cause analyses of all blood stream infections11.
Our study has several limitations. Most importantly, we have limited power to detect antibiotic effects in our observed data, which only contained 179 patients with MRSA isolated. In contrast, in simulations where we did detect antibiotic effects, several thousand individuals were simulated to be colonized on admission or acquire MRSA. The precise number of patients with MRSA needed to estimate the impact of antibiotics on transmission will depend on effect sizes, the proportion of MRSA acquired versus imported, the density of testing and any available subtyping or genomic data. However, in practice appropriately powered studies are likely to need thousands rather than hundreds of patients. This limitation is compounded by the fact that different patients receive different antibiotics, which may impact transmission in varying ways. Furthermore, rather than model re-admissions, which are uncommon for the ICU in our setting, we treat re-admissions as new patients. However, we have detailed an alternative approach to account for these, but it is more computationally intensive. Our model also treats the ICU in isolation from the rest of the hospital, as antimicrobial prescribing data were only available for the ICU. However, MRSA transmission dynamics within the hospital as a whole likely contributed to MRSA observed on the ICU. We use antibiograms as an example of a discrete typing scheme; however, this is a relatively crude typing scheme that may vary within a single infection14, although we did not see this in the current study. However, our approach could be extended to other typing schemes, such as spa-typing or multilocus sequence typing. We also did not study the effect of glycopeptides including vancomycin and teicoplanin directly. This was a prospectively made decision given concern about reverse causation whereby patients suspected (or known, from another hospital's data) to have MRSA colonisation may be given these antibiotics prior to a positive test, which would complicate the interpretation of the observed increase in test sensitivity associated with vancomycin administration (Supplementary Fig. S13).
We use data which are routinely collected in an ICU. This has the advantage that our model could potentially be applied to other, similar datasets. However, as with all such observational studies, we cannot make strong causal statements about antibiotic effects as antibiotics were not randomly assigned to patients. Our modelling approach expands existing methods by allowing the effects of covariates to be estimated. This could be further developed by capturing effects other than multiplicative ones and by extending the timespan of effects beyond the day of antibiotic exposure.
In conclusion, we present a method that can allow important risk factors for transmission and infection detection to be estimated. This provides a mechanism for undertaking rational infection control whereby not just who-infected-whom can be estimated, but also the conditions leading to transmission understood and combatted. We use the example of antibiotic use, but our method could also be applied to assess the impact of other factors, e.g. bed occupancy, hand hygiene compliance, staff absences, etc. It could also be used to assess the impact of specific interventions. Use of such data-driven approaches is likely to lead to better infection control in the future and in turn better outcomes for patients.
Transmission models and likelihood functions
We implement a stochastic mechanistic transmission model that builds on previous approaches12,13. The data augmentation approach used in these publications introduces additional parameters for the unknown status of each patient, which makes it possible to split the overall likelihood into the following product:
$$\Pi \left(\Omega ,\mathrm{W}|\uprho ,\varphi ,\upbeta \right)=\Pi \left(\Omega |\mathrm{W},\uprho \right)\cdot\Pi \left(\mathrm{W}|\varphi ,\upbeta \right)$$
where \(\Omega\) denotes the test results, W is the colonisation status of the patients and colonisation times, \(\rho\) is the test sensitivity, \(\varphi\) is the probability of a patient being positive on admission and \(\beta\) is a transmission parameter. The first factor of the product describes the observation process, which can be modelled as a binomial distribution. Assuming perfect specificity, the likelihood of the test results given the test sensitivity and the true statuses is therefore given by
$$\Pi \left(\Omega |\mathrm{W},\uprho \right)={\uprho }^{\mathrm{TP}}\cdot {\left(1-\uprho \right)}^{\mathrm{FN}}$$
where TP and FN denote the number of true positive and false negative tests. The second factor of the overall likelihood captures the transmission process. Following the approach taken in an existing model13 we assume that patients who acquire a colonisation stay colonised until discharge. The patient statuses, W, are therefore fully described by the colonisation times, \({\mathrm{t}}^{\mathrm{c}}\), which are set to infinity for patients who do not get colonised during their ward stay. By discretising time into days, the likelihood of the colonisation times given the transmission parameters can be modelled as
$$\Pi \left({\mathrm{t}}^{\mathrm{c}}|{\varphi },\upbeta \right)={{\varphi }}^{{\mathrm{N}}_{\mathrm{p}}}{\left(1-{\varphi }\right)}^{\mathrm{N}-{\mathrm{N}}_{\mathrm{p}}}\mathop{\prod }\limits_{k:{t}_{k}^{c}=\infty }\left[\mathop{\prod }\limits_{j={t}_{k}^{a}}^{{t}_{k}^{d}}\left(1-{p}_{kj}\right)\right]\mathop{\prod }\limits_{k:{t}_{k}^{c}\ne \infty }\left[{p}_{k{t}_{k}^{c}}\mathop{\prod }\limits_{j={t}_{k}^{a}}^{{t}_{k}^{c}-1}\left(1-{p}_{kj}\right)\right]$$
where N is the total number of patients, \({\mathrm{N}}_{\mathrm{p}}\) is the number of patients who are admitted already colonised and \({t}_{k}^{a}\), \({t}_{k}^{c}\) and \({t}_{k}^{d}\) denote the day of admission to the ICU, colonisation and discharge from the ICU of patient k. \({p}_{kj}\) is the probability of patient k becoming colonised on day j which is given by
$$p_{kj} = 1 - \exp\left(-\beta\, C(j)\right)$$
where C(j) is the number of colonised patients present on the ICU on day j.
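As an illustration, the two likelihood factors above could be evaluated as follows; this is a minimal Python sketch with invented data structures, not the authors' R/C++ implementation.

```python
import numpy as np

def log_obs_likelihood(true_pos, false_neg, rho):
    """log Pi(Omega | W, rho): imperfectly sensitive, perfectly specific tests."""
    return true_pos * np.log(rho) + false_neg * np.log(1.0 - rho)

def log_transmission_likelihood(patients, colonised_count, phi, beta):
    """log Pi(t^c | phi, beta) in discrete daily time steps.

    Each patient is a dict with admission day `t_a`, discharge day `t_d` and
    colonisation day `t_c` (np.inf if never colonised; t_c < t_a if imported).
    `colonised_count[j]` is the number of colonised patients present on day j.
    """
    ll = 0.0
    for pt in patients:
        if pt["t_c"] < pt["t_a"]:                 # colonised on admission
            ll += np.log(phi)
            continue
        ll += np.log(1.0 - phi)                   # admitted susceptible
        acquired = np.isfinite(pt["t_c"])
        last_escape_day = pt["t_c"] - 1 if acquired else pt["t_d"]
        for j in range(pt["t_a"], int(last_escape_day) + 1):
            p_kj = 1.0 - np.exp(-beta * colonised_count[j])
            ll += np.log(1.0 - p_kj)              # escaped acquisition on day j
        if acquired:
            p_acq = 1.0 - np.exp(-beta * colonised_count[int(pt["t_c"])])
            ll += np.log(p_acq)                   # acquired on day t_c
    return ll
```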
Accounting for the effect of antibiotics
We expand this model to account for the effect of antibiotics on detection and transmission. We assume that antibiotics can affect the test sensitivity, the probability of acquisition and the probability of onward transmission. To capture the effect of antibiotics on test sensitivity we assume that in the presence of antibiotics the baseline test sensitivity changes by a factor \(\updelta\) and therefore model the likelihood of the test results as
$$\Pi \left(\Omega |\mathrm{W},\uprho ,\updelta ,\Sigma \right)={\uprho }^{{\mathrm{TP}}_{\mathrm{nabx}}}\cdot {\left(1-\uprho \right)}^{{\mathrm{FN}}_{\mathrm{nabx}}}\cdot {\left(\mathrm{\delta \rho }\right)}^{{\mathrm{TP}}_{\mathrm{abx}}}\cdot {\left(1-\mathrm{\delta \rho }\right)}^{{\mathrm{FN}}_{\mathrm{abx}}}$$
where \(T{P}_{nabx}\), \({FN}_{nabx}\), \(T{P}_{abx}\) and \({FN}_{abx}\) denote the total number of true positive and false negative tests conducted while a patient was off or on antibiotics and \(\Sigma\) is a matrix with entries \({\upsigma }_{ij}\) equal to one if patient i is on antibiotics on day j and zero otherwise. We model the effect of antibiotics on the transmission dynamics by introducing a parameter \(\mathrm{\alpha }\) for the effect on acquisition and a parameter \(\uptau\) for the effect on onward transmission and modify the daily probability of acquisition as follows
$$p_{kj} = 1 - \exp\!\left(-\,\alpha^{1_{\sigma_{kj}=1}}\left(\beta\, C_{nabx}(j) + \tau\beta\, C_{abx}(j)\right)\right)$$
where \({C}_{nabx}\)(j) and \({C}_{abx}\)(j) denote the number of colonised patients on day j who are off or on antibiotics and \({1}_{{\upsigma }_{\mathrm{kj}}=1}\) is an indicator function which equals to 1 if patient k is on antibiotics on day j and zero otherwise.
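A minimal sketch of these antibiotic-modified quantities (the parameter values used below are illustrative, not estimates from the data):

```python
import numpy as np

def daily_acquisition_prob(beta, alpha, tau, n_col_no_abx, n_col_abx, on_abx):
    """p_kj = 1 - exp(-alpha^{1[on abx]} * (beta*C_nabx + tau*beta*C_abx))."""
    force = beta * n_col_no_abx + tau * beta * n_col_abx
    if on_abx:
        force *= alpha
    return 1.0 - np.exp(-force)

def test_sensitivity(rho, delta, on_abx):
    """Probability a colonised patient tests positive, modified by antibiotics."""
    return delta * rho if on_abx else rho

# Illustrative values only:
print(daily_acquisition_prob(beta=0.005, alpha=1.3, tau=1.2,
                             n_col_no_abx=2, n_col_abx=1, on_abx=True))
print(test_sensitivity(rho=0.8, delta=1.1, on_abx=True))
```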
Modifications to the likelihood and data augmentation with typing data
To extend our model to account for discrete typing data we use a simplified version of an existing model13. The dataset we are using does not contain typing or sequencing data, however, resistance profiles can be used as a proxy for the type of strain. We denote by Z the augmented source patients for all transmission events and factorise the overall likelihood as follows
$$\Pi \left(\Omega ,\Sigma ,\mathrm{ T},\mathrm{ D},\mathrm{ W},\mathrm{Z}|\uptheta \right)=\Pi \left(\Omega ,\Sigma ,\mathrm{ T},\mathrm{ D}|\mathrm{W},\mathrm{ Z},\uptheta \right)\cdot\Pi \left(\mathrm{Z}|\mathrm{W},\uptheta \right)\cdot\Pi \left(\mathrm{W}|\uptheta \right)$$
with T denoting the MRSA type causing colonisation and \(\theta\) a vector whose elements are the model parameters \(\beta, \rho, \varphi, \alpha, \delta\) and \(\tau\) described in Table 1. D is a matrix of distances between the different types. In the simplest case we can assume that patients colonised with different MRSA types cannot be linked by transmission, and set the likelihood contribution of all patients who acquire to 1 if the augmented type of the patient is the same as the type of the source and to 0 otherwise. For patients who are imported and do not have a positive test, a type has to be augmented. We assume that for the imported patients the probability of being colonised with any given type is equal to the proportion of patients who were tested positive and found to have that type. We therefore model the likelihood of the observed types given the augmented sources and colonisation times as
$$\Pi \left(\Omega ,\Sigma ,\mathrm{ T},\mathrm{ D}|\mathrm{W},\mathrm{ Z},\uptheta \right)= {\prod }_{k:{t}_{k}^{c}\ne \infty , { t}_{k}^{c} \ge {t}_{k}^{a}}\left[{1}_{{T}_{k}={T}_{{Z}_{k}}}\right] {\prod }_{k: { t}_{k}^{c} < {t}_{k}^{a}}\frac{{N}_{{T}_{k}}}{{N}_{T}}$$
where \({N}_{{T}_{k}}\) is the number of patients who were tested positive with type \({T}_{k}\) and \({N}_{T}\) is the total number of patients tested positive with any type. The first term applies to patients who acquire the pathogen in the ward. Since we assume that transmission can only take place between patients with the same type, this term, denoted by the indicator function \({1}_{{T}_{k}={T}_{{Z}_{k}}},\) is zero if the type of any patient, \({T}_{k}\), is not equal to the type of the source of infection for that patient, \({T}_{{Z}_{k}}.\) The second term is the importation model. Since our model allows for an effect of antibiotics on onward transmission, patients who are on antibiotics on a given day are not equally likely to act as sources of infection as patients who are not on antibiotics. In the absence of effects of antibiotics, the likelihood of any given patient being the source of infection to a patient who acquires infection on a given day is equal to one over the number of colonised patients on that day. In that case the likelihood of the sources given the colonisation times would be given by
$$\Pi\left(Z \mid W, \theta\right) = \prod_{i:\, t_i^c \ne \infty,\ t_i^c \ge t_i^a} \frac{1}{C\left(t_i^c\right)}$$
where \(C\left({t}_{i}^{c}\right)\) is the number of colonised patients on the day when patient i becomes colonised. To account for the effect of antibiotics on transmission we modify this as follows
$$\Pi\left(Z \mid W, \theta\right) = \prod_{i:\, t_i^c \ne \infty,\ t_i^c \ge t_i^a} \frac{b^{v_{ik}}}{C_{nabx}\left(t_i^c\right) + b\, C_{abx}\left(t_i^c\right)}$$
$$v_{ik} = \begin{cases} 1 & \text{if } \sigma_{t_i^c, Z_i} = 1 \\ 0 & \text{otherwise} \end{cases}$$
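To make the structure of this term concrete, the following Python sketch (an illustration only; the released implementation is in R and C++) evaluates the log of \(\Pi(Z \mid W, \theta)\) for a set of augmented sources. The container layout and names such as `acquisitions` and `on_abx` are our own assumptions, not part of the model code.

```python
import numpy as np

def log_source_likelihood(acquisitions, on_abx, b):
    """Log of Pi(Z | W, theta) with an antibiotic effect on onward transmission.

    acquisitions : list of (t_c, source) pairs, one per patient who acquires
                   colonisation on the ward; t_c is the colonisation day and
                   source indexes the proposed source among the patients
                   colonised on that day.
    on_abx       : dict mapping a day to a boolean array with one entry per
                   colonised patient on that day (True if on antibiotics).
    b            : relative infectiousness of colonised patients who are on
                   antibiotics.
    """
    loglik = 0.0
    for t_c, source in acquisitions:
        abx = np.asarray(on_abx[t_c])
        c_abx = abx.sum()                  # colonised patients on antibiotics
        c_nabx = abx.size - c_abx          # colonised patients off antibiotics
        v = 1 if abx[source] else 0        # indicator v_ik from the equation above
        loglik += v * np.log(b) - np.log(c_nabx + b * c_abx)
    return loglik
```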
The approach we present here can also be used for discrete typing schemes such as multi-locus sequence typing (MLST). It can also be easily adapted to more continuous measures of distance, such as spa typing, by deriving a likelihood function which is based on the distance between the type of the offspring and the type of the proposed source.
Table 1 Model parameters and priors.
Time dependent acquisition and importation
We are analysing a dataset that was collected over 10 years, a timespan during which changes in importation and transmission probability are likely to have occurred. We account for this by allowing the importation probability, \(\varphi\), and the transmission parameter, \(\beta\), to vary with time and set
$$\varphi(t) = \frac{a_{\varphi}}{1 + \exp\left(-b_{\varphi}\left(t + c_{\varphi}\right)\right)}$$
$$\beta(t) = \frac{a_{\beta}}{1 + \exp\left(-b_{\beta}\left(t + c_{\beta}\right)\right)}$$
with parameters \(a_{\varphi}, b_{\varphi}, c_{\varphi}, a_{\beta}, b_{\beta}\) and \(c_{\beta}\) being estimated. An example of curves resulting from different combinations of the shape parameters a, b and c is shown in Supplementary Fig. S12.
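A minimal sketch of these logistic time-dependence curves is given below; the parameter values are purely illustrative and are not fitted estimates.

```python
import numpy as np

def logistic_curve(t, a, b, c):
    """Time-dependent parameter, e.g. phi(t) or beta(t):
    a / (1 + exp(-b * (t + c))). a sets the plateau, b the steepness
    and c the horizontal location of the change."""
    return a / (1.0 + np.exp(-b * (t + c)))

# example: an importation probability that declines over a 10-year study
days = np.arange(0, 3650)
phi_t = logistic_curve(days, a=0.06, b=-0.005, c=-1800.0)
```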
MCMC algorithm and implementation
We use a Markov chain Monte Carlo (MCMC) algorithm to generate posterior estimates of the model parameters and the augmented patient statuses. In each iteration of the MCMC we first update the parameters of the transmission model one at a time using a Metropolis step and the importation probability via a Gibbs update. We then update the statuses and sources of infection of a subset of patients. The moves and Hastings ratios involved in the status updates are described in detail in Worby et al., 2016. The widths of the proposal distributions are adapted during the first 30% of the iterations in order to achieve an acceptance ratio between 0.1 and 0.3 for each parameter. These iterations are discarded as part of the burn-in. Convergence is assessed using the Gelman–Rubin convergence diagnostic as implemented in the R package coda, using 1.1 as a cut-off indicating convergence. We combine chains from different starting values and require a minimum effective sample size of 200 for all parameters. We use weakly informative priors for all parameters. Parameters and priors are shown in Table 1.
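The adaptation of the proposal widths can be sketched as a simple multiplicative rule targeting the stated acceptance window; the exact scheme used in the package may differ, so the snippet below is only indicative.

```python
def adapt_proposal_width(width, n_accepted, n_proposed,
                         target_low=0.1, target_high=0.3, factor=1.1):
    """Rescale a random-walk proposal width based on the recent acceptance rate.
    Intended to be called periodically during the first 30% of iterations
    (the burn-in); afterwards the width is kept fixed so that the chain
    remains a valid Metropolis sampler."""
    rate = n_accepted / max(n_proposed, 1)
    if rate < target_low:        # too many rejections: propose smaller steps
        width /= factor
    elif rate > target_high:     # too many acceptances: propose larger steps
        width *= factor
    return width
```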
The models are implemented in R and C++, making use of Rcpp15. All four models are available as a package (https://github.com/mirjamlaager/mrsamcmc). The package also includes functions for forward simulations which can be used to create simulated data, conduct posterior predictive checks and simulate the effect of interventions.
We use simulated data to show that our models are accurate and well calibrated. For each model we generate 10 datasets, simulating under the same assumptions as in the inference model. For each dataset we compute the absolute difference between the point estimate and the true value of the parameter (precision) and check whether the true value lies within the 90% highest posterior density interval (calibration). The average precision and calibration are shown in Table 2.
Table 2 Absolute difference between the maximum posterior density point estimate and the true value (top row, median and interquartile range) from ten simulated datasets and proportion of simulations where the true value lies within the 0.90 highest posterior density interval (bottom row).
Oxford ICU dataset
Linked admissions and microbiology data from the Oxford University Hospitals NHS Foundation Trust were extracted from the Infections in Oxfordshire Research Database (IORD). IORD has generic Research Ethics Committee (National Research Ethics Service Committee South Central, Oxford C Research Ethics Committee), Health Research Authority and Confidentiality Advisory Group approvals (14/SC/1069, ECC5-017(A)/2009) as a research database, including approvals for de-identified data to be used without individual patient consent. In line with the provisions of the IORD protocol, the specific analysis conducted was reviewed and approved by the IORD Research Database Team. The study was carried out following all relevant guidelines and regulations.
Model assessment
We assessed the model fit in our Oxford ICU analysis by sampling 1000 parameters from the posterior distributions and running forward simulations with the sampled values. These posterior predictive checks have been advocated as a useful tool to assess whether a model is able to reproduce key aspects of the transmission process16. The results are shown in Fig. 4. The total number of positive tests in the dataset lies within the interquartile range of the simulated data for all four models. To check whether the models appropriately differentiate between acquisition and importation we used the number of patients with a negative test followed by a positive test as an approximation of acquisitions and the number of patients whose first test was positive as an approximation of importations. All four models performed well in this comparison. The decrease of positive tests over time is better captured by the models that include time.
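A minimal sketch of such a posterior predictive check is given below; it assumes a forward-simulation function `simulate_ward` (a hypothetical name standing in for the package's simulation routines) that returns the number of positive tests for a given parameter draw.

```python
import numpy as np

def posterior_predictive_check(posterior_draws, simulate_ward,
                               observed_positives, n_sims=1000):
    """Compare the observed number of positive tests with forward
    simulations run at parameter values drawn from the posterior."""
    idx = np.random.choice(len(posterior_draws), size=n_sims)
    simulated = np.array([simulate_ward(posterior_draws[i]) for i in idx])
    lower, upper = np.percentile(simulated, [25, 75])   # interquartile range
    return {"observed": observed_positives,
            "iqr": (lower, upper),
            "covered": lower <= observed_positives <= upper}
```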
Figure 4. Posterior predictive checks. The number of positive tests in the true patient data (black lines) is compared to the positive tests in 1000 simulated datasets (boxplots, mean and interquartile range). Panel D shows the number of positive tests for each year in the true patient data (black lines) and in 1000 simulated datasets (median and 0.9 credible interval).
Duerden, B., Fry, C., Johnson, A. P. & Wilcox, M. H. The control of methicillin-resistant Staphylococcus aureus blood stream infections in England. Open Forum Infect. Dis. 2 (2015).
Seppälä, H. et al. The effect of changes in the consumption of macrolide antibiotics on erythromycin resistance in group A streptococci in Finland. N. Engl. J. Med. 337, 441–446 (1997).
van Bijnen, E. M. E. et al. Antibiotic exposure and other risk factors for antimicrobial resistance in nasal commensal Staphylococcus aureus: An ecological study in 8 European countries. PLOS ONE 10, e0135094 (2015).
Vernaz, N. et al. Temporal effects of antibiotic use and hand rub consumption on the incidence of MRSA and Clostridium difficile. J. Antimicrob. Chemother. 62, 601–607 (2008).
Aldeyab, M. A. et al. Modelling the impact of antibiotic use and infection control practices on the incidence of hospital-acquired methicillin-resistant Staphylococcus aureus: A time-series analysis. J. Antimicrob. Chemother. 62, 593–600 (2008).
Robotham, J. V. et al. Screening, isolation, and decolonisation strategies in the control of meticillin resistant Staphylococcus aureus in intensive care units: Cost effectiveness evaluation. BMJ 343, d5694–d5694 (2011).
van Bunnik, B. A. D. et al. Small distances can keep bacteria at bay for days. Proc. Natl. Acad. Sci. 111, 3556–3560 (2014).
Bootsma, M. C. J., Wassenberg, M. W. M., Trapman, P. & Bonten, M. J. M. The nosocomial transmission rate of animal-associated ST398 meticillin-resistant Staphylococcus aureus. J. R. Soc. Interface 8, 578–584 (2011).
Knight, G. M. et al. Mathematical modelling for antibiotic resistance control policy: Do we know enough? BMC Infect. Dis. 19, 1–9 (2019).
Price, J. R. et al. Transmission of Staphylococcus aureus between health-care workers, the environment, and patients in an intensive care unit: A longitudinal cohort study based on whole-genome sequencing. Lancet. Infect. Dis 17, 207–214 (2017).
Wyllie, D. H. et al. Decline of meticillin-resistant Staphylococcus aureus in Oxfordshire hospitals is strain-specific and preceded infection-control intensification. BMJ Open 1, e000160–e000160 (2011).
Forrester, M., Pettitt, A. & Gibson, G. Bayesian inference of hospital-acquired infectious diseases and control measures given imperfect surveillance data. Biostatistics 8, 383–401 (2007).
Worby, C. J. et al. Reconstructing transmission trees for communicable diseases using densely sampled genetic data. Ann. Appl. Stat. 10, 395–417 (2016).
Eyre, D. W. et al. A pilot study of rapid benchtop sequencing of Staphylococcus aureus and Clostridium difficile for outbreak detection and surveillance. BMJ Open 2, e001124 (2012).
Eddelbuettel, D. & François, R. Rcpp: Seamless R and C++ integration. J. Stat. Softw. 40, 1–8 (2011).
Gibson, G. J., Streftaris, G. & Thong, D. Comparison and assessment of epidemic models. Statist. Sci. 33, 19–33 (2018).
We thank all the people of Oxfordshire who contribute to the Infections in Oxfordshire Research Database. Research Database Team (Oxford): R Alstead, C Bunch, DW Crook, J Davies, J Finney, J Gearing (community), H Jones, L O'Connor, TEA Peto (PI), TP Quan, J Robinson (community), B Shine, AS Walker, D Waller, D Wyllie. Patient and Public Panel: G Blower, C Mancey, P McLoughlin, B Nichols.
This work was funded by the Centers for Disease Control and Prevention's Modeling Infectious Diseases in Healthcare program (MInD-Healthcare). Computation used the Oxford Biomedical Research Computing (BMRC) facility, a joint development between the Wellcome Centre for Human Genetics and the Big Data Institute supported by Health Data Research UK and the NIHR Oxford Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. DWE is a Big Data Institute Robertson Fellow.
Nuffield Department of Medicine, University of Oxford, Oxford, UK
Mirjam Laager & Ben S. Cooper
Big Data Institute, Nuffield Department of Population Health, University of Oxford, Oxford, UK
David W. Eyre
Mirjam Laager
Ben S. Cooper
D.W.E. and B.S.C. conceived the study. M.L. conducted the analysis. M.L. and D.W.E. wrote the code. M.L. and D.W.E. wrote the manuscript. All authors reviewed the manuscript. D.W.E. and B.S.C. supervised the project.
Correspondence to Mirjam Laager.
Laager, M., Cooper, B.S., Eyre, D.W. et al. Probabilistic modelling of effects of antibiotics and calendar time on transmission of healthcare-associated infection. Sci Rep 11, 21417 (2021). https://doi.org/10.1038/s41598-021-00748-y
|
CommonCrawl
|
April 2019, 13(2): 337-351. doi: 10.3934/ipi.2019017
Propagation of boundary-induced discontinuity in stationary radiative transfer and its application to the optical tomography
I-Kun Chen 1 and Daisuke Kawagoe 2
Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei 10617, Taiwan
Institute of Applied Mathematics, Inha University, 100 Inha-ro, Nam-gu, Incheon, 22212, Republic of Korea
Received April 2018 Revised August 2018 Published January 2019
Fund Project: The first author was supported in part by JSPS KAKENHI grant number 15K17572.
We consider a boundary value problem of the stationary transport equation with the incoming boundary condition in two or three dimensional bounded convex domains. We discuss discontinuity of the solution to the boundary value problem arising from discontinuous incoming boundary data, which we call the boundary-induced discontinuity. In particular, we give two kinds of sufficient conditions on the incoming boundary data for the boundary-induced discontinuity. We propose a method to reconstruct the attenuation coefficient from jumps in boundary measurements.
Keywords: Integro-differential equation, boundary value problem, boundary-induced discontinuity, X-ray transform, optical tomography.
Mathematics Subject Classification: Primary: 35R09; Secondary: 35R30, 35Q60.
Citation: I-Kun Chen, Daisuke Kawagoe. Propagation of boundary-induced discontinuity in stationary radiative transfer and its application to the optical tomography. Inverse Problems & Imaging, 2019, 13 (2) : 337-351. doi: 10.3934/ipi.2019017
|
CommonCrawl
|
Differentiation done correctly: 5. Maxima and minima
Posted on February 25, 2013 by wj32 / 1 Comment
Navigation: 1. The derivative | 2. Higher derivatives | 3. Partial derivatives | 4. Inverse and implicit functions | 5. Maxima and minima
In this final post, we are going to look at some applications of differentiation to locating maxima and minima of real valued functions. In order to do this, we will be using Taylor's theorem (covered in part 2) to prove the higher derivative test for functions on Banach spaces, and the implicit function theorem (covered in part 4) to prove a special case of the method of Lagrange multipliers.
Consequences of Taylor's theorem
Definition 35. Let \(f:X\to\mathbb{R}\) be a map defined on a topological space \(X\). If there is a neighborhood \(U\) of \(x\in X\) such that \(f(t)\le f(x)\) for all \(t\in U\), then we say that \(f\) has a local maximum at \(x\). Similarly, if \(f(t)\ge f(x)\) for all \(t\in U\) then we say that \(f\) has a local minimum at \(x\). If \(f\) has a local maximum or local minimum at \(x\), then we say that \(f\) has an extreme value at \(x\). If strict inequality holds, then we say that \(f\) has a strict local maximum or minimum.
In single variable calculus, a differentiable function \(f:\mathbb{R}\to\mathbb{R}\) has a local maximum or minimum at a point \(x\in\mathbb{R}\) only if \(f'(x)=0\). It is easy to extend this result to maps defined on Banach spaces.
Theorem 36. Let \(A\subseteq E\) be an open set and let \(f:A\to\mathbb{R}\). If \(f\) is differentiable at \(x\in A\) and has an extreme value at \(x\), then \(f'(x)=0\).
Proof. Let \(v\in E\) and let \(g(t)=x+tv\). Then \(f\circ g\) has an extreme value at \(0\), so \(0=(f\circ g)'(0)=f'(g(0))g'(0)=f'(x)v\). Therefore \(f'(x)=0\). \(\square\)
Also recall that if \(f:\mathbb{R}\to\mathbb{R}\) is of class \(C^1\) and there is a point \(x\in\mathbb{R}\) such that \(f'(x)=0\), then \(f(x)\) is a local minimum if \(f^{\prime\prime}(x) > 0\) and \(f(x)\) is a local maximum if \(f^{\prime\prime}(x) < 0\). There is a similar test for higher derivatives that follows from Taylor's theorem. Again, we can prove analogous statements for maps defined on Banach spaces. If \(q\in L(E,\dots,E;\mathbb{R})\) is a multilinear map from \(E^p\) to \(\mathbb{R}\), then we say that \(q\) is a multilinear form.
Definition 37. Write \(h^{(p)}\) for the \(p\)-tuple \((h,\dots,h)\). We say that a form \(q\) is positive semidefinite if \(qh^{(p)} \ge 0\) for all \(h\) and positive definite if \(qh^{(p)} > 0\) for all \(h \ne 0\). The terms negative semidefinite and negative definite are defined similarly. If \(qh^{(p)}\) takes on both positive and negative values, then we say that \(q\) is indefinite.
Theorem 38 (Higher derivative test). Let \(A\subseteq E\) be an open set and let \(f:A\to\mathbb{R}\). Assume that \(f\) is \((p-1)\) times continuously differentiable and that \(D^p f(x)\) exists for some \(p\ge 2\) and \(x\in A\). Also assume that \(f'(x),\dots,f^{(p-1)}(x)=0\) and \(f^{(p)}(x)\ne 0\). Write \(h^{(p)}\) for the \(p\)-tuple \((h,\dots,h)\).
1. If \(f\) has an extreme value at \(x\), then \(p\) is even and the form \(f^{(p)}(x)h^{(p)}\) is semidefinite.
2. If there is a constant \(c\) such that \(f^{(p)}(x)h^{(p)}\ge c > 0\) for all \(|h|=1\), then \(f\) has a strict local minimum at \(x\) and (1) applies.
3. If there is a constant \(c\) such that \(f^{(p)}(x)h^{(p)}\le c < 0\) for all \(|h|=1\), then \(f\) has a strict local maximum at \(x\) and (1) applies.
Proof. By Corollary 24 and the given assumptions, we can write $$
f(x+h)-f(x)=\frac{1}{p!}f^{(p)}(x)h^{(p)}+\theta(h)|h|^p
$$ where \(\theta(h)\to 0\) as \(h\to 0\). First assume that \(f\) has an extreme value at \(x\). Choose a vector \(h_0\ne 0\) such that \(f^{(p)}(x)h_0^{(p)}\ne 0\). Then for sufficiently small \(t\in\mathbb{R}\) we have both $$
f(x+th_{0})-f(x)=\left(\frac{1}{p!}f^{(p)}(x)h_{0}^{(p)}\pm\theta(th_{0})\left|h_{0}\right|^{p}\right)t^{p}\tag{*}
$$ and $$
\left|\theta(th_{0})\right|\left|h_{0}\right|^{p}<\frac{1}{p!}f^{(p)}(x)h_{0}^{(p)}. $$ For these \(t\), the sign of (*) is the same as the sign of \(f^{(p)}(x)h_0^{(p)}\). Since \(x\) is an extreme value, the sign of (*) must remain constant for small \(t\), which cannot happen unless \(p\) is even. Similarly, if \(f^{(p)}(x)h^{(p)}\) is not semidefinite then there is some vector \(h_1\ne 0\) such that \(f^{(p)}(x)h_1^{(p)}\) and \(f^{(p)}(x)h_0^{(p)}\) have opposite signs, which contradicts the fact that the sign of (*) is constant for small \(t\). Now suppose that the condition in (2) holds. Then \begin{align} f(x+h)-f(x) &= \frac{1}{p!}f^{(p)}(x)h^{(p)}+\theta(h)\left|h\right|^{p} \\ &= \left[\frac{1}{p!}f^{(p)}(x)\left(\frac{h}{\left|h\right|}\right)^{(p)}+\theta(h)\right]\left|h\right|^{p} \\ &\ge \left[\frac{c}{p!}+\theta(h)\right]\left|h\right|^{p}. \end{align} Since \(\theta(h)\to 0\) as \(h\to 0\), the last term is positive for sufficiently small \(h\ne 0\). For these \(h\) we have \(f(x+h) > f(x)\), so \(f\) has a strict local minimum at \(x\). The proof for (3) is similar. \(\square\)
Corollary 39 (Higher derivative test, finite-dimensional case). In Theorem 38, further assume that \(E\) is finite-dimensional. Then \(h\mapsto f^{(p)}(x)h^{(p)}\) has both a minimum and maximum value on the set \(\{h\in E:|h|=1\}\), and:
1. If the form \(f^{(p)}(x)h^{(p)}\) is indefinite, then \(f\) does not have an extreme value at \(x\).
2. If the form \(f^{(p)}(x)h^{(p)}\) is positive definite, then \(f\) has a strict local minimum at \(x\).
3. If the form \(f^{(p)}(x)h^{(p)}\) is negative definite, then \(f\) has a strict local maximum at \(x\).
Proof. Since \(E\) is finite-dimensional, the set \(S=\{h\in E:|h|=1\}\) is compact. Therefore the continuous map \(h\mapsto f^{(p)}(x)h^{(p)}\) attains a minimum \(c\) and a maximum \(C\) on \(S\). Part (1) follows directly from part (1) of Theorem 38. If \(f^{(p)}(x)h^{(p)}\) is positive definite then \(c > 0\), so part (2) of Theorem 38 applies. If \(f^{(p)}(x)h^{(p)}\) is negative definite then \(C < 0\), so part (3) of Theorem 38 applies. \(\square\)

The simplest form of Corollary 39 occurs when \(p=2\). Let \(E\) be an \(n\)-dimensional real Banach space, let \(A\subseteq E\) be an open set, and let \(f:A\to\mathbb{R}\) be a class \(C^1\) map. Suppose that \(f^{\prime\prime}(x)\) exists at \(x\in A\). Let \(\{e_1,\dots,e_n\}\) be a basis for \(E\) so that \(E=E_1\times\cdots\times E_n\), where \(E_i\) is the subspace generated by \(e_i\).

Definition 40. The Hessian matrix of \(f\) at \(x\) is the real matrix $$
\begin{bmatrix}
D_1 D_1 f(x) & \cdots & D_1 D_n f(x) \\
\vdots & \ddots & \vdots \\
D_n D_1 f(x) & \cdots & D_n D_n f(x)
\end{bmatrix},
$$ where each element $$
D_i D_j f(x) \in L(E_i,L(E_j,\mathbb{R}))
$$ is identified with \(D_i D_j f(x)(e_i,e_j)\in\mathbb{R}\).
Theorem 29 shows that this matrix is symmetric. We can restate Corollary 39 in terms of the Hessian matrix.
Corollary 41. Suppose that \(f'(x)=0\) and \(f^{\prime\prime}(x)\) exists. Let \(H\) be the Hessian matrix of \(f\) at \(x\).
1. If \(H\) has both positive and negative eigenvalues, then \(f\) does not have an extreme value at \(x\).
2. If \(H\) is positive definite, then \(f\) has a strict local minimum at \(x\).
3. If \(H\) is negative definite, then \(f\) has a strict local maximum at \(x\).
Proof. It is clear that \(f^{\prime\prime}(x)(h,h)=\widetilde{h}^T H \widetilde{h}\), where \(\widetilde{h}\) is the column vector representing \(h\). \(\square\)
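As a concrete finite-dimensional illustration (not part of the original exposition), the following Python snippet estimates the Hessian numerically and classifies a critical point in the spirit of Corollary 41. All names are our own; the eigenvalue test only decides the definite and indefinite cases, as in the corollary.

```python
import numpy as np

def hessian(f, x, eps=1e-5):
    """Central-difference estimate of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

def classify_critical_point(f, x):
    eig = np.linalg.eigvalsh(hessian(f, np.asarray(x, dtype=float)))
    if np.all(eig > 0):
        return "strict local minimum"
    if np.all(eig < 0):
        return "strict local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "no extreme value (saddle point)"
    return "test inconclusive (semidefinite Hessian)"

# f(x, y) = x^2 - y^2 has a saddle at the origin
print(classify_critical_point(lambda p: p[0] ** 2 - p[1] ** 2, [0.0, 0.0]))
```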
The method of Lagrange multipliers provides a necessary condition for a function \(f:A\to\mathbb{R}\) to be maximized or minimized subject to a constraint expressed as a function \(g:A\to\mathbb{R}\). We first need an elementary result from linear algebra.
Lemma 42. Let \(f,g:E\to\mathbb{R}\) be nonzero linear functionals. If \(\ker f\subseteq \ker g\), then \(f=\lambda g\) for some \(\lambda\in\mathbb{R}\).
Proof. \(\ker f\) cannot be a strict subset of \(\ker g\) since \(\dim(E/\ker f)=1\), so \(\ker f=\ker g\). Let \(v\notin \ker f\) and take \(\lambda=f(v)/g(v)\). Clearly \(f=\lambda g\) on \(\ker f=\ker g\). If \(x\notin \ker f\) then \(x=rv\) for some \(r\in\mathbb{R}\), so $$
f(x)=rf(v)=\lambda rg(v)=\lambda g(x).
$$ Therefore \(f=\lambda g\) on \(E\). \(\square\)
Theorem 43 (Method of Lagrange multipliers, single constraint). Let \(A\subseteq E\) be an open set. Let \(f:A\to\mathbb{R}\) and \(g:A\to\mathbb{R}\) be of class \(C^1\), and let \(S=g^{-1}(\{0\})\). If \(f|_S\) has an extreme value at \(x\in S\) and \(g'(x)\ne 0\), then there is a number \(\lambda\in\mathbb{R}\) such that \(f'(x)=\lambda g'(x)\).
Proof. If we can prove that \(\ker g'(x)\subseteq\ker f'(x)\) then the result follows from Lemma 42. Choose some \(w\notin\ker g'(x)\), let \(F=\ker g'(x)\) and let \(G=\langle w \rangle\); then \(E=F\oplus G\). Let \(B=A\cap F\) and let \(C=A\cap G\). Write \(x=(x_1,x_2)\) where \(x_1\in B\) and \(x_2\in C\). Since \(g'(x)\ne 0\), \(D_2 g(x)\) is invertible; we also have \(g(x_1,x_2)=0\). By the implicit function theorem, there exists a neighborhood \(U\subseteq B\) of \(x_1\) and a \(C^1\) map \(h:U\to C\) such that \(h(x_1)=x_2\) and \(g(x_1,h(x_1))=0\). Let \(\widetilde{h}:U\to A\) be given by \(t\mapsto(t,h(t))\) so that \(\widetilde{h}(U)\subseteq S\) and \(h'(x_1)|_F\) is the identity map on \(F\). Since \(f|_S\) has an extreme value at \(x\) we have \((f\circ\widetilde{h})'(x_1)=0\) by Theorem 36, so \(f'(x)\circ \widetilde{h}'(x_1)=0\) by the chain rule. In particular, if \(v\in\ker g'(x)=F\) then $$
0=[f'(x)\circ\widetilde{h}'(x_1)](v)=f'(x)v,
$$ so \(v\in\ker f'(x)\). \(\square\)
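A quick sanity check of Theorem 43 in \(\mathbb{R}^2\), assuming SymPy is available: maximize \(f(x,y)=xy\) subject to \(g(x,y)=x+y-1=0\) by solving \(f'(x)=\lambda g'(x)\) together with the constraint. This is only an illustrative computation, not part of the original post.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y                      # objective
g = x + y - 1                  # constraint g = 0

# necessary condition from Theorem 43: f'(x) = lambda * g'(x), together with g = 0
equations = [sp.diff(f, x) - lam * sp.diff(g, x),
             sp.diff(f, y) - lam * sp.diff(g, y),
             g]
print(sp.solve(equations, [x, y, lam]))   # [(1/2, 1/2, 1/2)]
```

The unique solution \(x=y=1/2\), \(\lambda=1/2\) is the constrained maximum of \(f\) on the line \(x+y=1\).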
There is also a more general version for a constraint function that maps into an infinite-dimensional space. We omit the proof because it requires a few theorems from functional analysis.
Theorem 44 (Method of Lagrange multipliers, multiple constraints). Let \(A\subseteq E\) be an open set. Let \(f:A\to\mathbb{R}\) and \(g:A\to F\) be of class \(C^1\), and let \(S=g^{-1}(\{0\})\). If \(f|_S\) has an extreme value at \(x\in S\) and \(g'(x)\) is surjective, then there is a continuous linear map \(\lambda:F\to\mathbb{R}\) such that \(f'(x)=\lambda\circ g'(x)\).
We have seen how the theorems of multivariable calculus in \(\mathbb{R}^n\) generalize easily to more general Banach spaces. Because we can work coordinate-free, the proofs are often easier to understand than their \(\mathbb{R}^n\) counterparts. By constructing the derivative on Banach spaces, we gain a powerful tool that allows us to both do computations and prove things much more easily than before.
ab cd says:
There is a minor error in proof of lemma 42: if x is not in ker(f), that doesn't imply that x = rv. It implies that x = rv + w for some w \in ker(f) = ker(g).
But since f(w) = g(w) = 0 your next step still holds, viz.:
f(x) = f(rv+w) = rf(v) + f(w) = rf(v) = r \lambda g(v) = \lambda [ r g(v) + g(w) ] = \lambda g(x)
|
CommonCrawl
|
Supervised learning in the presence of concept drift: a modelling framework
Part of a collection:
S.I. : WSOM 2019
M. Straat1,
F. Abadi2,
Z. Kan1,
C. Göpfert3,
B. Hammer3 &
M. Biehl ORCID: orcid.org/0000-0001-5148-45681
Neural Computing and Applications volume 34, pages 101–118 (2022)
We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two example types of learning systems: prototype-based learning vector quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student–teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data. Properties of the target task are considered to be non-stationary due to drift processes while the training is performed. Different types of concept drift are studied, which affect the density of example inputs only, the target rule itself, or both. By applying methods from statistical physics, we develop a modelling framework for the mathematical analysis of the training dynamics in non-stationary environments. Our results show that standard LVQ algorithms are already suitable for the training in non-stationary environments to a certain extent. However, the application of weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes. Furthermore, we investigate gradient-based training of layered neural networks with sigmoidal activation functions and compare with the use of rectified linear units. Our findings show that the sensitivity to concept drift and the effectiveness of weight decay differs significantly between the two types of activation function.
The topic of efficiently learning from example data in the presence of concept drift has attracted significant interest in the machine learning community. Terms such as lifelong learning or continual learning have become popular keywords in this context [55].
Very often, machine learning processes [23] are realized according to a standard setup which distinguishes two main stages: In the first, the so-called training phase, parameters of the learning system are adapted in an optimization process which is guided by a given set of example data. In the following working phase, the obtained hypothesis, e.g., a classifier or regression system, can be applied to novel data. This workflow relies on the implicit assumption that the training data is indeed representative for the target task in the working phase. Statistical properties of the data and the target itself should not change during or after training.
However, in many practical tasks and relevant real-world scenarios, the assumed separation of training and working phase appears artificial and cannot be justified. Obviously, in most human or other biological learning processes [3], the assumption is unrealistic. Similarly, in many technical contexts, training data is available as a non-stationary stream of observations. In such settings, the separation of training and working phase is meaningless, see [1, 17, 27, 32, 55] for reviews.
In the literature, two major types of non-stationary environments have been discussed: The term virtual drift refers to situations in which statistical properties of the training data are time-dependent, while the actual target task remains unchanged. Scenarios where the target classification or regression scheme itself changes with time are referred to as real drift processes. Frequently, both effects coincide and a clear distinction of the two cases becomes difficult.
The presence of drift requires some form of forgetting of dated information while the system is adapted to more recent observations. The design of useful, forgetful training schemes hinges on an adequate theoretical understanding of the relevant phenomena. To this end, the development of a suitable modelling framework is instrumental. An overview of earlier work and more recent developments in the context of non-stationary learning environments can be found in references like [1, 17, 27, 32, 55].
Methods developed in statistical physics can be applied in the mathematical description of the training dynamics to obtain typical learning curves. The statistical mechanics of on-line learning has helped to gain insights into the behavior of various learning systems; see, e.g., [5, 19, 43, 53] and references therein. Here, we apply these concepts to study the influence of concept drift and weight decay in two exemplary model situations: prototype-based binary classification and continuous regression with feedforward neural networks. We study standard training algorithms under concept drift and address, both, virtual and real drift processes.
This paper presents extensions of our contribution to the Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering, and Visualization (WSOM 2019) [48]. Consequently, parts of the text resemble or have been taken over literally from [14] without explicit notice. This concerns, for instance, parts of the introduction and the description of models and methodology in Sect. 2. Similarly, some of the results have also been presented in [14], which focused on the study of explicitly time-dependent densities in a stream of clustered data for LVQ training.
We complement our conference contribution [14] significantly by studying also the influence of drift on the training of regression type layered neural networks. First results concerning such systems with sigmoidal hidden unit activation function under concept drift have been published in [47], recently. Here, the scope of the analysis is extended to layered networks of rectified linear units (ReLU). We concentrate on the comparison of the latter, very popular activation function and its classical, sigmoidal counterpart with respect to the sensitivity to drift and the effect of weight decay.
We have selected LVQ for classification and layered neural networks for regression as representatives of important paradigms in machine learning. These systems provide a workshop in which to develop modelling techniques and analytical approaches that will facilitate the study of other setups in the future.
In the following section, we introduce the machine learning systems, the model setup including the assumed densities of data, the target rules as well as the mathematical framework of the statistical physics-based analysis. Our results concerning classification and regression systems in the presence of concept drift are presented and discussed in Sect. 3 before we conclude with a summary and outlook on forthcoming investigations.
Model and methods
In Sect. 2.1, we introduce learning vector quantization for classification tasks with emphasis on the well established LVQ1 training scheme. We also propose a model density of data which was previously investigated in the mathematical analysis of LVQ training in stationary and specific non-stationary environments. Here, we extend the approach to the presence of virtual concept drift and consider weight decay as an explicit mechanism of forgetting.
Thereafter, Sect. 2.2 presents a student–teacher scenario for the learning of a regression scheme with shallow, layered neural networks of the feedforward type. Emphasis is on the comparison of two important types of hidden unit activations; traditional sigmoidal transfer functions and the popular rectified linear unit (ReLU) activation. We consider gradient-based training in the presence of real concept drift and also introduce weight decay as a mechanism of forgetting.
A unified description of the theoretical approach to analyse the training dynamics in classification and regression systems is given in Sect. 2.3.
Learning vector quantization
The family of LVQ algorithms is widely used for practical classification problems [13, 29, 30, 39]. The popularity of LVQ is due to a number of attractive features: It is straightforward to implement, very flexible and intuitive. Moreover, it constitutes a natural tool for multi-class problems. The actual classification scheme is very often based on Euclidean metrics or other simple measures, which quantify the distance of inputs or feature vectors from the class-specific prototypes. Unlike many other methods, LVQ facilitates direct interpretation of the classifier because prototypes are defined in the same space as the data [13, 39]. The approach is based on the idea of representing classes by more or less typical representatives of the training instances. This suggests that LVQ algorithms should also be capable of tracking changes in the density of samples, a hypothesis that has been studied for instance in [14, 25], recently.
Nearest prototype classifier
In general, several prototypes can be employed to represent each class. However, we restrict the analysis to the simple case of only one prototype per class in binary classification problems. Hence we consider two prototypes \({\bf w}_k \in \mathbb{R}^N\) each representing one of the classes \(k\in \{1,2\}.\) Together with a distance measure \(d({\bf w},{\boldsymbol{\xi}}),\) the system parameterizes a Nearest Prototype Classification (NPC) scheme: Any given input \({\boldsymbol{\xi} } \in \mathbb{R}^N\) is assigned to the class \(k=1\) if \(d({\bf w}_1,{\boldsymbol{\xi} })< d({\bf w}_2,{\boldsymbol{\xi}})\) and to class 2, otherwise. In practice, ties can be broken arbitrarily.
A variety of distance measures have been used in LVQ, enhancing the flexibility of the approach even further [13, 39]. This includes the conceptually interesting use of adaptive metrics in relevance learning, see [13] and references therein. Here, we restrict our analysis to the simple (squared) Euclidean measure
$$\begin{aligned} d({\bf w}, {\boldsymbol{\xi} })= ({\bf w} - {\boldsymbol{\xi} })^2. \end{aligned}$$
We assume that the training procedure provides a stream of single examples [5]: At time step \(\mu \, = \, 1,2,\ldots ,\) the vector \({\boldsymbol{\xi} }^{\, \mu }\) is presented, together with its given class label \(\sigma ^\mu =1,2\). Iterative on-line LVQ updates are of the general form [12, 20, 54]
$$\begin{aligned} {\bf w}_k^\mu= & {} {\bf w}_k^{\mu -1} \, + \, \frac{\eta }{N} \, \Delta {\bf w}_k^\mu \text{ with } \nonumber \\ \Delta {\bf w}_k^\mu= & {} f_k\left[ d_1^{\mu },d_2^{\mu },\sigma ^\mu ,\ldots \right] \, \left( {\boldsymbol{\xi}}^\mu - {\bf w}_k^{\mu -1}\right) \end{aligned}$$
where \(d_i^\mu = d({\bf w}_i^{\mu -1},{\boldsymbol{\xi} }^\mu )\) and the learning rate \(\eta\) is scaled with the input dimension N. The precise algorithm is specified by choice of the modulation function \(f_k[\ldots ]\), which depends typically on the Euclidean distances of the data point from the current prototype positions and on the labels \(k,\sigma ^\mu =1,2\) of the prototype and training example, respectively.
The LVQ1 training algorithm
A popular and intuitive LVQ training scheme was already suggested by Kohonen and is known as LVQ1 [29, 30]. Following the NPC concept, it updates only the currently closest prototype in a so-called Winner-Takes-All (WTA) scheme. Formally, the LVQ1 prescription for a system with two competing prototypes is given by Eq. (2) with
$$\begin{aligned} f_k[d_1^\mu ,d_2^\mu ,\sigma ^\mu ] \, = \Theta \left( d_{\widehat{k}}^\mu - d_{k}^\mu \right) \Psi (k,\sigma ^\mu ), \end{aligned}$$
where \(\widehat{k} = \left\{ \begin{array}{ll} 2 &{} \text{ if } k=1 \\ 1 &{} \text{ if } k=2, \end{array} \right. \text{ and } \Psi (k,\sigma )= \left\{ \begin{array}{ll} +1 &{} \text{ if } k=\sigma \\ -1 &{} \text{ else. } \\ \end{array} \right.\)
Here, the Heaviside function \(\Theta (\ldots )\) singles out the winning prototype and the factor \(\Psi (k,\sigma ^\mu )\) determines the sign of the update: The WTA update according to Eq. (3) moves the prototype towards the presented feature vector if it carries the same class label \(k=\sigma ^\mu\). On the contrary, if the prototype is meant to present a different class, its distance from the data point is increased even further. Note that LVQ1 cannot be interpreted as a gradient descent procedure of a suitable cost function in a straightforward way due to discontinuities at the class boundaries, see [12] for a discussion and references.
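For illustration, a minimal Python sketch of one LVQ1 step as defined by Eqs. (2) and (3), with the squared Euclidean distance and one prototype per class; the array layout and the class encoding (prototype w[0] for class 1, w[1] for class 2) are our own choices, not part of the model definition.

```python
import numpy as np

def lvq1_update(w, xi, label, eta):
    """One LVQ1 step for two prototypes w[0], w[1] representing classes 1 and 2.

    w     : array of shape (2, N), current prototype positions
    xi    : example input of shape (N,)
    label : class of the example, 1 or 2
    eta   : learning rate, scaled by the input dimension N as in Eq. (2)
    """
    N = xi.shape[0]
    d = np.sum((w - xi) ** 2, axis=1)        # squared Euclidean distances
    k = int(np.argmin(d))                    # winner-takes-all: closest prototype
    psi = 1.0 if (k + 1) == label else -1.0  # attraction if labels match, repulsion otherwise
    w[k] = w[k] + (eta / N) * psi * (xi - w[k])
    return w
```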
Numerous variants and modifications of LVQ have been presented in the literature, aiming at better convergence or classification performance, see [12, 13, 29, 39]. Most of these modifications, however, retain the basic idea of attraction and repulsion of the winning prototypes.
Clustered model data
LVQ algorithms are most suitable for classification schemes which reflect a given cluster structure in the data. In the modelling, we therefore consider a stream of random input vectors \({\boldsymbol{\xi} } \in \mathbb {R}^N\) which are generated independently according to a mixture of two Gaussians [12, 20, 54]:
$$\begin{aligned} P({\boldsymbol{\xi} })= & {} {\textstyle \sum _{m=1,2}} \, \, \,p_m P({\boldsymbol{ \xi} }\mid m) \text{ with } \text{ contributions } \nonumber \\ P({\boldsymbol \xi }\mid m)= & {} \frac{1}{(2\, \pi \, v_m)^{N/2}} \, \exp \left[ -\frac{1}{2 \, v_m} \left( {\boldsymbol{\xi} } - \lambda {\bf B}_m \right) ^2 \right] . \end{aligned}$$
The target classification coincides with the cluster membership, i.e., \(\sigma =m\) in Eq. (3). The class-conditional densities \(P({\boldsymbol{\xi} }\!\mid \!m\!=\!1,2)\) correspond to isotropic, spherical Gaussians with variance \(\, v_m\) and mean \(\lambda \, {\bf B}_m\). Prior weights of the clusters are denoted as \(p_m\) and satisfy \(p_1 + p_2 =1\). We assume that the vectors \({\bf B}_m\) are orthonormal with \({\bf B}_1^{\, 2}={\bf B}_2^{\, 2}=1\) and \({\bf B}_1 \cdot {\bf B}_2 =0\). Obviously, the classes \(m=1,2\) are not perfectly separable due to the overlap of the clusters.
We denote conditional averages over \(P({\boldsymbol{\xi }}\mid m)\) by \(\left\langle \cdots \right\rangle _m\), whereas mean values \(\langle \cdots \rangle = \sum _{m=1,2} \, p_m \, \left\langle \cdots \right\rangle _m\) are defined with respect to the full density (4). One obtains, for instance, the conditional and full averages
$$\begin{aligned} \left\langle {\boldsymbol{\xi} } \right\rangle _m&= {} \lambda \, {\bf B}_m, \langle {\boldsymbol{\xi} }^{\, 2} \rangle _m = v_m \, N + \lambda ^2 \text{ and } \nonumber \\ \langle {\boldsymbol{\xi}}^{\, 2}\rangle &= {} \left( p_1v_1 + p_2 v_2 \right) \, N + \lambda ^2. \end{aligned}$$
Note that in the thermodynamic limit \(N\rightarrow \infty\) considered later, \(\lambda ^2\) can be neglected in comparison to the terms of \(\mathcal{{O}}(N)\) in Eq. (5).
Similar clustered densities have been studied in the context of unsupervised learning and supervised perceptron training; see, e.g., [4, 10, 35]. Also, online LVQ in stationary situations was analysed in, e.g., [12].
Here we focus on the question whether LVQ learning schemes are able to cope with drift in characteristic model situations and whether extensions like weight decay can improve the performance in such settings.
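The clustered input density of Eq. (4) is straightforward to sample from. The sketch below places the orthonormal centre vectors along the first two coordinate axes, which is one admissible choice; function and variable names are our own.

```python
import numpy as np

def sample_clustered_data(n_samples, N, p1, v1, v2, lam, seed=0):
    """Draw labelled examples from the bimodal Gaussian mixture of Eq. (4)."""
    rng = np.random.default_rng(seed)
    B = np.zeros((2, N))
    B[0, 0] = 1.0                      # cluster centre direction B_1
    B[1, 1] = 1.0                      # cluster centre direction B_2, orthonormal to B_1
    labels = rng.choice([1, 2], size=n_samples, p=[p1, 1.0 - p1])
    variances = np.where(labels == 1, v1, v2)
    xi = (rng.standard_normal((n_samples, N)) * np.sqrt(variances)[:, None]
          + lam * B[labels - 1])
    return xi, labels
```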
Layered neural networks
The term Soft Committee Machine (SCM) has been established for shallow feedforward neural networks with a single hidden layer and a linear output unit, see for instance [2, 8, 9, 11, 26, 42, 44, 45, 49]. Its structure resembles that of a (crisp) committee machine with binary threshold hidden units, where the network output is given by their majority vote, see [4, 19, 53] and references therein.
The output of an SCM with K hidden units and fixed hidden-to-output weights is of the form
$$\begin{aligned} y({\boldsymbol{\xi} }) = \sum _{k=1}^K \, g({\bf w}_k \cdot {\boldsymbol{\xi} }) \text{ where } {\bf w}_k \in \mathbb {R}^N \end{aligned}$$
denotes the weight vector connecting the N-dimensional input layer with the k-th hidden unit. A non-linear transfer function \(g(\cdots )\) defines the hidden unit states and the final output is given as their sum.
As specific examples we consider the sigmoidal
$$\begin{aligned} g(x) = \mathrm{{erf}}\left( x/\sqrt{2}\right) \text{ with } g^\prime (x)= \sqrt{{2}/{\pi }} \,\, e^{-x^2/2} \end{aligned}$$
and the popular rectified linear unit (ReLU):
$$\begin{aligned} g(x) = x \, \Theta (x) \text{ with } g^\prime (x)= \, \Theta (x). \end{aligned}$$
The activation (7) resembles closely other sigmoidal functions, e.g., the more popular \(\tanh (x)\), but it facilitates the analytical treatment in the mathematical analysis as exploited in [8], originally. In the following, we refer to an SCM with the above sigmoidal activation as Erf-SCM, for brevity.
Similarly, we use the term ReLU-SCM for networks with hidden unit states given by Eq. (8). The ReLU activation has recently gained significant popularity in the context of Deep Learning [22]. This is, among other reasons, due to its simplicity which offers computational ease and numerical stability. According to the literature, ReLU networks have displayed favorable training and generalization behavior in several practical applications and benchmark problems [18, 31, 34, 38, 40].
Note that an SCM, cf. Eq. (6), is not quite a universal approximator. However, this property could be achieved by introducing hidden-to-output weights and adaptive local thresholds \(\vartheta _i \in \mathbb {R}\) in hidden unit activations of the form \(g\left( {\bf w}_i\cdot {\boldsymbol{\xi} } -\vartheta _i\right)\), see [16]. Adaptive hidden-to-output weights have been studied in, for instance, [42] from a statistical physics perspective. However, we restrict ourselves to the simpler model defined above and focus on basic dynamical effects and potential differences of ReLU- versus Erf-SCM in the presence of concept drift.
Regression scheme and on-line learning
The training of a neural network with real-valued output \(y({\boldsymbol{\xi}})\) based on examples \(\left\{ {\boldsymbol{\xi }}^\mu \in \mathbb {R}^N, \tau ^\mu \in \mathbb {R} \right\}\) for a regression problem is frequently guided by the quadratic deviation of the network output from the target values [15, 22, 23] . It serves as a cost function which evaluates the network performance with respect to a single example as
$$\begin{aligned} e^\mu \left( \{{\bf w}_k\}_{k=1}^K\right) = \frac{1}{2} \big ( y^\mu - \tau ^\mu \big )^2 \text{ with } y^\mu = y({\bf \xi }^\mu ). \end{aligned}$$
In stochastic or on-line gradient descent, updates of the weight vectors are based on the presentation of a single example at time step \(\mu\)
$$\begin{aligned} {\bf w}_k^{\mu } = {\bf w}_k^{\mu -1} + \frac{\eta }{N} \, \Delta {\bf w}_k^{\mu } \text{ with } \Delta {\bf w}_k^\mu = \, - \, \frac{\partial e^\mu }{\partial {\bf w}_k} \end{aligned}$$
where the gradient is evaluated in \({\bf w}_k^{\mu -1}\). For the SCM architecture specified in Eq. (6), \(\partial y^\mu / {\partial {\bf w}_k} = g'\left( h_k^\mu \right) {\boldsymbol\xi }^\mu ,\) and we obtain
$$\begin{aligned} \Delta {\bf w}_k^{\mu } = - \left( \sum _{i=1}^K g\left( h_i^\mu \right) - \tau ^\mu \right) \, g^\prime \left( h_k^\mu \right) {\boldsymbol \xi }^\mu \end{aligned}$$
with the inner products \(h^\mu _i = {\bf w}_i^{\mu -1}\cdot {\boldsymbol \xi }^\mu\) of the current weight vectors with the next example input in the stream. Note that the change of weight vectors is proportional to \({\boldsymbol \xi }^\mu\) and can be interpreted as a form of Hebbian Learning [15, 22, 23].
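For illustration, a minimal Python version of the on-line gradient step of Eq. (11) for both activation functions, assuming NumPy and SciPy are available; the weight matrix layout is our own convention.

```python
import numpy as np
from scipy.special import erf

def g_erf(x):
    """Sigmoidal activation of Eq. (7)."""
    return erf(x / np.sqrt(2.0))

def g_erf_prime(x):
    return np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2.0)

def g_relu(x):
    """ReLU activation of Eq. (8)."""
    return x * (x > 0)

def g_relu_prime(x):
    return (x > 0).astype(float)

def scm_online_step(W, xi, tau, eta, g, g_prime):
    """One stochastic gradient step, Eq. (11), for a soft committee machine
    with student weight matrix W of shape (K, N)."""
    N = xi.shape[0]
    h = W @ xi                               # hidden-unit fields h_k = w_k . xi
    delta = np.sum(g(h)) - tau               # network output minus teacher target
    return W - (eta / N) * delta * np.outer(g_prime(h), xi)
```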
Student–teacher scenario and model data
In order to define and model meaningful learning situations, we resort to the consideration of student–teacher scenarios [4, 5, 19, 53].
We assume that the target can be defined in terms of an SCM with a number M of hidden units and a specific set of weights \(\left\{ {\bf B}_m \in \mathbb {R}^N \right\} _{m=1}^M\):
$$\begin{aligned} \tau ({\boldsymbol \xi }) = \sum _{m=1}^M \, g({\bf B}_m \cdot {\boldsymbol \xi }) \text{ and } \tau ^\mu = \tau ({\boldsymbol \xi }^\mu ) = \sum _{m=1}^M g(b_m^\mu ) \end{aligned}$$
with \(b_m^\mu = {\bf B}_m \cdot {\boldsymbol \xi }^\mu\) for one of the training examples. This so-called teacher network can be equipped with \(M>K\) hidden units in order to model regression schemes which cannot be learnt by an SCM student of the form (6). On the contrary, \(K>M\) would correspond to an over-learnable target or over-sophisticated student. For the discussion of these highly interesting cases in stationary environments, see for instance [8, 9, 42, 44, 45]. In a student–teacher scenario with K and M hidden units the update of the student weight vectors by on-line gradient descent is given by Eq. (11) with \(\tau ^\mu\) from Eq. (12).
In the following, we will restrict our analysis to perfectly matching student complexity with \(K=M=2\) only, which further simplifies Eq. (11). Extensions to more hidden units and settings with \(K\ne M\) will be considered in forthcoming projects.
In contrast to the model for LVQ-based classification, the vectors \({\bf B}_m\) define the target outputs \(\tau ^\mu = \tau ({\boldsymbol \xi }^\mu )\) explicitly via the teacher network for any input vector. While clustered input densities of the form (4) can also be studied for feedforward networks as in [35, 36], we assume here that the actual input vectors are uncorrelated with the teacher vectors \({\bf B}_m\). Consequently, we can resort to a simpler model density and consider vectors \({\boldsymbol \xi }\) of independent, zero mean, unit variance components with
$$\begin{aligned} P({\boldsymbol \xi }) = {(2\, \pi )^{-N/2}} \, \exp \left[ - \, {\bf \xi }^2/2 \right] . \end{aligned}$$
Note that the density (13) is recovered formally from Eq. (4) by setting \(\lambda =0\) and \(v_1=v_2=1\), for which both clusters in (4) coincide in the origin and the parameters \(p_{1,2}\) become irrelevant.
Note that the student/teacher scenario considered here is different from concepts used in studies of knowledge distillation, see [51] and references therein. In the context of distillation, a teacher network is itself trained on a given data set to approximate the target function. Thereafter a student network, frequently of a simpler architecture, distills the knowledge in a subsequent training process. In our work, as in most statistical physics-based studies [4, 19, 53], the teacher network is taken to directly define the true target function. A particular architecture is chosen and, together with its fixed weights, it controls the complexity of the task. The teacher network provides correct target outputs to all input data that are generated according to the distribution in Eq. (13). In the actual training process, a sequence of such input vectors and teacher-generated labels is presented to the student network.
Mathematical analysis of the training dynamics
In the following we sketch the successful theory of on-line learning [4, 5, 19, 43, 53] as, for instance, applied to the dynamics of LVQ algorithms in [12, 20, 54] and to on-line gradient descent in SCM in [8, 9, 26, 42, 44, 45, 49]. We refer the reader to the original publications for details. The extensions to non-stationary situations with concept drifts are discussed in Sect. 2.4.
The mathematical analysis proceeds along the same generic steps in both settings. Our presentation follows closely the descriptions in [14, 47].
We consider adaptive vectors \({\bf w}_{1,2}\in \mathbb {R}^N\) (prototypes in LVQ, student weights in the SCM) while the characteristic vectors \({\bf B}_{1,2}\) specify the target task (cluster centers in LVQ training, SCM teacher vectors for regression).
The consideration of the thermodynamic limit \(N\rightarrow \infty\) is instrumental for the theoretical treatment. The limit facilitates the following key steps which, eventually, yield an exact mathematical description of the training dynamics in terms of ordinary differential equations (ODE):
(a) Order parameters
The many degrees of freedom, i.e., the components of the adaptive vectors, can be characterized in terms of only very few quantities. The definition of these so-called order parameters follows naturally from the mathematical structure of the model. After presentation of a number \(\mu\) of examples, as indicated by corresponding superscripts, we describe the system by the projections for \(i,k,m \in \{1,2\}\)
$$\begin{aligned} R_{{\rm im}}^\mu ={\bf w}_i^\mu \cdot {\bf B}_m \,\, \text{ and } Q_{ik}^\mu ={\bf w}_i^\mu \cdot {\bf w}_k^\mu . \end{aligned}$$
Obviously, \(Q_{11}^\mu ,Q_{22}^\mu\) and \(Q_{12}^\mu =Q_{21}^\mu\) relate to the norms and mutual overlap of the adaptive vectors, while the quantities \(R_{{\rm im}}\) specify their projections into the linear subspace defined by the characteristic vectors \(\{{\bf B}_1,{\bf B}_2\}\), respectively.
(b) Recursions
Recursion relations for the order parameters (14) can be derived directly from the update steps, which are of the generic form \({\bf w}_k^\mu \, = {\bf w}_k^{\mu -1} \, + \eta /N \, \Delta {\bf w}_k^\mu .\) The corresponding inner products yield
$$\begin{aligned} N\left(R_{{\rm im}}^{\mu} - R_{{\rm im}}^{\mu-1}\right) &= \eta \, \Delta {\bf w}_i^\mu \cdot {\bf B}_m \\ N\left(Q_{ik}^{\mu} - Q_{ik}^{\mu-1}\right) &= \eta \left( {\bf w}^{\mu-1}_i \cdot \Delta {\bf w}^{\mu}_k + {\bf w}^{\mu-1}_k \cdot \Delta {\bf w}^{\mu}_i \right) + \eta^2/N \, \Delta {\bf w}^{\mu}_i \cdot \Delta {\bf w}^{\mu}_k. \end{aligned}$$
Terms of order \(\mathcal{O}(1/N)\) on the r.h.s. will be neglected in the following. Note however that \(\Delta {\bf w}^{\mu }_i \cdot \Delta {\bf w}^{\mu }_k\) comprises contributions of order \(|{\boldsymbol \xi }|^2 \propto N\) for the considered updates (2) and (10).
(c) Averages over the model data
Applying the central limit theorem (CLT) we can perform an average over the random sequence of independent examples.
Note that \(\Delta {\bf w}^\mu _k \propto {\boldsymbol \xi }^\mu\) or \(\Delta {\bf w}^\mu _k \propto \left( {\boldsymbol \xi }^\mu - {\bf w}^{\mu -1}_k\right)\) for the SCM and LVQ, respectively. Consequently, the current input \({\boldsymbol \xi }^\mu\) enters the r.h.s. of Eq. (15) only through its norm \(\mid {\bf \xi }\mid ^2 = \mathcal{{O}}(N)\) and the quantities
$$\begin{aligned} h_i^\mu \, = {\bf w}_i^{\mu -1} \cdot {\boldsymbol \xi }^\mu \text{ and } b_m^\mu \, = {\bf B}_m \cdot {\boldsymbol \xi }^\mu . \end{aligned}$$
Since these inner products correspond to sums of many independent random quantities in our model, the CLT implies that the projections in Eq. (16) are correlated Gaussian quantities for large N and the joint density \(P(h_1^\mu ,h_2^\mu ,b_1^\mu ,b_2^\mu )\) is given completely by first and second moments.
LVQ: For the clustered density, cf. Eqs. (4), the conditional moments read
$$\begin{aligned}&\left\langle h^\mu _{i} \right\rangle _{m} = \lambda R_{{\rm im}}^{\mu -1}, \quad \left\langle b^\mu _{m} \right\rangle _{n} = \lambda \delta _{mn},\nonumber \\&\left\langle h^\mu _{i} h^\mu _{k} \right\rangle _{m} - \left\langle h^\mu _{i} \right\rangle _{m} \left\langle h^\mu _{k} \right\rangle _{m} = v_m \, Q^{\mu -1}_{ik},\nonumber \\&\left\langle h^\mu _{i} b^\mu _{n} \right\rangle _{m} - \left\langle h^\mu _{i} \right\rangle _{m} \left\langle b^\mu _{n} \right\rangle _{m} = v_m \, R^{\mu -1}_{in}, \nonumber \\&\left\langle b^\mu _{l} b^\mu _{n} \right\rangle _{m} - \left\langle b^\mu _{l} \right\rangle _{m} \left\langle b^\mu _{n} \right\rangle _{m} = v_m \, \delta _{ln}, \end{aligned}$$
with \(i,k,l,m,n \in \{1,2\}\) and the Kronecker delta \(\delta _{ij}= 1\) for \(i=j\) and \(\delta _{ij}=0\) otherwise.
SCM: In the simpler case of the isotropic, spherical density (13) with \(\lambda =0\) and \(v_1=v_2=1\) the moments reduce to
$$\begin{aligned}&\left\langle h^\mu _{i} \right\rangle = 0, \, \left\langle b^\mu _{m} \right\rangle = 0, \left\langle h^\mu _{i} h^\mu _{k} \right\rangle - \left\langle h^\mu _{i} \right\rangle \left\langle h^\mu _{k} \right\rangle = Q^{\mu -1}_{ik} \nonumber \\&\left\langle h^\mu _{i} b^\mu _{n} \right\rangle - \left\langle h^\mu _{i} \right\rangle \left\langle b^\mu _{n} \right\rangle = R^{\mu -1}_{in}, \left\langle b^\mu _{l} b^\mu _{n} \right\rangle \!-\! \left\langle b^\mu _{l} \right\rangle \left\langle b^\mu _{n} \right\rangle = \delta _{ln}. \end{aligned}$$
Hence, in both cases (LVQ and SCM) the four-dim. density of \(h_{1,2}^\mu\) and \(b_{1,2}^\mu\) is fully specified by the values of the order parameters in the previous time step and the parameters of the model density. This important result enables us to average the recursion relations (15) over the most recent training example by means of Gaussian integrals. The resulting r.h.s. can be expressed as functions of \(\{ R_{{\rm im}}^{\mu -1},Q_{ik}^{\mu -1} \}.\) Obviously, the precise form depends on the details of the algorithm and model setup.
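The closed-form Gaussian integrals can also be checked numerically: given R and Q, one may sample the correlated projections directly from their joint density and estimate the required averages by Monte Carlo. A hedged sketch for the isotropic SCM case of Eq. (18) (the LVQ case only adds the cluster-conditional means and variances of Eq. (17)); R and Q are assumed to be 2x2 NumPy arrays:

```python
import numpy as np

def sample_projections_scm(R, Q, n_samples=100_000, rng=None):
    """Draw (h_1, h_2, b_1, b_2) from the joint Gaussian of Eq. (18):
    zero means, Cov[h_i, h_k] = Q_ik, Cov[h_i, b_m] = R_im, Cov[b_l, b_n] = δ_ln."""
    rng = np.random.default_rng() if rng is None else rng
    cov = np.block([[Q, R], [R.T, np.eye(2)]])
    x = rng.multivariate_normal(np.zeros(4), cov, size=n_samples)
    return x[:, :2], x[:, 2:]   # h of shape (n, 2), b of shape (n, 2)

# Example: Monte Carlo estimate of an average such as <h_1 b_2>
# h, b = sample_projections_scm(R, Q)
# estimate = np.mean(h[:, 0] * b[:, 1])   # should be close to R[0, 1]
```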
(d) Self-Averaging Properties
The self-averaging property of the order parameters allows us to describe the dynamics in terms of mean values: Fluctuations of the stochastic dynamics can be neglected in the limit \(N\rightarrow \infty\). The concept relates to the statistical physics of disordered materials and has been transferred successfully to the study of neural network models and learning processes [4, 19, 53]. A detailed mathematical discussion in the context of sequential on-line learning dynamics is given in [41]. As a consequence, we can interpret the averaged equations (15) directly as deterministic recursions for the actual values of \(\{R_{{\rm im}}^\mu ,Q_{ik}^\mu \},\) which coincide with their disorder average in the thermodynamic limit.
(e) Continuous Time Limit
In the thermodynamic limit \(N\rightarrow \infty ,\) ratios of the form \((\ldots )/(1/N)\) on the left hand sides of Eq. (15) can be interpreted as derivatives with respect to a continuous learning time \(\alpha\) defined by
$$\begin{aligned} \alpha \, = {\, \mu \, }/{N} \text{ with } {\rm d}\alpha \, \sim \, 1/N. \end{aligned}$$
This scaling corresponds to the natural assumption that the number of examples should be proportional to the number of adaptive quantities in the system.
Averages are performed over the joint density \(P\left( h_1^\mu ,h_2^\mu ,b_1^\mu ,b_2^\mu \right)\) corresponding to the latest, independently drawn input vector. For simplicity, we omit indices \(\mu\) in the following. The resulting set of coupled ODE is of the form
$$\begin{aligned} \left[ \frac{{\rm d}R_{{\rm im}}}{{\rm d}\alpha } \right] _{{\rm stat}} \!\!\!\!\! = \eta F_{{\rm im}} \text{; } \left[ \frac{{\rm d}Q_{ik}}{{\rm d}\alpha }\right] _{{\rm stat}} \!\!\!\!\! = \eta \, G^{(1)}_{ik} + \eta ^2 G^{(2)}_{ik}. \end{aligned}$$
Here, the subscript stat indicates that the ODE describe learning from a stationary density, Eqs. (4) or (13).
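In practice, Eq. (20) is solved numerically. The following Euler-scheme sketch is only illustrative; the callables F, G1, G2 stand for the model- and algorithm-specific right-hand sides, which are not spelled out here:

```python
import numpy as np

def integrate_ode(F, G1, G2, R0, Q0, eta, alpha_max, d_alpha=0.01):
    """Simple Euler integration of Eq. (20):
    dR/dα = η F(R, Q),  dQ/dα = η G1(R, Q) + η² G2(R, Q)."""
    R, Q = R0.copy(), Q0.copy()
    trajectory = [(0.0, R.copy(), Q.copy())]
    alpha = 0.0
    while alpha < alpha_max:
        dR = eta * F(R, Q)
        dQ = eta * G1(R, Q) + eta**2 * G2(R, Q)
        R, Q = R + d_alpha * dR, Q + d_alpha * dQ
        alpha += d_alpha
        trajectory.append((alpha, R.copy(), Q.copy()))
    return trajectory
```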
Limit of small learning rates
The dynamics can also be studied in the limit of small learning rates \(\eta \rightarrow 0\). In this case, the term \(\eta ^2 G_{ik}^{(2)}\) can be neglected in Eq. (20). In order to retain non-trivial performance, the small step size has to be compensated for by training with a large number of examples that diverges like \(1/\eta\). Formally, we introduce the quantity \(\widetilde{\alpha }\) in the simultaneous limit
$$\begin{aligned} \widetilde{\alpha } \, = \lim _{\eta \rightarrow 0} \lim _{\alpha \rightarrow \infty } \, (\eta \alpha ), \end{aligned}$$
which leads to a simplified system of ODE
$$\begin{aligned} \left[ \frac{{\rm d}R_{{\rm im}}}{{\rm d}\widetilde{\alpha }} \right] _{{\rm stat}} \!\!\!\!\! = F_{{\rm im}} \text{; } \left[ \frac{{\rm d}Q_{ik}}{{\rm d}\widetilde{\alpha }}\right] _{{\rm stat}} \!\!\!\!\! = G^{(1)}_{ik} \end{aligned}$$
in rescaled continuous time \(\widetilde{\alpha }\) for \(\eta \rightarrow 0.\)
LVQ: In the classification model we have to insert
$$\begin{aligned}&F_{{\rm im}} = \left( \left\langle b_m f_i \right\rangle \! -\! R_{{\rm im}} \left\langle f_i \right\rangle \right) , \,\nonumber \\&G^{(1)}_{ik} = \Big ( \left\langle h_i f_k + h_k f_i \right\rangle \! -\! Q_{ik} \left\langle f_i \! +\! f_k \right\rangle \Big ) \nonumber \\&\text{ and } G^{(2)}_{ik}= {\textstyle \sum _{m=1,2}} \, v_m p_m \left\langle f_i f_k \right\rangle _m \end{aligned}$$
in Eqs. (20) or (22). The LVQ1 modulation function \(f_i\) is given in Eq. (3) and the conditional averages \(\langle \ldots \rangle _m\) are taken with respect to the density (4).
SCM: In the case of non-linear regression we obtain
$$\begin{aligned}&F_{{\rm im}} = \langle \rho _i b_m \rangle , \quad G^{(1)}_{ik} = \langle \left( \rho _{i} h_k + \rho _k h_i\right) \rangle , \nonumber \\&\quad \text{ and } G^{(2)}_{ik}= \langle \rho _i \rho _k \rangle \text{ with } \rho _k=-(y-\tau ) g^\prime (h_k). \end{aligned}$$
Eventually, the r.h.s. of Eqs. (20) or (22) are expressed in terms of elementary functions of order parameters. For the straightforward, yet lengthy results we refer the reader to the original literature for LVQ [12, 20] and SCM [9, 42, 44, 45], respectively.
(f) Generalization error
After training, the success of learning is quantified in terms of the generalization error \(\epsilon _g\), which is also given as a function of the macroscopic order parameters.
LVQ: In the case of the LVQ model, \(\epsilon _g\) is given as the probability of misclassifying a novel, randomly drawn input vector. The class-specific errors corresponding to data from clusters \(k=1,2\) in Eq. (4) can be considered separately:
$$\begin{aligned} \epsilon _g = p_1 \, \epsilon _g^1 + p_2 \, \epsilon _g^2 \text{ where } \epsilon _g^k \, = \, \bigg \langle \Theta \left( d_{k} - d_{\widehat{k}} \right) \bigg \rangle _k \end{aligned}$$
is the class-specific misclassification rate, i.e., the probability for an example drawn from a cluster k to be assigned to \(\widehat{k}\ne k\) with \(d_{k} > d_{\widehat{k}}\). For the derivation of the class-wise and total generalization error for systems with two prototypes as functions of the order parameters we also refer to [12]. One obtains
$$\begin{aligned} \epsilon _g^k \, = \, \Phi \left( \frac{ Q_{kk}-Q_{\widehat{k}\widehat{k}}-2\lambda ( R_{kk}-R_{\widehat{k}\widehat{k}})}{2 \sqrt{v_k} \sqrt{Q_{11}-2Q_{12}+ Q_{22}}} \right) \end{aligned}$$
with the function \(\Phi (z)=\int _{-\infty }^{z} dx \, {e^{-x^2/2}}/{\sqrt{2\pi }}.\)
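As an illustration, Eqs. (25, 27) translate directly into a small function of the order parameters; this is a sketch with illustrative names, not code from the original work:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF, Φ(z) = ∫_{-∞}^{z} e^{-x²/2} dx / √(2π)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def lvq_generalization_error(R, Q, lam, v, p):
    """Total generalization error of Eqs. (25, 27) for two prototypes.
    R, Q : 2x2 order-parameter arrays; lam : cluster offset λ;
    v = (v_1, v_2) : cluster variances; p = (p_1, p_2) : prior weights."""
    norm_sq = Q[0][0] - 2.0 * Q[0][1] + Q[1][1]
    eps = 0.0
    for k, khat in ((0, 1), (1, 0)):
        arg = Q[k][k] - Q[khat][khat] - 2.0 * lam * (R[k][k] - R[khat][khat])
        arg /= 2.0 * sqrt(v[k]) * sqrt(norm_sq)
        eps += p[k] * Phi(arg)   # class-wise error ε_g^k weighted by prior p_k
    return eps
```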
SCM: In the regression scenario, the generalization error is defined as an average \(\left\langle \cdots \right\rangle\) of the quadratic deviation between student and teacher output over the isotropic density, cf. Eq. (13):
$$\begin{aligned} \epsilon _g \, = \frac{1}{2} \left\langle \left[ \sum _{k=1}^K g \left( {h_k}\right) - \sum _{m=1}^M g\left( {b_m}\right) \right] ^2 \right\rangle . \end{aligned}$$
In the simplifying case of \(K=M=2\) we obtain for Erf-SCM:
$$\begin{aligned}\epsilon _g \, &= \frac{1}{3} + \frac{1}{\pi } \ \sum _{i,k=1}^2 \sin ^{-1}\left( \frac{Q_{ik}}{\sqrt{1+Q_{ii}}\sqrt{1+Q_{kk}}}\right) \nonumber \\ &\quad- \frac{2}{\pi } \sum _{i,m=1}^2 \sin ^{-1}\left( \frac{R_{{\rm im}}}{\sqrt{2} \sqrt{1+Q_{ii}} } \right) \end{aligned}$$
and for ReLU-SCM:
$$\begin{aligned}\epsilon _g&= \sum _{i,j=1}^2 \!\!\left[ \frac{Q_{ij}}{8}\!+\!\frac{\sqrt{Q_{ii}Q_{jj}\!-\!Q_{ij}^2}\!+\! Q_{ij}\sin ^{-1}\left( \!\frac{Q_{ij}}{\sqrt{Q_{ii}Q_{jj}}}\!\right) }{4\pi } \right] \nonumber \\&\quad -\!\!\sum _{i,j=1}^2 \!\!\left[ \frac{R_{ij}}{4}\!\!+\!\!\frac{\sqrt{Q_{ii}\!-\!R_{ij}^2}\!+\! R_{ij}\sin ^{-1}\left( \frac{R_{ij}}{\sqrt{Q_{ii}}}\!\right) }{2\pi } \right] \!+\! \frac{\pi \!+\!1}{2\pi }. \end{aligned}$$
Both results are for orthonormal teacher vectors, extensions to general \({\bf B}_m \cdot {\bf B}_n = T_{mn}\) can be found in [45, 47].
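Both closed-form expressions, Eqs. (28) and (29), are again simple functions of the order parameters. A hedged sketch for orthonormal teacher vectors, assuming R and Q are 2x2 NumPy arrays with illustrative names:

```python
import numpy as np

def eps_g_erf(R, Q):
    """Erf-SCM generalization error, Eq. (28), for K = M = 2."""
    eps = 1.0 / 3.0
    for i in range(2):
        for k in range(2):
            eps += np.arcsin(Q[i, k] / (np.sqrt(1 + Q[i, i]) * np.sqrt(1 + Q[k, k]))) / np.pi
        for m in range(2):
            eps -= 2.0 * np.arcsin(R[i, m] / (np.sqrt(2.0) * np.sqrt(1 + Q[i, i]))) / np.pi
    return eps

def eps_g_relu(R, Q):
    """ReLU-SCM generalization error, Eq. (29), for K = M = 2."""
    eps = (np.pi + 1.0) / (2.0 * np.pi)
    for i in range(2):
        for j in range(2):
            eps += Q[i, j] / 8.0 + (np.sqrt(Q[i, i] * Q[j, j] - Q[i, j] ** 2)
                   + Q[i, j] * np.arcsin(Q[i, j] / np.sqrt(Q[i, i] * Q[j, j]))) / (4.0 * np.pi)
            eps -= R[i, j] / 4.0 + (np.sqrt(Q[i, i] - R[i, j] ** 2)
                   + R[i, j] * np.arcsin(R[i, j] / np.sqrt(Q[i, i]))) / (2.0 * np.pi)
    return eps
```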
(g) Learning curves
The (numerical) integration of the ODE for a given particular training algorithm, model density and specific initial conditions \(\{ R_{{\rm im}}(0), Q_{ik}(0) \}\) yields the temporal evolution of order parameters in the course of training.
Exploiting the self-averaging properties of order parameters once more, we can obtain the learning curves \(\epsilon _g (\alpha )= \epsilon _g\left( \{ R_{{\rm im}}(\alpha ), Q_{ik}(\alpha )\}\right)\) or the class-wise \(\epsilon _g^{k}(\alpha )\), respectively. Hence, we determine the typical generalization error after on-line training with \((\alpha \, N)\) random examples.
The learning dynamics under concept drift
The analysis summarized in the previous section concerns learning in the presence of a stationary concept, i.e., for a density of the form (4) or (13) which does not change in the course of training. Here, we introduce the effect of concept drift to the modelling framework and consider weight decay as an example mechanism for explicit forgetting.
Virtual drift in classification
As defined above, virtual drifts affect statistical properties of the observed example data while the actual target function remains unchanged.
A variety of virtual drift processes can be addressed in our modelling framework. For example, time-varying label noise in regression or classification could be incorporated in a straightforward way [4, 19, 53]. Similarly, non-stationary cluster variances in the input density, cf. Eq. (4), can be introduced through explicitly time-dependent \(v_\sigma (\alpha )\) into Eq. (20) for the LVQ system.
Here we focus on a particularly relevant case in classification, in which a varying fraction of examples represents each of the classes in the data stream. We consider non-stationary, \(\alpha\)-dependent prior probabilities \(p_1(\alpha ) = 1-p_2(\alpha )\) in the mixture density (4). In practical situations, varying class bias can complicate the training significantly and lead to inferior performance [52]. Specifically, we distinguish the following scenarios:
(A) Drift in the training data only
Here we assume that the true target classification is defined by a fixed reference density of data. As a simple example we consider equal priors \(p_1=p_2=1/2\) in a symmetric reference density (4) with \(v_1=v_2\). On the contrary, the characteristics of the observed training data are assumed to be time-dependent. In particular, we study the effect of non-stationary \(p_m(\alpha )\) and weight decay on the learning dynamics. Given the order parameters of the learning systems in the course of training, the corresponding reference generalization error
$$\begin{aligned} \epsilon _{{\rm ref}}(\alpha )= \left( \epsilon _g^1 + \epsilon _g^2\right) /2 \end{aligned}$$
is obtained by setting \(p_1=p_2=1/2\) in Eq. (25), but inserting \(R_{{\rm im}}(\alpha )\) and \(Q_{ik}(\alpha )\) as obtained from the integration of the corresponding ODE with time dependent \(p_1(\alpha )=1-p_2(\alpha )\) in the training process.
(B) Drift in training and test data
In the second interpretation we assume that the variation of \(p_m(\alpha )\) affects training and test data in the same way. Hence, the change of the statistical properties of the data is inevitably accompanied by a modification of the target classification: For instance, the Bayes optimal classifier and its best linear approximation depend explicitly on the actual priors [12].
The learning system is supposed to track the actual drifting concept and we refer to the corresponding generalization error as the tracking error
$$\begin{aligned} \epsilon _{{\rm track}}= p_1(\alpha ) \, \epsilon _g^1 \, +\, p_2(\alpha ) \, \epsilon _g^2. \end{aligned}$$
In terms of modelling the training dynamics, both scenarios, (A) and (B), require the same straightforward modification of the ODE system: the explicit introduction of \(\alpha\)-dependent quantities \(p_\sigma (\alpha )\) in Eq. (20). The obtained temporal evolution yields the reference error \(\epsilon _{{\rm ref}}(\alpha )\) for the case of drift in the training data (A) and \(\epsilon _{{\rm track}}(\alpha )\) in interpretation (B).
Note that in both interpretations, we consider the very same drift processes affecting the training data. However, the interpretation of the relevant performance measure is different. In (A) only the training data is subject to the drift, but the classifier is evaluated with respect to an idealized static situation representing a fixed target. On the contrary, the tracking error in (B) is thought to be computed with respect to test data available from the stream, at the given time. Alternatively, one could interpret (B) as an example of real drift with a non-stationary target, where \(\epsilon _{{\rm track}}\) represents the corresponding generalization error. However, we will refer to (A) and (B) as virtual drift throughout the following.
Real drift in regression
In the presented framework, a real drift can be modelled as a process which displaces the characteristic vectors \({\bf B}_{1,2}\), i.e., the cluster centers in LVQ or the teacher weight vectors in the SCM. Here we focus on the latter case and refer the reader to [47] for earlier results on LVQ training under real drift.
A variety of time dependences could be considered in the model. We restrict ourselves to the analysis of diffusion-like random displacements of vectors \({\bf B}_{1,2} (\mu )\) at each time step. Upon presentation of example \(\mu\), we assume that random vectors \({\bf B}_{1,2}(\mu )\) are generated which satisfy the conditions
$$\begin{aligned}&{\bf B}_1(\mu ) \cdot {\bf B}_1(\mu \!-\!1) = {\bf B}_2(\mu ) \cdot {\bf B}_2(\mu \!-\!1) = \left( 1 - {\delta }/{N}\right) \nonumber \\&{\bf B}_1(\mu )\cdot {\bf B}_2(\mu )= 0 \text{ and } \mid {\bf B}_1(\mu )\mid ^2 = \mid {\bf B}_2(\mu )\mid ^2 = 1. \end{aligned}$$
Here \(\delta\) quantifies the strength of the drift process. The displacement of the teacher vectors is very small in an individual training step. For simplicity we assume that the orthonormality of the teacher vectors is preserved under the drift. In continuous time \(\alpha =\mu /N\), the drift parameter defines a characteristic scale \(1/\delta\) on which the overlap of the current teacher vectors with their initial positions decays: \({\bf B}_{m}(\mu )\cdot {\bf B}_{m}(0)\, = \exp [-\delta \, \mu /N ].\)
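For finite-dimensional simulations, one way to realize displacements consistent with Eq. (32) is to mix each teacher vector with a small random direction orthogonal to it and then restore the orthonormality of the pair. The following is only a sketch under these assumptions; it is not claimed to be the procedure used in the original simulations:

```python
import numpy as np

def drift_step(B, delta, rng):
    """One diffusion-like displacement of the teacher vectors, cf. Eq. (32).
    B : float array of shape (2, N) with orthonormal rows; delta : drift strength."""
    N = B.shape[1]
    c = 1.0 - delta / N                       # target overlap B_m(μ)·B_m(μ-1)
    for m in range(B.shape[0]):
        noise = rng.standard_normal(N)
        noise -= (noise @ B[m]) * B[m]        # keep only the part orthogonal to B_m
        noise /= np.linalg.norm(noise)
        B[m] = c * B[m] + np.sqrt(1.0 - c**2) * noise
    # restore mutual orthonormality of the pair (Gram-Schmidt)
    B[0] /= np.linalg.norm(B[0])
    B[1] -= (B[1] @ B[0]) * B[0]
    B[1] /= np.linalg.norm(B[1])
    return B
```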
The effect of such a drift process is easily taken into account in the formalism: For a particular student \({\bf w}_i\in \mathbb {R}^N\) we obtain [6, 7, 28, 50]
$$\begin{aligned} \left[ {\bf w}_i\cdot {\bf B}_k(\mu )\right] = \left( 1- {\delta }/{N}\right) \, \left[ {\bf w}_i\cdot {\bf B}_k(\mu -1)\right] . \end{aligned}$$
under the above-specified random displacement. Hence, the drift tends to decrease the quantities \(R_{ik}\), which clearly reduces the success of training compared with the case of stationary teachers. In the limit \(N\rightarrow \infty\), the corresponding ODE under the drift process (32) become
$$\begin{aligned}&\left[ {{\rm d}R_{{\rm im}}}/{{\rm d}\alpha } \right] _{{\rm drift}} \, = \, \left[ {{\rm d}R_{{\rm im}}}/{{\rm d}\alpha } \right] _{{\rm stat}} \, - \delta \, R_{{\rm im}} \text{ and } \nonumber \\&\left[ {{\rm d}Q_{ik}}/{{\rm d}\alpha }\right] _{{\rm drift}} = \left[ {{\rm d}Q_{ik}}/{{\rm d}\alpha }\right] _{{\rm stat}} \end{aligned}$$
with the terms \(\left[ \cdots \right] _{{\rm stat}}\) for stationary environments taken from Eq. (20). Note that now order parameters \(R_{{\rm im}}(\alpha )\) correspond to the inner products \({\bf w}_i^\mu \cdot {\bf B}_m(\alpha )\), as the teacher vectors themselves are time-dependent.
Weight decay
Possible motivations for the introduction of so-called weight decay in machine learning systems range from regularization as to reduce the risk of over-fitting in regression and classification [15, 22, 23] to the modelling of forgetful memories in attractor neural networks [24, 37].
Here we include weight decay as to enforce explicit forgetting and to potentially improve the performance of the systems in the presence of real concept drift. We consider the multiplication of all adaptive vectors by a factor \((1-\gamma /N)\) before the generic learning step given by \(\Delta {\bf w}_i^\mu\) in Eq. (2) or Eq. (10) is performed:
$$\begin{aligned} {\bf w}_i^\mu \, = \, \left( 1-{\gamma }/{N}\right) \, {\bf w}_i^{\mu -1} \, + {\eta }/{N} \, \Delta {\bf w}_i^\mu . \end{aligned}$$
Since the multiplications with \(\left( 1-\gamma /N\right)\) accumulate in the course of training, weight decay enforces an increased influence of the most recent training data as compared to earlier examples. Note that analogous modifications of perceptron training under concept drift have been discussed in [6, 7, 28, 50].
In the thermodynamic limit \(N\rightarrow \infty\), the modified ODE for training under real drift, cf. Eq. (32), and weight decay, Eq. (35), are obtained as
$$\begin{aligned}&\left[ {{\rm d}R_{{\rm im}}}/{{\rm d}\alpha } \right] _{{\rm decay}} = \left[ {{\rm d}R_{{\rm im}}}/{{\rm d}\alpha } \right] _{{\rm stat}} - (\delta +\gamma ) R_{{\rm im}} \text{ and } \nonumber \\&\left[ {{\rm d}Q_{ik}}/{{\rm d}\alpha }\right] _{{\rm decay}} \, = \left[ {{\rm d}Q_{ik}}/{{\rm d}\alpha }\right] _{{\rm stat}} - 2\, \gamma \,Q_{ik} \end{aligned}$$
where the terms for stationary environments in absence of weight decay are given in Eq. (20).
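Taken together, the microscopic update of Eq. (35) and the macroscopic modification of Eq. (36) amount to very small changes of the formalism. A hedged sketch; the callables F_stat and G_stat stand for the stationary right-hand sides of Eq. (20) or (22), which are not spelled out here:

```python
def training_step_with_decay(w, delta_w, eta, gamma, N):
    """Microscopic update with weight decay, Eq. (35): w ← (1-γ/N) w + (η/N) Δw."""
    return (1.0 - gamma / N) * w + (eta / N) * delta_w

def rhs_drift_decay(F_stat, G_stat, R, Q, delta, gamma):
    """Drift- and decay-modified right-hand sides, Eqs. (34, 36):
    dR/dα acquires the extra term -(δ+γ) R, dQ/dα the term -2γ Q."""
    dR = F_stat(R, Q) - (delta + gamma) * R
    dQ = G_stat(R, Q) - 2.0 * gamma * Q
    return dR, dQ
```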
Here we present and discuss our results obtained by integrating the systems of ODE with and without weight decay under different time-dependent drifts. For comparison, averaged learning curves obtained by means of Monte Carlo simulations are also shown. These simulations of the actual training process provide an independent confirmation of the ODE-based description and demonstrate the relevance of results obtained in the thermodynamic limit \(N\rightarrow \infty\) for relatively small, finite systems.
Fig. 1 LVQ1 in the presence of a concept drift with linearly increasing \(p_1(\alpha )\) given by \(\alpha _o\!=\!20\), \(\alpha _{{\rm end}}\!=\!200\), \(p_{{\rm max}}\!=\!0.8\) in (38). Solid lines correspond to the integration of ODE with initialization as in Eq. (37). We set \(v_{1,2}\!=\!0.4\) and \(\lambda =1\) in the density (4). The upper graph corresponds to LVQ1 without weight decay, the lower graph displays results for \(\gamma =0.05\) in Eq. (35). In addition, Monte Carlo results for \(N=100\) are shown: class-wise errors \(\epsilon ^{1,2}(\alpha )\) are displayed as downward (upward) triangles, respectively; squares mark the reference error \(\epsilon _{{\rm ref}}(\alpha );\) circles correspond to \(\epsilon _{{\rm track}}(\alpha )\), cf. Eqs. (30, 31)
Virtual drift in LVQ training
All results presented in the following are for constant learning rate \(\eta =1\) in the LVQ training. The results remain qualitatively the same for a range of learning rates. LVQ prototypes were initialized as normalized independent random vectors without prior knowledge:
$$\begin{aligned} Q_{11}(0)=Q_{22}(0)=1, \, Q_{12}(0)=0, \text{ and } R_{ik}(0)=0. \end{aligned}$$
We study three specific scenarios for the time dependence \(p_1(\alpha )=1\!-p_2(\alpha )\) as detailed in the following.
Linear increase of the bias
Here we consider a time-dependent bias of the form \(p_1(\alpha ) = 1/2 \text{ for } \alpha <\alpha _o\) and
$$\begin{aligned} p_1(\alpha ) = \frac{1}{2} + \frac{(p_{{\rm max}}\!-\!1/2) \, (\alpha -\alpha _o)}{(\alpha _{{\rm end}}-\alpha _o)} \text{ for } \alpha \ge \alpha _o. \end{aligned}$$
where the maximum class weight \(p_1=p_{{\rm max}}\) is reached at learning time \(\alpha _{{\rm end}}\). Figure 1 shows the learning curves as obtained by numerical integration of the ODE together with Monte Carlo simulation results for \((N=100)\)-dimensional inputs and prototype vectors. As an example we set the parameters to \(\alpha _o=25, p_{{\rm max}}=0.8, \alpha _{{\rm end}}=200\). The learning curves are displayed for LVQ1 without weight decay (upper) and with \(\gamma =0.05\) (lower panel). Simulations show excellent agreement with the ODE results.
The system adapts to the increasing imbalance of the training data, as reflected by a decrease (increase) of the class-wise error for the over-represented (under-represented) class, respectively. The weighted overall error \(\epsilon _{{\rm track}}\) also decreases, i.e., the presence of class bias facilitates smaller total generalization error, see [12]. The performance with respect to unbiased reference data deteriorates slightly, i.e., \(\epsilon _{{\rm ref}}\) grows with increasing class bias as the training data represents the target less faithfully.
Sudden change of the class bias
Here we consider an instantaneous switch from low bias \(p_1(\alpha )= 1-p_{{\rm max}}\) for \(\alpha \le \alpha _o\) to high bias
$$\begin{aligned} p_1(\alpha ) = \left\{ \begin{array}{ll} 1 -p_{{\rm max}} &{} \text{ for } \alpha \le \alpha _o. \\ p_{{\rm max}}>1/2 &{} \text{ for } \alpha > \alpha _o. \end{array} \right. \end{aligned}$$
We consider \(p_{{\rm max}}=0.75\) as an example; the corresponding results from the integration of the ODE and from Monte Carlo simulations are shown in Fig. 2 for training without weight decay (upper) and for \(\gamma =0.05\) (lower panel).
Fig. 2 LVQ1 in the presence of a concept drift with a sudden change of class weights according to Eq. (39) with \(\alpha _o=100\) and \(p_{{\rm max}}=0.75\). Only the \(\alpha\)-range close to \(\alpha _o\) is shown. All other details are provided in Fig. 1
We observe similar effects as for the slow, linear time dependence: The system reacts rapidly with respect to the class-wise errors and the tracking error \(\epsilon _{{\rm track}}\) maintains a relatively low value. Also, the reference error \(\epsilon _{{\rm ref}}\) displays robustness with respect to the sudden change of \(p_1\). Weight decay, as can be seen in the lower panel of Fig. 2, reduces the overall sensitivity to the bias and its change: Class-wise errors are more balanced and the weighted \(\epsilon _{{\rm track}}\) slightly increases compared to the setting with \(\gamma =0\).
Periodic time dependence
As a third scenario we consider oscillatory modulations of the class weights during training:
$$\begin{aligned} p_1(\alpha ) = 1/2 +\left( p_{{\rm max}}-1/2\right) \, \cos \left( 2\pi \, {\alpha }\big /{T} \right) \end{aligned}$$
with periodicity T on \(\alpha\)-scale and maximum amplitude \(p_{{\rm max}} <1\). Example results are shown in Fig. 3 for \(T=50\) and \(p_{{\rm max}}=0.8\). Monte Carlo results for \(N=100\) are only displayed for the class-wise errors, for the sake of clarity. They show excellent agreement with the numerical integration of the ODE for training without weight decay (upper panel) and for \(\gamma =0.05\) (lower panel). These results confirm our findings for slow and sudden changes of the prior weights: Weight decay limits the flexibility of the LVQ system to react to the presence of a bias and its time dependence.
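The three prior schedules of Eqs. (38–40) are readily implemented as explicit functions of the learning time. The following sketch uses the parameter values quoted above; clamping the linear schedule at \(p_{{\rm max}}\) beyond \(\alpha _{{\rm end}}\) is an assumption on our part:

```python
from math import cos, pi

def p1_linear(alpha, alpha_o=20.0, alpha_end=200.0, p_max=0.8):
    """Linearly increasing class weight, Eq. (38); clamped at p_max for α > α_end."""
    if alpha < alpha_o:
        return 0.5
    return min(p_max, 0.5 + (p_max - 0.5) * (alpha - alpha_o) / (alpha_end - alpha_o))

def p1_sudden(alpha, alpha_o=100.0, p_max=0.75):
    """Sudden switch of the class weights, Eq. (39)."""
    return (1.0 - p_max) if alpha <= alpha_o else p_max

def p1_periodic(alpha, T=50.0, p_max=0.8):
    """Oscillating class weights, Eq. (40)."""
    return 0.5 + (p_max - 0.5) * cos(2.0 * pi * alpha / T)
```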
Fig. 3 LVQ1 in the presence of oscillating class weights according to Eq. (40) with parameters \(T=50\) and \(p_{{\rm max}}=0.8\), without weight decay \(\gamma =0\) (upper graph) and for \(\gamma =0.05\) (lower). For clarity, Monte Carlo results are only shown for the class-conditional errors \(\epsilon ^1\) (downward) and \(\epsilon ^2\) (upward triangles). All other details are given in Fig. 1
Discussion: LVQ under virtual drift
Our results for the different realizations of time-dependent class weights show that Learning Vector Quantization can cope with this form of drift to a certain extent. By design, standard incremental updates like the classical LVQ1 allow the prototypes to adjust to the changing statistics of the data. This has been shown in [47] for the actual drift of the cluster centers in the model density. Here we show that LVQ1 can also cope with the virtual drift processes.
In analogy to our findings in [47], one might have expected improved performance when introducing weight decay as a mechanism of forgetting. As we demonstrate, however, weight decay does not have a very strong effect on the system's reaction to changing prior class weights. Essentially, it restricts the norm of the prototypes, i.e., the possible offset of the decision boundary from the origin, and thus hinders shifts of the decision boundary by prototype displacement. This reduces the overall influence of class bias and its time dependence. As a consequence, the tracking error slightly increases for \(\gamma >0\), in general, while the error \(\epsilon _{{\rm ref}}\) with respect to the reference density decreases compared to training without weight decay.
A clear beneficial effect of forgetting previous information in favor of the most recent examples cannot be confirmed. The reaction of the learning system to sudden or oscillatory changes of the priors also remains essentially unchanged when introducing weight decay.
Results: SCM regression under real drift
Here we present the results concerning the SCM student–teacher scenario with \(K=M=2\) under real concept drift, i.e., random displacements of the teacher vectors as introduced in Sect. 2.4.2. Unlike LVQ for classification, gradient descent-based training of a regression system is expected to be much more sensitive to the choice of the learning rate. Here, we restrict the discussion to the well-defined limit of small learning rates, \(\eta \rightarrow 0\) and \(\alpha \rightarrow \infty\) with \(\widetilde{\alpha } = \eta \alpha = \mathcal{O}(1),\) see the discussion before Eq. (21). In the corresponding Monte Carlo simulations, cf. Fig. 4a, b, we employed a small learning rate \(\eta =0.05\) which yielded very good agreement.
Fig. 4 The learning performance under concept drift in terms of the generalization error as a function of the learning time \(\widetilde{\alpha }\). Dots correspond to 10 runs of Monte Carlo simulations with \(N=500\), \(\eta =0.05\) and initial conditions as in Eq. (41). Solid lines show ODE integrations. a Erf SCM. From bottom to top, the curves correspond to the levels of target drift \(\widetilde{\delta }=\{0,0.01,0.02,0.05\}\). b ReLU SCM. From bottom to top, the levels of target drift are: \(\widetilde{\delta }=\{0,0.05,0.1,0.3\}\)
Already in the absence of concept drift, the model displays non-trivial effects as shown, for instance, in [9, 44, 45]. Perhaps the most thoroughly studied phenomenon in the SCM training process is the existence of quasi-stationary plateaus in the evolution of the order parameters and the generalization error. In the most clear-cut cases, they correspond to approximately symmetric configurations of the student network with respect to the teacher network, i.e., \(R_{{\rm im}} \approx R\) for all i, m. In such a state, all student units have acquired the same, limited knowledge of the target rule. Hence, the generalization error in the plateau is sub-optimal. In terms of Eq. (20), plateaus correspond to weakly repulsive fixed points of the ODE system. One can show in the case of orthonormal teacher units and for small learning rates that a symmetric fixed point with \(R_{{\rm im}}=R\) and the associated plateau state always exists; see, e.g., [45]. In order to achieve a further decrease of the generalization error, the symmetry of the student with respect to the teacher units has to be broken by specialization: Each student weight vector \({\bf w}_{1,2}\) has to represent a specific teacher unit and \(R_{i1} \ne R_{i2}\) is required for successful learning.
Our recent comparison of Erf-SCM and ReLU-SCM revealed interesting differences even in absence of concept drift [46]. For instance, in the Erf-SCM, student vectors are nearly identical in the symmetric plateau with \(Q_{ik} \approx Q\) for all \(i,k \in \{1,2\}.\) On the contrary, in ReLU systems the student weights are not aligned in the quasi-stationary state: \(Q_{ii}=Q\) and \(Q_{12}<Q\) [46].
ODE and Monte Carlo simulations
Here, we investigate and compare the learning dynamics of networks with Erf- and ReLU-activation under concept drift and in the presence of weight decay. To this end we study the models by numerical integration of the corresponding ODE and, in addition, by Monte Carlo simulations.
We study training processes in absence of prior knowledge in the student. In the following we consider exemplary initial conditions with
$$\begin{aligned} R_{{\rm im}}(0)&=0, \\ Q_{11}(0)&=Q_{22}(0)=0.5, \\ Q_{12}(0)&=0.49\, \end{aligned}$$
which correspond to almost identical \(\mathbf {w}_1(0)\) and \(\mathbf {w}_2(0),\) which are both orthogonal to the teacher vectors. Note that the initial norm of the student vectors and their mutual overlap \(Q_{12}(0)\) can be set arbitrarily in practice.
For the networks with two hidden units we define the quantity \(S_i(\alpha )=|R_{i1}(\alpha ) - R_{i2}(\alpha )|\) as the specialization of student units \(i=1,2\). In the plateau state, \(S_i(\alpha ) \approx 0\) for an extended amount of training time, while an increasing value of \(S_i(\alpha )\) indicates the specialization of the unit. In practice, one expects that initially \(R_{{\rm im}}(0) \approx 0\) for all i, m if no prior information is available about the target rule. Hence, the student specialization \(S_i(0) = |R_{i1}(0) - R_{i2}(0)|\) is also small, initially.
The unspecialized plateau can dominate the learning process and, consequently, its length is a quantity of significant interest. Quite generally, it is governed by the repulsive properties of the relevant fixed point of the ODE system and depends logarithmically on the magnitude of the initial specialization \(S_i(0)\), see [9] for a detailed discussion. In simulations for large N, a random initialization of student vectors would result in overlaps \(R_{{\rm im}}(0)=\mathcal{O}(1/\sqrt{N})\) with the teacher vectors, which also implies that \(S_i(0)=\mathcal{O}(1/\sqrt{N}).\) The accurate extrapolation of simulation results for \(N\rightarrow \infty\) is complicated by this interplay of finite size effects and initial specialization which governs the escape from the plateau states [9]. Due to fluctuations in a finite system, plateaus are typically left earlier than predicted by the theory for \(N\rightarrow \infty\). Here we focus on the performance achieved in the plateau states and resort to a simpler strategy: The values of the order parameters observed at \(\widetilde{\alpha }=0.05\) in the Monte Carlo simulation are used as initial values for the numerical integration of the ODE. This does not necessarily warrant a one-to-one correspondence of the precise shape and length of the plateau states. However, the comparison shows excellent qualitative agreement and allows for the quantitative comparison of the performance in the quasi-stationary and final states.
We have studied the Erf-SCM and the ReLU-SCM under concept drift, Eq. (32), and weight decay, Eq. (35), in the limit of small learning rates \(\eta \rightarrow 0\). We resorted to this simplifying limit as the term \(G_{ik}^{(2)}\) in Eq. (24) could not be obtained analytically for the ReLU-SCM. However, non-trivial results can be achieved in terms of the rescaled training time \(\widetilde{\alpha }\) in the limit (21). Hence we integrate the ODE provided in Eq. (22), combined with the drift and weight decay terms from Eqs. (34, 36) that also have to be scaled with \(\eta\) in this case: \(\widetilde{\delta } = \eta \delta\), \(\widetilde{\gamma } = \eta \gamma\). In addition to the numerical integration we have performed and averaged over 10 independent runs of Monte Carlo simulations with system size \(N=500\) and small but finite learning rate \(\eta =0.05\).
Learning curves under concept drift
Figure 4 shows the learning curves \(\epsilon _g (\widetilde{\alpha })\) as results of the averaged Monte Carlo simulations and the ODE integration for different strengths \(\widetilde{\delta }\) of concept drift with no weight decay (\(\widetilde{\gamma }=0\)). The left and right panel corresponds to Erf- and ReLU-SCM, respectively.
Apart from deviations in terms of the plateau lengths, simulations and the numerical integration of the ODE show very good agreement. In particular, the generalization error in the plateau and final states nearly coincides. As outlined in Sect. 3.2.1, the actual length of plateaus in simulations depends on subtle details [9] which were not addressed here.
Note also that a direct, quantitative comparison of Erf- and ReLU-SCM in terms of training times \(\widetilde{\alpha }\) is not meaningful. For instance, it seems tempting to conclude that the ReLU-SCM exhibits shorter plateau states for the same network size and training conditions. However, one has to take into account that the activation functions influence the complexity of the input-output relation of the network in a non-trivial way.
From the behavior of the learning curves for increasing strengths \(\widetilde{\delta }\), several impeding effects of the drift can be identified: The generalization error in the unspecialized plateau and in the final state for large \(\widetilde{\alpha }\) increase with \(\widetilde{\delta }\). At the same time, the plateau lengths increase. These effects are observed for both types of activation function. More specifically, the behavior for small \(\widetilde{\delta }\) is close to the stationary setting with \(\widetilde{\delta }=0\): A rapid initial decrease of the generalization error is followed by the quasi-stationary plateau state that persists for a relatively long training time. Eventually, the system escapes from the plateau and improved generalization performance becomes possible. Despite the matching complexity of student and teacher, perfect generalization cannot be achieved in the presence of on-going concept drift.
We note that the stronger the drift, the smaller is the difference between the performance in the plateau and the final state. For very large values of \(\widetilde{\delta }\) both versions of the SCM cannot escape the plateau state anymore as it corresponds to a stable fixed point of the ODE.
In the following we discuss for both activation functions the effect of concept drift on the plateau- and final generalization error in greater detail. The influence of weight decay on the dynamics is also presented.
Fig. 5 Erf-SCM: Generalization error under concept drift in unspecialized plateau states (dashed lines) and final states (solid) of the learning process. a Plateau- and final generalization error for an increasing strength \(\widetilde{\delta }\) of the target drift. Here, weight decay is not applied: \(\widetilde{\gamma }=0\). For \(\widetilde{\delta }>\widetilde{\delta }_c\) as marked by the vertical line, the curves merge. b The plateau- and final generalization error as a function of the weight decay parameter \(\widetilde{\gamma }\) for a fixed level of real target drift, here: \(\widetilde{\delta }=0.03\). The curves merge for \(\widetilde{\gamma }>\widetilde{\gamma }_c\), as marked by the vertical line. The lower panels show the observed plateau lengths as a function of \(\widetilde{\delta }\) for \(\widetilde{\gamma }=0\) (c) and as a function of \(\widetilde{\gamma }\) for fixed \(\widetilde{\delta }=0.03\) (d), respectively
Erf-SCM under drift and weight decay
Figure 5a displays the effect of the drift strength \(\widetilde{\delta }\) on the generalization error in the unspecialized plateau state and in the final state for \(\widetilde{\alpha }\rightarrow \infty\), i.e., \(\epsilon _g^p(\widetilde{\delta })\) and \(\epsilon _g^\infty (\widetilde{\delta }),\) respectively. As mentioned above, weak drifts still allow for student specialization with improved performance in the final state for large \(\widetilde{\alpha }\). However, increasing the drift strength results in a decrease of the difference \(|\epsilon _g^\infty (\widetilde{\delta }) - \epsilon _g^p(\widetilde{\delta })|.\) We have marked the value of \(\widetilde{\delta }\), above which the plateau becomes the stable final state for \(\widetilde{\alpha }\rightarrow \infty\) in the figure and refer to it as \(\widetilde{\delta }_c\).
Interestingly, in a small range of the drift parameter, \(0.036< \widetilde{\delta } < 0.061\), the final performance is actually worse than in the plateau with \(\epsilon _g^\infty (\widetilde{\delta }) > \epsilon _g^p(\widetilde{\delta })\). Since \(\epsilon _g\) depends explicitly also on the \(Q_{ik}\), it is possible for an unspecialized state with \(R_{{\rm im}}=R\) to generalize better than a slightly specialized configuration with unfavorable values of the student norms and mutual overlaps.
Figure 5c shows the effect of the drift on the plateau length. The start and end of the plateau are defined as
$$\begin{aligned} \widetilde{\alpha }_0&= \min \{ \widetilde{\alpha } \, | \, \epsilon _g^p - 10^{-4}< \epsilon _g(\widetilde{\alpha }) < \epsilon _g^p + 10^{-4} \} \nonumber \\ \widetilde{\alpha }_P&= \min \{ \widetilde{\alpha } \, | \, S_i(\widetilde{\alpha }) \ge 0.2 \, S_i(\widetilde{\alpha } \rightarrow \infty ) \} \, . \end{aligned}$$
Here, \(S_i(\widetilde{\alpha }\rightarrow \infty )\) represents the final specialization that is achieved by the system for large training times. \((\widetilde{\alpha }_P - \widetilde{\alpha }_0)\) is used as a measure of the plateau length.
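Given a learning curve \(\epsilon _g(\widetilde{\alpha })\) and the specializations \(S_i(\widetilde{\alpha })\) on a discrete grid, the plateau length can be read off directly from these definitions. A small sketch, under the assumption that both the plateau and the final specialization are actually reached within the recorded range:

```python
import numpy as np

def plateau_length(alpha, eps_g, S, eps_p, tol=1e-4, frac=0.2):
    """Plateau length (α_P - α_0) according to the definitions above.
    alpha, eps_g, S : 1d arrays sampled along the learning curve;
    eps_p : plateau value of the generalization error."""
    in_plateau = np.abs(eps_g - eps_p) < tol
    specialized = S >= frac * S[-1]          # S[-1] approximates S(α → ∞)
    alpha_0 = alpha[np.argmax(in_plateau)]   # first α inside the plateau band
    alpha_P = alpha[np.argmax(specialized)]  # first α with significant specialization
    return alpha_P - alpha_0
```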
In the weak drift regime, the plateau length increases slowly with \(\widetilde{\delta }\) as shown in panel (c) for \(\widetilde{\gamma }=0\). It eventually diverges as \(\widetilde{\delta }\) approaches \(\widetilde{\delta }_c\) from Fig. 5a.
The dependence of \(\epsilon _g^p\) and \(\epsilon _g^\infty\) on the weight decay parameter \(\widetilde{\gamma }\) is shown in Fig. 5b. We observe improved performance for a small amount of weight decay compared to the absence of weight decay (\(\widetilde{\gamma } = 0\)). However, the system is quite sensitive to the actual setting of \(\widetilde{\gamma }\): Values slightly larger than the optimum quickly deteriorate the ability to improve upon the plateau generalization error. The value of \(\widetilde{\gamma }\), above which the plateau- and final generalization error coincide, has been marked in the figure and we refer to it as \(\widetilde{\gamma }_c\).
Figure 5d shows the effect of the weight decay on the plateau length in the same setting as in Fig. 5b. Introducing a weight decay always extends the plateau length. For small \(\widetilde{\gamma }\) the plateau length grows slowly and diverges as \(\widetilde{\gamma }\) approaches \(\widetilde{\gamma }_c\) from Fig. 5b.
Fig. 6 ReLU-SCM: Generalization error under concept drift in unspecialized plateau states (dashed lines) and final states (solid), as a function of the drift strength (a) and weight decay (b). In (b), \(\widetilde{\delta }=0.2\). The drift strength \(\widetilde{\delta }_c\) above which the curves merge is marked in (a) and similar for weight decay \(\widetilde{\gamma }_c\) in (b). The lower panels show the observed plateau lengths as a function of \(\widetilde{\delta }\) for \(\widetilde{\gamma }=0\) (c) and as a function of \(\widetilde{\gamma }\) for fixed \(\widetilde{\delta }=0.2\) (d), respectively
ReLU-SCM under drift and weight decay
The effect of the strength of the drift on the generalization error in the unspecialized plateau state and in the final state is displayed in Fig. 6a. The picture is similar to the Erf-SCM: an increase in the drift strength causes an increase in the plateau- and final generalization error. We have marked in the figure the drift strength \(\widetilde{\delta }_c\) at which there is no further change in performance from the plateau. In contrast to the Erf-SCM, there is no range of \(\widetilde{\delta }\) for which the ReLU-SCM generalization error increases after leaving the plateau.
Figure 6c shows the effect of the strength of the drift on the plateau length. Here too, a similar dependence is observed as for the Erf-SCM: For weaker drifts, the plateau length grows slowly; it diverges as the drift strength approaches \(\widetilde{\delta }_c\) from Fig. 6a.
Figure 6b shows the effect of the amount of weight decay on the plateau- and final generalization error in a concept drift situation. A small amount of weight decay can improve the generalization error compared to no weight decay (\(\widetilde{\gamma }=0\)). Compared to the Erf-SCM, the ReLU-SCM is much more robust with respect to the weight decay parameter in terms of its ability to improve from the plateau value: For high amounts of weight decay, an escape from the plateau to better performance can still be observed. The value \(\widetilde{\gamma }_c\), above which the plateau- and final generalization error coincide, has been marked in the figure.
Figure 6d shows the effect of the amount of weight decay on the plateau length in the same concept drift situation as in Fig. 6b. The plateau is shortened significantly in the lower range of weight decay values, the same range that also improves the final generalization error as observed in Fig. 6b. The plateau length increases again for very high levels of weight decay and diverges as \(\widetilde{\gamma }\) approaches the \(\widetilde{\gamma }_c\) from Fig. 6b.
Discussion: SCM regression under real drift
As was already discussed, the symmetric plateau corresponds to states where the student units have all learned the same limited and general knowledge about the teacher units, i.e., \(R_{ij} \approx R\), and therefore the specialization of each student unit i is small: \(S_i(\widetilde{\alpha }) = |R_{i1}(\widetilde{\alpha }) - R_{i2}(\widetilde{\alpha })| \approx 0\). Eventually, the symmetry is broken by the onset of specialization, when \(S_i(\widetilde{\alpha })\) increases for each student unit i. In stationary learnable situations with \(K=M\), the student units eventually acquire full overlap with the teacher units in the course of learning: \(S_i = 1\) for all student units i. In this configuration, the target rule has been fully learned and therefore the generalization error is zero. In our modelled concept drift, the teacher vectors change continuously. This reduces the overlaps the student units can achieve with the teacher units, which increases the generalization error in the plateau state and the final state.
Identifying the specific teacher vectors is more difficult than learning the general structure of the teacher: Hence, increasing the drift causes the final generalization error to deteriorate faster than the plateau generalization error. For very strong target drift, the teacher vectors change too fast for specialization to be possible. We have identified the strength of the drift above which any kind of specialization is impossible for both SCM by studying the properties of the fixed point of the ODE. In stationary situations, one eigenvalue of the linearized dynamics near the fixed point is positive and drives the repulsion away from the fixed point towards specialization. We refer to this positive eigenvalue as \(\lambda _s\). The eigenvalue decreases linearly with the drift strength: For small \(\widetilde{\delta }\), \(\lambda _s\) is still positive and therefore an escape from the plateau is observed. However, \(\lambda _s\) is negative for \(\widetilde{\delta } > \widetilde{\delta }_c\); the symmetric fixed point is then stable and specialization becomes impossible. For the Erf-SCM, \(\widetilde{\delta }_c \approx 0.0615\) and for the ReLU-SCM \(\widetilde{\delta }_c \approx 0.225\). The weaker repulsion of the fixed point for stronger drift causes the plateau length to grow for \(\widetilde{\delta } \rightarrow \widetilde{\delta }_c\). In practice, this implies that higher training effort is necessary the stronger the concept drift is.
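The relevant eigenvalue \(\lambda _s\) can be obtained numerically by linearizing the ODE system at the unspecialized fixed point. A generic finite-difference sketch, assuming a user-supplied callable that returns the right-hand side of the ODE for a flattened order-parameter vector:

```python
import numpy as np

def specialization_eigenvalue(rhs, x_star, h=1e-6):
    """Largest real part of the eigenvalues of the Jacobian of rhs at the
    fixed point x_star; a positive value signals repulsion from the
    unspecialized plateau, i.e., the possibility of specialization."""
    n = x_star.size
    J = np.zeros((n, n))
    f0 = rhs(x_star)
    for j in range(n):
        x = x_star.copy()
        x[j] += h
        J[:, j] = (rhs(x) - f0) / h   # finite-difference column of the Jacobian
    return np.max(np.linalg.eigvals(J).real)
```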
In the \(\widetilde{\alpha }\rightarrow \infty\) final state, the student tracks the drifting target rule. For \(\widetilde{\delta } \ll \widetilde{\delta }_c\), the student can achieve highly specialized states while tracking the teacher. The closer the drift strength is to \(\widetilde{\delta }_c\), the weaker is the specialization that can be achieved by the student while following the rapidly moving teacher vectors. For \(\widetilde{\delta } > \widetilde{\delta }_c\), the unspecialized student can only track the rule in terms of a simple approximation.
In the results of the Erf-SCM, a range of drift strength \(0.036< \widetilde{\delta } < \widetilde{\delta }_c\) was observed for which the final generalization error in the tracking state is worse than the plateau generalization error. Upon further inspection, this is due to the large values of \(Q_{11}\) and \(Q_{22}\) of the student vectors in the specialized regime. Hence, the effect can be prevented by introducing an appropriate weight decay.
Erf SCM versus ReLU SCM: weight decay in concept drift situations
Our results show that weight decay can improve the final generalization error in the specialized tracking state for both SCM. The suppression of the contributions of older and thus less representative data shows benefits in both systems.
However, from the result in Fig. 5b, we find that it is particularly important to tune the weight decay parameter for the Erf-SCM, since the specialization ability deteriorates quickly for values slightly off the optimum, as shown in the figure by the rapid increase in \(\epsilon _g^\infty\). This reflects a steep decrease of the largest eigenvalue \(\lambda _s\) in the ODE for the Erf-SCM with \(\widetilde{\gamma }\), which also causes the increase of the plateau length as observed in Fig. 5d. Already from \(\widetilde{\gamma }_c \approx 0.0255\), the eigenvalue \(\lambda _s\) becomes negative, and therefore the fixed point becomes an attractor.
We found a very different effect of weight decay on the performance of the ReLU-SCM. Not only is it able to improve the final generalization error in the tracking state as shown in Fig. 6b, but it also significantly reduces the plateau length in the lower range of weight decay. This reflects the increase of \(\lambda _s\) with the weight decay parameter in the fixed point of the ODE, which increases the repulsion from the unspecialized fixed point. Clearly, suppressing the contribution of older data is beneficial for the specialization ability of the ReLU-SCM. For larger values of \(\widetilde{\gamma },\) the plateau length increases, reflecting a decrease of \(\lambda _s\). However, specialization remains possible up to a rather high value of weight decay \(\widetilde{\gamma }_c \approx 1.125\). The greater robustness to weight decay with respect to specialization as shown in Fig. 6b is likely related to our previous findings in [46], which show that the ReLU student–teacher setup needs fewer examples to reach specialization. We hypothesize that the simple, piecewise linear nature of the ReLU activation makes it easier for the student to learn features of the target rule. Hence a relatively small window of recent examples can already facilitate a degree of specialization.
Summary and outlook
We have presented a mathematical framework in which to study the influence of concept drift systematically in model scenarios. We exemplified the use of the versatile approach in terms of models for the training of prototype-based classifiers (LVQ) and shallow neural networks for regression, respectively.
LVQ for classification under drift and weight decay
In all specific drift scenarios considered here, we observe that simple LVQ training can track the time-varying class bias to a non-trivial extent: In the interpretation of the results in terms of real drift, the class-conditional performance and the tracking error \(\epsilon _{{\rm track}}(\alpha )\) clearly reflect the time dependence of the prior weights. In general, the reference error \(\epsilon _{{\rm ref}}(\alpha )\) with respect to class-balanced test data, displays only little deterioration due to the drift in the training data. The main effect of introducing weight decay is a reduced overall sensitivity to bias in the training data: Figures 1, 2 and 3 display a decreased difference between the class-wise errors \(\epsilon ^{1,2}\) for \(\gamma >0\). Naïvely, one might have expected an improved tracking of the drift due to the imposed forgetting, resulting in, for instance, a more rapid reaction to the sudden change of bias in Eq. (39). However, such an improvement cannot be confirmed. This finding is in contrast to a recent study [47], in which we observe increased performance by weight decay for a particular real drift, i.e., the randomized displacement of cluster centers.
The precise influence of weight decay clearly depends on the geometry and relative position of the clusters. Its dominant effect, however, is the regularization of the LVQ system by reducing the norms of the prototype vectors. Consequently, the NPC classifier is less flexible to reflect class bias, which would require a significant offset of the prototypes and the decision boundary from the origin. This mitigates the influence of the bias and its time dependence, and it results in a more robust behavior of the employed error measures.
SCM for regression under drift and weight decay
On-line gradient descent learning in the SCM has proven able to cope with drifting concepts in regression: For weak drifts, the SCM still achieves significant specialization with respect to the drifting teacher vectors, although the required learning time increases with the strength of the drift. In practice, this results in higher training effort to reach beneficial states in the network. The drift constantly reduces the overlaps with the teacher vectors which deteriorates the performance. After reaching a specialized state, the network efficiently tracks the drifting target. However, in the presence of very strong drift, both versions of the SCM (with Erf- and ReLU-activation) lose their ability to specialize and as a consequence their generalization behavior remains poor.
We have shown that weight decay can improve the performance in the plateau and in the final tracking state. For the Erf-SCM, we found that there is a small range in which weight decay yields favorable performance while the network quickly loses the specialization ability for values outside this range. Therefore, in practice a careful tuning of the weight decay parameter would be required. The ReLU network showed greater robustness to the magnitude of the weight decay parameter and displayed a stronger tendency to specialize. Weight decay also reduced the plateau length significantly in the training of ReLU SCM. Hence, weight decay could speed up the training of ReLU networks in practical concept drift situations, achieving favorable weight configurations more efficiently. Also, the network performs well with a larger range of the weight decay parameter and does not require the careful tuning necessary for the Erf-SCM.
The presented modelling framework offers the possibility to extend the scope of our studies in several relevant directions. For instance, the formalism facilitates the consideration of more complex model scenarios. Greater values of K and M should be studied in both classification and regression. While we expect key results to carry over from \(K=M=2\), the greater complexity of the systems should result in richer dynamical behavior in detail. We will study if and how a mismatched number of prototypes further impedes the ability of LVQ systems to react appropriately to the presence of concept drift. The training of an SCM with \(K\ne M\) should be of considerable interest and will also be addressed in forthcoming studies. One might speculate that concept drift could enhance overfitting effects in over-sophisticated SCM with \(K>M\) hidden units. Ultimately, the characteristic robustness of the ReLU activation function to weight decay that we found should be studied in practical situations. Qualitative results are likely to carry over to similarly shaped activation functions, which will be verified in future work.
In a sense, the considered sigmoidal and ReLU activation functions are prototypical representatives of the most popular choices in machine learning practice. The extension to various modifications or significantly different transfer functions [18, 22] should provide additional valuable insights of practical relevance. Exact solutions to the averages that are necessary for the formulation of the learning dynamics in the thermodynamic limit may not be available for all activation functions. In such cases we can resort to approximation schemes and simulations.
The consideration of more complex input densities will also shed light on the practical relevance of our theoretical investigations. Recent work [21, 33] shows that the statistical physics-based investigation of machine learning processes can take into account realistic input densities, bridging the gap between the theoretical models and practical applications.
Our modeling framework can also be applied in the analysis of other types of drift or combinations thereof. Several virtual processes could readily be implemented in the model of LVQ training: time-dependent characteristics of the input density could include the variances of the clusters or their relative position in feature space. A number of extensions is also possible in the regression model. For instance, teacher networks with time-dependent complexity could be studied by varying the mutual teacher overlaps \({\bf B}_{m}\cdot {\bf B}_n\) in the course of training.
Alternative mechanisms of forgetting beyond weight decay should be considered, which do not limit the flexibility of the trained systems as drastically. As one example strategy we intend to investigate the accumulation of additive noise in the training processes. We will also explore the parameter space of the model density in greater depth and study the influence of the learning rate systematically.
One of the major challenges in the field is the reliable detection of concept drift in a stream of data. Learning systems should be able to discriminate drift from static noise in the data and infer also the type of drift, e.g., virtual versus real. Moreover, the strength of the drift has to be estimated reliably in order to adjust the training prescription accordingly. It could be highly beneficial to extend our framework towards efficient drift detection and estimation procedures.
Ade R, Desmukh P (2013) Methods for incremental learning—a survey. Int J Data Min Knowl Manag Process 3(4):119–125
Ahr M, Biehl M, Urbanczik R (1999) Statistical physics and practical training of soft-committee machines. Eur Phys J B 10:583–588
Amunts K, Grandinetti L, Lippert T, Petkov N (eds) (2014) Brain-inspired computing, second international workshop brainComp 2015. LNCS, vol 10087. Springer, Berlin
Barkai N, Seung H, Sompolinsky H (1993) Scaling laws in learning of classification tasks. Phys Rev Lett 70(20):L97–L103
Biehl M, Caticha N (2003) The statistical mechanics of on-line learning and generalization. In: Arbib M (ed) The handbook of brain theory and neural networks. MIT Press, London, pp 1095–1098
Biehl M, Schwarze H (1992) On-line learning of a time-dependent rule. Europhys Lett 20:733–738
Biehl M, Schwarze H (1993) Learning drifting concepts with neural networks. J Phys A Math Gen 26:2651–2665
Biehl M, Schwarze H (1995) Learning by on-line gradient descent. J Phys A Math Gen 28:643–656
Biehl M, Riegler P, Wöhler C (1996) Transient dynamics of on-line learning in two-layered neural networks. J Phys A Math Gen 29:4769–4780
Biehl M, Freking A, Reents G (1997) Dynamics of on-line competitive learning. Europhys Lett 38:73–78
Biehl M, Schlösser E, Ahr M (1998) Phase transitions in soft-committee machines. Europhys Lett 44:261–266
Biehl M, Ghosh A, Hammer B (2007) Dynamics and generalization ability of LVQ algorithms. J Mach Learn Res 8:323–360
Biehl M, Hammer B, Villmann T (2016) Prototype-based models in machine learning. Wiley Interdiscipl Rev Cogn Sci 7(2):92–111. https://doi.org/10.1002/wcs.1378
Biehl M, Abadi F, Göpfert C, Hammer B (2020) Prototype-based classifiers in the presence of concept drift: a modelling framework. In: Vellido A, Gibert K, Angulo C, Martin Guerrero J (ed) 13th workshop on self-organizing maps and learning vector quantization, clustering and data visualization (WSOM 2019). Springer, Cham, Switzerland, Advances in Intelligent Systems and Computing, vol 976, pp 210–221
Bishop C (2006) Pattern recognition and machine learning. Springer, Berlin
Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2(4):303–314
Ditzler G, Roveri M, Alippi C, Polikar R (2015) Learning in nonstationary environments: a survey. Comput Intell Mag 10(4):12–25
Eger S, Youssef P, Gurevych I (2018) Is it Time to Swish? Comparing deep learning activation functions across NLP tasks. In: Proceedings of the 2018 conference on empirical methods in natural language processing, association for computational linguistics, Brussels, Belgium, pp 4415–4424
Engel A, van den Broeck C (2001) The statistical mechanics of learning. Cambridge University Press, Cambridge
Ghosh A, Biehl M, Hammer B (2006) Performance analysis of LVQ algorithms: a statistical physics approach. Neural Netw 19(6–7):817–829
Goldt S, Mézard M, Krzakala F, Zdeborová L (2020) Modeling the influence of data structure on learning in neural networks: the hidden manifold model. Phys Rev X 10(4):041044. https://doi.org/10.1103/PhysRevX.10.041044
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning: data mining, inference, and prediction. Springer, Berlin
van Hemmen J, Keller G, Kühn R (1987) Forgetful memories. Europhys Lett 5(7):663–668
Heusinger M, Raab C, Schleif FM (2020) Passive concept drift handling via momentum based robust soft learning vector quantization. In: Vellido A, Gibert K, Angulo C, Martin Guerrero J (ed) 13th workshop on self-organizing maps and learning vector quantization, clustering and data visualization (WSOM 2019). Springer, Cham, Advances in Intelligent Systems and Computing, vol 976, pp 200–209
Inoue M, Park H, Okada M (2003) On-line learning theory of soft committee machines with correlated hidden units—steepest gradient descent and natural gradient descent. J Phys Soc Jpn 72(4):805–810
Joshi J, Kulkarni P (2012) Incremental learning: areas and methods—a survey. Int J Data Min Knowl Manag Process 2(5):43–51
Kinouchi O, Caticha N (1993) Lower bounds on generalization errors for drifting rules. J Phys A Math Gen 26(22):6161–6172
Kohonen T (2001) Self-Organizing Maps. Springer Series in Information Sciences, vol 30, 2nd edn. Springer, Berlin
Kohonen T, Barna G, Chrisley R (1988) Statistical pattern recognition with neural network: benchmarking studies. In: Proceedings of the IEEE 2nd international conference on neural networks, San Diego, pp 61–68. IEEE
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Proceedings of the 25th international conference on neural information processing systems (NIPS) - vol 1. Curran Association Inc., USA, pp 1097–1105
Losing V, Hammer B, Wersing H (2017) Incremental on-line learning: a review and comparison of state of the art algorithms. Neurocomputing 275:1261–1274
Loureiro B, Gerbelot C, Cui H, Goldt S, Krzakala F, Mézard M, Zdeborová L (2021) Capturing the learning curves of generic features maps for realistic data sets with a teacher–student model. arxiv:2102.08127
Maas AL, Hannun AY, Ng AY (2013) Rectifier nonlinearities improve neural network acoustic models. In: Proceedings of the 30th ICML workshop on deep learning for audio, speech and language processing
Marangi C, Biehl M, Solla SA (1995) Supervised learning from clustered input examples. Euro Phys Lett 30:117–122
Meir R (1995) Empirical risk minimization versus maximum-likelihood estimation: a case study. Neural Comput 7(1):144–157
Mézard M, Nadal J, Toulouse G (1986) Solvable models of working memories. J Phys (Paris) 47(9):1457–1462
Nair V, Hinton G (2010) Rectified linear units improve restricted Boltzmann machines. In: Proceedings of 27th international conference on machine learning (ICML). Omni Press, USA, pp 807–814
Nova D, Estevez P (2014) A review of learning vector quantization classifiers. Neural Comput Appl 25(3–4):511–524
Ramachandran P, Zoph B, Le QV (2017) Searching for activation functions. ArXiv abs/1710.05941, Presented at sixth international conference on learning representations. ICLR 2018
Reents G, Urbanczik R (1998) Self-averaging and on-line learning. Phys Rev Lett 80(24):5445–5448
Riegler P, Biehl M (1995) On-line backpropagation in two-layered neural networks. J Phys A Math Gen 28:L507–L513
Saad D (ed) (1999) On-line learning in neural networks. Cambridge University Press, Cambridge
Saad D, Solla S (1995a) Exact solution for on-line learning in multilayer neural networks. Phys Rev Lett 74:4337–4340
Saad D, Solla S (1995b) On-Line learning in soft committee machines. Phys Rev E 52:4225–4243
Straat M, Biehl M (2019) On-line learning dynamics of RELU neural networks using statistical physics techniques. In: Verleysen M (ed) 27th European symposium on artificial neural networks (ESANN 2019), Ciaco-i6doc.com, p 6
Straat M, Abadi F, Göpfert C, Hammer B, Biehl M (2018) Statistical mechanics of on-line learning under concept drift. Entropy 20(10), art. No. 775
Vellido A, Gibert K, Angulo C, Martin Guerrero J (eds) (2019) 13th workshop on self-organizing maps and learning vector quantization, clustering and data visualization (WSOM 2019), Advances in intelligent systems and computing, vol 976. Springer, Cham
Vicente R, Caticha N (1997) Functional optimization of online algorithms in multilayer neural networks. J Phys A Math Gen 30:L599–L605
Vicente R, Caticha N (1998) Statistical mechanics of on-line learning of drifting concepts: a variational approach. Mach Learn 32(2):179–201
Wang L, Yoon KJ (2021) Knowledge distillation and student–teacher learning for visual intelligence: a review and new outlooks. IEEE Trans Pattern Anal Mach Intell. https://doi.org/10.1109/TPAMI.2021.3055564, early access
Wang S, Minku LL, Yao X (2017) A systematic study of online class imbalance learning with concept drift. CoRR abs/1703.06683. arxiv:1703.06683
Watkin T, Rau A, Biehl M (1993) The statistical mechanics of learning a rule. Rev Mod Phys 65(2):499–556
Witoelar A, Biehl M, Hammer B (2007) Learning vector quantization: generalization ability and dynamics of competing prototypes. In: Proceedings of 6th international workshop on self-organizing-maps (WSOM 2007), Univ. Bielefeld, Germany
Zliobaite I, Pechenizkiy M, Gama J (2016) An overview of concept drift applications. In: Big data analysis: new algorithms for a new society. Springer, Berlin
M.B. and M.S. acknowledge support through the Northern Netherlands Region of Smart Factories (RoSF) consortium, led by the Noordelijke Ontwikkelings en Investerings Maatschappij (NOM), The Netherlands, see http://www.rosf.nl. B.H. gratefully acknowledges funding by the Bundesministerium für Bildung und Forschung (BMBF) under Grant No. 01IS18041A.
Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Nijenborgh 9, 9747 AG, Groningen, The Netherlands
M. Straat, Z. Kan & M. Biehl
Institute of Engineering and Technology, Computing Science Department, Aksum University, Axum, Tigray, Ethiopia
F. Abadi
CITEC, Machine Learning Group, Bielefeld University, 33594, Bielefeld, Germany
C. Göpfert & B. Hammer
Correspondence to M. Biehl.
Straat, M., Abadi, F., Kan, Z. et al. Supervised learning in the presence of concept drift: a modelling framework. Neural Comput & Applic 34, 101–118 (2022). https://doi.org/10.1007/s00521-021-06035-1
Issue Date: January 2022
Keywords: Supervised learning, Drifting concepts
|
CommonCrawl
|
Proportion of explained variance in PCA and LDA
I have some basic questions regarding PCA (principal component analysis) and LDA (linear discriminant analysis):
In PCA there is a way to calculate the proportion of variance explained. Is it also possible for LDA? If so, how?
Is the "Proportion of trace" output from the lda function (in R MASS library) equivalent to the "proportion of variance explained"?
r variance pca discriminant-analysis
wrek
$\begingroup$ Your first question may be a duplicate of stats.stackexchange.com/questions/22569, where you can find answers. Presumably "LDA" means Linear Discriminant Analysis (it has other statistical meanings too, which is why we try to expand acronyms). $\endgroup$ – whuber♦ Aug 13 '13 at 21:45
$\begingroup$ In a sense, a discriminant accounts for a variability as a p. component does, the eigenvalue being the amount of it. However, the "variability" in LDA is of special sort - it is the ratio of between-class variabilty to the within-class variability. Each discriminant tries to account for as much as possible of that ratio. Read further $\endgroup$ – ttnphns Aug 14 '13 at 8:17
$\begingroup$ Thanks for the explanation. Therefore, if in the axes of the PC components I label them as "PC (X% of explained Variance)" what would be the correct short term when I label the LDs. Thanks again. $\endgroup$ – wrek Aug 14 '13 at 8:56
$\begingroup$ With LDA, the correct wording will be "LD (X% of explained between-group Variance)". $\endgroup$ – ttnphns Aug 14 '13 at 11:11
$\begingroup$ Thanks again for the great help and patience. BTW how can I access the Proportion of trace (LD1, LD2) as I wish to save them in two separate variables? $\endgroup$ – wrek Aug 14 '13 at 12:16
I will first provide a verbal explanation, and then a more technical one. My answer consists of four observations:
As @ttnphns explained in the comments above, in PCA each principal component has certain variance, that all together add up to 100% of the total variance. For each principal component, a ratio of its variance to the total variance is called the "proportion of explained variance". This is very well known.
On the other hand, in LDA each "discriminant component" has certain "discriminability" (I made these terms up!) associated with it, and they all together add up to 100% of the "total discriminability". So for each "discriminant component" one can define "proportion of discriminability explained". I guess that "proportion of trace" that you are referring to, is exactly that (see below). This is less well known, but still commonplace.
Still, one can look at the variance of each discriminant component, and compute "proportion of variance" of each of them. Turns out, they will add up to something that is less than 100%. I do not think that I have ever seen this discussed anywhere, which is the main reason I want to provide this lengthy answer.
One can also go one step further and compute the amount of variance that each LDA component "explains"; this is going to be more than just its own variance.
Let $\mathbf{T}$ be total scatter matrix of the data (i.e. covariance matrix but without normalizing by the number of data points), $\mathbf{W}$ be the within-class scatter matrix, and $\mathbf{B}$ be between-class scatter matrix. See here for definitions. Conveniently, $\mathbf{T}=\mathbf{W}+\mathbf{B}$.
PCA performs eigen-decomposition of $\mathbf{T}$, takes its unit eigenvectors as principal axes, and projections of the data on the eigenvectors as principal components. Variance of each principal component is given by the corresponding eigenvalue. All eigenvalues of $\mathbf{T}$ (which is symmetric and positive-definite) are positive and add up to the $\mathrm{tr}(\mathbf{T})$, which is known as total variance.
LDA performs eigen-decomposition of $\mathbf{W}^{-1} \mathbf{B}$, takes its non-orthogonal (!) unit eigenvectors as discriminant axes, and projections on the eigenvectors as discriminant components (a made-up term). For each discriminant component, we can compute a ratio of between-class variance $B$ and within-class variance $W$, i.e. signal-to-noise ratio $B/W$. It turns out that it will be given by the corresponding eigenvalue of $\mathbf{W}^{-1} \mathbf{B}$ (Lemma 1, see below). All eigenvalues of $\mathbf{W}^{-1} \mathbf{B}$ are positive (Lemma 2) so sum up to a positive number $\mathrm{tr}(\mathbf{W}^{-1} \mathbf{B})$ which one can call total signal-to-noise ratio. Each discriminant component has a certain proportion of it, and that is, I believe, what "proportion of trace" refers to. See this answer by @ttnphns for a similar discussion.
Interestingly, variances of all discriminant components will add up to something smaller than the total variance (even if the number $K$ of classes in the data set is larger than the number $N$ of dimensions; as there are only $K-1$ discriminant axes, they will not even form a basis in case $K-1<N$). This is a non-trivial observation (Lemma 4) that follows from the fact that all discriminant components have zero correlation (Lemma 3). Which means that we can compute the usual proportion of variance for each discriminant component, but their sum will be less than 100%.
However, I am reluctant to refer to these component variances as "explained variances" (let's call them "captured variances" instead). For each LDA component, one can compute the amount of variance it can explain in the data by regressing the data onto this component; this value will in general be larger than this component's own "captured" variance. If there is enough components, then together their explained variance must be 100%. See my answer here for how to compute such explained variance in a general case: Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?
Here is an illustration using the Iris data set (only sepal measurements!): Thin solid lines show PCA axes (they are orthogonal), thick dashed lines show LDA axes (non-orthogonal). Proportions of variance explained by the PCA axes: $79\%$ and $21\%$. Proportions of signal-to-noise ratio of the LDA axes: $96\%$ and $4\%$. Proportions of variance captured by the LDA axes: $48\%$ and $26\%$ (i.e. only $74\%$ together). Proportions of variance explained by the LDA axes: $65\%$ and $35\%$.
\begin{array}{lcccc} & \text{LDA axis 1} & \text{LDA axis 2} & \text{PCA axis 1} & \text{PCA axis 2} \\ \text{Captured variance} & 48\% & 26\% & 79\% & 21\% \\ \text{Explained variance} & 65\% & 35\% & 79\% & 21\% \\ \text{Signal-to-noise ratio} & 96\% & 4\% & - & - \\ \end{array}
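For concreteness, here is a short numerical sketch (an added illustration, not part of the original answer; it assumes NumPy, SciPy, and scikit-learn are installed) that computes the "proportion of trace" and the captured-variance proportions directly from the scatter matrices defined above, using only the Iris sepal measurements:

```python
# Illustrative sketch: signal-to-noise ("proportion of trace") vs. captured
# variance of the LDA axes, computed from the scatter matrices T, W, B.
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = X[:, :2]                                  # sepal length and width only

Xc = X - X.mean(axis=0)
T = Xc.T @ Xc                                 # total scatter
W = np.zeros_like(T)                          # within-class scatter
for k in np.unique(y):
    Dk = X[y == k] - X[y == k].mean(axis=0)
    W += Dk.T @ Dk
B = T - W                                     # between-class scatter

# Generalized eigenproblem B v = lambda W v (equivalent to eig of W^{-1} B).
lam, V = eigh(B, W)                           # eigenvalues in ascending order
idx = np.argsort(lam)[::-1]
lam, V = lam[idx], V[:, idx]
V = V / np.linalg.norm(V, axis=0)             # unit-length discriminant axes

snr_prop = lam / lam.sum()                    # "proportion of trace"
captured = np.array([v @ T @ v for v in V.T]) / np.trace(T)
print("signal-to-noise proportions:", np.round(snr_prop, 2))
print("captured variance proportions:", np.round(captured, 2))
```

The printed proportions should roughly reproduce the $96\%/4\%$ and $48\%/26\%$ figures quoted in the table.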
Lemma 1. Eigenvectors $\mathbf{v}$ of $\mathbf{W}^{-1} \mathbf{B}$ (or, equivalently, generalized eigenvectors of the generalized eigenvalue problem $\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}$) are stationary points of the Rayleigh quotient $$\frac{\mathbf{v}^\top\mathbf{B}\mathbf{v}}{\mathbf{v}^\top\mathbf{W}\mathbf{v}} = \frac{B}{W}$$ (differentiate the latter to see it), with the corresponding values of Rayleigh quotient providing the eigenvalues $\lambda$, QED.
Lemma 2. Eigenvalues of $\mathbf{W}^{-1} \mathbf{B} = \mathbf{W}^{-1/2} \mathbf{W}^{-1/2} \mathbf{B}$ are the same as eigenvalues of $\mathbf{W}^{-1/2} \mathbf{B} \mathbf{W}^{-1/2}$ (indeed, these two matrices are similar). The latter is symmetric positive-definite, so all its eigenvalues are positive.
Lemma 3. Note that covariance/correlation between discriminant components is zero. Indeed, different eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$ of the generalized eigenvalue problem $\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}$ are both $\mathbf{B}$- and $\mathbf{W}$-orthogonal (see e.g. here), and so are $\mathbf{T}$-orthogonal as well (because $\mathbf{T}=\mathbf{W}+\mathbf{B}$), which means that they have covariance zero: $\mathbf{v}_1^\top \mathbf{T} \mathbf{v}_2=0$.
Lemma 4. Discriminant axes form a non-orthogonal basis $\mathbf{V}$, in which the covariance matrix $\mathbf{V}^\top\mathbf{T}\mathbf{V}$ is diagonal. In this case one can prove that $$\mathrm{tr}(\mathbf{V}^\top\mathbf{T}\mathbf{V})<\mathrm{tr}(\mathbf{T}),$$ QED.
amoeba says Reinstate Monica
$\begingroup$ +1. Many things you discuss here were covered, slightly more compressed, in my answer. I have added a link to your present answer in the body of that my old one. $\endgroup$ – ttnphns Aug 6 '14 at 10:30
$\begingroup$ @ttnphns: I remember that answer of yours (it has my +1 from long time ago), but did not look there when writing this answer, so many things are indeed presented very similarly, perhaps too much. The main reason I wrote this answer, however, was to discuss "explained variance" (in the PCA sense) of the LDA components. I am not sure how useful it is in practice, but I was often wondering about it before, and have recently struggled for some time to prove the inequality from Lemma 4 that in the end was proved for me on Math.SE. $\endgroup$ – amoeba says Reinstate Monica Aug 6 '14 at 10:47
$\begingroup$ Note that the diagonal of $\mathbf{V}^\top\mathbf{T}\mathbf{V}$ is $\lambda+1$, the denominator to compute canonical correlations. $\endgroup$ – ttnphns Aug 6 '14 at 11:54
$\begingroup$ @ttnphns: Hmmm... I think that for each eigenvector $\mathbf{v}$, $$B/W = \frac{\mathbf{v}^\top\mathbf{B}\mathbf{v}}{\mathbf{v}^\top\mathbf{W}\mathbf{v}} = \lambda$$ and $$B/T = \frac{\mathbf{v}^\top\mathbf{B}\mathbf{v}}{\mathbf{v}^\top\mathbf{T}\mathbf{v}} = \frac{\mathbf{v}^\top\mathbf{B}\mathbf{v}}{(\mathbf{v}^\top\mathbf{B}\mathbf{v}+\mathbf{v}^\top\mathbf{W}\mathbf{v})} = \frac{\lambda}{\lambda+1},$$ as you say in your linked answer. But the value of $\mathbf{v}^\top\mathbf{T}\mathbf{v}$ (outside of any ratio) cannot really be expressed with $\lambda$ only. $\endgroup$ – amoeba says Reinstate Monica Aug 6 '14 at 12:06
$\begingroup$ It appears to me that the eigenvector of a given discriminant contains information of $B/W$ for that discriminant; when we calibrate it with $\bf T$ which keeps the covariances between the variables, we can arrive at the eigenvalue of the discriminant. Thus, the information on $B/W$'s is stored in eigenvectors, and it is "standardized" to the form corresponding to no correlations between the variables. $\endgroup$ – ttnphns Aug 6 '14 at 13:04
|
CommonCrawl
|
March 2012, 32(3): 717-751. doi: 10.3934/dcds.2012.32.717
Localized asymptotic behavior for almost additive potentials
Julien Barral 1, and Yan-Hui Qu 2,
LAGA (UMR 7539), Département de Mathématiques, Institut Galilée, Université Paris 13, Villetaneuse, France
Department of Mathematics, Tsinghua University, Beijing, China
Received September 2010 Revised December 2010 Published October 2011
We conduct the multifractal analysis of the level sets of the asymptotic behavior of almost additive continuous potentials $(\phi_n)_{n=1}^\infty$ on a topologically mixing subshift of finite type $X$ endowed itself with a metric associated with such a potential. We work without any additional regularity assumption other than continuity. Our approach differs from those used previously to deal with this question under stronger assumptions on the potentials. As a consequence, it provides a new description of the structure of the spectrum in terms of weak concavity. Also, the lower bound for the spectrum is obtained as a consequence of the study of sets of points at which the asymptotic behavior of $\phi_n(x)$ is localized, i.e. depends on the point $x$ rather than being equal to a constant. Specifically, we compute the Hausdorff dimension of sets of the form $\{x\in X: \lim_{n\to\infty} \phi_n(x)/n=\xi(x)\}$, where $\xi$ is a given continuous function. This has interesting geometric applications to fixed points in the asymptotic average for dynamical systems in $\mathbb{R}^d$, as well as to the fine local behavior of the harmonic measure on conformal planar Cantor sets.
Keywords: Almost additive potential, weak concavity, multifractal analysis.
Mathematics Subject Classification: Primary: 37B40; Secondary: 28A80.
Citation: Julien Barral, Yan-Hui Qu. Localized asymptotic behavior for almost additive potentials. Discrete & Continuous Dynamical Systems, 2012, 32 (3) : 717-751. doi: 10.3934/dcds.2012.32.717
|
CommonCrawl
|
Matthew McAteer
Deploying and Scaling ML
Practical considerations of scaling and implementing ML in the real world
November 4, 2019 | UPDATED December 26, 2019
This post is Part 4 of the 4-part Machine Learning Research Interview Handbook series (you can see the rest of the series here).
Whether you're an ML engineer or a researcher, you'll need to know how to deploy machine learning jobs. If you're an engineer, this could involve scaling ML-based applications across millions of users. If you're in a research lab, you may need to scale training across months of GPU-hours. Here are a few useful questions for figuring out how well you know how to scale and deploy machine learning applications.
What is Tensorflow? What is it good for?
What is PyTorch? What is it good for?
What is Sonnet? What is it good for?
What is Keras? What is it good for?
What is MXNet? What is it good for?
What is Gluon? What is it good for?
What is Chainer? What is it good for?
What is DL4J? What is it good for?
What is ONNX? What is it good for?
What is Jax? What is it good for?
What is Swift? What is it good for?
Which framework should you use?
General-Purpose System Design Principles
What is "Scalability" with regards to system design?
What is "Availability" with regards to system design?
What is "Efficiency" with regards to system design?
What is "Manageability" with regards to system design?
What is CAP Theorem?
What system design choices can we make for various CAP Theorem tradeoffs?
What are the differences between SQL and no-SQL?
What is the difference between redundancy and replication?
What are some reasons to use SQL for data storage?
What are some reasons to use NoSQL for data storage?
What are some examples of NoSQL types?
What is Load Balancing?
What are some Load Balancing Algorithms?
How do Indexes decrease write performance?
ML Pipeline Construction
Which processors can be used in machine learning?
What are Data Collection and Warehousing?
What is a Data Warehouse?
What is a Data Lake?
What is a Data Mart?
What does Data Collection involve?
What are the differences between a Data Warehouse and a Data mart?
What are the considerations behind one's choice of data format?
What are the approaches for integrating heterogeneous databases?
What are the steps of an ETL Pipeline?
What must be done to the Model-training part of a pipeline to make it scalable?
Distributed Machine Learning
Describe how MapReduce works.
What are the components of a Distributed Machine Learning architecture?
How does an Async Parameter Server architecture work?
How does a Sync AllReduce architecture work?
What are some Distributed machine learning frameworks?
What are some good design principles for deploying a machine learning model to a front-end?
What is Michelangelo? What is it good for?
What is Feast? What is it good for?
What is Cron? What is it good for?
What is Luigi? What is it good for?
What is MLflow? What is it good for?
What is Pentaho Kettle? What is it good for?
What is Apache Airflow? What is it good for?
Which distributed machine learning/Deployment frameworks/tools should you use?
Open-ended Questions
Duolingo is a platform for language learning. When a student is learning a new language, Duolingo wants to recommend increasingly difficult stories to read. How would you measure the difficulty level of a story?
Given a Duolingo story, how would you edit it to make it easier or more difficult?
Given a dataset of credit card purchases information, each record is labelled as fraudulent or safe, how would you build a fraud detection algorithm?
You run an e-commerce website. Sometimes, users want to buy an item that is no longer available. Build a recommendation system to suggest replacement items.
For any user on Twitter, how would you suggest who they should follow? What do you do when that user is new? What are some of the limitations of data-driven recommender systems?
When you enter a search query on Google, you're shown a list of related searches. How would you generate a list of related searches for each query?
Build a system that return images associated with a query like in Google Images.
How would you build a system to suggest trending hashtags on Twitter?
Each question on Quora often gets many different answers. How do you create a model that ranks all these answers? How computationally intensive is this model?
How do you build a system to display the top 10 results when a user searches for rental listings in a certain location on Airbnb?
Autocompletion: how would you build an algorithm to finish your sentence when you text?
When you type a question on StackOverflow, you're shown a list of similar questions to make sure that your question hasn't been asked before. How do you build such a system?
How would you design an algorithm to match pool riders for Lyft or Uber?
On social networks like Facebook, users can choose to list their high schools. Can you estimate what percentage of high schools listed on Facebook are real? How do we find out, and deploy at scale, a way of finding invalid schools?
How would you build a trigger word detection algorithm to spot the word "activate" in a 10 second long audio clip?
If you were to build a Netflix clone, how would you build a system that predicts when a user stops watching a TV show, whether they are tired of that show or they're just taking a break?
Facebook would like to develop a way to estimate the month and day of people's birthdays, regardless of whether people give us that information directly. What methods would you propose, and what data would you use, to help with that task?
Build a system to predict the language a text is written in.
Predict the house price for a property listed on Zillow. Use that system to predict whether we invest on buying more properties in a certain city.
Imagine you were working on the iPhone. Every time users open their phones, you want to suggest one app they are most likely to open first with 90% accuracy. How would you do that?
How do you map nicknames (Pete, Andy, Nick, Rob, etc) to real names?
An e-commerce company is trying to minimize the time it takes customers to purchase their selected items. As a machine learning engineer, what can you do to help them?
Build a chatbot to help people book hotels.
How would you design a question answering system that can extract an answer from a large collection of documents given a user query?
How would you train a model to predict whether the word "jaguar" in a sentence refers to the animal or the car?
Suppose you're building a software to manage the stock portfolio of your clients. You manage X amount of money. Imagine that you've converted all that amount into stocks, and find a stock that you definitely must buy. How do you decide which of your currently owned stocks to drop so that you can buy this new stock?
How would you create a model to recognize whether an image is a triangle, a circle, or a square?
Given only CIFAR-10 dataset, how to build a model to recognize if an image is in the 10 classes of CIFAR-10 or not?
Let's say that you are the first person working on the Facebook News Feed. What metrics would you track and how would you improve those metrics?
Google's Tensorflow is the most popular Deep Learning system today. Google, Uber, Airbnb, Nvidia, and several other popular companies use it as their go-to tool. TF is currently the most common DL framework, but frankly, popularity is rarely equivalent to performance.
Python is the easiest language for working with TensorFlow. However, experimental interfaces are also available in C#, C++, Java, Scala, JavaScript, Go, and Julia. TF was built not only with strong computing clusters in mind, but also with iOS and Android compatibility. TF will require a lot of coding from you. It's not going to give you a strong AI overnight; it is only a tool for deep learning research that might potentially make that research a little less sluggish. You need to think carefully about the neural network architecture and measure the size and amount of input and output data correctly. Tensorflow works with a static computational graph. In other words, first we describe the structure, then we run the calculations through it and, if adjustments are needed, we re-train the model. This design choice was picked with efficiency in mind, but other frameworks have been built in response to this loss in learning speed as an alternative (e.g., PyTorch, the close #2 most common framework).
As for what it's good for, it is useful for developing and experimenting with deep learning models, and its implementation is helpful for data integration (such as binding graph representations, SQL tables, and images together). Google has also invested a lot of time and effort into building and maintaining it, so they have an incentive to make sure it sticks around.
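To make the "describe the graph first, then run it" workflow concrete, here is a minimal sketch in the TensorFlow 1.x style (an added illustration, not from the original post; in TensorFlow 2.x eager execution is the default, so the compat.v1 API is used):

```python
# Minimal static-graph sketch in TensorFlow 1.x style (via the compat.v1 API).
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# 1) Describe the structure of the computation first...
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
w = tf.Variable(tf.zeros([3, 1]))
y = tf.matmul(x, w)

# 2) ...then push data through the finished graph inside a session.
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```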
Sonnet is a deep learning framework built on top of TensorFlow. It was designed by the world-famous company DeepMind for creating neural networks with complex architectures.
Sonnet is defined by its high-level object-oriented libraries that bring abstraction to the development of neural networks (NN) and other machine learning (ML) algorithms. The idea of Sonnet is to construct primary Python objects corresponding to specific parts of the neural network. Further, these objects are independently connected to the computational TensorFlow graph. Separating the process of creating objects from associating them with a graph simplifies the design of high-level architectures. More information about these principles can be found in the framework documentation.
The main advantage of Sonnet is that you can use it to reproduce the research demonstrated in DeepMind's papers with greater ease than with Keras, since DeepMind will be using Sonnet themselves.
So all-in-all, it's a flexible functional abstractions tool that is absolutely a worthy opponent for TF and PyTorch.
Keras is a machine learning framework that might be your new best friend if you have a lot of data and/or you're after the state-of-the-art in AI: deep learning. Moreover, the most minimalist approach to using TensorFlow, Theano, or CNTK is through the high-level Keras shell.
Keras is usable as a high-level API on top of other popular lower level libraries such as Theano and CNTK in addition to Tensorflow. Prototyping here is facilitated to the limit. Creating massive models of deep learning in Keras is reduced to single-line functions. But this strategy makes Keras a less configurable environment than low-level frameworks.
Keras is by far the best Deep Learning framework for those who are just starting out. It's ideal for learning and prototyping simple concepts, and for understanding the very essence of the various models and their learning processes. Keras is a beautifully written API. The functional nature of the API gives you complete flexibility and gets out of your way for more exotic applications. Keras does not block access to lower-level frameworks, and it results in much more readable and succinct code. Keras model serialization/deserialization APIs, callbacks, and data streaming using Python generators are very mature.
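As a rough illustration of how compact this gets (a generic sketch of the standard Keras Sequential API, not code from the original post), a small classifier can be defined, compiled, and trained in a handful of lines:

```python
# Minimal Keras sketch: define, compile, and train a tiny binary classifier.
import numpy as np
from tensorflow import keras

# Toy placeholder data: 100 samples with 20 features, binary labels.
X = np.random.rand(100, 20).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```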
It should be noted that one cannot really compare Keras and Tensorflow since they sit on different abstraction levels:
Tensorflow is on the Lower Level: This is where frameworks like MXNet, Theano, and PyTorch sit. This is the level where mathematical operations like Generalized Matrix-Matrix multiplication and Neural Network primitives like Convolutional operations are implemented.
Keras is on the higher Level. At this Level, the lower level primitives are used to implement Neural Network abstraction like Layers and models. Generally, at this level, other helpful APIs like model saving and model training are also implemented.
MXNet is a highly scalable deep learning tool that can be used on a wide variety of devices. Although it does not yet appear to be as widely used as TensorFlow, MXNet's growth will likely be boosted by its becoming an Apache project.
The framework supports a large number of languages out of the box (C++, Python, R, Julia, JavaScript, Scala, Go, and even Perl). The main emphasis is placed on the fact that the framework parallelizes very effectively across multiple GPUs and many machines. This, in particular, has been demonstrated by its performance on Amazon Web Services.
MXNet supports multiple GPUs very well (with optimized computations and fast context switching). It also has a clean and easily maintainable codebase (Python, R, Scala, and other APIs). Although it is not as popular as Tensorflow, MXNet has detailed documentation and is easy to use, with the ability to choose between imperative and symbolic programming styles, making it a great candidate for both beginners and experienced engineers.
Much like how Keras is a simplified version of Tensorflow, Gluon can be thought of as a simplified wrapper over MXNet. Like PyTorch, Gluon was built with ease-of-prototyping in mind, and also uses dynamic graphs to define network structures.
The main advantage of Gluon is the ease of use almost on the level of Keras, but with the added ability to create dynamic graph models. There are also performance benefits on AWS, along with preservation of the Python language's control flow.
Until the advent of DyNet at CMU, and PyTorch at Facebook, Chainer was the leading neural network framework for dynamic computation graphs or nets that allowed for input of varying length, a popular feature for NLP tasks.
The code is written in pure Python on top of the NumPy and CuPy libraries. Chainer was the first framework to use a dynamic architecture model (as in PyTorch). Chainer has several times beaten records for scaling efficiency when modeling problems solved by neural networks.
By its own benchmarks, Chainer is notably faster than other Python-oriented frameworks, with TensorFlow the slowest of a test group that includes MXNet and CNTK. It has better GPU and GPU data-center performance than TensorFlow (which is optimized for the TPU architecture). Recently, Chainer became the world champion for GPU data-center performance. It also has good Japanese support and an OOP-like programming style.
Most of the frameworks described apply to languages like Python or Swift. DL4J (Deep Learning for Java) was built with Java and Scala in mind.
If you absolutely must use Java, DL4J is a great machine learning framework to use. DL4J splits neural network training across parallel clusters, and is supported by Hadoop and Spark. DL4J also allows for building deep learning applications for programs on Android devices.
As the number of ML frameworks grew, so did the number of ways to store pre-trained models. ONNX was the result of a joint effort between Facebook AI and Microsoft Research to create something akin to a universal standard for saving machine learning models. Using the ONNX format, you can pass saved models between programs written in different frameworks. Right now ONNX is supported by CNTK, PyTorch, MXNet, Caffe2, and as of recently TensorFlow and Keras (though the latter have slightly less coverage).
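As a rough sketch of what the interchange looks like in practice (an added illustration, not from the original post), a small PyTorch model can be exported to the ONNX format and then loaded by any runtime or framework that supports it:

```python
# Hypothetical sketch: exporting a small PyTorch model to the ONNX format.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
dummy_input = torch.randn(1, 10)      # an example input fixes the traced graph

# Writes "model.onnx", which ONNX-compatible runtimes can then load.
torch.onnx.export(model, dummy_input, "model.onnx")
```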
Jax is a newer framework out of Google Brain. It is built to resemble the NumPy library, though it also has a bunch of built-in deep learning tools (as well as automatic differentiation). Great for researchers, as well as beginners.
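A tiny example of the NumPy-like interface together with automatic differentiation (an added illustration of the core idea, not from the original post):

```python
# Minimal JAX sketch: NumPy-style arrays plus automatic differentiation.
import jax.numpy as jnp
from jax import grad

def loss(w):
    return jnp.sum(w ** 2)               # a simple quadratic "loss"

dloss = grad(loss)                        # gradient function via autodiff
print(dloss(jnp.array([1.0, 2.0])))       # -> [2. 4.]
```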
Swift is a bit of a break from the rest of these since it's not actually a framework. You're probably familiar with Swift as the go-to tool for development for iOS or MacOS. It's important to pay attention to in the deep learning space because two frameworks, Apple's own CoreML and Swift for Tensorflow, integrate directly into it.
Swift has a few important features for machine learning practitioners. For example, like JAX, it has extensive support for automatic differentiation: you can take derivatives of any function or make differentiable data structures. As an added bonus, Swift can also be used on top of Jupyter, LLDB, and Google Colab. Swift is a good choice in general if dynamically typed languages are presenting too much trouble (e.g., training a model for a long time only to run into a type error at the very end).
If you are just starting out and want to figure out what's what, the best choice is Keras. For research purposes, choose PyTorch (or possibly Jax). For production, you need to focus on the environment: for Google Cloud, the best choice is TensorFlow; for AWS, MXNet and Gluon. Android developers should pay attention to DL4J. For iOS, a similar range of tasks is covered by Core ML. Finally, ONNX will help with questions of interaction between different frameworks.
A fairly typical research and deployment hybrid pipeline
Scalability is the capability of a system, process, or a network to grow and manage increased demand. Any distributed system that can continuously evolve in order to support the growing amount of work is considered to be scalable. A system may have to scale because of many reasons like increased data volume or increased amount of work, e.g., number of transactions. A scalable system would like to achieve this scaling without performance loss.
Generally, the performance of a system, although designed (or claimed) to be scalable, declines with the system size due to the management or environment cost. For instance, network speed may become slower because machines tend to be far apart from one another. More generally, some tasks may not be distributed, either because of their inherent atomic nature or because of some flaw in the system design. At some point, such tasks would limit the speed-up obtained by distribution. A scalable architecture avoids this situation and attempts to balance the load on all the participating nodes evenly.
Horizontal vs. Vertical Scaling: Horizontal scaling means that you scale by adding more servers into your pool of resources whereas Vertical scaling means that you scale by adding more power (CPU, RAM, Storage, etc.) to an existing server.
With horizontal-scaling it is often easier to scale dynamically by adding more machines into the existing pool; Vertical-scaling is usually limited to the capacity of a single server and scaling beyond that capacity often involves downtime and comes with an upper limit.
Good examples of horizontal scaling are Cassandra and MongoDB as they both provide an easy way to scale horizontally by adding more machines to meet growing needs. Similarly, a good example of vertical scaling is MySQL as it allows for an easy way to scale vertically by switching from smaller to bigger machines. However, this process often involves downtime.
By definition, availability is the time a system remains operational to perform its required function in a specific period. It is a simple measure of the percentage of time that a system, service, or a machine remains operational under normal conditions. An aircraft that can be flown for many hours a month without much downtime can be said to have a high availability. Availability takes into account maintainability, repair time, spares availability, and other logistics considerations. If an aircraft is down for maintenance, it is considered not available during that time.
Reliability is availability over time considering the full range of possible real-world conditions that can occur. An aircraft that can make it through any possible weather safely is more reliable than one that has vulnerabilities to possible conditions.
Reliability Vs. Availability
If a system is reliable, it is available. However, if it is available, it is not necessarily reliable. In other words, high reliability contributes to high availability, but it is possible to achieve a high availability even with an unreliable product by minimizing repair time and ensuring that spares are always available when they are needed. Let's take the example of an online retail store that has 99.99% availability for the first two years after its launch. However, the system was launched without any information security testing. The customers are happy with the system, but they don't realize that it isn't very reliable as it is vulnerable to likely risks. In the third year, the system experiences a series of information security incidents that suddenly result in extremely low availability for extended periods of time. This results in reputational and financial damage to the customers.
To understand how to measure the efficiency of a distributed system, let's assume we have an operation that runs in a distributed manner and delivers a set of items as result. Two standard measures of its efficiency are the response time (or latency) that denotes the delay to obtain the first item and the throughput (or bandwidth) which denotes the number of items delivered in a given time unit (e.g., a second). The two measures correspond to the following unit costs:
Number of messages globally sent by the nodes of the system regardless of the message size.
Size of messages representing the volume of data exchanges.
The complexity of operations supported by distributed data structures (e.g., searching for a specific key in a distributed index) can be characterized as a function of one of these cost units. Generally speaking, the analysis of a distributed structure in terms of 'number of messages' is over-simplistic. It ignores the impact of many aspects, including the network topology, the network load, and its variation, the possible heterogeneity of the software and hardware components involved in data processing and routing, etc. However, it is quite difficult to develop a precise cost model that would accurately take into account all these performance factors; therefore, we have to live with rough but robust estimates of the system behavior.
Another important consideration while designing a distributed system is how easy it is to operate and maintain. Serviceability or manageability is the simplicity and speed with which a system can be repaired or maintained; if the time to fix a failed system increases, then availability will decrease. Things to consider for manageability are the ease of diagnosing and understanding problems when they occur, ease of making updates or modifications, and how simple the system is to operate (i.e., does it routinely operate without failure or exceptions?).
Early detection of faults can decrease or avoid system downtime. For example, some enterprise systems can automatically call a service center (without human intervention) when the system experiences a system fault.
Machine learning has the "no free lunch" theorem: basically, there is no universal algorithm that can be applied to every problem. CAP theorem is similar. Namely, it's impossible to create a system that is simultaneously Consistent, Available, and Partition-tolerant (you can only choose two). In more detail:
Consistency: All nodes see the same data at the same time. Consistency is achieved by updating several nodes before allowing further reads.
Availability: Every request gets a response on success/failure. Availability is achieved by replicating the data across different servers.
Partition tolerance: The system continues to work despite message loss or partial failure. A system that is partition-tolerant can sustain any amount of network failure that doesn't result in a failure of the entire network. Data is sufficiently replicated across combinations of nodes and networks to keep the system up through intermittent outages.
We cannot build a general data store that is continually available, sequentially consistent, and tolerant to any partition failures. We can only build a system that has any two of these three properties. Because, to be consistent, all nodes should see the same set of updates in the same order. But if the network suffers a partition, updates in one partition might not make it to the other partitions before a client reads from the out-of-date partition after having read from the up-to-date one. The only thing that can be done to cope with this possibility is to stop serving requests from the out-of-date partition, but then the service is no longer 100% available.
Consistency & Availability (C/A): Most types of Relational Database Management System (RDBMS) will qualify, such as MySQL, Microsoft SQL Server, IBM DB2, MariaDB, PostgreSQL, Sybase, and Oracle Database
Availability & Partition-Tolerance (A/P): CassandraDB, CouchDB, Riak, Aerospike, Voldemort, Dynamo DB, SimpleDB, Oracle NoSQL Database (depending on configuration)
Consistency & Partition Tolerance (C/P): Google BigTable, MongoDB, HBase, Redis, MemcacheDB, Oracle NoSQL Database (depending on configuration)
Storage: SQL stores data in tables where each row represents an entity and each column represents a data point about that entity; for example, if we are storing a car entity in a table, different columns could be 'Color', 'Make', 'Model', and so on.
NoSQL databases have different data storage models. The main ones are key-value, document, graph, and columnar. We will discuss differences between these databases below.
Schema: In SQL, each record conforms to a fixed schema, meaning the columns must be decided and chosen before data entry and each row must have data for each column. The schema can be altered later, but it involves modifying the whole database and going offline.
In NoSQL, schemas are dynamic. Columns can be added on the fly and each 'row' (or equivalent) doesn't have to contain data for each 'column.'
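To illustrate the dynamic-schema point (a hypothetical sketch assuming a locally running MongoDB instance and the pymongo driver; the database, collection, and field names are made up):

```python
# Hypothetical sketch: documents in one collection need not share the same fields.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB
cars = client["demo_db"]["cars"]

# Two "rows" with different "columns" can live side by side.
cars.insert_one({"make": "Toyota", "model": "Corolla", "color": "blue"})
cars.insert_one({"make": "Tesla", "model": "3", "battery_kwh": 75})   # new field added on the fly

print(cars.count_documents({}))
```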
Querying: SQL databases use SQL (structured query language) for defining and manipulating the data, which is very powerful. In a NoSQL database, queries are focused on a collection of documents. Sometimes it is also called UnQL (Unstructured Query Language). Different databases have different syntax for using UnQL.
Scalability: In most common situations, SQL databases are vertically scalable, i.e., by increasing the horsepower (higher Memory, CPU, etc.) of the hardware, which can get very expensive. It is possible to scale a relational database across multiple servers, but this is a challenging and time-consuming process.
On the other hand, NoSQL databases are horizontally scalable, meaning we can add more servers easily in our NoSQL database infrastructure to handle a lot of traffic. Any cheap commodity hardware or cloud instances can host NoSQL databases, thus making it a lot more cost-effective than vertical scaling. A lot of NoSQL technologies also distribute data across servers automatically.
Reliability or ACID Compliance (Atomicity, Consistency, Isolation, Durability): The vast majority of relational databases are ACID compliant. So, when it comes to data reliability and the safe, guaranteed execution of transactions, SQL databases are still the better bet.
Most of the NoSQL solutions sacrifice ACID compliance for performance and scalability.
Redundancy is the duplication of critical components or functions of a system with the intention of increasing the reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance. For example, if there is only one copy of a file stored on a single server, then losing that server means losing the file. Since losing data is seldom a good thing, we can create duplicate or redundant copies of the file to solve this problem. Redundancy plays a key role in removing the single points of failure in the system and provides backups if needed in a crisis. For example, if we have two instances of a service running in production and one fails, the system can failover to the other one.
Replication means sharing information to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility. Replication is widely used in many database management systems (DBMS), usually with a master-slave relationship between the original and the copies. The master gets all the updates, which then ripple through to the slaves. Each slave outputs a message stating that it has received the update successfully, thus allowing the sending of subsequent updates.
Here are a few reasons to choose a SQL database:
We need to ensure ACID compliance. ACID compliance reduces anomalies and protects the integrity of your database by prescribing exactly how transactions interact with the database. Generally, NoSQL databases sacrifice ACID compliance for scalability and processing speed, but for many e-commerce and financial applications, an ACID-compliant database remains the preferred option.
Your data is structured and unchanging. If your business is not experiencing massive growth that would require more servers and if you're only working with data that is consistent, then there may be no reason to use a system designed to support a variety of data types and high traffic volume.
Conversely, here are a few reasons to choose a NoSQL database. When all the other components of our application are fast and seamless, NoSQL databases prevent data from being the bottleneck. Big data has contributed to the success of NoSQL databases, mainly because they handle data differently than traditional relational databases. A few popular examples of NoSQL databases are MongoDB, CouchDB, Cassandra, and HBase.
Storing large volumes of data that often have little to no structure. A NoSQL database sets no limits on the types of data we can store together and allows us to add new types as the need changes. With document-based databases, you can store data in one place without having to define what "types" of data those are in advance.
Making the most of cloud computing and storage. Cloud-based storage is an excellent cost-saving solution but requires data to be easily spread across multiple servers to scale up. Using commodity (affordable, smaller) hardware on-site or in the cloud saves you the hassle of additional software and NoSQL databases like Cassandra are designed to be scaled across multiple data centers out of the box, without a lot of headaches.
Rapid development. NoSQL is extremely useful for rapid development as it doesn't need to be prepped ahead of time. If you're working on quick iterations of your system which require making frequent updates to the data structure without a lot of downtime between versions, a relational database will slow you down.
Following are the most common types of NoSQL:
Key-Value Stores: Data is stored in an array of key-value pairs. The 'key' is an attribute name which is linked to a 'value'. Well-known key-value stores include Redis, Voldemort, and Dynamo.
Document Databases: In these databases, data is stored in documents (instead of rows and columns in a table) and these documents are grouped together in collections. Each document can have an entirely different structure. Document databases include CouchDB and MongoDB.
Wide-Column Databases: Instead of 'tables,' in columnar databases we have column families, which are containers for rows. Unlike relational databases, we don't need to know all the columns up front and each row doesn't have to have the same number of columns. Columnar databases are best suited for analyzing large datasets - big names include Cassandra and HBase.
Graph Databases: These databases are used to store data whose relations are best represented in a graph. Data is saved in graph structures with nodes (entities), properties (information about the entities), and lines (connections between the entities). Examples of graph database include Neo4J and InfiniteGraph.
Load Balancer (LB) is another critical component of any distributed system. It helps to spread the traffic across a cluster of servers to improve responsiveness and availability of applications, websites or databases. LB also keeps track of the status of all the resources while distributing requests. If a server is not available to take new requests or is not responding or has elevated error rate, LB will stop sending traffic to such a server.
Typically a load balancer sits between the client and the server accepting incoming network and application traffic and distributing the traffic across multiple backend servers using various algorithms. By balancing application requests across multiple servers, a load balancer reduces individual server load and prevents any one application server from becoming a single point of failure, thus improving overall application availability and responsiveness.
To utilize full scalability and redundancy, we can try to balance the load at each layer of the system. We can add LBs at three places:
Between the user and the web server
Between web servers and an internal platform layer, like application servers or cache servers
Between internal platform layer and database.
How does the load balancer choose the backend server? Load balancers consider two factors before forwarding a request to a backend server. They will first ensure that the server they choose is actually responding appropriately to requests and then use a pre-configured algorithm to select one from the set of healthy servers. We will discuss these algorithms shortly.
Health Checks - Load balancers should only forward traffic to "healthy" backend servers. To monitor the health of a backend server, "health checks" regularly attempt to connect to backend servers to ensure that servers are listening. If a server fails a health check, it is automatically removed from the pool, and traffic will not be forwarded to it until it responds to the health checks again.
There is a variety of load balancing methods, which use different algorithms for different needs.
Least Connection Method - This method directs traffic to the server with the fewest active connections. This approach is quite useful when there are a large number of persistent client connections which are unevenly distributed between the servers.
Least Response Time Method - This algorithm directs traffic to the server with the fewest active connections and the lowest average response time.
Least Bandwidth Method - This method selects the server that is currently serving the least amount of traffic measured in megabits per second (Mbps).
Round Robin Method - This method cycles through a list of servers and sends each new request to the next server. When it reaches the end of the list, it starts over at the beginning. It is most useful when the servers are of equal specification and there are not many persistent connections.
Weighted Round Robin Method - The weighted round-robin scheduling is designed to better handle servers with different processing capacities. Each server is assigned a weight (an integer value that indicates the processing capacity). Servers with higher weights receive new connections before those with less weights and servers with higher weights get more connections than those with less weights.
IP Hash - Under this method, a hash of the IP address of the client is calculated to redirect the request to a server.
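To make a couple of these concrete, here is a minimal Python sketch of weighted round robin, least connections, and IP hash selection. The `Server` class and the server names are hypothetical stand-ins for whatever the load balancer actually tracks:

```python
import itertools

class Server:
    """Hypothetical stand-in for a backend server tracked by the load balancer."""
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight
        self.active_connections = 0

servers = [Server("web-1", weight=3), Server("web-2"), Server("web-3")]

# Weighted Round Robin: cycle through a weight-expanded server list in order.
_cycle = itertools.cycle([s for s in servers for _ in range(s.weight)])

def weighted_round_robin():
    return next(_cycle)

# Least Connection: pick the server with the fewest active connections.
def least_connections():
    return min(servers, key=lambda s: s.active_connections)

# IP Hash: the same client IP always maps to the same server (useful for sticky sessions).
def ip_hash(client_ip):
    return servers[hash(client_ip) % len(servers)]

print(weighted_round_robin().name, least_connections().name, ip_hash("203.0.113.7").name)
```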
An index can dramatically speed up data retrieval but may itself be large due to the additional keys, which slow down data insertion & update.
When adding rows or making updates to existing rows for a table with an active index, we not only have to write the data but also have to update the index. This will decrease the write performance. This performance degradation applies to all insert, update, and delete operations for the table. For this reason, adding unnecessary indexes on tables should be avoided and indexes that are no longer used should be removed. To reiterate, adding indexes is about improving the performance of search queries. If the goal of the database is to provide a data store that is often written to and rarely read from, in that case, decreasing the performance of the more common operation, which is writing, is probably not worth the increase in performance we get from reading.
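As a small illustration of this read/write tradeoff, here is a hedged sketch using Python's built-in sqlite3 module; the table, column, and index names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cars (id INTEGER PRIMARY KEY, make TEXT, model TEXT, color TEXT)")

# The index speeds up reads that filter on 'make' ...
cur.execute("CREATE INDEX idx_cars_make ON cars (make)")

# ... but every INSERT/UPDATE/DELETE now also has to maintain idx_cars_make,
# which is exactly the write penalty described above.
cur.execute("INSERT INTO cars (make, model, color) VALUES (?, ?, ?)", ("Toyota", "Corolla", "blue"))
conn.commit()

# Index-assisted read:
cur.execute("SELECT model FROM cars WHERE make = ?", ("Toyota",))
print(cur.fetchall())
```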
Toying with ML algorithms versus everything else in your system
CPUs & GPUs are general purpose, but they can run into von Neumann bottlenecks. TPUs are basically ASICs for the multiplication and addition operations in machine learning (basically, they can bring many matrix operations from $O(n^3)$ down to $O(3n - 2)$).
Data collection is the process of acquiring, formatting, and possibly transforming data. Data warehousing typically refers to storing data in a Data Warehouse, though the term is occasionally used for storing data in a different structure like a Data Lake or a Data Mart.
A data warehouse usually only stores data that's already modeled/structured. A Data Warehouse is multi-purpose and meant for all different use-cases. It doesn't take into account the nuances of requirements from a specific business unit or function. As an example, let's take a Finance Department at a company. They care about a few metrics, such as Profits, Costs, and Revenues to advise management on decisions, and not about others that Marketing & Sales would care about. Even if there are overlaps, the definitions could be different.
A data lake is the place where you dump all forms of data generated in various parts of your business: structured data feeds, chat logs, emails, images (of invoices, receipts, checks etc.), and videos. The data collection routines do not filter any information out; data related to canceled, returned, and invalidated transactions will also be captured, for instance.
A data lake is relevant in two contexts:
Your organization is so big and your product does so many functions that there are many possible ways to analyze data to improve the business. Thus, you need a cheap way to store different types of data in large quantities. E.g., Twitter in the B2C space (They have text (Tweets), Images, Videos, Links, Direct Messages, Live Streams, etc.), and Square (B2B) (Transactions, Returns, Refunds, Customer Signatures, Logon IDs etc.).
You don't have a plan for what to do with the data, but you have a strong intent to use it at some point. Thus, you collect data first and analyze later. Also, the volume is so high that traditional DBs might take hours if not days to run a single query. So, having it in a Massively Parallel Processor (MPP) infrastructure helps you analyze the data comparatively quickly.
While a data-warehouse is a multi-purpose storage for different use cases, a data-mart is a subsection of the data-warehouse, designed and built specifically for a particular department/business function. Some benefits of using a data-mart include:
Isolated Security: Since the data-mart only contains data specific to that department, you are assured that no unintended data access (finance data, revenue data) is physically possible.
Isolated Performance: Similarly, since each data-mart is only used for particular department, the performance load is well managed and communicated within the department, thus not affecting other analytical workloads.
Data collection can be automatic, but more often than not it requires a lot of human involvement. Data collection steps like cleaning, feature selection, and labeling can often be redundant and time-consuming. Some work is being done on automating these steps with semi-supervised learning, but it comes with high computational costs (the same goes for approaches using GANs, Variational Autoencoders, or Autoregressive models).
A data warehouse is an independent application system, whereas a data mart is built specifically to support a decision application system. The data in a data warehouse is stored in a single, centralised archive, while in a data mart data is stored decentrally in different user areas. A data warehouse holds data in detailed form, whereas a data mart holds summarized and selected data. The development of a data warehouse involves a top-down approach, while a data mart involves a bottom-up approach. A data warehouse is said to be more adjustable, information-oriented and long-lived, whereas a data mart is restricted, project-oriented and shorter-lived.
Data format is vital. In many data collection scenarios, we might have a lot of CSVs, but this might be less than ideal. For the sake of data reliability, we probably shouldn't have a massive 6 GB file called "the data" (client-side guarantees can be improved by providing an API for uploading the data instead).
A good alternative to CSV files would be HDF5 files. If we cannot go to HDF5, we can use an S3 bucket that can provide versioning.
To integrate heterogeneous databases, we have two approaches:
Query-driven Approach
The query-driven approach needs complex integration and filtering processes.
This approach is very inefficient, and it is very expensive for frequent queries.
This approach is also very expensive for queries that require aggregations.
Update-driven Approach
This approach provides high performance.
The data is copied, processed, integrated, annotated, summarized and restructured in a semantic data store in advance.
Query processing does not require an interface to process data at local sources.
ETL stands for "extract, transform, load".
The Extract step involves reading the data from the source (a disk, a stream, or a network of peers). Extraction depends heavily on I/O devices (reading from disk, network, etc.).
The Transform step involves preprocessing transformations on the data, such as scaling for images, assembly for genomic data, or anonymization. The transformation usually depends on the CPU.
The Load step (the final step) bridges between the working memory of the training model and the transformed data. Those two locations can be the same or different depending on what kind of devices we are using for training and transformation. Assuming that we are using accelerated hardware, loading depends on GPU/ASICs.
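A toy, self-contained sketch of the three steps in Python (the file name, fields, and transform are hypothetical):

```python
import csv

# A tiny sample source file so the sketch is self-contained (stands in for a real source).
with open("transactions.csv", "w", newline="") as f:
    f.write("user_id,amount\nalice,1250\nbob,300\n")

def extract(path):
    # Extract: I/O-bound read from disk (could equally be a stream or a network source).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: CPU-bound preprocessing, e.g. scaling a numeric field and anonymizing an ID.
    return [{"scaled_amount": float(r["amount"]) / 100.0, "user": hash(r["user_id"])} for r in rows]

def load(rows, sink):
    # Load: hand the transformed records to wherever the training process reads from.
    sink.extend(rows)

training_buffer = []
load(transform(extract("transactions.csv")), training_buffer)
print(training_buffer)
```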
To make model training scalable, one should take a "divide and conquer" approach: decompose the computations performed in these steps into granular pieces that can be run independently of each other and aggregated later to get the desired result. Approaches to this can be classified as functional decomposition, data decomposition, or both.
Functional decomposition: Breaking the logic down to distinct and independent functional units, which can later be recomposed to get the results. This is also known as "Model parallelism".
Data decomposition: Data is divided into chunks, and multiple machines perform the same computations on different data.
Utilizing both kinds of decomposition: Using both kinds of decomposition would involve using an ensemble learning model like a random forest, which is conceptually a collection of decision trees. In the case of a random forest, decomposing the model into individual decision trees is functional decomposition, and further training each individual tree in parallel is data parallelism. This is also an example of what are called "embarrassingly parallel" tasks.
MapReduce is based on a "split-apply-combine" strategy. It works in the following stages:
The map function maps the data to zero or more key-value pairs.
The execution framework groups these key-value pairs using a shuffle operation.
The reduce function takes in those key-value groups and aggregates them to get the final result.
MapReduce takes care of running the Map and Reduce functions in a highly optimized and parallelized manner on multiple workers (i.e., a cluster of nodes).
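As a rough single-process illustration of the split-apply-combine idea, here is a word-count sketch of the map, shuffle, and reduce stages; a real framework like Hadoop would run these across many workers:

```python
from collections import defaultdict

def map_fn(document):
    # Map: emit zero or more (key, value) pairs per input record.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key (a real framework does this for us).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(key, values):
    # Reduce: aggregate each key's values into the final result.
    return key, sum(values)

documents = ["the cat sat", "the dog sat"]
pairs = [pair for doc in documents for pair in map_fn(doc)]
counts = dict(reduce_fn(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```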
A distributed machine learning architecture consists of 1) the Data, 2) Partitioned Data, 3) the Driver, and 4) the Node Cluster (with multiple nodes).
The data is partitioned, and the driver node assigns tasks to the nodes in the cluster. The nodes might have to communicate among each other to propagate information, like the gradients. There are various arrangements possible for the nodes, and a couple of extreme ones include Async parameter server and Sync AllReduce.
In the Async parameter server architecture, as the name suggests, the transmission of information in between the nodes happens asynchronously.
A single worker can have multiple computing devices. The worker labeled "master" also takes up the role of the driver. Here the workers communicate information (e.g., gradients) to the parameter servers, update the parameters (or weights), and pull the latest parameters (or weights) from the parameter server itself. One drawback of this kind of setup is delayed convergence, as the workers can go out of sync.
In a Sync AllReduce architecture, the focus is on the synchronous transmission of information between the cluster nodes.
Workers are mutually connected via fast interconnects. There's no parameter server. This kind of arrangement is more suited for fast hardware accelerators. All workers have to be synced before a new iteration, and the communication links need to be fast for it to be effective.
Some distributed machine learning frameworks do provide high-level APIs for defining these arrangement strategies with little effort.
What are some examples of Distributed machine learning frameworks?
Hadoop is the most popular MapReduce tool. Hadoop stores the data in the Hadoop Distributed File System (HDFS) format and provides a MapReduce API in multiple languages. The scheduler used by Hadoop is called YARN (Yet Another Resource Negotiator), which takes care of optimizing the scheduling of the tasks to the workers based on factors like localization of data.
Apache Spark uses immutable Resilient Distributed Datasets (RDDs) as the core data structure. In Spark, distributed machine learning algorithms can be written directly in the MapReduce paradigm or by using a library like MLlib.
Apache Mahout is more focused on performing distributed linear-algebra computations (This is a great backup if something isn't supported by Spark MLLib).
Message Passing Interface (MPI) is a paradigm that's more of a general model and provides a standard for communication between the processes by message-passing. MPI gives more flexibility (and control) over inter-node communication in the cluster. For use cases w/ smaller datasets or more communication amongst the decomposed tasks, libraries based on MPI are better.
Tensorflow & PyTorch have built-in APIs for data and model parallelism. Frameworks like Horovod and Elephas provide much higher-level model distribution.
It's important to properly manage and schedule ML processes so that there are no lags or bottlenecks. A good strategy is to use a first-in, first-out (FIFO) queue. The backend simply enqueues jobs. Workers pick and process jobs out of the queue, performing training or inference, and storing models or predictions to the database when done. With queueing libraries like Celery & MLQ, the following is literally all you need for a backend web server: an endpoint to enqueue a job, an endpoint to check the progress of a job, and an endpoint to serve up a job's result if the job has finished. In short, use a queue, and don't tie up your backend webserver. Separate any ML processes from the act of serving up assets and endpoints. Even better if you ensure that everything is stateless and able to operate in parallel.
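A minimal sketch of that pattern with Flask and Celery follows; the Redis broker URL, the `train_model` task, and the endpoints are all hypothetical placeholders:

```python
# tasks.py: a minimal Celery worker (assumes a Redis broker/backend at these hypothetical URLs)
from celery import Celery

celery_app = Celery("ml_jobs", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@celery_app.task
def train_model(dataset_id):
    # ... long-running training happens here; the result is stored by the backend ...
    return {"dataset_id": dataset_id, "status": "trained"}

# app.py: the web server never runs ML work itself, it only enqueues and checks jobs
from flask import Flask, jsonify
from celery.result import AsyncResult

app = Flask(__name__)

@app.route("/jobs/<dataset_id>", methods=["POST"])
def enqueue(dataset_id):
    job = train_model.delay(dataset_id)           # enqueue and return immediately
    return jsonify({"job_id": job.id}), 202

@app.route("/jobs/<job_id>", methods=["GET"])
def status(job_id):
    result = AsyncResult(job_id, app=celery_app)  # check progress / fetch the result
    return jsonify({"state": result.state, "result": result.result if result.ready() else None})
```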
If you still need performance enhancements and have exhausted all options on the backend, consider frontend deployment with a framework like TensorflowJS.
Michelangelo is a distributed machine learning platform created by Uber. The internal motivations at Uber for building the platform were the following: Limited impact of ML due to the huge resources needed when translating a local model into production
Unreliable ML and data pipelines
Engineering teams had to create custom serving containers and systems for each problem at hand
Inability to scale ML projects
High-Level Architecture of Michelangelo
The main steps of Michelangelo are the following:
Data Management (Palette): Uber's feature store. Can sort data based on recency, as well as support streaming, batch processing, and online learning.
Model Training (Horovod): distributed model training with specialized tools and model-specific model monitoring
Model evaluation: Infrastructure for inspection and visualization
Model deployment: CI/CD, rollbacks based on metrics monitoring, usually served on Uber's data clusters
Prediction: Routes prediction data through a DSL detailing the model and input path.
Prediction monitoring: Log predictions and join to actual outcomes, publish error metrics and aggregates, ongoing accuracy messages, data distribution monitoring
Some tradeoffs from the Java packaged (Michelangelo v1) approach vs the democratized hands-on approach with PYML include:
Slightly higher latency
Quicker prototyping and pilots in production for ML models
Delivery friendliness
When it makes sense it's ported to the more highly scalable system
End to end data scientist ownership of the deployment process
Enables velocity from ideation to production
Feast (Feature Store) is a tool for managing and serving machine learning features; it is the bridge between models and data. Feast was built with the goals of "providing a unified means of managing feature data from a single person to large enterprises", "providing scalable and performant access to feature data when training and serving models", "providing consistent and point-in-time correct access to feature data", and "enabling discovery, documentation, and insights into your features". Feast is an open-source alternative to Palette (Uber).
Cron is a utility that provides scheduling functionality for machines running the Linux operating system. You can set up a scheduled task using the crontab utility and assign a cron expression that defines how frequently to run the command. Cron jobs run directly on the machine where cron is utilized, and can make use of the runtimes and libraries installed on the system.
There are a number of challenges with using cron in production-grade systems, but it's a great way to get started with scheduling a small number of tasks and it's good to learn the cron expression syntax that is used in many scheduling systems. The main issue with the cron utility is that it runs on a single machine, and does not natively integrate with tools such as version control. If your machine goes down, then you'll need to recreate your environment and update your cron table on a new machine.
A cron expression defines how frequently to run a command. It is a sequence of five fields that define when to execute the command at different time granularities, and it can include wildcards to always run for certain time periods.
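For example, a hypothetical crontab entry that retrains a model at 2 AM on weekdays might look like this (the script and log paths are made up):

```
# m  h  dom mon dow   command
  0  2  *   *   1-5   python /path/to/retrain.py >> /var/log/retrain.log 2>&1
```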
Luigi is an ETL-pipeline-building tool created by Spotify. Beyond just automating ETL pipelines, Luigi was built for monitoring Cron jobs, transferring data from one place to another, automating DevOps operations, periodically fetching data from websites to update databases, data processing for recommendation-based systems, and machine learning pipelines (to a certain extent). A Luigi pipeline is usually made of the following components:
Target: Holds the output of a task; could be a local file, HDFS, or an RDBMS (MySQL, etc.)
Task: Where the actual work takes place; tasks can be independent or dependent. An example of a dependent task is dumping data into a file or database: before loading the data, the data must first be obtained by some means (scraping, an API, etc.). Each task is represented as a Python class which contains certain mandatory member functions. A task class contains the following methods:
requires(): This member function of the task class lists all the task instances that must be executed before the current task. For example, a task named ScrapeData would be included in the requires() method, making the current task a dependent task.
output(): This method contains the target where the task output will be stored. This could contain one or more target objects.
run(): This method contains the actual logic to run a task. A minimal example task pair is sketched below.
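A minimal sketch of such a dependent task pair (the ScrapeData logic and file names are invented for illustration):

```python
import luigi

class ScrapeData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("raw_data.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("id,value\n1,42\n")   # stand-in for real scraping / API calls

class DumpToFile(luigi.Task):
    def requires(self):
        return ScrapeData()               # ScrapeData must finish before this task runs

    def output(self):
        return luigi.LocalTarget("processed_data.csv")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            dst.write(src.read().upper())

if __name__ == "__main__":
    luigi.build([DumpToFile()], local_scheduler=True)
```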
Luigi doesn't offer quite the same level of flexibility as ETL pipeline tools like Apache Spark, and its dashboard is less intuitive than tools like MLflow and Airflow. It is still useful for quickly building out ETL pipelines with complex functionality.
MLflow is built as an "ML lifecycle management" tool. It is used primarily in cases where models are being deployed from development to production. MLflow is useful for its integrations with TensorFlow, PyTorch, Keras, Apache Spark, scikit-learn, H2O.ai, Python, R, Java, MLeap, ONNX, Gluon, XGBoost, LightGBM, Conda, Kubernetes, Docker, Amazon SageMaker, Azure Machine Learning, and Google Cloud. It can be a very useful framework for avoiding the tangled pipelines that tend to pile up in machine learning projects.
The main downsides are occasional instability with certain builds (due to some customization of the software by different organizations) and an early learning curve for getting around crashes.
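As a rough example of the tracking side of that lifecycle, a scikit-learn model could be logged with MLflow roughly like this (the run name, parameters, and toy dataset are made up):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")   # versioned artifact that can later be promoted to production
```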
Pentaho Kettle is Hitachi's ML workflow framework. Its main selling point is its usability with little coding; it is entirely possible to manage an ETL pipeline using its GUI alone. The downside is its comparative lack of versatility compared to many other ETL pipeline frameworks, at least in part due to its proprietary and closed-source nature. New functionalities are even added through a top-down-controlled "store".
Airflow is an open source workflow tool that was originally developed by Airbnb and publicly released in 2015. It helps solve a challenge that many companies face, which is scheduling tasks that have many dependencies. One of the core concepts in this tool is a graph that defines the tasks to perform and the relationships between these tasks.
In Airflow, a graph is referred to as a DAG, which is an acronym for directed acyclic graph. A DAG is a set of tasks to perform, where each task has zero or more upstream dependencies. One of the constraints is that cycles are not allowed, where two tasks have upstream dependencies on each other.
DAGs are set up using Python code, which is one of the differences from other workflow tools such as Pentaho Kettle which is GUI focused. The Airflow approach is called "configuration as code", because a Python script defines the operations to perform within a workflow graph. Using code instead of a GUI to configure workflows is useful because it makes it much easier to integrate with version control tools such as GitHub.
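A minimal configuration-as-code sketch of a two-task DAG is shown below; the task logic and schedule are hypothetical, and exact import paths can vary slightly between Airflow versions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")

def train():
    print("training model")

with DAG(
    dag_id="daily_training",
    start_date=datetime(2023, 1, 1),
    schedule_interval="0 2 * * *",   # the same cron syntax discussed earlier
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task       # 'train' has an upstream dependency on 'extract'
```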
If you're deploying a lot of ML models from development to production very frequently, use a platform like Michelangelo.
If you're trying to create a dashboard for tracking your model progress, use Airflow.
If you're managing FIFO queues of ML jobs for Flask apps, go with Celery (MLQ if you don't expect the project to advance beyond prototype).
If you're integrating your ML models into your JavaScript code, use TensorflowJS
Not far off from what your actual system design interviews will feel like
Ultimately a systems design interview will come down to designing a full machine learning pipeline. There are many ways of approaching each question, and you will have to weigh the different options. Ultimately, you will need to stick with one choice rather than dwelling on it forever (this is one of the few times where being opinionated almost to the point of being arrogant helps more than it hurts).
Here are a few curated questions asked at actual companies, along with sample answers (which you won't find anywhere else).
We can interpret this language difficulty problem as a text complexity measurement problem, i.e., how difficult it is to understand a document. Text complexity can be defined by two concepts: legibility and readability.
Legibility ranges from character perception to all the nuances of formatting such as bold, font style, font size, italic, word spacing, etc.
Readability focuses on textual content such as lexical, semantical, syntactical and discourse cohesion analysis. It is usually computed as an approximation from other metrics, such as using average sentence length (in characters or words) and average word length (in characters or syllables) in sentences.
There are other text complexity features that do not depend on the text itself:
Reader's intent (homework, learning, leisure, etc.)
Cognitive focus (which can be influenced by ambient noise, stress level, or any other type of distraction).
With the text-dependent features, we can frame this as a supervised NLP problem. For each story, we can clean and tokenize the text into sentences and words. From this, we can create feature vectors that represent higher-level qualities of the text, such as:
Mean number of words per sentence
Mean number of syllables per word
Number of polysyllables (more than 3 syllables)
Part-of-Speech (POS) tags count per book
Mean number of words considered "difficult" in a sentence (a word is "difficult" if it is not part of an "easy" words reference list)
Readability formulas such as Flesch-Kincaid
All of these features would be scaled to the same $[-1, +1]$ range. Further feature selection would be done through a combination of LASSO regression and k-fold cross-validation. For each feature set, a regression function is fitted using our training data. Then a correlation is computed (using a metric such as the Pearson correlation coefficient, Kendall's Tau, or a Chi-Square test) between each set's regression function and the readability score. Feature sets are ranked by correlation performance and the best one is selected.
In addition to the accuracy of the model, we want humans to be able to easily understand exactly why a text is difficult to read. With that in mind, a decision-tree-based algorithm like XGBoost regression would be an ideal choice. Once the model outputs a readability score, the readability can be converted to a more meaningful grade-level representation using the formula $l_g = \exp(s_r)^{0.002}$, where $l_g$ is the grade level and $s_r$ the readability score. The progress of students' reading comprehension in a given language could then be split into levels based on grade levels (or some similar staging strategy).
Overview and examples of the proposed workflow, alongside example grade levels and corresponding scores
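A rough sketch of the scoring and grade-level conversion under these assumptions (the feature matrix and readability labels are randomly generated placeholders, and XGBoost is just one suitable tree-based regressor):

```python
import numpy as np
import xgboost as xgb

# X: rows of scaled text features (words/sentence, syllables/word, ...), y: readability scores.
# Both are random placeholders here; in practice they come from the feature extraction above.
X = np.random.uniform(-1, 1, size=(500, 6))
y = np.random.uniform(0, 20, size=500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

def grade_level(readability_score):
    # l_g = exp(s_r)^0.002, as described above
    return np.exp(readability_score) ** 0.002

s_r = float(model.predict(X[:1])[0])
print(f"readability score {s_r:.2f} -> grade level {grade_level(s_r):.3f}")
```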
This supervised approach should be able to get high accuracy given the wide availability of language corpuses. That being said, language agnostic unsupervised approaches also exist for evaluating text readability such as using neural language models as comprehension systems to infill Cloze tests (text chunks with blank words).
Given certain phrases or words, we could set up a graph-specific structure (such as one based on Word2Vec) to find words/phrases with similar meanings. These could then be replaced with other words or phrases that change features like syllable count or word count. This could also be combined with specific reference lists of "difficult" or "easy" words (which Duolingo likely has). If we use a decision-tree-based model for the difficulty score, we could be recommended a few branches where changing one feature of the language changes the readability more to our liking.
In this problem, we want to take in details of transactions such as amount, merchant, location, time, and other features, and output a "safe" or "fraud" label. Hackers and crooks are fast enough at updating their techniques to render hard-coded rule-based systems obsolete before they get to market, hence why we're choosing a learning system.
While it's straightforward to identify this problem as one of binary classification, it does suffer from an enormous imbalance between "safe" and "fraud" classes. This makes the problem much more difficult. Suppose we have less than 0.1% of the dataset being "fraud" labelled. We have three main methods at our disposal: oversampling, undersampling, and combined class methods.
Oversampling refers to artificially creating observations in our data set belonging to the class that is underrepresented. One common technique for this is SMOTE (Synthetic Minority Over-sampling Technique), which does the following:
Finding the k-nearest-neighbors for minority class observations (finding similar observations)
Randomly choosing one of the k-nearest-neighbors and using it to create a similar, but randomly tweaked, new observation.
Undersampling refers to randomly selecting a handful of samples from the class that is overrepresented. imbalanced-learn's RandomUnderSampler class does exactly this by randomly removing majority-class samples; its ClusterCentroids class instead performs k-means clustering on the majority class and replaces each cluster with its centroid.
One of the issues with SMOTE is that it can generate noisy samples by interpolating new points between marginal outliers and inliers. This issue can be solved by cleaning the resulted space obtained after over-sampling. To do this, we can use SMOTE together with edited nearest-neighbours (ENN). ENN is used as the cleaning method after SMOTE has been run. This technique can be run with imblearn's SMOTEENN class.
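A short sketch of these three resampling options with imbalanced-learn; the synthetic dataset stands in for real transaction features:

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.combine import SMOTEENN

# ~0.5% positive ("fraud") labels to mimic the imbalance described above
X, y = make_classification(n_samples=20000, weights=[0.995, 0.005], random_state=0)
print("original:", Counter(y))

X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)                  # oversample the minority class
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)   # undersample the majority class
X_comb, y_comb = SMOTEENN(random_state=0).fit_resample(X, y)               # SMOTE followed by ENN cleaning

print("SMOTE:", Counter(y_over))
print("undersampled:", Counter(y_under))
print("SMOTE + ENN:", Counter(y_comb))
```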
For the model itself, we can use a technique like a random forest or a neural network. Regardless of our choice, it should be incredibly easy to achieve close to 100% precision for the overrepresented negative class label. Recall of course looks much worse (i.e., actually catching the positive "fraud" labels).
This was expected since we're dealing with imbalanced data: it's easy for the model to notice that predicting everything as the negative class reduces the error. Precision obviously looks good, but recall will be much worse than precision for the positive class (fraud). We can measure additional performance characteristics of our model, such as the Area Under the Receiver-Operating Characteristic curve (AUROC). Depending on how these metrics perform, we can improve the results further with either SMOTE, RandomUnderSampler, or SMOTE + ENN. Which one is better will depend heavily on the specific situation.
It's not enough for a recommender system to be able to recommend the most relevant item. A simple solution to this problem would be to make a recommender system that returns the top $N$ items. In addition to this list of $N$ items being calculated by a filtering technique, new features need to be calculated for all items with a rating above a certain threshold (e.g., semantic similarity, likability, overlap in users, variance in ratings). This new feature calculation then re-ranks the top candidates, with only the top $N$ items being recommended.
With an item-replacement technique, we need to have some control over the number of distinct items (i.e., the diversity) of the list of top $N$ items $L_N(u)$ ($u$ representing a user, $U$ being the set of all users), while also accurately predicting relevant items. The two optimization metrics for our system are accuracy, $\frac{\sum_{u \in U} |\text{correct}(L_N(u))|}{\sum_{u \in U} |L_N(u)|}$, and diversity, $\bigcup_{u \in U} L_N(u)$.
The basic idea behind our iterative item-replacement recommender system is as follows. First, we use a standard-ranking approach (the algorithm could be anything from neighbor-based methods to probabilistic matrix factorization). Once this is done, one of the already-recommended items is replaced by an item that has not been recommended to any other user. The item that will be replaced ($i_\text{old}$) will be an item with a known rating from the user ($u_\text{max}$, or whomever rated the item most highly), and it is replaced by a comparatively unseen item $i_\text{new}$ with a predicted rating. Not only does this increase the predicted rating for the new item, it also increases the diversity score by exactly one unit. This continues until the diversity reaches a desired threshold or until there are no more items that can be recommended for replacement.
Our general architecture is as follows: we have a database (or several) with all the products, along with details about all the registered users on the site. The user details include information like user ID, login info, interests, and so on. For the sake of scaling this example, we can specify that our relevance-prediction algorithm $R^*(u, i)$ is a neighbor-based algorithm, since this scales better than probabilistic matrix factorization. Since we're more concerned with Consistency and Partition tolerance for our database, we can imagine our architecture making use of a set of Cassandra databases with load balancers.
```mermaid
graph TD
  A((Expert)) -->|Item Replacement| B((Users))
  A -->|User Detail Request| C[Database]
  C --> A
  B --> A
```
The interactions between the users, item-replacement expert system, and database(s) are as follows:
Initializing the top-$N$ recommendation list
Replacing an already recommended item $i_\text{old}$ with a never-recommended item $i_\text{new}$
Repeat steps 1 and 2 until maximum diversity is reached.
In a nutshell, our first recommendation of $N$ items is generated using the standard ranking algorithm ($\text{Rank}_\text{standard}(i) = R^*(u,i)^{-1}$, where $R$ represents ratings, $u$ the user, and $i$ the item), and items are gradually replaced by infrequently-recommended items. Not only is the user recommended items similar to the one replaced, but many of the items will not be advertised as perfect copies (i.e., the list is less likely to be populated with counterfeits).
We have the choice between a collaborative filtering recommender system (which recommends based on user history) and a product-focused recommender system (which uses demographic data about users). Collaborative filtering works best when there is a lot of data on a given user. If the user is new, one possible strategy would be to suggest some of the most-followed users on the site (such as celebrities or public figures). Not only does this give the user a starting point for using the site, but it also gives you a starting point for figuring out the consumer habits or opinions that shape their behavior on the site. Given that we have less of a history to work with, a product-focused recommender system (using data like location) might be more useful here.
Data Driven recommender systems have a variety of flaws. Some of the top immediately obvious ones are the following:
The cold-start problem: Collaborative filtering systems are based on the action of available data from similar users. If you are building a brand new recommendation system, you would have no user data to start with. You can use content-based filtering first and then move on to the collaborative filtering approach.
Scalability: As the number of users grows, the algorithms suffer scalability issues. If you have 10 million customers and 100,000 movies, you would have to create a sparse matrix with one trillion elements.
The lack of right data: Input data may not always be accurate because humans are not perfect at providing ratings. User behavior is more important than ratings. Item-based recommendations provide a better answer in this case.
Much like related questions on other sites, a related-searches list would be a clustering (or topic modelling) problem. For a site as large as Google, we also need to rule out approaches that are compute- and memory-inefficient. As a modelling technique, we want to pick one that can make clusters of similar queries AND is highly scalable.
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.
Latent semantic indexing is a type of multivariate correspondence analysis used to create "contingency tables" from documents. It's called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don't share a specific word or words with the search criteria. There are many benefits to LSI, including but not limited to the following:
LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text. Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories. LSI uses example documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.
Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.
LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.). This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.
Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
LSI has proven to be a useful solution to a number of conceptual matching problems. The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.
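A compact sketch of the LSI/LSA idea using scikit-learn (TF-IDF followed by truncated SVD); the toy documents and query are invented:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "how to reset my password",
    "forgot password recovery steps",
    "best pizza places nearby",
    "italian restaurants open late",
]

tfidf = TfidfVectorizer()
svd = TruncatedSVD(n_components=2, random_state=0)         # the low-rank "latent semantic" space
doc_vectors = svd.fit_transform(tfidf.fit_transform(docs))

query_vec = svd.transform(tfidf.transform(["password help"]))
similarities = cosine_similarity(query_vec, doc_vectors)[0]

# Conceptually related queries cluster together even without exact word overlap
for score, doc in sorted(zip(similarities, docs), reverse=True):
    print(f"{score:+.2f}  {doc}")
```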
The image search system would behave very similarly to document searches in Google, though the categories used for the search would be different. Many images will have enough associated metadata (or even surrounding text) to be placed within a graph created by a word clustering tool like word2vec or GloVe, and many of these images will acquire tags through this surrounding context information. For images that lack such context, Google could also run label detection and then cache the labels for easier search. Said images would likely have multiple categories and labels sorted by specificity. This would allow a hierarchical structure to the image references, allowing for more specific or more general queries for images.
Our goal is to provide a dashboard widget that recommends content on Twitter (the microblogging site if you somehow didn't know already) that is becoming popular in a given window of time (however small that may be). Given the usefulness of this tool to marketing and advertising, the company also wants to be able to predict popular hashtags in the near future. This problem can be modelled as a classification task, where the goal is to classify trending and non-trending hashtags with high precision and recall.
For data, you could likely use a small subset of tweets within a certain region (such as 0.5%) over the course of 1.25 days and use that for training and test data. After all, at Twitter's scale it would be impractical to use the entirety of the site's content for training data. Once this is done, and the trending and non-trending hashtags are identified, the trending hashtags from the first 24 hours can be used as training data to predict the trending hashtags in the first hours of the second day. To increase the predictive power for an arbitrary hashtag $h$ in the first day, the following features will be added to the feature vectors:
$\text{lifetime}$: The lifetime of a hashtag $h$ is the number of hours during which $h$ appeared in at least one tweet posted during that hour.
$n_\text{tweet}$: This is the total number of tweets containing the particular hashtag $h$ during the lifetime of $h$.
$n_\text{user}$: This is the total number of distinct users who posted at least one tweet containing hashtag $h$ during the lifetime of $h$.
The first three properties – $\text{lifetime}$, $n_\text{tweet}$, and $n_\text{user}$ – help to estimate the global level of popularity of a hashtag.
$\text{velocity}$: This feature captures the rate of change of the number of tweets containing a particular hashtag. Let the number of tweets containing hashtag $h$ that are posted during a particular hour $k$ be denoted by $N_k^h$. Then the velocity of hashtag $h$ during hour $k$ is $N_{k+1}^h - N_k^h$, i.e., the difference between the number of tweets containing $h$ posted during the next hour and during hour $k$. Note that velocity can be negative as well – for instance, if a hashtag is included in fewer tweets during hour $k + 1$ than during hour $k$, then its velocity during hour $k$ will be negative.
$\text{acceleration}$: This is analogous to the rate of change of velocity at a particular hour, and is computed in a way similar to the velocity feature described above. Let the velocity of hashtag $h$ during hour $k$ be denoted by $V_k^h$ (computed as described above). Then the acceleration of $h$ during hour $k$ is computed as $V_{k+1}^h - V_k^h$. Like velocity, acceleration can be negative as well.
The last two properties, velocity and acceleration, help to estimate how quickly a hashtag is gaining (or losing) popularity. Even if a hashtag has a short lifespan, it should still be considered trending if its popularity is rapidly rising during that period. The maximum values of velocity and acceleration also help distinguish trending from non-trending hashtags. In short, this gives us a feature vector of length 5, $[\text{lifetime}, n_\text{tweet}, n_\text{user}, \text{velocity}_\text{max}, \text{acceleration}_\text{max}]$, for a given hour during the hashtag's lifetime.
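A small sketch of how hourly counts for a single hashtag translate into these features (the counts are invented, and $n_\text{user}$ would come from the raw tweets rather than the counts):

```python
# Hourly tweet counts N_k^h for a single hashtag h over its lifetime (invented numbers)
hourly_counts = [3, 10, 45, 120, 80, 20]

velocity = [hourly_counts[k + 1] - hourly_counts[k] for k in range(len(hourly_counts) - 1)]
acceleration = [velocity[k + 1] - velocity[k] for k in range(len(velocity) - 1)]

features = {
    "lifetime": sum(1 for c in hourly_counts if c > 0),  # hours with at least one tweet
    "n_tweet": sum(hourly_counts),                       # total tweets over the lifetime
    "velocity_max": max(velocity),
    "acceleration_max": max(acceleration),
}
print(features)  # n_user is omitted; it requires the distinct authors from the raw tweets
```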
There are a variety of classifiers that could be used for this, but this seems like the kind of task for which a neural network or tree-based model (e.g., random forest or boosted tree) would be most appropriate, rather than an SVM. For a service as large as Twitter, we probably need to use a tool like Apache Spark (or the proprietary Databricks version of Spark) or Michelangelo to scale up the learning.
We want to rank answers on a page according to how helpful they are. The goal of ranking is to put the most comprehensive, trustworthy answers at the top of the page, so that they are easily accessible for people who have a question. We have a variety of factors we can use to judge how helpful questions are:
upvotes and downvotes on the answer, in part based on how trustworthy voters are in a given topic
the previous answers written by the author
whether the author is an expert in the subject
the contents of the answer, including the type and quality of content (words, photos, links)
other signals, including ones that help us prevent gaming of ranking through votes
One way of combining these would be to normalize and scale (e.g., on a log scale) certain features, combine them into one large feature vector, and reduce this vector to a scalar score through dimensionality reduction (e.g., via PCA, t-SNE, or UMAP).
Once the scores are generated for each answer, the sorting system arranges them from highest to lowest score (i.e., a worst-case runtime complexity of $O(n \log n)$). Out of all the score-generating methods at our disposal, UMAP has the advantage of being less computationally intensive and more reproducible.
What we need is functionally a location-based recommender system. This is more complex than just listing places that are geographically close (as one would at the start of a Lyft or Uber recommender system); it also needs to take into account rental listing ratings. We can imagine the ranking system for a given location (be it a new zip code the service has expanded to or even a new country) as follows:
In the early stage, the main priority would be to collect as much relevant user data as possible. The best data would be in the form of search logs from users that make bookings. The training data would then need to be labelled according to which listings were booked fully, and which were clicked but ultimately not booked. For improving predictions on this classification task, relevant features would need to be aggregated from each booking and listing (e.g., duration of stay, price, price/hour, reviews, number of bookings, occupancy rate, maximum number of occupants, and CTR). This should be enough to train a machine learning model such as a neural network (MLP) or a tree-based model (like a gradient-boosted tree), though many features will need to be converted to ratios rather than raw counts before being entered (e.g., fraction of bookings relative to listing views). The performance on held-out test data can be measured with AUC and NDCG. Given the simplicity of such a setup using only standard listing vectors for all users, it's possible to use a service like Apache Airflow to get all the rankings for a given region in a short time period.
Once the first stage is working, the next task would be to add personalization. This involves switching to a user-based recommendation system. We can add new features to our feature vectors (booked home location, trip dates, trip length, number of guests, trip price relative to market, type of trip, first trip or returning to location, domestic/international trip, lead days) to improve how customizable the search rankings are. Given a user's short search history, it should also be possible to predict the categories of stays to recommend, as well as the user's time-of-day availability. The personalization features would increase the computational load on the machine learning pipeline. With Apache Airflow, the pipeline could experience up to one-day latency (which is acceptable for a booking site), but it would eventually need to be moved from offline to online learning.
Moving from offline to online learning increases the number of features that can be added to the input data for the ML pipeline. These features mainly come from the real-time user queries (location, number of guests, and dates). For example, we can create a feature from the distance between the entered location and the listing location, and factor that into the rankings. Ultimately this brings the number of total features for our model up to nearly 100, and this data will be spread across thousands of listings and millions of users. At this scale, it would make sense to train multiple ML models: a model for logged-in users (which makes use of query features for user personalization) and a model for logged-out traffic (which also uses query and location features, but not user features). Online logging allows us not only to train both types of models, but also to train on the logged-in user data far more frequently (no more need for pre-computed personalized rankings). At serve time, the logged-in model is used when personalization signals are available for a particular user id; otherwise we fall back to the logged-out model. At this scale, our testing on held-out data would more closely resemble what's typically called A/B testing.
The actual infrastructure for this behemoth of an ML pipeline will be divided into three components: 1) getting model input from various places in real time, 2) model deployment to production, and 3) model scoring. The models make use of three types of signals: those for listings, users, and user queries. Since there are millions of users, user features need to be stored in an online key-value store and on the search-server boxes. Listing features number only in the thousands and can easily be stored on the search-server boxes. These two feature types will be updated daily in the Airflow pipeline. Query features do not need to be stored at all, as they can just be acted on as they come in. The model files themselves can be converted to an H5 file format that can be deployed to a Spark or Java architecture, which reduces the load time for deploying the model. Which model is used is quickly determined by whether the user is logged in or not, and this determines which features to bring together for a final ranking of the listings in a given area.
For the monitoring of this system (mandatory both for giving hosts feedback and for helping the engineering team), one can use tools like Apache Superset and Airflow to create two dashboards:
Dashboard that tracks rankings of specific listings in their market over time, as well as values of features used by the ML model.
Dashboard that shows overall ranking trends for different regions or groups of listings (e.g. how 5-star listings rank in their market).
Autocomplete is commonly found on search engines and messaging apps as a tool that speeds up interactions by suggesting what you'll search for. It must be fast and update the list of suggestions immediately after the user types the next letter. Autocomplete can only suggest words it knows about, whether these are English dictionary words, distinct words on Wikipedia, or words it has learned from users. This search space grows larger if it incorporates multi-word phrases or entity names. The search space is reduced by the fact that the autocomplete takes in a prefix and finds the $k$ words in the vocabulary that start with it.
One of the more useful data structures for this problem is the trie, a data retrieval structure that reduces search complexity and improves speed. A trie can search for a key in $O(M)$ time (where $M$ is the key length), as long as it has proper data storage, which can be an in-memory cache (Redis or Memcached), a database, or even a file. Assume $S$ is a set of $k$ strings, i.e., $S = \{s_1, s_2, ..., s_k\}$. We model the set $S$ as a rooted tree $T$ in such a way that each path from the root of $T$ to any of its nodes corresponds to a prefix of at least one string of $S$. Consider an example set $S = \{het, hel, hi, car, cat\}$, with $\epsilon$ denoting the empty string. Then a trie for $S$ looks like this:
Trie tree for S
Each edge from an internal node to its child is marked by a letter denoting the extension of the represented string. Each node corresponding to a string from $S$ is marked with color. If $he$ is a prefix of another string from $S$, then the node corresponding to $he$ is an internal node of $T$; otherwise it's a leaf. Sometimes it is useful to have all nodes corresponding to strings from $S$ be leaves, so it is very common to append to each string from $S$ a character that is guaranteed not to be in $\Sigma$. In our case, denote $ as this special character. Such a modified trie looks like this:
Our modified Trie
Since now no string in $S$ is a prefix of another string from $S$, all nodes in $T$ corresponding to strings from $S$ are leaves. In order to create the trie $T$, we just start with one node as a root representing the empty string $\epsilon$. If you want to insert a string $e$ into $T$, start at the root of $T$ and consider the first character $h$ of $e$. If there is an edge marked with $h$ from the current node to any of its children, you consume the character $h$ and move down to this child. If at some point there is no such edge and child, you have to create them, consume $h$, and continue the process until the whole of $e$ is done.
Now we can simply search for a string $e$ in $T$. We just have to iterate through the characters of $e$ and follow the corresponding edges in $T$ starting from the root. If at some point there is no transition to a child, or if you consume all the letters of $e$ but the node in which you end the process does not correspond to any string from $S$, then $e$ does not belong to $S$. Otherwise, you end the process in a node corresponding to $e$, so $e$ belongs to $S$.
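A minimal Python sketch of the insert, search, and prefix-lookup operations described above (class and method names are just illustrative):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # letter -> TrieNode
        self.is_word = False  # marks nodes corresponding to strings from S

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            # create the edge/child if it doesn't exist, then consume ch
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word

    def starts_with(self, prefix):
        """Return all words in the trie that share the given prefix."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results, stack = [], [(node, prefix)]
        while stack:
            current, word = stack.pop()
            if current.is_word:
                results.append(word)
            for ch, child in current.children.items():
                stack.append((child, word + ch))
        return results

trie = Trie()
for s in ["het", "hel", "hi", "car", "cat"]:
    trie.insert(s)
print(trie.search("hel"))      # True
print(trie.starts_with("he"))  # ['het', 'hel'] (order may vary)
```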
Another version of the trie approach is the suffix tree. A suffix tree is a compressed trie of all the suffixes of a given string. Suffix trees are useful in solving a lot of string-related problems like pattern matching, finding distinct substrings in a given string, finding the longest palindrome, etc. A suffix tree $T$ is an improvement over the trie data structure used in pattern-matching problems: it is defined over the set of suffixes of a string $s$. The plain trie over these suffixes can have long paths without branches, so the better approach is to compress each such path into a single edge, thereby greatly reducing the size of the trie. Consider the suffix tree $T$ for the string $s = abakan$. The word $abakan$ has 6 suffixes $\{abakan, bakan, akan, kan, an, n\}$ and its suffix tree looks like this:
Esko Ukkonen demonstrated in 1995 that it's possible to make a suffix tree for $s$ in linear time.
Suffix trees can solve many complicated problems because they contain so much information about the string itself. For example, in order to know how many times a pattern $P$ occurs in $s$, it is sufficient to find $P$ in $T$ and return the size of the subtree corresponding to its node. Another well-known application is finding the number of distinct substrings of $s$, which can also be solved easily with a suffix tree. The completion of a prefix is found by first following the path defined by the letters of the prefix. This will end up in some inner node. For example, in the pictured prefix tree, the prefix corresponds to the path of taking the left edge from the root and the sole edge from the child node. The completions can then be generated by continuing the traversal to all leaf nodes that can be reached from the inner node. Searching in a prefix tree is extremely fast: the number of comparison steps needed to find a prefix is the same as the number of letters in the prefix. In particular, this means that the search time is independent of the vocabulary size, so prefix trees are suitable even for large vocabularies. Prefix trees provide substantial speed improvements over ordered lists, because each comparison is able to prune a much larger fraction of the search space.
Can we do better than prefix trees? Prefix trees handle common prefixes efficiently, but other shared word parts are still stored separately in each branch. For example, suffixes such as -ing and -ion are common in the English language. Fortunately, there is an approach to store shared word parts more efficiently. A prefix tree is an instance of a class of more general data structures called acyclic deterministic finite automata (DFAs). There are algorithms for transforming a DFA into an equivalent DFA with fewer nodes, and minimizing a prefix-tree DFA reduces the size of the data structure. A minimal DFA fits in memory even when the vocabulary is large, and avoiding expensive disk accesses is key to lightning-fast autocomplete.
The Myhill-Nerode theorem gives us a theoretical characterization of the minimal DFA in terms of string equivalence classes. Saying that two states are indistinguishable means that, for every string, they both run to final states or both run to non-final states. Obviously, we do not test all strings; the idea is to compute the indistinguishability equivalence classes incrementally. We say $p$ and $q$ are $k$-distinguishable if they are distinguishable by a string of length $\le k$.
It's easy to understand the inductive property of this relation: $p$ and $q$ are $k$-distinguishable if they are $(k-1)$-distinguishable, or if $\delta(p, \sigma)$ and $\delta(q, \sigma)$ are $(k-1)$-distinguishable for some symbol $\sigma \in \Sigma$.
The construction of the equivalence classes starts like this: $p$ and $q$ are 0-indistinguishable if they are both final or both non-final. So we start the algorithm with the states divided into two partitions: final and non-final.
Within each of these two partitions, $p$ and $q$ are 1-distinguishable if there is a symbol $\sigma$ so that $\delta(p, \sigma)$ and $\delta(q, \sigma)$ are 0-distinguishable (for example, one is final and the other is not). By doing so, we further partition each group into sets of 1-indistinguishable states.
The idea then is to keep splitting these partition sets as follows: $p$ and $q$ within a partition set are $k$-distinguishable if there is a symbol $\sigma$ so that $\delta(p, \sigma)$ and $\delta(q, \sigma)$ are $(k-1)$-distinguishable.
At some point, we cannot subdivide the partitions further. At that point we terminate the algorithm, because no further step can produce any new subdivision. When we terminate, we have the indistinguishability equivalence classes, which form the states of the minimal DFA. The transition from one equivalence class to another is obtained by choosing an arbitrary state in the source class, applying the transition, and then taking the equivalence class of the resulting state:
State diagram for minimal DFA
Start by distinguishing final and non-final states: $\{q_1,q_3\}$, $\{q_2,q_4,q_5,q_6\}$. Then distinguish states within the groups to get the next partition: $\{q_1,q_3\}$, $\{q_4,q_6\}$, $\{q_5\}$, $\{q_2\}$:
$b$ distinguishes $q_2, q_4$: $\delta(q_2,b) \in \{q_1,q_3\}$, $\delta(q_4,b) \in \{q_2,q_4,q_5,q_6\}$
$a$ distinguishes $q_4, q_5$: $\delta(q_4,a) \in \{q_1,q_3\}$, $\delta(q_5,a) \in \{q_2,q_4,q_5,q_6\}$
Neither $a$ nor $b$ distinguishes $(q_4,q_6)$.
We cannot split the two non-singleton groups further, so the algorithm terminates. The minimal DFA has start state $\{q_1,q_3\}$ and single final state $\{q_4,q_6\}$, with the following transition function:
| State | $a$ | $b$ |
| --- | --- | --- |
| $\{q_1,q_3\}$ | $\{q_2\}$ | $\{q_4,q_6\}$ |
| $\{q_4,q_6\}$ | $\{q_1,q_3\}$ | $\{q_5\}$ |
| $\{q_2\}$ | $\{q_5\}$ | $\{q_1,q_3\}$ |
| $\{q_5\}$ | $\{q_5\}$ | $\{q_5\}$ |
new minimal DFA
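A short Python sketch of the partition-refinement procedure just described, run on a transition function consistent with the splits in this example (the actual figure may differ in its details, so the `delta` below is an assumption; whether the first block of the initial split is the final or non-final set does not change the refinement):

```python
def refine_partition(alphabet, delta, initial_partition):
    """Refine the final/non-final split into indistinguishability classes."""
    partition = [set(block) for block in initial_partition if block]
    changed = True
    while changed:
        changed = False
        refined = []
        for group in partition:
            buckets = {}
            for q in group:
                # Signature: which partition block each symbol leads to
                signature = tuple(
                    next(i for i, block in enumerate(partition)
                         if delta[(q, a)] in block)
                    for a in alphabet
                )
                buckets.setdefault(signature, set()).add(q)
            refined.extend(buckets.values())
            if len(buckets) > 1:
                changed = True
        partition = refined
    return partition

# A transition function consistent with the distinguishing steps above
delta = {("q1", "a"): "q2", ("q1", "b"): "q4",
         ("q3", "a"): "q2", ("q3", "b"): "q6",
         ("q2", "a"): "q5", ("q2", "b"): "q3",
         ("q4", "a"): "q1", ("q4", "b"): "q5",
         ("q6", "a"): "q3", ("q6", "b"): "q5",
         ("q5", "a"): "q5", ("q5", "b"): "q5"}

# 0-distinguishability split from the example: {q1,q3} vs {q2,q4,q5,q6}
classes = refine_partition(["a", "b"], delta,
                           [{"q1", "q3"}, {"q2", "q4", "q5", "q6"}])
print(classes)  # blocks {q1,q3}, {q2}, {q4,q6}, {q5} (order may vary)
```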
For designing the actual system, we need to consider components beyond just the basic trie structure. We need to design a system that is fast and scalable. At a high level, our autocomplete gets a prefix request and sends it to the API. In front of the API server, we need to have a load balancer. The load balancer distributes the prefix to a node. The node is a microservice that is responsible for checking the cache to see whether related data for the prefix is there or not. If yes, it returns the result to the API; otherwise it checks ZooKeeper in order to find the right suffix-tree server. ZooKeeper tracks the availability of the suffix-tree servers. For example, in ZooKeeper we might define `a-$ s1`, meaning that server s1 is responsible for the range from `a` to `$` (the character that indicates the end of a suffix). Background processors will be needed to take batches of strings and aggregate them in databases in order to apply them to the suffix-tree servers. Here we get streams of phrases and weights (these streams can be obtained from a glossary, dictionary, etc.). The weights are provided based on data-mining techniques that can differ from system to system. We hash each weight and phrase and then send them to aggregators, which aggregate records by similar terms, creation time, and the sum of their weights before writing to the databases. The advantage of this approach is that we can recommend data based on relevance and weights.
Initial system setup
The current system is also poorly equipped to handle more than a thousand simultaneous requests. We need to improve it with vertical scaling, all while reducing the number of single points of failure (i.e., we need horizontal scaling too). A round-robin algorithm is recommended for equally distributing traffic between servers. We also need to add more cache servers while guaranteeing that the data on each is equal. We could simply use a hashing algorithm to decide which data should be inserted in which server (though this scheme would break if one server fails). With this design requirement, we need to use consistent hashing to allow servers and objects to scale without affecting the overall system. Also, our ZooKeeper configuration needs some changes: as we add more suffix-tree servers, we need to change the ZooKeeper definition to something like `a-k s1 l-m s2 m-z s3 $`. This will help the node servers fetch the right suffix-tree data.
The full autocomplete system
These data structures can be extended in many ways to improve performance, as there are often many more possible completions than can be displayed in the user interface. It's also worth noting that the existence of search engines like Elasticsearch and [Solr](https://lucene.apache.org/solr/) often obviates building your own autocomplete system from scratch each time.
Clustering (or unsupervised learning in general) would be a useful tool for building this system. Each question becomes part of a cluster, and other questions in the same cluster (probably ordered by similarity-measure distance) are displayed as similar questions. There are many features we can use for the clustering, including but not limited to question tags, words in the heading, words in the text (with less weighting than the heading), and/or hyperlinks to other questions or pages. Other features could be generated from the data using techniques like text summarization or sentiment analysis.
From this clustering, we could select the top $N$ closest questions by feature-vector closeness and display those.
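A minimal sketch of this idea using TF-IDF features and nearest-neighbor lookup (the toy corpus and field composition are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical corpus: title + body + tags concatenated per question
questions = [
    "How do I merge two dicts in Python? python dictionary",
    "What is the difference between list and tuple? python list tuple",
    "How to combine two dictionaries into one? python dict merge",
    "How do I undo the most recent local commits in Git? git version-control",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(questions)

# Cosine distance over TF-IDF vectors approximates question similarity
nn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(X)

new_question = ["merging dictionaries in python"]
distances, indices = nn.kneighbors(vectorizer.transform(new_question))
for dist, idx in zip(distances[0], indices[0]):
    print(f"{dist:.2f}  {questions[idx]}")
```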
In Uber Pool or Lyft Line, we want to match riders on similar routes or similar destinations. The challenge is to create a matching system that would let us accurately predict whether two, three, or four passengers riding together would enjoy the selected route.
An initial naive approach could be to try matching riders to other nearby riders, and see who matches in under 1 minute before the ride arrives. There could also be a bucketed matching system that assigns riders to smaller clusters based on which 10-minute period of the day they called the ride in. Unfortunately, this would cause a supply shock, as it would require drivers to be available for every single 10-minute interval in a 24-hour day (and it's not guaranteed that there would be any requests in many of these intervals).
For a matching system, we could start with a simple greedy haversine matchmaking system. Haversine distances are straight-line (great-circle) distances between two points, and can be divided by the region's average speed to get a time estimate. A haversine implementation is quick to build, and with tight enough constraints (e.g. detours, time until pickup, etc.) it would make sure users had a good match. Such a system would compare each rider with every other rider in the system ($O(n^2)$), and whichever riders were closest while satisfying certain constraints (e.g., rating, maximum distance) would be paired. This in turn would be combined with an interval-matching problem making use of the departing and arriving distances/times. The problem is that this could result in drivers taking a lot of detours (since passengers are evaluated based on straight-line distances). This distance judgement could be improved with geohashing (location-based bucketing), and by adding a modified A* algorithm to more accurately measure distances to the passengers' desired routes. More data collection from ride histories would eventually allow for optimized geohash sizes, based on relative demand in each one.
Geohashing example
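A sketch of the haversine time estimate used in the greedy matcher above (the average-speed constant is an assumed placeholder):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
AVG_SPEED_KMH = 30.0  # assumed citywide average speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def estimated_minutes(point_a, point_b):
    """Rough travel time between two pickup points, used as a matching cost."""
    distance = haversine_km(*point_a, *point_b)
    return distance / AVG_SPEED_KMH * 60

# Example: two riders in San Francisco
print(estimated_minutes((37.7749, -122.4194), (37.7849, -122.4094)))
```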
This is still a very simplified system that would need to be optimized further to handle triple-matching (which makes the interval-matching problem a much harder combinatorial explosion), longitudinal sorting (going east-west vs. north-south may be more difficult in some regions than others, such as in the SF Bay Area), and rerouting (which would require many calculations to be redone).
We have three main approaches we can take: finding out from user data, finding out from available public data, and finding out from user votes or reports.
If we have a lot of user data already, we can find similar users based on place of birth, where they grew up, street names, tags, and their friends. If no one (or very few people) in that circle has listed that high school name, it is likely a fake high school. If we need to use an algorithm, we can use K-Nearest-Neighbors: find all the users similar to the person with the potentially fake high school, then list and rank those similar users' high schools.
If we have easy access to public data (in a country like the United States, for example), we can get a list of high school names (from the government or trustworthy organizations) and match high school names against it.
If we already have an abuse-reporting or downvoting system in place, we can root out the fake schools from people who report scams or downvote fake high school names.
We want to set up a model that can do a batch-evaluation of whether a 10-second audio clip contains the trigger word (in our case, the word "activate"). Ideally the training data would contain both instances of the trigger word and other words (along with background noise and silence), to resemble the production environment. While more labelled training data helps, it should be possible to artificially generate training data by combining recordings of different background audios, 1-second recordings of the trigger word in different tones or voices (all given positive labels), and 1-second recordings of non-trigger words. The negatively and positively labeled words can then be overlaid onto the background audio clips (with about 0-5 trigger word examples and 0-3 negative word examples per clip). Overlaying gives us more realistic data since we want our model to identify when someone has finished saying the trigger word, not just whether it exists in the sound clip.
We first initialize all timesteps of the output labels to 0s. Then, for each "activate" we overlaid, we also update the target labels by assigning the subsequent 50 timesteps to 1s. The reason we use 50 timesteps of 1s is that if we only set 1 timestep after the "activate" to 1, there would be too many 0s in the target labels, creating a very imbalanced training set. Using 50 consecutive 1s is a bit of a hack, but it makes the model a little easier to train. Here is an illustration to show the idea.
target label diagram
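A sketch of that label-generation step (timestep counts follow the example in the text; the insertion times are hypothetical):

```python
import numpy as np

TY = 1375  # number of output timesteps for a 10-second clip

def insert_ones(y, end_ms, clip_ms=10000.0):
    """Set the 50 output timesteps that follow the end of an 'activate' to 1."""
    end_step = int(end_ms * TY / clip_ms)
    y[end_step + 1 : end_step + 51] = 1
    return y

y = np.zeros(TY)
# Suppose "activate" insertions ended at 1.3s and 6.7s in the synthetic clip
for end_ms in (1300, 6700):
    y = insert_ones(y, end_ms)
print(int(y.sum()))  # 100 timesteps labelled 1
```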
Consider a clip into which we have inserted "activate", "innocent", "activate", and "baby". Note that the positive labels (1) are associated only with the positive words. In the spectrogram that represents our input data (the frequency representation of the audio wave over time), the x-axis is time and the y-axis is frequency. The brighter the color, the louder (more active) a certain frequency is at that time. For reading this input data, we have the following model architecture:
trigger word model
The 1D convolutional step inputs 5511 timesteps of the spectrogram (10 seconds) and outputs a 1375-step sequence. It extracts low-level audio features similar to how 2D convolutions extract image features, and it also speeds up the model by reducing the number of timesteps. The two GRU layers read the sequence of inputs from left to right, and a dense layer plus a sigmoid is then used to make a per-timestep prediction. The final sigmoid layer outputs a value between 0 and 1 for each label, with 1 corresponding to the user having just said the trigger word.
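A hedged Keras sketch of such an architecture (the layer sizes and dropout rates follow common trigger-word tutorials and are assumptions, not the exact model from the diagram):

```python
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization, Activation,
                                     Dropout, GRU, TimeDistributed, Dense)
from tensorflow.keras.models import Model

def trigger_word_model(input_shape=(5511, 101)):
    X_input = Input(shape=input_shape)

    # Conv1D: 5511 spectrogram timesteps -> 1375 feature timesteps
    X = Conv1D(196, kernel_size=15, strides=4)(X_input)
    X = BatchNormalization()(X)
    X = Activation("relu")(X)
    X = Dropout(0.8)(X)

    # Two GRU layers read the sequence left to right
    X = GRU(128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)
    X = GRU(128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)

    # Per-timestep sigmoid: probability the trigger word just ended
    outputs = TimeDistributed(Dense(1, activation="sigmoid"))(X)
    return Model(inputs=X_input, outputs=outputs)

model = trigger_word_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```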
This is a modified customer churn modelling problem.
Churn is when a user stops using the platform - in the case of Robinhood this could be defined as when a user's account value falls below a minimum threshold for some number of days (say 28). We can build a classifier to predict user churn by incorporating features such as time horizon of the user (i.e. long term investor or short term investor), how much they initially deposited, time spent on the app, etc.
To assess feature importance, we can either run a model like logistic regression and look for weights with a large magnitude, or we can run a decision tree or random forest and look at the feature importances there (calculated as the decrease in node impurity weighted by the probability of reaching that node).
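A small sketch of both approaches on a hypothetical churn dataset (the file and column names are made up):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("robinhood_churn.csv")  # hypothetical export
features = ["initial_deposit", "time_horizon_days", "minutes_on_app_per_week",
            "num_trades_per_month", "account_value"]
X, y = df[features], df["churned"]

# Option 1: logistic regression -- inspect coefficient magnitudes
logreg = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
print(sorted(zip(features, logreg.coef_[0]), key=lambda t: -abs(t[1])))

# Option 2: random forest -- impurity-based feature importances
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(sorted(zip(features, forest.feature_importances_), key=lambda t: -t[1]))
```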
In some cases, this might be as simple a task as looking at a person's timeline and seeing when people start sending tons of congratulatory messages. This will be especially telling if the posts come from people the person does not normally interact with (or from family members). Pictures of birthday celebrations are also a helpful indicator. These signals are all useful if the person actively chooses not to put their birthday details on Facebook.
It is unlikely that trying to cluster people by interests would yield much correlation with birth month or birth day. An exception to this might be people who indicate preferences correlating with a strong belief in the efficacy of astrology. Such individuals may have self-reinforcing preferences based around their self image of how they should act based on their zodiac sign (in whatever zodiac system they choose to believe in). If the users are initially filtered based on their astrology beliefs, it should be possible to make reliable month estimates for at least a subset of users based on interest groups.
If it's just a matter of the person forgetting to put the information there, it could be acquired through design testing, a birthday-wishes feature (a box of birthday wishes to which friends can anonymously send gifts), questionnaires, or a friendly reminder. If all else fails, some of this information could be imported from another site.
This would most likely involve building a classifier system that takes in text and outputs a class label corresponding to one of the most likely languages to be used in an app (e.g., 0 for English, 1 for French, 2 for Spanish, etc.). As a supervised learning problem, there is a relatively straightforward process for creating training data: samples of text labelled by language, with a few hundred thousand examples (at least) in each language category.
One way to do this would be to take metadata such as relative character frequencies and develop a simple multi-layer perceptron that can do the classification. However, we may want to try an approach that works better with shorter lengths of text. This would require using a character-level tokenizer, which we could then feed into a model such as a tree-based model (like random forest or XGBoost) or a neural network (stacked LSTM/GRU, bi-directional LSTM, or a transformer-based model like BERT).
If building a system from scratch isn't an option due to time constraints, the library langdetect (ported from Google's language-detection) is available on PyPI. It works on 55 different languages.
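For reference, using langdetect takes only a couple of lines (the exact probabilities in the comments are illustrative and will vary run to run unless the seed is fixed):

```python
# pip install langdetect
from langdetect import detect, detect_langs, DetectorFactory

DetectorFactory.seed = 0  # make results deterministic across runs

print(detect("This is an English sentence."))       # 'en'
print(detect("Ceci est une phrase en français."))   # 'fr'
print(detect_langs("Otec matka syn."))              # e.g. [sk:0.57, cs:0.24, ...]
```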
For training, we can create a training dataset of properties with known prices, along with a feature vector containing information on each known home. We can look at a distribution of house price vs. livable surface area, and use this to eliminate any outliers (e.g., condemned buildings) from the data. We then want to create a probability plot for the sale price. Since this is likely going to be right-skewed (e.g., prices between more expensive houses differ more than prices between less expensive houses), we need to do a log-transform on the target variable. We also want to tailor our data based on how many features are missing from large chunks of the listed homes (e.g., unless otherwise specified, many listings probably don't say whether or not they have a pool if they don't have one). For this missing data, we will just set the label to 'None' (e.g., 'no alley access', 'no fence', 'no misc feature'). A lot of features that look numerical (e.g., year of construction) will actually be categorical rather than continuous, so they will need to be formatted as such. Highly skewed features will need to undergo a Box-Cox transformation before any model training is done.
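A sketch of that preprocessing (the column names follow the common Ames/Kaggle-style housing setup and are assumptions):

```python
import numpy as np
import pandas as pd
from scipy.stats import skew
from scipy.special import boxcox1p

df = pd.read_csv("train.csv")  # hypothetical housing dataset

# Log-transform the right-skewed target
df["SalePrice"] = np.log1p(df["SalePrice"])

# 'Missing' often means 'feature absent', so fill with an explicit category
for col in ["Alley", "Fence", "MiscFeature", "PoolQC"]:
    df[col] = df[col].fillna("None")

# Some numeric-looking columns are really categorical
df["YrSold"] = df["YrSold"].astype(str)

# Box-Cox transform highly skewed numeric features (excluding the target)
numeric_cols = df.select_dtypes(include=[np.number]).columns.drop("SalePrice")
skewness = df[numeric_cols].apply(lambda s: skew(s.dropna()))
for col in skewness[abs(skewness) > 0.75].index:
    df[col] = boxcox1p(df[col], 0.15)
```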
Speaking of modelling, we would be wise to choose from a variety of model types (such as ElasticNet regression, XGBoost regression, and LightGBM). These models would be trained on the data and evaluated based on root mean squared error (RMSE). With our base model scores, we can ensemble them by averaging their predictions, though we can usually do better by adding a meta-model on top of the base models (stacking). We can then build a powerful stacked regressor as our final trained model.
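A minimal stacked-regressor sketch with scikit-learn, continuing from the preprocessed frame in the previous sketch (the model choices and hyperparameters are illustrative):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import cross_val_score

# One-hot encode categoricals from the preprocessed frame built above
X = pd.get_dummies(df.drop(columns=["SalePrice"])).fillna(0)
y = df["SalePrice"]

base_models = [
    ("elasticnet", ElasticNet(alpha=0.0005, l1_ratio=0.9)),
    ("gbm", GradientBoostingRegressor(n_estimators=500, learning_rate=0.05)),
]
stack = StackingRegressor(estimators=base_models, final_estimator=Lasso(alpha=0.0005))

# Score with RMSE on the log-transformed target
scores = cross_val_score(stack, X, y, scoring="neg_root_mean_squared_error", cv=5)
print("CV RMSE:", -scores.mean())
```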
For predicting whether to invest in a certain city, we combine the model outputs with forecasting of future conditions. Since not everything is known about the future, some of its details can be modeled with random variables. These random variables can then be fed into the model across hundreds of samples, yielding a distribution of outcomes for a certain city. Some features, such as the floor size, are unlikely to change and will be set as constants for given houses. Other details, such as demand and city legal changes, will need to be factored in as samples from Bernoulli or categorical distributions. A Hamiltonian Monte Carlo method could then be used to feed these distributions into the trained price model.
Given the difficulty of scaling up Hamiltonian Monte Carlo on compute hardware, such batch processing would need to be repeated separately on different devices for different cities. The compute could be managed with a service like Horovod.
Predicting house prices accurately is not enough; we also want to use this tool to decide whether or not to invest in properties in a certain city. Once the algorithm runs correctly, we can track differences between projected home prices and actual real-world home prices, and use a positive delta beyond a certain threshold as a 'buy' signal.
We want to be able to predict, given a period of app usage, what the first app label within that usage log will be, and we want to do this with 90% accuracy. Ultimately we want to build a recommender system with this information.
You would likely have lists of apps that are used over the course of the day. The phone will probably log some kind of information related to when an app was opened, closed, or running in the background, and whether the phone was locked or turned off. This logging has a time-series element to it, but it could also be featurized into other descriptors such as morning vs. afternoon, time since wake-up or a meal, or day of the week. We could also add features such as the previous 5 apps used. All of this information would then be combined into a classification network (with the ranks of the logits being used to identify the top $N$ most likely next apps if needed, though a single output will meet our functional requirements in this case).
If we have no usage history for the user (e.g., the user just got their phone), then we could fall back on other user-specific data for our recommender system. If the user registered their phone, we have at least their name, address, and payment information. We can use this information to find out which popular apps are most probable (by usage) among similar demographics (provided those apps are installed on their phone).
For the sake of memory and CPU-time conservation, such a recommendation system could likely be constructed from a random forest model. In fact, some parts of the resulting decision tree could even be hard-coded to account for what to do given a lack of information. While tools like XGBoost are great for constructing such trees in normal circumstances, Apple's own Core ML framework can be leveraged for this kind of decision-tree work. If we wanted to leverage greater compute power, we could use federated learning (only recently implemented for Core ML). This would involve updating a model partly on a phone's on-device data, then sending part of the model (either trees in a random forest or weights from a neural network) to a central oracle when the phone is not in use (such as at night). This would allow a centralized model to learn from all the app usage data while preserving user privacy (an important selling point for the iPhone).
At a high level, we want to extract data from various sources and create a statistical approach. We need to think about how this will be used, and design an experiment to test it. Assuming the goal is to map nicknames to real names for social media users, our possible approaches could resemble one or more of the following:
Use an open-source corpus to map names (many NLP libraries may have something like this).
Set up quick questionnaires for people seemingly using nicknames (e.g., "Is this your real name, Andy?") or for their connections.
Extract data from comments within a profile.
Detect how a user's friends or connections address them in comments, posts, and birthday greetings.
From comments within their network, detect how their close connections or friends address them.
Extract data from conversations (this method will likely be harder, since the people who interact with the user are more likely to be friends and family members who will use the nicknames).
Extract data from external sources (other social media networks or websites). If such information is provided during sign-up, this can be done by using first names, last names, and profile pictures to search external sources for possible real names.
Building on the methods above, we would want to do A/B testing and see people's reactions to our attempts at mapping nicknames to real names.
It doesn't matter if you own an e-commerce site or a supermarket, and it doesn't matter if it's a small shop or a huge company such as Amazon or Netflix: it's better to know your customers. In this case, we want to understand our customers' behavior during the checkout process (we're referring to already-selected items, not searching for items) and figure out the bottlenecks.
First, we want to make sure we're measuring the time it takes for orders to be processed. This will be the time from the clicking of the checkout button to the completion of the payment and shipping details. An e-commerce system will probably already have a hashing system for tagging customer receipts. A similar mechanism can be used to create customer-specific hashes corresponding to logged events, such as moving from the payment page to the shipping page. It's entirely possible that some customers may not complete the process, so the information should be stored in a local cache until the checkout is complete.
Given all this information, we can set up an A/B testing strategy. We can create variants of the layouts to be displayed to certain subsets of the population, and use this information to figure out which layouts make navigation and payment easier. Once the time-stamps are collected for the interval lengths for each page (as well as the total time it takes to go through checkout), this can be reduced to a time-series problem. Given the limited size of the website features that would be incorporated into an A/B test, a relatively simple model like random forest regression should be able to identify the factors on the website that are contributing to longer checkout times.
The word "chatbot" is so ubiquitous that we should quickly stop to define what we're building. We're referring to a machine learning system that is used to generate natural language to simulate conversations with end users. This output may come through websites, messages on mobile apps, or through voice over phone calls. The system takes in input from a user, processes the input (such as by extracting questions from the input), and delivers the most relevant response to the user.
Any hotel-booking chatbot will need to be trained with actual chat data. Companies which already make use of human-powered chat have logs of the previous conversations (these can be structured as pairs of text with the users' words being labelled as the context, and the bots' response being the correct label). At least several thousand logs will be needed to properly train the network.
These logs are used to analyze what people are asking and what they intend to convey. With a combination of machine learning models (including models based off of transformer models like BERT) and an ensemble of other tools (the training may involve a multi-input network that takes in both tokenized text and conversation metadata), the user's questions are matched with their intents. Models are trained in such a way that the bot is able to connect similar questions to a correct intent and produce the correct answer.
At the end of each conversation with a chatbot, the question 'Did we help you?' is asked, and if the answer is 'No,' the conversation is forwarded to human support. In the same way, when the chatbot is unable to understand a question provided by a human, it will forward the conversation by itself to human customer support. The chatbot will need to be able to continuously learn throughout its lifetime (and not drift into inappropriate behavior).
Once the bot is set up and starts interacting with customers, smart feedback loops are implemented. When customers ask questions, chatbots give them certain options in order to understand their questions and needs more clearly. This information is used to retrain the model, thereby ensuring the accuracy of the chatbot. There are also guarding systems in place to make sure that the bot doesn't change its behavior based on a few replies. This is also ensured by making sure that the bot doesn't simply rephrase what people tell it, but is actually taught to answer questions the way its owner (the company) wants it to answer. The engineers at the developing company can decide the extent to which they want to expand or shrink its understanding.
Example high-level framework for our chatbot
The basic idea here will be to treat a document set covering many different topics as a kind of knowledge base to which we can submit questions and receive exact answers. The fact that the documents may have varying levels of authenticity or objectivity is irrelevant.
First, we would want to build a preprocessing system for the documents. If these are physical documents we would set up an OCR system. If this is just text, we skip the OCR and move onto removing headers, footers, and quotes from the documents. We can do this easily with a package like sklearn.
With a dataset in hand, we will first need to create a search index. The search index will allow us to quickly and easily retrieve documents that contain words present in the question. Such documents are likely to contain the answer and can be analyzed further to extract candidate answers. We will first initialize the search index and then add documents from the Python list to the index. For document sets that are too large to be loaded into a Python list, we can set our indexer to crawl a folder and index all plain text documents found.
Next, we will create a QA instance, which is largely a wrapper around a pretrained BertForQuestionAnswering model from the excellent transformers library. We can set this up using utilities from the ktrain Python library.
And that's all that's needed for the basic functionality. In roughly 3 lines of code, one can construct an end-to-end Question-Answering system that is ready to receive questions.
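Roughly, following the ktrain documentation (the index directory and document list are placeholders, and the exact API may have shifted between library versions):

```python
# pip install ktrain
from ktrain import text

INDEXDIR = "/tmp/qa_index"
docs = ["...plain-text documents loaded into a Python list..."]

# 1. Build the search index over the documents
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs))

# 2. Create the QA instance (wraps a pretrained BertForQuestionAnswering model)
qa = text.SimpleQA(INDEXDIR)

# 3. Ask questions against the document set
answers = qa.ask("What causes trade deficits?")
qa.display_answers(answers[:5])
```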
Even though the text may be structured, there's no set limit to how complex the questions can get. Facebook Research open-sourced their code for decomposing large, harder questions into smaller, easier-to-answer sub-questions. This is an unsupervised technique that starts by learning to generate useful decompositions from questions with unsupervised sequence-to-sequence learning.
This is a classic Entity Disambiguation problem. A system could conceivably take the following high-level steps (borrowed heavily from OpenAI's research into this problem):
Extract every Wikipedia-internal link to determine, for each word, the set of conceivable entities it can refer to. For example, when encountering the link [jaguar](https://en.wikipedia.org/wiki/Jaguar) in a Wikipedia page, we conclude that https://en.wikipedia.org/wiki/Jaguar is one of the meanings of jaguar.
Walk the Wikipedia category tree (using the Wikidata knowledge graph) to determine, for each entity, the set of categories it belongs to. For example, at the bottom of the Wikipedia page for https://en.wikipedia.org/wiki/Jaguar_Cars are its categories (which themselves have their own parent categories, such as Automobiles).
Pick a list of ~100 categories to be your "type" system, and optimize over this choice of categories so that they compactly express any entity. We know the mapping of entities to categories, so given a type system, we can represent each entity as a ~100-dimensional binary vector indicating membership in each category.
Using every Wikipedia-internal link and its surrounding context, produce training data mapping a word plus context to the ~100-dimensional binary representation of the corresponding entity, and train a neural network to predict this mapping. This chains together the previous steps: Wikipedia links map a word to an entity, we know the categories for each entity from step 2, and step 3 picked the categories in our type system.
At test time, given a word and surrounding context, our neural network's output can be interpreted as the probability that the word belongs to each category. If we knew the exact set of category memberships, we would narrow down to one entity (assuming well-chosen categories). But instead, we must play a probabilistic 20 questions: use Bayes' theorem to calculate the chance of the word disambiguating to each of its possible entities.
OpenAI's Paper "DeepType: Multilingual Entity Linking by Neural Type System Evolution" goes into more detail on how this can be done.
It's not as simple as just picking the lowest-trading stock. If we were deciding whether or not to sell a stock without being forced, there are a variety of different options we could choose. For example, we could sell shares of a stock when its 50-day moving average goes below the 200-day moving average.
However in this case, we're trying to decide which of our current portfolio to trade for another stock. We could choose modified versions of several algorithmic trading strategies.
Trend following is one of the more common algorithmic stock-trading strategies. We could simply select whichever of our current stocks has the downward-projected trend with the greatest magnitude (if a different stock is a must-buy, it's a safe assumption that it will be a better performer than our current worst-performing stock).
Mean reversion strategy is based on the concept that the high and low prices of an asset are a temporary phenomenon that revert to their mean value (average value) periodically. Identifying and defining a price range and implementing an algorithm based on it allows trades to be placed automatically when the price of an asset breaks in and out of its defined range. In this case, we could determine which of the stocks in our portfolio have had the smallest ranges around the 50-day moving averages (compared to their all-time largest ranges).
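As an illustration of the moving-average signals mentioned above, a small pandas sketch (the price data source is a placeholder):

```python
import pandas as pd

prices = pd.read_csv("prices.csv", index_col="date", parse_dates=True)["close"]

ma_50 = prices.rolling(window=50).mean()
ma_200 = prices.rolling(window=200).mean()

# A 'death cross': the 50-day average falls below the 200-day average
sell_signal = (ma_50 < ma_200) & (ma_50.shift(1) >= ma_200.shift(1))
print(prices.index[sell_signal])

# For mean reversion: how tight is the recent band around the 50-day average?
band_width = (prices - ma_50).rolling(window=50).std()
print("Current band width vs. all-time max:", band_width.iloc[-1], band_width.max())
```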
There are a few approaches to this problem. Many people's first instinct would be to reply "classify with a Convolutional Neural Network." This process would most likely involve gathering a bunch of training data for the various shapes and training a multi-class classifier on the data. Some may change this approach by recognizing that there might be multiple shapes in one image, and their instinct would be to switch to a segmentation or detection architecture like PSPNet or YOLO.
It is worth noting that for a shape as simple as a triangle, circle, or square, a technique like a Haar cascade could be used. Such a computer vision technique would be much easier and faster to train (and depending on the implementation, could even be built from scratch). Like the segmentation network, this would also have the benefit of identifying multiple shapes in one image.
Another approach is to calculate the best-fit bounding rectangle of each object, such that the bounding rectangle can be oriented at an arbitrary angle. We can take the following approach:
Identify the boundary (i.e. perimeter) of each object.
Calculate the smallest rectangular bounding box that contains this perimeter.
Calculate the width, height, and area of each bounding rectangle. The aspect ratio (ratio between the width and height) can be used to determine whether it is a square (aspect ratio $\sim 1.0$) or a rectangle.
For a rectangle or square, the filled area of the object (from regionprops()) should be almost the same as the area of its bounding rectangle, whereas for a triangle it should be substantially less.
For the circularity condition, one can use the ratio between perimeter and area.
It should be noted that OpenCV has many shape-detection tools. All one would need to do is convert the image to greyscale, smooth it with a Gaussian blur, threshold it, and use the built-in contour utilities to find and classify the contours.
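A rough OpenCV sketch of that pipeline (the threshold values and input path are placeholders; `findContours` uses the OpenCV 4.x return signature):

```python
import cv2

image = cv2.imread("shapes.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
_, thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    # Approximate the contour with fewer vertices
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True)

    if len(approx) == 3:
        label = "triangle"
    elif len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        label = "square" if 0.95 <= w / float(h) <= 1.05 else "rectangle"
    else:
        label = "circle"
    print(label)
```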
See the original Bayesian Neural Network post for an in-depth code walkthrough
Typical image-recognition neural networks are great at assigning correct classifications on datasets guaranteed to contain data from one of the classification categories. However, what we need here is a network that, when given data far outside the distribution it was trained on, will refuse to make a classification decision. The network in question would be a Bayesian convolutional neural network. Much of the network architecture would be superficially similar (2D convolution and dense layers replaced with 2D convolutional flipout and dense flipout layers). The main difference is that the weights and biases in these layers would be prior distributions instead of initialized point values. As the network is trained via variational Bayes, the priors update. To get a classification evaluation, all the distributions in the network are sampled (let's say 100 times for our purposes) on the input data. Each class will get a classification probability, but the probabilities will not necessarily add up to a set amount; all of them can be incredibly small. We can set a probability boundary (like 0.2) where, if none of the class probabilities are above it, the network will refuse to make a decision (i.e., the input is not part of a CIFAR-10 class in this case) rather than producing a label from the $\arg\max$ of the logits.
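A sketch of just the sampling-and-abstain step at prediction time (assuming `bayesian_cnn` is a model built with TensorFlow Probability flipout layers that outputs class probabilities, as described in the linked post; the threshold is the 0.2 used above):

```python
import numpy as np

NUM_SAMPLES = 100
ABSTAIN_THRESHOLD = 0.2

def predict_or_abstain(bayesian_cnn, image_batch):
    """Sample the weight distributions repeatedly and abstain on low confidence."""
    # Each forward pass resamples the flipout weight distributions
    sampled_probs = np.stack([bayesian_cnn(image_batch).numpy()
                              for _ in range(NUM_SAMPLES)])
    mean_probs = sampled_probs.mean(axis=0)  # shape: (batch, num_classes)

    labels = []
    for probs in mean_probs:
        if probs.max() < ABSTAIN_THRESHOLD:
            labels.append(None)               # refuse to classify
        else:
            labels.append(int(np.argmax(probs)))
    return labels
```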
Being the first person on the project, you would have two tasks: figuring out what metrics to track and how to improve them, and which items to promote in the beginning with little to no data.
The goal of this optimization is increasing user engagement (i.e., we want users to stay engaged longer), which can be defined as time spent on the newsfeed and/or actions on the feed (e.g., likes, posting activity, shares, reactions, commenting). Some of the levers that would be relevant to keeping the newsfeed up-to-date and personalized include, but are not limited to, reaction types and placement decisions driven by ranking models that can more accurately predict how likely a user is to engage with posts. All of these can be either A/B tested or Blue/Green tested for their effects on the engagement metrics (e.g., number of interactions or length of engagement). It is of course important not to optimize for only these metrics, as that could incentivize promoting clickbait content.
For someone new to the news feed, some of the initial choices of posts could be pulled from posts that friends in the user's network have been engaging with.
In the ML community, it's important to give credit where credit is due, and not just slap our name on someone else's hard work (*cough*). Here is a list of the references and resources I used for various equations, concepts, explanations, examples, and inspiration for visualizations:
Kansal, Satwik. "Machine Learning: How to Build Scalable Machine Learning Models." Codementor.
Huyen, Chip. "Machine Learning Interviews: Machine Learning System Design." chiphuyen.github.io, Github, 2019.
Kharkovyna, Oleksii. "Top 10 Best Deep Learning Frameworks in 2019." Medium, Towards Data Science, 3 June 2019.
Design Gurus. "Grokking the System Design Interview - Learn Interactively." Educative, 2017.
Raisinghani, Jatin. "Data Lake vs Data Warehouse vs Data Mart." The Holistics Blog, The Holistics Blog, 13 May 2019.
Nelson, Jessica, et al. "Measures of text difficulty: Testing their predictive value for grade levels and student performance." Council of Chief State School Officers, Washington, DC (2012).
Das, Anubrata, et al. "Predicting trends in the twitter social network: A machine learning approach." International Conference on Swarm, Evolutionary, and Memetic Computing. Springer, Cham, 2014.
Lukes, Dominik. "How to find a given text's complexity?" Linguistics Stack Exchange (version: 2014-10-28).
kolonel. "How do search engines generate related searches?" Stats Stack Exchange (version: 2014-10-28).
BLZofHK. "How does Stack Overflow match similar questions?" Meta Stack Exchange (version: 2017-03-29).
Brown, Timothy. "Matchmaking in Lyft Line." Medium, Lyft Engineering, 28 Aug. 2016.
Zhang, Chengwei. "How to Do Real Time Trigger Word Detection with Keras." DLology, 2017.
Khalili, Alireza Rahmani. "How to Design an Autocomplete System - DZone AI." Dzone.com, 19 Aug. 2019.
Wei, Jennifer. "How Machine Learning Helps You Choose What to Consume Next." Science in the News, Harvard University, 28 Aug. 2017.
Brownlee, Jason. "How to Make Predictions for Time Series Forecasting with Python." Machine Learning Mastery, 23 Feb. 2017.
Raiman, Jonathan. "Discovering Types for Entity Disambiguation." OpenAI, OpenAI, 7 Mar. 2019.
Humphries, Stan. "Introducing a New and Improved Zestimate Algorithm." Zillow Tech Hub, Zillow, 24 Sept. 2019.
Woo, Win. "Building an Image Search Application That Uses the Cloud …" Google Cloud Platform, Google, 4 May 2018.
"Building an Image Search Application That Uses the Cloud Vision API and AutoML Vision." Google Cloud Solutions, Google, 2019.
Benzahra, Marc. "How to Evaluate Text Readability with NLP." Medium, Glose Engineering, 20 June 2019, medium.com/glose-team/how-to-evaluate-text-readability-with-nlp-9c04bd3f46a2.
Pierre, Rafael. "Detecting Financial Fraud Using Machine Learning: Winning the War Against Imbalanced Data." Medium, Towards Data Science, 4 Jan. 2019.
Seif, George. "An Easy Introduction to Machine Learning Recommender Systems." Medium, Towards Data Science, 13 Nov. 2019.
Yejas, Oscar D. Lara. "GitHub Autocompletion with Machine Learning." Medium, Towards Data Science, 6 May 2019.
Maiya, Arun. "Build an Open-Domain Question-Answering System With BERT in 3 Lines of Code." Medium, Towards Data Science, 17 Apr. 2020.
Arbuzova, Yana. "Automatic Question Answering." Medium, Towards Data Science, 13 Nov. 2018.
Gon, Pedro. "New to Chatbots? Learn How to Book Your next Vacation Using One." Medium, HiJiffy, 16 Nov. 2017.
Inzaugarat, Euge. "Using Machine Learning to Understand Customers Behavior." Medium, Towards Data Science, 19 May 2019.
Orac, Roman. "Churn Prediction." Medium, Towards Data Science, 26 Jan. 2019.
Karbhari, Vimarsh. "Facebook Data Science Interview." Medium, Acing AI, 15 Aug. 2019.
Edell, Aaron. "How I Trained a Language Detection AI in 20 Minutes with a 97% Accuracy." Medium, Towards Data Science, 27 June 2019.
Karkare, Prateek. "OK Google, Tell Me How Trigger Word Detection Works." Medium, AI Graduate, 9 Dec. 2019.
Flo.tausend. "Hands-on: Predict Customer Churn." Medium, Towards Data Science, 19 Nov. 2019.
Robert Chang, "Using Machine Learning to Predict Value of Homes On Airbnb", Airbnb Engineering & Data Science, 2017)
Chaitanya Ekanadham, "Using Machine Learning to Improve Streaming Quality at Netflix", Netflix Technology Blog, 2018)
Bernardi et al., "150 Successful Machine Learning Models: 6 Lessons Learned at Booking.com" KDD, 2019)
Gabriel Aldamiz, "How we grew from 0 to 4 million women on our fashion app, with a vertical machine learning approach" HackerNoon, 2018)
Mihajlo Grbovic, "Machine Learning-Powered Search Ranking of Airbnb Experiences" (Airbnb Engineering & Data Science, 2019)
Hao Yi Ong, "From shallow to deep learning in fraud" Lyft Engineering, 2018)
Jeremy Stanley, "Space, Time and Groceries", Tech at Instacart, 2017
"Uber's Big Data Platform: 100+ Petabytes with Minute Latency"
Brad Neuberg, "Creating a Modern OCR Pipeline Using Computer Vision and Deep Learning", Dropbox Engineering, 2017
Jeremy Hermann and Mike Del Balso, "Scaling Machine Learning at Uber with Michelangelo", Uber Engineering, 2019
Jannach, D., Karakaya, Z., Gedikli, F., (2012). "Accuracy improvements for multicriteria recommender systems". In: Proceedings of the 13th ACM Conference on Electronic Commerce (EC 2012). pp. 674-689.
Mehrbakhsh Nilashi, Dietmar Jannach, Othman bin Ibrahim, Norafida Ithnin (2015): "Clustering and regression-based multi-criteria collaborative filtering with incremental updates." Inf. Sci. 293: 235-250.
Cited as:
@article{mcateer2019dsml,
  title   = "Deploying and Scaling ML",
  author  = "McAteer, Matthew",
  journal = "matthewmcateer.me",
  year    = "2019",
  url     = "https://matthewmcateer.me/blog/ml-research-interview-deployment-scaling/"
}
If you notice mistakes and errors in this post, don't hesitate to contact me at [contact at matthewmcateer dot me] and I will be very happy to correct them right away! Alternatively, you can follow me on Twitter and reach out to me there.
See you in the next post 😄
Fourier Transform of sin
The Fourier Transform and Its Applications, 3rd ed. New York: McGraw-Hill, pp. 79-90 and 100-101, 1999. CITE THIS AS: Weisstein, Eric W. Fourier Transform--Sine. From MathWorld --A Wolfram Web Resource. https://mathworld.wolfram.com/FourierTransformSine.html On this page, the Fourier Transforms for the sinusois sine and cosine function are determined. The result is easily obtained using the Fourier Transform of the complex exponential. We'll look at the cosine with frequency f=A cycles/second. This cosine function can be rewritten, thanks to Euler, using the identity
Fourier Transform--Sine -- from Wolfram MathWorl
While solving the Fourier transformation of a sine wave (say h (t) = A sin (2 π f 0 t)) in time domain, we get two peaks in frequency domain in frequency space with a factor of (A / 2) j with algebraic sum of delta function for f + f 0 and f − f 0 frequency, where j is the imaginary unit
The Fourier sine transform of f (t), sometimes denoted by either ^ or (), is f ^ s ( ν ) = ∫ − ∞ ∞ f ( t ) sin ( 2 π ν t ) d t . {\displaystyle {\hat {f}}^{s}(\nu )=\int _{-\infty }^{\infty }f(t)\sin(2\pi \nu t)\,dt.
The Fourier transform of S is defined by ˆS(f) = S(ˆf) = ∫Rˆf(s)sin(s)dx, f ∈ S. The above is simplified by using the Fourier transform inversion: ˆS(f) = ∫Rˆf(s)eisx − e − isx 2i ds|x = 1 = √2π 2i (f(1) − f(− 1)) = − i√π 2(δ1(f) − δ − 1(f)) Therefore, ˆS = − i√π 2(δ1 − δ − 1
Fourier Transform Symmetry Properties Expanding the Fourier transform of a function, f(t): F() Re{()}cos( ) Im{()}sin( )ωω ωft t dt ft tdt ∞∞ −∞ −∞ =+∫∫←Re{F(ω)} ←Im{F(ω)} ↑↑ ↓↓ = 0 if Re{f(t)} is odd = 0 if Im{f(t)} is even Even functions of ω Odd functions of ω F() [Reωωω{f ()ti} Im{f ()ttitdt}][cos( ) sin( )
Put simply, the Fourier transform is a way of splitting something up into a bunch of sine waves. As usual, the name comes from some person who lived a long time ago called Fourier. Let's start with some simple examples and work our way up. First up we're going to look at waves - patterns that repeat over time
sin(0. t) ( ) 0 . 0. j u(t)cos(0. t) 2. 2 0 ( 0 ) ( 0) 2 . j u(t)sin(0. t) 2. 2 0 2 ( 0 ) ( 0) 2 . j u(t)e. t cos(0. t) 2. 2 0 ( ) j j. Sa (x) = sin(x) / x sinc function. Sa (x) = sin(x) / x sinc function. tri(t) = (1-|t|)rect(t/2) triangle function = rect(t)*rect(t
the transform is the function itself 0 the rectangular function J (t) is the Bessel function of first kind of order 0, rect is n Chebyshev polynomial of the first kind. it's the generalization of the previous transform; T (t) is the U n (t) is the Chebyshev polynomial of the second kin
Fourier transform calculator. Extended Keyboard; Upload; Examples; Random; Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. For math, science, nutrition, history, geography, engineering, mathematics, linguistics, sports, finance, music Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest. In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes functions depending on space or time into functions depending on spatial or temporal frequency, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical. The Fourier transform is: = ∫ + 12π − 12πsin(2.5t)e − iωtdt Figure 3 shows the function and its Fourier transform. Comparing with Figure 2, you can see that the overall shape of the Fourier transform is the same, with the same peaks at -2.5 s -1 and +2.5 s -1, but the distribution is narrower, so the two peaks have less overlap e−ax sin(qx)dx = q a2 + q2 The Fourier transform of the antisymmetric decaying exponential is plotted in the right-hand panel of Fig. E.1 and is given by F(q) = i2Aq a2 +q2 (E.4) Step function The step function f(x) = A, for x >0 −A, for x <0. 368 Fourier transforms is plotted in Fig. E.1(d). Its Fourier transform is equal to the Fourier transform of the antisymmetric decayingexponentialin.
TheFourierTransform
The Fourier transform is a mathematical formula that relates a signal sampled in time or space to the same signal sampled in frequency. In signal processing, the Fourier transform can reveal important characteristics of a signal, namely, its frequency components. The Fourier transform is defined for a vector with uniformly sampled points b
Fourier sine transform Sn = 2 L Z L 0 f(t)Sin(nπ L x)dx, f(x) = X∞ n=1 Sn Sin(nπ L x). Fourier cosine transform Cn = 2 L Z L 0 f(t)Cos(nπ L x)dx, f(x) = C0 2 + X∞ n=1 Cn Cos(nπ L x). EE-2020, Spring 2009 - p. 1/18. Sine and Cosine transforms of derivatives Finite Sine and Cosine transforms: Fs(f) ≡ Fs(ω) = 2 π Z ∞ 0 f(t)Sin(ωt)dt, Fc(f) ≡ Fc(ω) = 2 π Z ∞ 0 f(t)Cos(ωt)dt.
The Fourier Transform: Examples, Properties, Common Pairs The Fourier Transform: Phase Relative proportions of sine and cosine The Fourier Transform: Examples, Properties, Common Pairs Example: Fourier Transform of a Cosine f(t) = cos (2 st ) F (u ) = Z 1 1 f(t) e i2 ut dt = Z 1 1 cos (2 st ) e i2 ut dt = Z 1 1 cos (2 st ) [cos ( 2 ut ) + isin ( 2 ut )] dt = Z 1 1 cos (2 st ) cos ( 2 ut.
To prove that, we will use the following identity: sin A − sin B = 2 cos ½(A + B) sin ½(A − B). How do you use the sinc function in Matlab? The sinc function is the continuous inverse Fourier transform of the rectangular pulse of width and height 1. for all other elements of x
efine the Fourier transform of a step function or a constant signal unit step what is the Fourier transform of f (t)= 0 t< 0 1 t ≥ 0? the Laplace transform is 1 /s, but the imaginary axis is not in the ROC, and therefore the Fourier transform is not 1 /jω in fact, the integral ∞ −∞ f (t) e − jωt dt = ∞ 0 e − jωt dt = ∞ 0 cos ωtdt − j ∞ 0 sin ωtdt is not define
Using the Dirac function, we see that the Fourier transform of a 1kHz sine wave is: We can use the same methods to take the Fourier transform of cos(4000πt), and get: A few things jump out here. The first is that the Dirac function has an offset, which means we get the same spike that we saw for x(t) = 2, but this time we have spikes at the signal frequency and the negative of the signal.
FOURIER TRANSFORM LINKS Find the fourier transform of f(x) = 1 if |x| lesser 1 : 0 if |x| greater 1. Evaluate ∫ sin x/x dx - https://youtu.be/dowjPx8Ckv0 Fin... Evaluate ∫ sin x/x dx - https.
Fourier Transform Solution for the Dirichlet Integral (sin (x)/x) In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet. In this post I will present my solution to this integral, using Fourier transforms and their properties The Fourier Transform It is well known that one of the basic assumptions when applying Fourier methods is that the oscillatory signal can be be decomposed into a bunch of sinusoidal signals. The basic idea is that any signal can be represented as a weighted sum of sine and cosine waves of different frequencies Fourier Transform of Sine Waves with Unexpected Results. I'm plotting sine waves (left column) and their respective frequency domain representations (right column): The second wave (amplitude: 15; frequency: 5.0) looks absolutely as expected. The second frequency plot has exactly one peak at x=5 (frequency), y=15 (amplitude) EE 442 Fourier Transform 20 Properties of Fourier Transforms 1. Linearity (Superposition) Property 2. Time-Frequency Duality Property 3. Transform Duality Property 4. Time-Scaling Property 5. Time-Shifting Property 6. Frequency-Shifting Property 7. Time Differentiation & Time Integration Property 8. Area Under g(t) Property 9. Area Under G(f) Propert
The discrete-time Fourier transform (DTFT) of a discrete-time sequence x[n] is a representation of the sequence in terms of the complex exponential sequence $e^{j\omega n}$: $X(\omega) = \sum_{n=-\infty}^{\infty} x[n]\,e^{-j\omega n}$ (1). Since each of the rectangular pulses on the right has a Fourier transform given by $(2\sin\omega)/\omega$, the convolution property tells us that the triangular function will have a Fourier transform given by its square: $X(\omega) = \left(\frac{2\sin\omega}{\omega}\right)^2 = \frac{4\sin^2\omega}{\omega^2}$. Solutions to optional problems (S9.9): we can compute the function x(t) by taking the inverse Fourier transform of X(ω). The concept of negative frequency: as t increases, the vector $e^{-j\omega t}$ rotates clockwise, so we consider it to have negative frequency; since A − jB is the complex conjugate of A + jB, $e^{-j\omega t}$ is the complex conjugate of $e^{j\omega t}$. Fourier transforms take the process a step further, to a continuum of frequencies. To establish these results, let us begin by looking at the details first of Fourier series and then of Fourier transforms. Fourier series: consider a periodic function f = f(x), defined on the interval $-\tfrac12 L \le x \le \tfrac12 L$ and having f(x + L) = f(x) for all x.
Chapter 1, Fourier Transforms. Last term we saw that Fourier series allow us to represent a given function, defined over a finite range of the independent variable, in terms of sine and cosine waves of different amplitudes and frequencies. Fourier transforms are the natural extension of Fourier series to functions defined over \(\mathbb{R}\). With the Fourier sine transform and Fourier cosine transform one can solve many important problems of physics in a very simple way; in this unit we will thus learn to use the Fourier transform for solving many partial differential equations arising in physical applications. Fourier series: a function f(x) is called periodic if f(x) is defined for all real x, except possibly at some points, and f(x + p) = f(x) for some period p > 0. Properties on the Schwartz space: the Fourier transform is a continuous, linear, one-to-one mapping of $S_n$ onto $S_n$ of period 4, with a continuous inverse; test functions are dense in $S_n$; $S_n$ is dense in both $L^1(\mathbb{R}^n)$ and $L^2(\mathbb{R}^n)$; Plancherel theorem: there is a linear isometry of $L^2(\mathbb{R}^n)$ onto $L^2(\mathbb{R}^n)$ that is uniquely defined via the Fourier transform on $S_n$.
Differentiation maps $\sin(\omega x)$ to $\omega\cos(\omega x)$. Linear scaling: scaling the signal domain causes (inverse) scaling of the Fourier domain; i.e., given $a \in \mathbb{R}$, $\mathcal{F}[s(ax)] = \frac{1}{|a|}\,\hat s(\omega/a)$. Parseval's theorem: the sum of squared Fourier coefficients is a constant multiple of the sum of squared signal values. Convolution theorem: the Fourier transform of the convolution of two functions is the product of their transforms. A single cycle of the waveform is given by sig = Sin[t] UnitBox[t/π - 1/2]. This has a relatively simple Fourier transform: spec = FullSimplify[FourierTransform[sig, t, ω]] (* -((1 + E^(I π ω))/(Sqrt[2 π] (-1 + ω^2))) *). We can convert our single cycle into a periodic signal by convolution with a Dirac comb.
Fourier transform of sine waves with unexpected results: I'm plotting sine waves (left column) and their respective frequency-domain representations (right column); the second wave (amplitude 15, frequency 5.0) looks absolutely as expected, and its frequency plot has exactly one peak at x = 5 (frequency), y = 15 (amplitude). As for your question, two functions connected through the Fourier transform are in general complex, that is, they have modulus and phase. The reason the negative-frequency component in the FT of a sine points down is that this component is 180 degrees out of phase with the positive-frequency component. BobP said: I know that both FTs boil down to a cosine expression (as the sine expression…). The Fourier Transform, 1.1 Fourier transforms as integrals: there are several ways to define the Fourier transform of a function f: R → C. In this section we define it using an integral representation and state some basic uniqueness and inversion properties without proof; thereafter we will consider the transform as being defined as a suitable limit of Fourier series, and will prove the results. Fourier Series & Fourier Transforms (lecture synopsis, 19th October 2003) — Lecture 1: review of trigonometric identities, Fourier series, analysing the square wave. Lecture 2: the Fourier transform, transforms of some common functions. Lecture 3: applications in chemistry — FTIR, crystallography. Bibliography: 1. The Chemistry Maths Book (Chapter 15). Chapter 7, Fourier transform — interpretations and terminology: we regard $f_T$ as a continuous-time, T-periodic signal; the Fourier coefficient $\gamma_k$ then represents the gain factor for the fundamental oscillation $e^{-ik\omega\tau}$ at frequency $\omega_k = k\,\frac{2\pi}{T}$, for k = 0, ±1, ±2, …
mathematics - Fourier transform of sine function - Physics
Fourier transform of sin(ωt) — nice worksheet, Jean. One comment about the DFT: the advantage of the DFT is that it allows one to calculate spectral components at any frequency, contrary to the FFT (given a fixed number of samples). The FFT is useful to get a full spectrum sampling in one shot; the DFT allows one to focus on a particular frequency.
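That comment can be illustrated with a short NumPy sketch (signal frequency, length, and sampling rate are invented for the example): the direct DFT sum can be evaluated at any frequency, here 123.4 Hz, which does not coincide with an FFT bin:

```python
import numpy as np

fs, n_samp = 1000.0, 1000                  # assumed sampling rate and record length
t = np.arange(n_samp) / fs
x = np.sin(2 * np.pi * 123.4 * t)          # 123.4 Hz does not fall on an FFT bin

def dft_at(x, f, fs):
    """Direct DFT sum evaluated at a single, arbitrary frequency f (in Hz)."""
    k = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * k / fs))

# FFT bins are spaced fs/n_samp = 1 Hz apart; the direct sum can probe 123.4 Hz exactly.
print(abs(dft_at(x, 123.4, fs)) / (len(x) / 2))   # close to the true amplitude 1
```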
…Fourier sine transform of F(ω). Similarly, if f(x) is an even function then F(ω) is an even function and we obtain the Fourier cosine transform pair $f(x) = \int_0^{\infty} F(\omega)\cos\omega x\,d\omega$ (28), $F(\omega) = \frac{2}{\pi}\int_0^{\infty} f(x)\cos\omega x\,dx$ (29). In this case F(ω) ≡ C[f(x)] is called the Fourier cosine transform of f(x) and f(x) ≡ C⁻¹[F(ω)] is called the inverse Fourier cosine transform of F(ω).
Fourier integrals and transforms: the connection between the momentum and position representations relies on the notions of Fourier integrals and Fourier transforms (for a more extensive coverage, see the module MATH3214). Fourier theorem: if the complex function g ∈ L²(ℝ) (i.e. g is square-integrable), then the function given by the Fourier integral, $f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(k)\,e^{ikx}\,dk$, exists.
The Fourier transform is $Y(\omega) = \int_{-4}^{4} y(t)\,e^{-i\omega t}\,dt = \int_{-4}^{4} \sin(2.5\,t)\,e^{-i\omega t}\,dt$ (14). Since y(t) is a sine function, from Equation 5 we expect the Fourier transform in Equation 14 to be purely imaginary. Figure 2(a) shows the function (Equation 13), and Figure 2(b) shows the imaginary part of the result of the Fourier transform (Equation 14). There are at least two things to notice in Figure 2.
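Assuming the forward convention written above (e^{−iωt} in the integrand), the purely-imaginary claim can be verified numerically with a simple Riemann sum; this is an illustrative check, not part of the original worksheet:

```python
import numpy as np

def Y(omega, n=20001):
    """Numerically approximate Y(omega) = integral_{-4}^{4} sin(2.5 t) exp(-i omega t) dt."""
    t = np.linspace(-4.0, 4.0, n)
    f = np.sin(2.5 * t) * np.exp(-1j * omega * t)
    return np.sum(f) * (t[1] - t[0])

for w in (1.0, 2.5, 4.0):
    val = Y(w)
    print(w, round(val.real, 6), round(val.imag, 6))  # real part ~ 0, as expected for a sine
```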
Fourier sine transform: $F_s[u(x,t)] = \bar u_s(\alpha, t) = \int_0^{\infty} u(x,t)\sin(\alpha x)\,dx$, with inverse sine transform $u(x,t) = \frac{2}{\pi}\int_0^{\infty} \bar u_s(\alpha, t)\sin(\alpha x)\,d\alpha$. Fourier cosine transform: $F_c[u(x,t)] = \bar u_c(\alpha, t) = \int_0^{\infty} u(x,t)\cos(\alpha x)\,dx$, with the analogous inverse cosine transform. For the interval 0 < x < L we use finite Fourier sine or cosine transforms, depending upon the boundary conditions of the problem. The Fourier transform and its inverse: so we can transform to the frequency domain and back; interestingly, these transformations are very similar. There are different definitions of these transforms — the 2π can occur in several places, but the idea is generally the same. Fourier transform examples: here we will learn about the Fourier transform through examples. Definition: the Fourier transform of f(x) is denoted by $\mathscr{F}\{f(x)\} = F(k)$, $k \in \mathbb{R}$, and is defined by an integral of the form $F(k) = \int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx$ (up to a convention-dependent constant). Exercises: find the Fourier sine transform of $e^{-3x}$ (checked symbolically below); of $f(x) = e^{-x}$; of $3e^{-2x}$; of 1/x; state the convolution theorem for the Fourier transform; state Parseval's formula (identity). Deriving the Fourier transform of cosine and sine: use the fact that $\mathcal{F}\{\cos(x)\} = \frac{\delta(\omega - 1) + \delta(\omega + 1)}{2}$; this expression is not too different from $\mathcal{F}\{\cos(2\pi f_0 t)\} = \frac{1}{2}\big(\delta(f - f_0) + \delta(f + f_0)\big)$.
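For the first exercise in the list above, SymPy's sine_transform returns a closed form; note that SymPy uses the symmetric convention with a √(2/π) prefactor, so the result differs from other conventions only by a constant factor:

```python
import sympy as sp

x, w = sp.symbols('x omega', positive=True)
# SymPy convention: F_s(w) = sqrt(2/pi) * Integral(f(x) * sin(w*x), (x, 0, oo))
Fs = sp.sine_transform(sp.exp(-3 * x), x, w)
print(Fs)   # sqrt(2)*omega/(sqrt(pi)*(omega**2 + 9)), i.e. omega/(omega**2 + 9) up to the prefactor
```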
The Fourier transform has many applications in physics and engineering, such as the analysis of LTI systems, radar, astronomy, and signal processing. Deriving the Fourier transform from the Fourier series: consider a periodic signal f(t) with period T; its complex Fourier series representation is $f(t) = \sum_{k=-\infty}^{\infty} a_k\,e^{jk\omega_0 t}$. Table B.2 (the Fourier transform and series of complex signals): a burst of N pulses with known X(jω) has transform $X(j\omega)\,\frac{\sin(\omega N T/2)}{\sin(\omega T/2)}$, and for a rectangular pulse-burst the envelope is $A\tau\,\frac{\sin(\omega\tau/2)}{\omega\tau/2}$ multiplied by the same ratio of sines. The Fourier transform can tell us which sine waves, at which frequencies, combine to create a particular signal; the noise in a signal shows itself in the higher-frequency components. Fourier transform theory is essential to many areas of physics, including acoustics and signal processing, optics and image processing, solid-state physics, scattering theory and, more generally, the solution of differential equations in applications as diverse as weather modeling and quantum field calculations. The Fourier transform can be viewed as an expansion of a signal in terms of complex exponentials.
Sine and cosine transforms - Wikipedia
A Fourier transform converts a wave from the time domain into the frequency domain. There is a set of sine waves that, when summed together, are equal to any given wave; these sine waves each have a frequency and an amplitude. A plot of frequency versus strength (amplitude) on an x–y graph of these sine-wave components is a frequency spectrum. The Fourier transform accomplishes this by breaking down the original time-based waveform into a series of sinusoidal terms, each with a unique magnitude, frequency, and phase. This process, in effect, converts a waveform in the time domain that is difficult to describe mathematically into a more manageable series of sinusoidal functions that, when added together, exactly reproduce the original. The Fourier transform is a mathematical technique that allows an MR signal to be decomposed into a sum of sine waves of different frequencies, phases, and amplitudes. This remarkable result derives from the work of Jean-Baptiste Joseph Fourier (1768–1830), a French mathematician and physicist. Since spatial encoding in MR imaging involves frequencies and phases, it is naturally amenable to analysis by the Fourier transform.
The Fourier transform is crucial to any discussion of time series analysis, and this chapter discusses the definition of the transform and begins introducing some of the ways it is useful. We will use a Mathematica-esque notation; this includes using the symbol I for the square root of minus one, and what is conventionally written as sin(t) is Sin[t] in Mathematica (similarly for the cosine). Exercises: 3. Use the time-shift property to obtain the Fourier transform of f(t) = 1 for 1 ≤ t ≤ 3 and 0 otherwise; verify your result using the definition of the Fourier transform. 4. Find the inverse Fourier transforms of (a) $F(\omega) = \frac{20\sin(5\omega)}{5\omega}e^{-3i\omega}$, (b) $F(\omega) = \frac{8}{\omega}\sin 3\omega\, e^{i\omega}$, (c) $F(\omega) = \frac{e^{i\omega}}{1 - i\omega}$. 5. If f(t) is a signal with transform F(ω), obtain… This is where the Fourier transform comes in: it makes use of the fact that every reasonably well-behaved function, however non-linear, can be represented as a sum of (infinitely many) sine waves. In the underlying figure this is illustrated, as a step function is approximated by a multitude of sine waves (step function simulated with sine waves). Fourier transforms (named after Jean Baptiste Joseph Fourier, 1768–1830, a French mathematician and physicist) are an essential ingredient in many of the topics of this lecture, therefore let us review the basics here; we assume, however, that the reader is already mostly familiar with the concepts. A.1 Fourier integrals in infinite space: the 1-D case. Let us start in 1-D.
Notation: continuous Fourier transform (FT), discrete Fourier transform (DFT), fast Fourier transform (FFT). Fourier series theorem: any periodic function can be expressed as a weighted (infinite) sum of sine and cosine functions of varying frequency; the lowest of these is called the fundamental frequency. The Fourier transform (more precisely, the continuous Fourier transform; pronounced [fuʁie]) is a mathematical method from the field of Fourier analysis by which aperiodic signals are decomposed into a continuous spectrum; the function describing this spectrum is also called the Fourier transform or spectral function. Very broadly speaking, the Fourier transform is a systematic way to decompose generic functions into a superposition of symmetric functions. These symmetric functions are usually quite explicit (such as a trigonometric function sin(nx) or cos(nx)), and are often associated with physical concepts such as frequency or energy. Other applications of the DFT arise because it can be computed very efficiently by the fast Fourier transform (FFT) algorithm; for example, the DFT is used in state-of-the-art algorithms for multiplying polynomials and large integers — instead of working with polynomial multiplication directly, it turns out to be faster to compute the DFT of the polynomial functions and convert the product back. Using the Fourier transform, both periodic and non-periodic signals can be transformed from the time domain to the frequency domain. Example: a Python program creates two sine waves and adds them together to create one signal; when the Fourier transform is applied to the resultant signal, it reveals the frequency components present in the sum.
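A minimal sketch of that Python example might look as follows; the frequencies, amplitudes, sampling rate, and duration are assumptions, since the original listing is not reproduced here:

```python
import numpy as np

fs = 1000.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
amplitude = 2.0 * np.abs(spectrum) / len(signal)   # single-sided amplitude spectrum

for f0 in (50, 120):
    k = np.argmin(np.abs(freqs - f0))
    print(f0, round(float(amplitude[k]), 3))   # ~1.0 at 50 Hz and ~0.5 at 120 Hz
```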
Fourier transform of $\sin(x)$ - Mathematics Stack Exchange
Fourier Transform of Array Inputs. Find the Fourier transform of the matrix M. Specify the independent and transformation variables for each matrix entry by using matrices of the same size. When the arguments are nonscalars, fourier acts on them element-wise
The Discrete Fourier Transform. Fourier analysis is a family of mathematical techniques, all based on decomposing signals into sinusoids. The discrete Fourier transform (DFT) is the family member used with digitized signals. This is the first of four chapters on the real DFT, a version of the discrete Fourier transform that uses real numbers to represent the input and output signals.
The Fourier transform is a powerful tool for analyzing signals and is used in everything from audio processing to image compression. SciPy provides a mature implementation in its scipy.fft module, and in this tutorial you'll learn how to use it. The scipy.fft module may look intimidating at first, since there are many functions, often with similar names, and the documentation uses a lot of specialized vocabulary.
Table of discrete-time Fourier transform pairs. Discrete-time Fourier transform: $X(\Omega) = \sum_{n=-\infty}^{\infty} x[n]\,e^{-j\Omega n}$; inverse: $x[n] = \frac{1}{2\pi}\int_{2\pi} X(\Omega)\,e^{j\Omega n}\,d\Omega$. Selected pairs (with conditions): $a^n u[n] \leftrightarrow \frac{1}{1 - a e^{-j\Omega}}$ for |a| < 1; $(n+1)a^n u[n] \leftrightarrow \frac{1}{(1 - a e^{-j\Omega})^2}$ for |a| < 1; $\frac{(n+r-1)!}{n!\,(r-1)!}\,a^n u[n] \leftrightarrow \frac{1}{(1 - a e^{-j\Omega})^r}$ for |a| < 1; $\delta[n] \leftrightarrow 1$; $\delta[n - n_0] \leftrightarrow e^{-j\Omega n_0}$; $x[n] = 1 \leftrightarrow 2\pi\sum_{k=-\infty}^{\infty}\delta(\Omega - 2\pi k)$; $u[n] \leftrightarrow \frac{1}{1 - e^{-j\Omega}} + \sum_{k=-\infty}^{\infty}\pi\,\delta(\Omega - 2\pi k)$; $e^{j\Omega_0 n} \leftrightarrow 2\pi\sum_{k=-\infty}^{\infty}\delta(\Omega - \Omega_0 - 2\pi k)$.
The amplitude of the Fourier transform is a measure of spectral density. If we assume that the units of the original time signal x(t) are volts, then the units of its Fourier transform X(ω) will be volts per hertz (V/Hz). Loosely speaking, it's a measure of how much energy per unit of bandwidth you have.
Fourier integral and Fourier transform (September 14, 2020). The following material follows closely along the lines of Chapter 11.7 of Kreyszig; the sine–cosine expressions therein are just replaced by complex exponential functions. That means we introduce (complex-valued) coefficients $c_n$, $n \in \mathbb{Z}$, such that $f(x) = a_0 + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big) = \sum_{n=-\infty}^{\infty} c_n e^{inx}$, with $c_0 = a_0$ and, for n ≥ 1, $c_n = \tfrac12(a_n - i b_n)$, $c_{-n} = \tfrac12(a_n + i b_n)$.
We can consider the discrete Fourier transform (DFT) to be an artificial neural network: it is a single-layer network, with no bias, no activation function, and particular values for the weights. The number of output nodes is equal to the number of frequencies we evaluate, where k is the number of cycles per N samples and $x_n$ is the signal's value at sample n.
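That observation can be made concrete by building the DFT "weights" explicitly as a matrix (one row per output frequency) and comparing the resulting matrix–vector product with NumPy's FFT; real FFT code never forms this matrix, so this is purely illustrative:

```python
import numpy as np

N = 64
n = np.arange(N)
# "Weights" of the single linear layer: W[k, n] = exp(-2*pi*i*k*n / N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.random.default_rng(0).standard_normal(N)   # arbitrary test signal
X_layer = W @ x                                   # DFT as a matrix-vector product
X_fft = np.fft.fft(x)

print(np.allclose(X_layer, X_fft))                # True
```

The matrix product costs O(N²) operations, which is exactly the overhead the FFT's O(N log N) factorization avoids.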
It defines the Fourier transform (also the Fourier sine and cosine transforms) and develops the Fourier integral theorem, providing formulas for these transforms and their inverses. Properties exhibited include the shift formula, formulas for the derivatives of a function, and the Fourier convolution theorem. It is shown how these properties enable the solution of linear ODEs and related boundary-value problems.
So to speak, this is what a Fourier transform does: it finds the correlation between f(x) and sine and cosine functions over the range −∞ to +∞. Similarly, if a periodic function f(x) is multiplied with other periodic functions and the sum of the products is plotted as a function of frequency, f(ξ), you learn about the periodic properties of the original function f(x) in terms of frequency.
Collective table of formulas: continuous-time Fourier transform pairs and properties, as a function of frequency f in hertz (used in ECE438). CT Fourier transform: $X(f) = \mathcal{F}(x(t)) = \int_{-\infty}^{\infty} x(t)\,e^{-i 2\pi f t}\,dt$; inverse CT Fourier transform: $x(t) = \int_{-\infty}^{\infty} X(f)\,e^{i 2\pi f t}\,df$.
The code below defines a sine wave of amplitude 1 and frequency 10 Hz, then uses the SciPy function fftpack.fft to perform a Fourier transform on it and plots the corresponding result with NumPy and Matplotlib.
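A minimal version of the code described there might look like this (the sampling rate, duration, and plotting details are assumptions; scipy.fftpack still works, although the newer scipy.fft module is now preferred):

```python
import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt

f_sig, fs = 10.0, 1000.0                 # 10 Hz sine, assumed 1 kHz sampling rate
t = np.arange(0, 2.0, 1.0 / fs)          # 2 s of data
x = np.sin(2 * np.pi * f_sig * t)

X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(x), d=1.0 / fs)

plt.plot(freqs[:len(x) // 2], 2.0 / len(x) * np.abs(X[:len(x) // 2]))
plt.xlabel("frequency (Hz)")
plt.ylabel("amplitude")
plt.show()                               # a single peak of height ~1 at 10 Hz
```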
Tables of Fourier sine transforms — expressions with exponential functions, with $\check f_s(u) = \int_0^{\infty} f(x)\sin(ux)\,dx$: for $f(x) = e^{-ax}$, a > 0, the sine transform is $\frac{u}{a^2 + u^2}$; the table continues with $x^n e^{-ax}$ (a > 0, n = 1, 2, …). The plot of the magnitude of the Fourier transform of Equation [1] is given in Figure 2, where the vertical arrows represent Dirac delta functions (absolute value of the Fourier transform of the right-sided cosine function); the right-sided sine function can be obtained in the same way. For an odd real function the Fourier transform is purely imaginary; for a general real function, the Fourier transform has both real and imaginary parts, and we can write $\tilde f(k) = \tilde f_c(k) + i\tilde f_s(k)$, where $\tilde f_s(k)$ is the Fourier sine transform and $\tilde f_c(k)$ the Fourier cosine transform. One hardly ever uses Fourier sine and cosine transforms; we practically always work with the full complex transform. Discrete cosine transform (DCT): when the input data contain only real numbers from an even function, the sine component of the DFT is 0 and the DFT becomes a discrete cosine transform (DCT). There are 8 variants, of which 4 are common. DCT vs DFT: for compression we work with sampled data in a finite time window, and Fourier-style transforms imply the function is periodic outside that window.
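The DCT remark can be checked with SciPy: build a sequence with whole-sample even symmetry, confirm that its DFT has no imaginary (sine) part, and recover the same real coefficients directly with a type-I DCT. The data are arbitrary random numbers:

```python
import numpy as np
from scipy.fft import dct, fft

rng = np.random.default_rng(1)
half = rng.standard_normal(8)
# Even extension x[n] = x[(L - n) mod L]: its DFT is purely real.
x = np.concatenate([half, half[1:-1][::-1]])     # length 14, symmetric about n = 0

X = fft(x)
print(np.max(np.abs(X.imag)))                    # ~1e-15: the sine components vanish

# The real cosine coefficients are obtained directly by a DCT (type I matches
# this whole-sample symmetry).
C = dct(half, type=1)
print(np.allclose(C, X.real[:len(half)]))        # True
```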
An Interactive Introduction to Fourier Transform
Application of the Fourier transform to PDEs (I): the Fourier sine transform (applied to PDEs defined on a semi-infinite domain). The Fourier sine transform pair is: forward transform $U(\omega) = \frac{2}{\pi}\int_0^{\infty} u(x)\sin(\omega x)\,dx$, denoted U = S[u]; inverse transform $u(x) = \int_0^{\infty} U(\omega)\sin(\omega x)\,d\omega$, denoted u = S⁻¹[U]. Remark: the forward and inverse transforms defined above are analogous to their counterparts for the Fourier sine series. The Fourier transform and the associated Fourier series are among the most important mathematical tools in physics; physicist Lord Kelvin remarked in 1867: "Fourier's theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics." Cosine/sine signals are easy to define and interpret; however, it turns out that the analysis and manipulation of sinusoidal signals is greatly simplified by dealing with related signals called complex exponential signals. A complex number has real and imaginary parts, z = x + jy, and a complex exponential signal obeys $r e^{j\alpha} = r(\cos\alpha + j\sin\alpha)$. I have tried different Fourier transform codes out there on single sine waves, and all of them produce a distributed spectrum with a resonance at the signal frequency, when they should theoretically display a single bar. The sampling frequency has little effect (10 kHz here), however the number of cycles does (plots for one cycle, 100 cycles, and 100000 cycles were shown).
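The "distributed spectrum" in that question is ordinary spectral leakage: it appears whenever the observation window does not contain an integer number of cycles. A small sketch (frequencies and rates chosen arbitrarily) makes the contrast visible:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)

def peakiness(f0):
    """Fraction of total spectral energy contained in the single largest FFT bin."""
    X = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t))) ** 2
    return X.max() / X.sum()

print(peakiness(50.0))   # integer number of cycles in the window -> ~1.0 (a single bar)
print(peakiness(50.5))   # non-integer number of cycles -> energy leaks into neighbouring bins
```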
and consider the Fourier transform $\tilde H_L(\omega) = \frac{1}{\sqrt{2\pi}}\int_0^L e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\,\frac{1 - e^{-i\omega L}}{i\omega} = \sqrt{\frac{2}{\pi}}\;e^{-i\omega L/2}\,\frac{\sin(\omega L/2)}{\omega}$. This integral and transform make sense because L is finite, but this calculation doesn't help us: we still have the same difficulty with limits as L → ∞; in fact, we have merely replaced B with L in the argument. Homework: find the Fourier series of f(x) = sin²(x). Attempt: $b_n = 0$ because f(x) is even; $a_0 = \frac{1}{2\pi}\int_0^{2\pi} f(x)\,dx = \frac12$; $a_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos(nx)\,dx$, but my integrals keep coming out zero. (The resolution: $\sin^2 x = \tfrac12 - \tfrac12\cos 2x$, so $a_2 = -\tfrac12$ and all other $a_n$ with n ≥ 1 vanish.) The Fourier transform maps a function to a set of complex numbers representing sinusoidal coefficients — we also say it maps the function from real space to Fourier space (or frequency space); note that in a computer we can represent a function as an array of numbers giving the values of that function at equally spaced points. The inverse Fourier transform maps back. Find the Fourier sine transform of f(x) = 1/x (Mathematics-3 question-answer collection).
Fourier transform calculator - Wolfram|Alpha
You can obtain the answer by using properties (3), (6) and (7) in the table on page two of https://www.ethz.ch/content/dam/ethz/special-interest/baug/ibk/structural.
The Fourier transform can be generalized to higher dimensions. For example, many signals are functions of 2D space defined over an x–y plane. The two-dimensional Fourier transform also has four different forms depending on whether the 2D signal is periodic and discrete: for an aperiodic, continuous signal one obtains a continuous, aperiodic spectrum, where the two transform variables are the spatial frequencies in the x and y directions, respectively.
Find the Fourier transform of the Gaussian function $f(x) = e^{-x^2}$. Start by noticing that y = f(x) solves $y' + 2xy = 0$. Taking Fourier transforms of both sides gives $(i\omega)\hat y + 2i\,\hat y' = 0 \;\Rightarrow\; \hat y' + \frac{\omega}{2}\hat y = 0$. The solutions of this (separable) differential equation are $\hat y = C e^{-\omega^2/4}$, and we find $C = \hat y(0) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2}\,dx = \frac{1}{\sqrt 2}$.
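The derived result, ŷ(ω) = e^{−ω²/4}/√2 in the unitary angular-frequency convention used above, can be cross-checked numerically with a straightforward quadrature on a wide grid; the grid limits and resolution below are arbitrary choices:

```python
import numpy as np

x = np.linspace(-20, 20, 400001)
dx = x[1] - x[0]

def yhat(w):
    """Numerical (1/sqrt(2*pi)) * integral of exp(-x^2) * exp(-i*w*x) dx."""
    return np.sum(np.exp(-x**2) * np.exp(-1j * w * x)) * dx / np.sqrt(2 * np.pi)

for w in (0.0, 1.0, 2.0):
    print(w, yhat(w).real, np.exp(-w**2 / 4) / np.sqrt(2))   # the two columns agree
```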
Fourier transforms and their properties. We start from the complex form of the Fourier integral: the function F(s), defined by (1), is called the Fourier transform of f(x), and the function f(x), as given by (2), is called the inverse Fourier transform of F(s); equation (2) is also referred to as the inversion formula. Frequency-domain analysis and Fourier transforms are a cornerstone of signal and system analysis. These ideas are also one of the conceptual pillars within electrical engineering; among all of the mathematical tools utilized in electrical engineering, frequency-domain analysis is arguably the most far-reaching. In fact, these ideas are so important that they are widely used. Fourier transform of a periodic signal: the Fourier series of a periodic signal x(t) with period T₀ can be transformed term by term, which leads directly to the transform of a unit impulse train (an impulse train in time transforms to an impulse train in frequency). The 'Fourier transform' is then the process of working out what 'waves' comprise an image, just as was done in the example above. Two-dimensional waves in images: the above shows one example of how you can approximate the profile of a single row of an image with multiple sine waves; however, images are two dimensional, and as such the waves used to represent an image in the 'frequency domain' are also two dimensional. The Fourier transform is simply the set of amplitudes of those sine and cosine components (or, which is mathematically equivalent, the frequency and phase of sine components). You could calculate those coefficients yourself simply by multiplying the signal point-by-point with each of those sine and cosine components and adding up the products. The concept was originated by Carl Friedrich Gauss.
Fourier transform - Wikipedia
The Fourier transform is a mathematical technique that transforms a function of time, x(t), into a function of frequency, X(ω). Note: this would seem to present a problem, because common signals such as the sine and cosine are not absolutely integrable. We will finesse this problem later by considering impulse functions, δ(α), which are not functions in the strict sense, since their value at a point is not defined in the usual way.
$\frac{\sin(0.5\,N\Omega)}{\sin(0.5\,\Omega)}\,e^{-j\Omega(N-1)/2}$ (12.7), which is shown in Fig. 12.1(l), with its time-limited representation $x_w[k]$ plotted in Fig. 12.1(k). The symbol ⊗ in Eq. (12.7) denotes circular convolution. Step 3, frequency sampling: the DTFT $X_w(\Omega)$ of the time-limited signal $x_w[k]$ is a continuous function of Ω and must be discretized to be stored on a digital computer; this is achieved by multiplying $X_w$ by a sampling train in frequency.
Fourier transform: the basic idea of the Fourier transform is that every periodic wave can be decomposed into an infinite series of sine waves (the so-called Fourier series). We thereby multiply our signal (the target function) with an analyzing function (which contains all sine waves); whenever these two functions are similar they produce a large coefficient, and whenever they are dissimilar the coefficient is small.
$\sin(\omega_0 t + \theta) \;\leftrightarrow\; j\pi\big[e^{-j\theta}\delta(\omega + \omega_0) - e^{j\theta}\delta(\omega - \omega_0)\big]$. Using a table of transforms lets one use Fourier theory without having to formally manipulate integrals in every case. 5.3 Some Fourier transform properties: there are a number of Fourier transform properties that can be applied to valid Fourier pairs to produce other valid pairs.
The 2π in the definition of the Fourier transform: there are several conventions for the definition of the Fourier transform on the real line. 1. No 2π — Fourier (with cosine/sine), Hörmander, Katznelson, Folland. 2. 2π in the exponent — L. Schwartz, Trèves. 3. 2π square-rooted in front.
All the Fourier transform pairs are connected by the kernel $e^{-i 2\pi y x}$. In this case we can use that term to transform between the two variables in a pair, namely time and frequency. In this way we can measure the properties of an electromagnetic wave both in the conventional frequency domain and in the, in some respects more robust, time domain.
ES 442, Fourier transform review: Fourier trigonometric series (for periodic waveforms), Agbo & Sadiku, Section 2.5, pp. 26–27. Equation (2.10) should read with the time variable included (time was missing in the book).
…function is slightly different from the one used in class and in the Fourier transform table. In MATLAB, sinc(x) = sin(πx)/(πx); thus, in MATLAB we write the transform X using sinc(4f), since the π factor is built into the function. The following MATLAB commands will plot this Fourier transform: >> f = -5:.01:5; >> X = 4*sinc(4*f); >> plot(f, X). (The time signal x(t) here is a rectangular pulse of height 1 on −2 ≤ t ≤ 2, whose transform is X(f) = 4 sinc(4f).) The formula for the inverse Fourier sine transform is valid where the function is continuous; recall that the sine transform is computed for odd functions, so we extend the function to be odd, and where the odd extension is not continuous at 0 we use Dirichlet's theorem. We've just shown that the Fourier transform of the convolution of two functions is simply the product of the Fourier transforms of the functions. This means that for linear, time-invariant systems, where the input/output relationship is described by a convolution, you can avoid convolution by using Fourier transforms — a very powerful result. Multiplication of signals is our next property. Now calculating the FFT (with numpy as np, matplotlib.pyplot as plt and scipy.fftpack imported): Y = scipy.fftpack.fft(X_new); P2 = np.abs(Y / N); P1 = P2[0 : N // 2 + 1]; P1[1:-2] = 2 * P1[1:-2]; plt.plot(f, P1); plt.xlabel("f"); plt.ylabel("P1"). P.S. I finally got time to implement a more canonical algorithm to get a Fourier transform of unevenly distributed data. Introduction to Fourier transforms (ELE 301, Signals and Systems): Fourier transform as a limit of the Fourier series; inverse Fourier transform and the Fourier integral theorem; example: the rect and sinc functions; cosine and sine transforms; symmetry properties; periodic signals and functions. Fourier series: suppose x(t) is not periodic — we can still obtain a Fourier representation by taking a limit of Fourier series.
6.6: Fourier Transform, A Brief Introduction - Physics ..
…continuous Fourier transform; this is also known as the analysis equation. In general X(ω) ∈ ℂ. With $\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, the spectrum and its inverse transform for $\omega_C = \pi/2$ have been depicted above. 4.3 Properties of the DTFT. 4.3.1 Real and imaginary parts: $x[n] = x_R[n] + j x_I[n] \Leftrightarrow X(\omega) = X_R(\omega) + j X_I(\omega)$ (4.15). 4.3.2 Even and odd parts: $x[n] = x_{ev}[n] + x_{od}[n] \Leftrightarrow X(\omega) = X_{ev}(\omega) + X_{od}(\omega)$.
Fourier transform: a general function that isn't necessarily periodic (but that is still reasonably well behaved) can be written as a continuous superposition (an integral) of sinusoids over a continuum of frequencies. We first look at trigonometric series and at how any periodic function can be written as a discrete sum of sine and cosine functions; then, since anything that can be written in terms of trig functions can also be written in terms of exponentials, we show in Section 3.2 how any periodic function can equally be expanded in complex exponentials.
The inverse Fourier transform is just the opposite of the Fourier transform: it takes the frequency-domain representation of a given signal as input and mathematically synthesizes the original signal. Let's see how we can use the Fourier transform to convert our audio signal into its frequency components. 3. Fast Fourier transform (FFT): the fast Fourier transform is a mathematical algorithm that computes the discrete Fourier transform efficiently.
Fourier transform: Fourier transformation decomposes the collected signal into its component frequencies, whose magnitudes correspond to the amount of magnetization at each position, and yields a one-dimensional image. (From: Handbook of Neuro-Oncology Neuroimaging, Second Edition, 2016.)
The Fourier transform is actually implemented using complex numbers, where the real part is the weight of the cosine and the imaginary part is the weight of the sine. On the second plot, a blue spike is a real (cosine) weight and a green spike is an imaginary (sine) weight — this is why cos shows up blue and sin shows up green. Most signals have both sines and cosines in them, like a triangle wave. The Fourier transform finds the set of cycle speeds, amplitudes and phases that match any time signal; our signal becomes an abstract notion that we consider as observations in the time domain or ingredients in the frequency domain. (The cited interactive simulator lets you type any time or cycle pattern and see the collection of cycles that combine into it.) Fourier-transform infrared spectroscopic analysis of protein secondary structures, Jilie Kong and Shaoning Yu, Department of Chemistry, Fudan University, Shanghai 200433, China.
E: Fourier transforms - Wiley
The Fourier transform is like a musician hearing a tone and determining what note (frequency) is being played. The inverse Fourier transform (IFT) is like the musician seeing notes (frequencies) on a sheet of music and converting them to tones (time-domain signals).
The Fourier transform can be used for this purpose: it decomposes any signal into a sum of simple sine and cosine waves whose frequency, amplitude and phase we can easily measure. The Fourier transform can be applied to continuous or discrete waves; in this chapter we will only talk about the discrete Fourier transform (DFT).
SINE_TRANSFORM is a C library which demonstrates some simple properties of the discrete sine transform for real data.. The code is not optimized in any way, and is intended instead for investigation and education. Licensing: The computer code and data files described and made available on this web page are distributed under the GNU LGPL license
Fourier Transforms - MATLAB & Simulink - MathWorks
If I have a sine wave signal for a duration of only a few seconds, the Fourier transform will show me that this signal corresponds to a range of frequencies. Why is this the case? I do understand that every signal is composed of sine waves, even this sine-wave pulse, but I don't get the intuition behind that for this case: even if my sine wave is not infinitely long, I should still be able to read off a single frequency. The Fourier transform of a time series $y_t$ for frequency p cycles per n observations can be written as $z_p = \sum_{t=0}^{n-1} y_t \exp(-2\pi i\,p\,t/n)$ for p = 0, …, n − 1; note that this expression differs slightly from what we have presented in the previous sections but is consistent with how R computes the Fourier transform. scipy.fft helpers (see the sketch after this paragraph): rfftfreq returns the discrete Fourier transform sample frequencies (for usage with rfft and irfft); next_fast_len finds the next fast size of input data for fft, for zero-padding, etc.; set_workers(workers) is a context manager for the default number of workers used in scipy.fft. Last week I showed a couple of continuous-time Fourier transform pairs (for a cosine and a rectangular pulse); today I want to follow up by discussing one of the ways in which reality confounds our expectations and causes confusion — specifically, when we're talking about real signals and systems, we never truly have an infinitely long signal.
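The scipy.fft helpers just listed fit together as in the following sketch (array length and sampling rate are arbitrary): next_fast_len picks an efficient padded length, set_workers parallelises the transform, and rfftfreq labels the rfft bins in hertz:

```python
import numpy as np
from scipy import fft

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
fs = 250.0                                   # assumed sampling rate (Hz)

n = fft.next_fast_len(len(x))                # a fast FFT length >= 1000 (1000 itself is fast)
with fft.set_workers(2):                     # use two worker threads for the transform
    X = fft.rfft(x, n=n)

freqs = fft.rfftfreq(n, d=1.0 / fs)          # frequency (Hz) of each rfft bin
print(len(X), len(freqs), freqs[1])          # both have n//2 + 1 entries; bin spacing fs/n
```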
June 2015, 8(3): 389-417. doi: 10.3934/dcdss.2015.8.389
Detection, reconstruction, and characterization algorithms from noisy data in multistatic wave imaging
Habib Ammari 1, Josselin Garnier 2, and Vincent Jugnon 3
Department of Mathematics and Applications, Ecole Normale Supérieure, 45 Rue d'Ulm, 75005 Paris, France
Laboratoire de Probabilités et Modèles Aléatoires & Laboratoire Jacques-Louis Lions, Université Paris Diderot, 75205 Paris Cedex 13
Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, United States
Received: June 2013. Revised: January 2014. Published: October 2014.
The detection, localization, and characterization of a collection of targets embedded in a medium is an important problem in multistatic wave imaging. The responses between each pair of source and receiver are collected and assembled in the form of a response matrix, known as the multi-static response matrix. When the data are corrupted by measurement or instrument noise, the structure of the response matrix is studied by using random matrix theory. It is shown how the targets can be efficiently detected, localized and characterized. Both the case of a collection of point reflectors in which the singular vectors have all the same form and the case of small-volume electromagnetic inclusions in which the singular vectors may have different forms depending on their magnetic or dielectric type are addressed.
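The abstract's detection step can be caricatured in a few lines: generate a rank-one "response" of a single point reflector, add measurement noise, and compare the top singular value of the response matrix against the noise bulk edge predicted by random matrix theory. This toy sketch is not the authors' algorithm, and every parameter in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rec, n_src, sigma = 100, 100, 0.1          # invented array sizes and noise level

# Rank-one response of a single point reflector, plus measurement noise.
u = rng.standard_normal(n_rec)
u /= np.linalg.norm(u)
v = rng.standard_normal(n_src)
v /= np.linalg.norm(v)
A = 3.0 * np.outer(u, v) + sigma * rng.standard_normal((n_rec, n_src))

s = np.linalg.svd(A, compute_uv=False)
noise_edge = sigma * (np.sqrt(n_rec) + np.sqrt(n_src))   # edge of the pure-noise singular-value bulk
print(s[0], noise_edge)   # a target is declared when s[0] clearly exceeds the noise bulk
```

With the values above, the top singular value (~3.3) stands well clear of the noise edge (2.0), which is the basic statistical signature exploited for detection; localization and characterization then use the corresponding singular vectors.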
Keywords: multistatic wave imaging, random matrices, detection and reconstruction algorithms, noisy data, Helmholtz equation, measurement noise.
Mathematics Subject Classification: Primary: 78M35, 78A46; Secondary: 15B52.
Citation: Habib Ammari, Josselin Garnier, Vincent Jugnon. Detection, reconstruction, and characterization algorithms from noisy data in multistatic wave imaging. Discrete & Continuous Dynamical Systems - S, 2015, 8 (3) : 389-417. doi: 10.3934/dcdss.2015.8.389
npj Materials Degradation
Mitigating the detrimental effects of galvanic corrosion by nanoscale composite architecture design
Oliver Renk1 (ORCID: orcid.org/0000-0001-7954-5297),
Irmgard Weißensteiner2 (ORCID: orcid.org/0000-0002-8917-5231),
Martina Cihova3 (ORCID: orcid.org/0000-0002-6182-4738),
Eva-Maria Steyskal4,
Nicole G. Sommer5 (ORCID: orcid.org/0000-0001-6578-0747),
Michael Tkadletz6,
Stefan Pogatscher7 (ORCID: orcid.org/0000-0002-6500-9570),
Patrik Schmutz8,
Jürgen Eckert1,9,
Peter J. Uggowitzer7,10 (ORCID: orcid.org/0000-0002-9504-5652),
Reinhard Pippan1 &
Annelie M. Weinberg5 (ORCID: orcid.org/0000-0003-0385-4346)
npj Materials Degradation, volume 6, Article number: 47 (2022).
Widespread application of magnesium (Mg) has been prevented by its low strength and poor corrosion resistance. At the core of this limitation are Mg's low electrochemical potential and its low solubility for most elements, which favor the precipitation of secondary phases that act as effective micro-galvanic elements. Mg-based metal–metal composites, while beneficial for strength, are similarly active galvanic couples. We show that the associated detrimental corrosion susceptibility is overcome by nanoscale composite architecture design: nanoscale phase spacings enable high-strength Mg–Fe composites with degradation rates as low as those of ultra-high-purity Mg. Our concept thus fundamentally changes today's understanding of Mg's corrosion and significantly widens the property space of Mg-based materials.
Magnesium (Mg) and iron (Fe) are largely different materials. On the one hand, Fe's strength and allotropic phase transformations, along with the possibility to form corrosion-resistant alloys, i.e., stainless steels, make it the most important structural material. On the other hand, Mg has the lowest density of all structural metals, making it a promising candidate for energy-efficient lightweight structures in the mobility sector. Additionally, Mg's excellent biocompatibility combined with its low corrosion resistance, enabling biodegradability, can be utilized for temporary medical implants such as bone-fracture fixations in orthopedics and trauma surgery1,2,3. However, a widespread use of Mg in these fields has so far been prevented by the limited strength and the high corrosion susceptibility of technically pure Mg in aqueous environments. Fe, a natural impurity element in Mg, presents a main source of Mg's low degradation resistance; the Fe content, in fact, directly relates to the degradation rate, with an increase in Fe resulting in faster degradation4,5,6,7.
The detrimental role of Fe in Mg is multifold. Because of the large standard-potential difference between Mg and Fe (ΔE = 1.9 V) and Fe's high exchange current density for the hydrogen reduction reaction (Fe: i0,H2/H+ = 1.0 × 10^−6 A cm^−2; for comparison, Zn has i0,H2/H+ = 3.2 × 10^−11 A cm^−2)8, the contact of Mg with Fe entails a massive driving force for dissolution of the less noble Mg phase (at −2.7 V against SHE) through galvanic-coupling-induced corrosion. Moreover, Fe-rich phases acting as effective cathodic sites can easily form, either through precipitation owing to Fe's extremely low solubility in Mg (<10 ppm for wrought Mg5,9) or through electrochemical redeposition and accumulation on the corroding Mg surface10. In either case, Fe provokes a localized corrosion attack around Fe-rich areas through micro-galvanic corrosion and induces accelerated dissolution. This also holds true for other more noble trace elements in Mg, such as nickel, copper or cobalt, which likewise form cathodic sites in the Mg matrix, though their effectiveness in facilitating the cathodic reaction is less severe than that of Fe6,11,12. The importance of impurity elements and their distribution for the corrosion rate of Mg was recognized almost 80 years ago12; in fact, Fe has been identified as one of the most detrimental elements to the corrosion resistance of Mg5,9.
While the development of vacuum-distilled ultra-high purity (UHP) Mg could successfully address this impurity-related shortcoming and offers extremely low degradation rates13,14, the strength of UHP Mg is too low to be used in structural applications. Alloying or strengthening with lattice defects such as grain boundaries or stacking faults are hence necessary to strengthen Mg15,16,17. However, achieving simultaneously high strength and high corrosion resistance remains a challenge – particularly in the biomedical application field, where the choice of alloying elements is severely limited to ensure biological safety. Two aspects contribute to the problem of designing slowly degrading Mg-alloys: Passivation through alloying, comparable to the effect that chromium has on Fe to form stainless steels, is challenging for Mg2,18,19 because (i) Mg typically forms non-dense, readily dissolvable oxides and (ii) Mg has a low solubility for most elements at typical forming temperatures (~523–623 K). The associated ease in precipitation of secondary phases possibly favors strength but, in most cases, accelerates (local) corrosive attack. New strategies to significantly expand the property space of Mg-based materials are thus required to combine simultaneously high strength and high corrosion resistance.
An attractive answer are composites, which hold the potential to combine the benefits of the individual material components they are made of. Traditional metal‒metal composites, however, are generally not considered because of the aforementioned effective galvanic couple that Mg forms with most other metals2. Here, we scrutinize a deliberate manipulation of the composite architecture (e.g., phase fraction, distributions, phase size and spacings), hypothesizing that phase architecture at the nanoscale may provide the necessary lever to subdue this detrimental effect. Our assumptions are built on recent studies on Mg-alloys that indicate a significant impact of the size and distribution of second phase particles (i.e., micro-galvanic elements) on the corrosion rates20,21,22. We expect similar size effects for Mg-based metal-metal composites, if the phase spacing is sufficiently refined, possibly facilitated through a confinement of the active corrosion mechanisms at the nanoscale. If proven successful, nanostructured Mg-composites would finally unlock the highly demanded property combination in lightweight and biodegradable Mg-alloys by exhibiting exceptional strength and corrosion resistance.
Nanostructured Mg–Fe composites with largely improved corrosion resistance
As a proof of concept for the raised hypothesis, we focus in the following on Mg–Fe composites. Fe was chosen as the composite constituent because of its large electrochemical potential difference to Mg, as stated above. Mg–Fe composites are hence ideal for testing whether targeted nanoscale design of the composite architecture can indeed lower the detrimental effects of galvanic coupling. To this end, Mg–Fe composites with different phase spacings were fabricated and the effects of composite architecture on the degradation rate investigated. High pressure torsion (HPT) processing, a method that applies severe shear strains to a material, was chosen for bulk composite synthesis, because (i) it overcomes the limitations of conventional metallurgical methods in synthesizing such composites (Mg boils well before Fe melts23) through the use of metallic powder blends of arbitrary composition, and (ii) it allows control of the phase spacings over up to four orders of magnitude through adjustment of the applied strain24.
Two different architectures were produced for this proof of concept using monotonic and cyclic high pressure torsion (HPT and CHPT) to fabricate a nanostructured Mg–Fe composite and a composite with micrometer-sized phase spacing, respectively. For both composites, a phase fraction of 50 vol.% was chosen. Representative micrographs of the composites' architecture are displayed in Fig. 1. Severe cyclic strains (CHPT) consolidate the powder blend into a bulk piece while preserving the original particle shape and phase dimensions of ~20 µm. The slight elongation in tangential direction results from the initial compression step preceding the cyclic deformation. Accumulation of cyclic strain resulted in the formation of grains and subgrains within the phases, evident from the backscattered electron (BSE) images (Fig. 1a), in which an Fe particle is outlined with a white dashed line and some subgrains within the particle are highlighted by yellow dotted lines. This composite is referred to as the coarse Mg–Fe composite hereinafter. In contrast, severe monotonic strains (HPT) significantly refine the composite structure (Fig. 1b). In fact, the phase spacing was substantially reduced to <300 nm for most lamellae (Fig. 1b), yielding a nanostructured Mg–Fe composite. Despite the substantially different mechanical properties of Mg and Fe, a fairly homogeneous composite architecture was obtained, without obvious signs of severe strain localization (e.g., in shear bands, which would misalign the lamellae with respect to the principal shear direction). Overall, Mg–Fe composites of identical phase fraction but structural scales differing by two orders of magnitude could thus be successfully produced. Moreover, as the applied strain in HPT and CHPT increases linearly with the radius, the Mg–Fe composites possess an inherent structural gradient. However, as the applied cyclic strains do not alter the phase spacing considerably, this gradient is most prominent for the composites processed by monotonic HPT, which exhibit nanostructured phase spacings (Fig. 1b) at the outermost radii, while coarser spacings prevail towards the center of the disk. As such, these two types of composites are ideally suited to study length-scale effects on corrosion mechanisms and rates.
Fig. 1: Microstructure of the two types of Mg–Fe composites.
SEM images in BSE contrast acquired in radial direction of the disk-shaped samples. a Coarse Mg–Fe processed by CHPT. The dashed white line outlines an Fe particle with subgrains developed within (marked by dotted yellow lines); b Nanostructured Mg–Fe processed by monotonic HPT.
Our investigation of the composites' corrosion behavior is summarized in Fig. 2a. The hydrogen gas (H2) evolution is taken as a measure of the Mg degradation rate25. Given the known dependence of Mg's corrosion rate on the electrolyte, we analyzed the composites upon immersion in either (i) phosphate buffered saline (PBS, pH value of 7.4), (ii) unbuffered 3.5% NaCl solution or (iii) Hanks balanced salt solution (HBSS) without glucose26. Detailed compositions of the three solutions can be found in the Methods section (Table 1). Degradation rates of UHP Mg were measured for comparison. For the Mg–Fe composites, the degradation rates were observed to be largely independent of the electrolyte, with slightly slower degradation in HBSS. For UHP Mg, the differences are larger and, as expected, the slowest rates are measured in the unbuffered NaCl solution, which can be rationalized by the pronounced increase in pH that establishes in near-surface regions.
Fig. 2: Microstructure-dependent degradation rate of the Mg phase.
a Time-resolved degradation, measured by H2 evolution, of the coarse and nanostructured Mg–Fe composites compared to UHP Mg reference in three different electrolytes. For decreasing composite phase spacing, the degradation rates slow down significantly. b Cross-sectional SEM images of the nanostructured Mg–Fe composite after nine days of immersion in PBS. Triangles denote the interface with solution contact, arrows show exemplary ingress of solution (dark contrast, filled with embedding resin) causing lamellae exfoliation and structural bulging (stars). Pentagon marks the intact composite.
As expected, the coarse composite corroded rapidly upon immersion. Within only 40–60 min, the measured amount of evolved H2 saturated and the composite completely disintegrated, indicative of complete Mg dissolution. In contrast, the composite processed by monotonic HPT, consisting of a gradient from coarse (center) to nanostructured (rim) phase spacings, degraded considerably more slowly. In fact, the measured H2 volume suggests presence of the Mg phase up to ~24 hours, a reduction in degradation rate by a factor of ~24 compared to the coarse composite. Notably, microscopic inspection of this composite sample following immersion revealed stronger corrosion attack towards the center of the HPT disk, corresponding to lower strain and hence larger phase spacings. This observation provides strong indication that the phase spacing plays a crucial role in governing the degradation rate. For confirmation, the nanostructured composite was tested without the low-strained center part, which was removed by slow-advance drilling of a 2.5 mm hole. The resulting sample consisted of fine and largely homogeneous phase spacings only (Fig. 1b). Indeed, the degradation rate was drastically reduced, as judged from the small H2 evolution (Fig. 2a), confirming that the nanoscale architecture yields only minor degradation. In fact, the H2 data suggest that even after nine days only ~10% of the Mg phase has degraded, corresponding to a >2000-fold reduction of the degradation rate compared to a coarse microstructure of the same composite composition. Importantly, dissolution even slowed down further with time. Our observations suggest that nanoscale engineering of the Mg–Fe composite architecture modifies the active corrosion mechanisms, giving rise to corrosion-hindering structures and resulting in H2-evolution rates close to those of UHP Mg. Potential mechanisms enabling this superior performance are discussed in the following.
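As a rough consistency check of the quoted >2000-fold reduction, the following back-of-the-envelope estimate (our illustration; variable names and the assumption of a linear average rate are ours) compares the average Mg dissolution rates of the two architectures using only the times and fractions stated above:

# Back-of-the-envelope comparison of average Mg dissolution rates (illustrative only)
coarse_fraction_dissolved = 1.0      # coarse composite: complete disintegration
coarse_time_h = 1.0                  # ~40-60 min until the H2 signal saturates

nano_fraction_dissolved = 0.10       # nanostructured composite: ~10% Mg loss
nano_time_h = 9 * 24                 # after nine days of immersion

rate_coarse = coarse_fraction_dissolved / coarse_time_h   # fraction of Mg per hour
rate_nano = nano_fraction_dissolved / nano_time_h

print(f"rate ratio ~ {rate_coarse / rate_nano:.0f}x")      # ~2160x, consistent with >2000-fold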
Interplay of processes underlying the strongly enhanced corrosion resistance of nanostructured Mg–Fe
To make use of the composite's enhanced corrosion resistance in functional material solutions for engineering or biomedical applications, a profound understanding of the underlying reaction kinetics and mechanistic principles must be established. Aiming to stimulate their in-depth study, we here outline possible solid-state and interface phenomena acting on the nanostructured Mg‒Fe composites (Fig. 3a). The processes described are not limited to those leading to reduced degradation rates, but also include those that change the underlying degradation mechanisms when composites with nanoscale instead of coarse phase spacing are considered:
Mechanical intermixing. The severe strains applied in monotonic HPT not only induce a substantial refinement of the phase spacing (Fig. 1b), but for sufficiently large strains probably cause a breakdown of the lamellar structure or at least an incorporation of fragments into the other phase. This can result in mechanical alloying even in systems having zero solubility in thermodynamic equilibrium (e.g., refs. 27,28,29), such as the Mg–Fe system. Mechanical alloying has already been reported for the Mg–Fe system, with alloying occurring potentially in both phases depending on the concentration and treatment applied30,31,32. Depending on the applied strain and degree of intermixing, two scenarios may occur. First, if Fe particles from lamella fragmentation are incorporated into the Mg matrix and are of sub-critical size (i.e., 'Fe nanosheets'), the effective catalytic activity for hydrogen-molecule formation on Mg is virtually 'deactivated' by the fast hydrogen diffusion into the nanoscale Fe phase33. Second, if Fe is not only dispersed but mechanically forced into solid solution within the Mg phase, the dissolution rate of Mg would decrease owing to the elevated electrochemical potential of the Mg phase and to Fe surface enrichment, which reduce the potential difference to the cathodically active Fe phase and ultimately lower the driving force for galvanic-coupling-assisted dissolution.
Fe redeposition. Fe, the electrochemically more noble element in these composites, can redeposit through electrochemical reduction on the dissolving Mg surface10,34, similar to the well-known redeposition of Cu during aluminum-alloy corrosion35. This phenomenon is particularly likely when Fe is present as a solute in the Mg phase, as favored by mechanical intermixing. While microscopic noble-element redeposits on Mg act as efficient cathodes for the hydrogen reduction reaction (micro-galvanic-coupling-assisted corrosion) and as such are detrimental to its corrosion resistance10,34,36,37,38,39, the effect can reverse when noble-element contents are sufficiently high to allow coalescence of the redeposits40. The high Fe content in the Mg–Fe composites and its potential mechanical mixing into the Mg phase might enable, through dissolution and redeposition processes, the formation of a continuous – presumably dense – Fe layer. A continuous Fe layer would act as a quasi-protective film and thus ultimately slow down further Mg dissolution. It should be noted that even defects in this Fe layer would not result in dramatic localized Mg dissolution because of the locally created alkaline pH, see description below.
Near-surface solution alkalization. Mg dissolution, with its low level of hydrolysis, and the concomitant hydrogen reduction on the surface generate an increased pH of the near-surface liquid. Alkalization stabilizes hydroxides formed in aqueous solutions41 and favors precipitation of insoluble carbonates or phosphates in biological fluids42,43. The presence of effective reduction on Fe cathodes will amplify this phenomenon even further. Common corrosion products forming on Mg surfaces in aqueous solutions are filamentous or porous44 and present a quasi-protective film that can slow down the dissolution rates.
Confined phase volumes. The nanoscale phase spacings may further retard the corrosion attack through solution-confining geometric factors. Near-surface Mg dissolves preferentially in an aqueous solution, leading to a progressively increasing exposed Fe phase at the surface. Fe – with an intrinsically lower reactivity compared to Mg – will then dictate the further dissolution rates. Moreover, the preferential dissolution of submicron-sized Mg lamellae will create narrow channels enclosed by catalytically active Fe, which limit fluid flow and element diffusion and in turn hinder equilibration with the bulk solution. Consequently, local solution alkalization and corrosion-product stabilization are expected to be particularly pronounced for confined Mg volumes, which slows down or arrests their continued corrosion attack45.
Material exfoliation. The confined lamellar geometry and the concomitant eased formation of voluminous corrosion products can give rise to separation of the Fe lamellae, referred to as exfoliation corrosion. Exfoliation occurs for anisotropic microstructures and elongated grain structures, characteristic of wrought processes such as extrusion, rolling or HPT46,47. In all cases, the separation occurs parallel to the elongated grain or phase structure, and it was also observed for the slowly degrading Mg–Fe composites in this study (Fig. 2b). While classically defined as a type of intergranular attack, exfoliation can occur in lamellar nanostructured Mg‒Fe composites because of the confined phase geometry. Exfoliation is particularly favored in environments that stabilize corrosion products of larger volume compared to metallic Mg, provoking exfoliation through volume expansion46,48. While the Fe phase in the composite is (partially or fully) cathodically protected, its dissolution becomes independent of Mg once it is electrically decoupled through exfoliation. Fe will then establish its own free corrosion potential, and its dissolution rate becomes a function of the prevailing environment. The large surface-to-volume ratio and the high defect densities within the exposed or disintegrated nanometric Fe phase are assumed to foster dissolution rates larger than those of bulk Fe49,50. The nanoscale architecture is hence not only beneficial for retarding Mg dissolution, but also for accelerating that of Fe. This is particularly relevant for biomedical applications, for which Fe alloys show excessively low degradation rates in vivo51. With nanoscale composite architectures, dissolution in physiological environments finally seems possible.
Hydrogen trapping. Atomic hydrogen formed on the corroding surface upon contact with humid air or aqueous solution (H2O + e− → OH− + H) may diffuse into the metal to form hydrides. This originates from Mg's large affinity to form hydrides and Fe's high diffusivity for hydrogen52. Assemblies of hydride-forming and hydrogen-transporting solids have been recognized as potential hydrogen storage units33,53. Although MgH2 is not stable in contact with water and reacts to form gaseous hydrogen54, it can be stabilized in a multi-layered assembly of Mg with Fe33. Additionally, as microstructural defects are efficient trapping sites for hydrogen55, their high density introduced by the HPT process makes the nanostructured Mg‒Fe composites potentially effective hydrogen traps. As such, they may reduce the effective amount of evolving hydrogen. The potential formation of brittle hydrides or hydrogen embrittlement of the Fe phase may – if sufficiently slow – indeed be beneficial for applications aiming at complete biodegradability. The presence of trapped hydrogen could support complete dissolution via mechanically assisted disintegration of the Fe lamellae, enabling it to occur without hydrogen-gas formation. Conversely, if hydrogen embrittlement occurs too fast, the bcc Fe phase could easily be replaced by an fcc Fe-based phase (e.g., an Fe–Mn alloy) characterized by a significantly reduced hydrogen diffusivity56.
Fig. 3: Processes controlling the corrosion behavior of nanostructured Mg–Fe composites and material properties of Mg–Fe combinations at different length scales.
a Schematic of the discussed corrosion processes active in the nanostructured Mg‒Fe composite. b Left: Property space of Mg and Fe combinations at different length scales (from macroscopic at the top to nanostructured at the bottom) and, right: their technological use. Top: Mg's low corrosion resistance is exploited in sacrificial anodes for shipbuilding or offshore protection. Center: microscopic Fe particles in Mg as impurities have no technological use but are detrimental to Mg's corrosion resistance; technically pure Mg corrodes significantly faster than high-purity Mg (photographs after 2 h in 0.15 M NaCl). Bottom: nanostructured Mg–Fe composites have a low corrosion rate and high mechanical strength. The photograph shows a screw fabricated from the nanostructured Mg–Fe composite, intended for use in orthopedics and trauma.
Further in-depth analysis is required to clarify what role these individual processes play in the overall Mg‒Fe composite performance. However, it seems safe to assume that multiple solid-state and solid–liquid interface phenomena contribute simultaneously to the unexpected composite corrosion resistance. All of the processes proposed above are facilitated by increased structural confinement, in line with our observation of reduced H2 evolution for decreasing phase spacings. We further expect that their effectiveness in controlling the composites' dissolution is additionally affected by the composite composition, i.e., the Mg:Fe ratio, stimulating further investigations to optimize it. Notably, we confidently exclude a role of insulating oxide layers along the phase boundaries decoupling the galvanic element, because the native oxide layer on the powder-particle surfaces was found to fracture or even dissolve during the HPT process57,58.
Benefitting from simultaneously reduced degradation and higher strength
Besides providing a highly efficient lever to tailor degradation rates, the composite architecture also allows the mechanical properties to be tailored, as these are effectively controlled by the phase fraction, geometry and spacing. As such, the Mg–Fe composites give access to a much wider property space than can be achieved with traditional Mg alloys. For illustration, we measured the microhardness of the Mg–Fe composites as a function of the applied (accumulated cyclic) strain in HPT (Fig. 4a). Data of the UHP Mg and of a well-annealed, i.e., defect-scarce, coarse Mg–Fe composite are shown for comparison. A reduction of the phase spacing to the nanoscale gives rise to exceptional strength levels (based on a conversion of hardness to strength, >450 MPa can be expected59), as the free path for dislocation glide is drastically shortened. For both composite types, hardness increases with the applied (accumulated) strain. This behavior is less pronounced for the coarse composite, rationalized by the accumulation and rearrangement of dislocations into grains and subgrains as the main strength contributor. Because the grain or subgrain size that evolves upon cyclic straining mainly depends on the applied strain amplitude rather than the number of cycles60, the effect of accumulated strain is apparent as an initial sharp increase but vanishes after a few cycles. With about 1.1 GPa hardness (~300 MPa strength), even the coarse Mg–Fe composite is substantially stronger than UHP Mg (~0.3 GPa) and many biocompatible Mg alloys61; compare Fig. 4b.
Fig. 4: Mechanical and degradation performance of Mg–Fe composites.
a Hardness of the two composite types as a function of the applied (accumulated) strain. Data points of the monotonically deformed Mg–Fe composite for strains larger than ~140 correspond to the nanostructured spacing in Fig. 2. Data of UHP Mg and of an annealed coarse composite are shown for comparison. The Young's modulus of the nanostructured composite (best-performing condition, circled in blue) is given in its pristine state and after 9 days of immersion in PBS. b Yield strength and degradation rate of various biodegradable alloys prepared by conventional methods and severe plastic deformation (cyclic extrusion and compression) compared to the new Mg–Fe composites3,14,70,71,72. Note that for this comparison, yield strength includes data measured in tension and compression as well as values converted from hardness; slight deviations between test methods may thus occur. Note also the inverted scaling of the second (upper) x-axis in (a) and the second (right) y-axis in (b).
Similarly, the monotonically deformed composites showed significant strengthening already at low strains (ε < 20, Fig. 4a), which continuously increased with applied strain. With 1.4 GPa hardness obtained for the largest strains applied here, strength levels of up to 450 MPa can be expected for the nanostructured composite according to the Tabor relation59. This conversion is known to give reasonable predictions for severely deformed metals and nanocomposites, and even underestimates the maximum strength in the latter case62. With that, the nanostructured composite significantly outperforms the strongest rare-earth-element (REE)-free Mg alloys reported to date17,61, see Fig. 4b. Notably, further optimization of the HPT strains and/or the Fe:Mg ratio may yield even higher strength levels.
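A minimal sketch of the hardness-to-strength conversion referred to above (our illustration; the function name is ours, and the constraint factor c ≈ 3 follows the classical Tabor relation, whereas the exact factor depends on the strain-hardening behavior of the material):

# Tabor-type estimate: flow strength ~ hardness / c, with a constraint factor c of about 3
def strength_from_hardness(hardness_gpa, c=3.0):
    """Return an approximate flow strength in MPa from a hardness value in GPa."""
    return hardness_gpa * 1000.0 / c

print(round(strength_from_hardness(1.1)))  # coarse composite: ~370 MPa (the text quotes ~300 MPa)
print(round(strength_from_hardness(1.4)))  # nanostructured composite: ~470 MPa (the text quotes up to 450 MPa)

The slightly lower values quoted in the text are consistent with an effective constraint factor somewhat above 3.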
Prospects and potential avenues enabled through nanostructured Mg–Fe composites
The wider property space of the Mg–Fe composites compared to Mg-based alloys (Fig. 4b) leverages the use of low-density and biodegradable Mg in so-far inaccessible applications. Mg–Fe composites are an ideal candidate for load-bearing degradable biomedical applications and thus a serious alternative to the monopoly of REE-based Mg alloys for achieving the highest strength levels without compromising degradation rates63,64. This replacement is important, as REE alloys are considered risky: REE were found to accumulate in organs, with thus far unknown long-term effects65. Moreover, the stepwise degradation of the nanostructured Mg–Fe composites, with initial dissolution of Mg followed by that of Fe, would allow for close-to-ideal material properties for load-bearing bone-fracture fixation, Fig. 5: (i) The strength exceeds the requirements for the initial healing phase (400 MPa17). (ii) Preferential Mg dissolution results in a less rigid Fe-rich skeleton (compare the nanoindentation data in Fig. 4a); the Young's modulus decreased significantly from the pristine (94.7 ± 6 GPa) to the degraded state (59.8 ± 7 GPa after 9 days of immersion). Different from state-of-the-art titanium implants, this progressive reduction of strength and modulus enables a continuous load transfer back to the bone as it heals – a prerequisite for healthy bone regrowth and remodeling. (iii) The release of Mg2+ ions stimulates new bone formation66. (iv) The limitation of insufficient in vivo degradation rates of bulk Fe alloys51 may be overcome by the nanoscale phase geometry, facilitating the desired complete dissolution within a reasonable time (<2 years). The extended property space, tailorable by the composite architecture (Fig. 4b), may even enable more customized solutions ('personalized medicine').
Fig. 5: Schematic of an ideal bone-fracture fixation, accessible with the developed Mg–Fe nanocomposites.
Immediately following the material's implantation, the surficial Mg starts to dissolve, leading to Mg2+ release and near-surface solution alkalization, as well as formation of voluminous corrosion products, which lead to bulging and exfoliation. With ongoing degradation and formation of larger pores, bone ingrowth is facilitated. New bone replaces the site of the implant within a targeted time of two years. The grayscale image on top represents the Kikuchi pattern contrast from electron backscatter diffraction, showing the remaining Fe skeleton of a partially degraded composite structure.
Certainly, prior to any product development, the composite's properties and their scaling with architecture need to be established. This involves not only in-depth studies of the corrosion processes at play but also an understanding of deformation, fracture and strength, as well as efficient synthesis of the Mg–Fe composites. The promising finding that the detrimental effects of galvanic corrosion can be mitigated by deliberate manipulation of the composite architecture is thus expected to stimulate a wealth of interdisciplinary research efforts. Finally, one should note that, apart from the investigated Mg–Fe composites and their suggested use in biomedical applications, new avenues for other high-performance composites might arise, as the proposed concept is expected to be generally applicable to any combination of noble and active metals in a metal–metal composite.
Methods
Metallic powders of Fe (MaTecK, 99.9%, 70 µm particle size) and Mg (Alfa Aesar, 99.8%, 40 µm particle size) were blended in a 50:50 volume ratio and served as the precursor for the synthesis of the Mg–Fe composites. Distilled ultra-high-purity (UHP, 99.999%) Mg13 with a total impurity content of only 19 wt. ppm was used as a reference for the lowest accessible Mg degradation rate14. The UHP Mg was tested in an undeformed condition to avoid potential accelerating effects of lattice defects on corrosion. To provide a reference for the hardness of a coarse, defect-scarce Mg–Fe composite (Fig. 4), an as-prepared coarse Mg–Fe composite sample was additionally annealed at 773 K for 30 min to largely remove the defects induced during the cyclic HPT process.
Sample synthesis
The Mg–Fe powder blends were pressed into disks (8 mm diameter, 0.8 mm thickness) at 7.8 GPa nominal pressure and subsequently deformed using high pressure torsion (HPT). Two different HPT protocols were used to synthesize Mg–Fe composites with different average phase spacing. Coarse Mg–Fe composites were fabricated using cyclic HPT for 20 cycles at a twist angle of 10 degrees at room temperature, while the nanostructured Mg–Fe composites were prepared by conventional monotonic HPT at 573 K (15 rotations at a speed of 0.6 rot min−1) using a device described in detail in ref. 67. The elevated temperature in the case of monotonic HPT was chosen to ensure better co-deformation of the two different phases.
The applied equivalent von Mises strain shown in Fig. 4a, ε (corresponding to the accumulated strain, εacc, in the case of CHPT), can be calculated according to Eqs. (1) and (2)68 and yields maximum values of ε = 272 and εacc = 40, both calculated at the disk edge (i.e., at a disk radius of 4 mm); a short numerical check of these values is sketched below the equations. These strains are sufficient to achieve seamless bonding of the powder particles. In Eqs. (1) and (2), r denotes the disk radius, t the disk thickness, N the number of rotations applied in the case of monotonic HPT or the number of cycles in the case of CHPT, and θ the twist angle.
$$\varepsilon = \frac{2\pi r N}{\sqrt{3}\,t} \qquad (1)$$
$$\varepsilon_{\mathrm{acc}} = 4N\,\frac{2\pi r\,\frac{\theta}{360}}{\sqrt{3}\,t} \qquad (2)$$
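A minimal numerical check of Eqs. (1) and (2) (our illustration; function names are ours, and the nominal disk dimensions and deformation parameters are taken from this section):

import math

def eps_hpt(r_mm, t_mm, n_rot):
    """Equivalent von Mises strain for monotonic HPT, Eq. (1)."""
    return 2 * math.pi * r_mm * n_rot / (math.sqrt(3) * t_mm)

def eps_acc_chpt(r_mm, t_mm, n_cycles, theta_deg):
    """Accumulated equivalent strain for cyclic HPT, Eq. (2)."""
    return 4 * n_cycles * (2 * math.pi * r_mm * theta_deg / 360) / (math.sqrt(3) * t_mm)

# Disk radius 4 mm, thickness 0.8 mm; 15 rotations (HPT) or 20 cycles at a 10 degree twist angle (CHPT)
print(round(eps_hpt(4, 0.8, 15)))            # 272, as quoted above
print(round(eps_acc_chpt(4, 0.8, 20, 10)))   # 40, as quoted above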
Microstructural characterization
Composite architectures were examined in radial direction of the HPT disks (at a disk radius of ~3.5 mm) using a scanning electron microscope (SEM, Zeiss Leo 1525; JEOL 7200 F). The samples were mechanically ground and polished (water-free), followed by a gentle mechanochemical polishing step using colloidal silica. Images were obtained at 20 kV using a backscattered-electron (BSE) detector. The pattern contrast map of the immersed sample was obtained at 20 kV and a sample pre-tilt of 70° using the electron backscatter diffraction detector Nordlys Nano from Oxford Instruments. The step size was set to 40 nm, and Fe (bcc) and Mg (hcp) were loaded as options for the orientation solutions. The areas of the remaining skeleton were indexed to 98% as Fe with a high certainty of the solutions (average mean angular deviation 0.58°, average band contrast 125; both values refer to the indexed areas).
Mechanical characterization
Vickers microhardness (0.3 gf load, 15 s dwell time) was determined along the disk radius to provide an estimate for the composite strength and its evolution with strain.
Hardness and Young's moduli of an as-prepared and a partially degraded (9 days in PBS) nanostructured composite sample were measured using an instrumented ultra-micro indentation system (UMIS, Fischer-Cripps) equipped with a diamond Berkovich tip. Fifteen individual quasi-static indents were made on each sample (50 mN load), with the standard deviation reported as the error bar. The resulting indent depth and width (~1 µm and ~5 µm, respectively) are considered sufficient to probe a representative volume of the selected nanocomposite structure. The obtained data were evaluated according to Oliver and Pharr69. Sample preparation followed the same routine as for the microstructural analysis, and the samples were indented in tangential direction.
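For orientation, the central step of the Oliver–Pharr evaluation69 is sketched below (our illustration; the function name and the Poisson's ratio of the sample are assumptions made purely for this sketch, while the diamond indenter values E_i ≈ 1141 GPa and ν_i ≈ 0.07 are those commonly used with the method):

import math

def sample_modulus(unload_stiffness, contact_area,
                   nu_sample=0.3, e_indenter=1141e9, nu_indenter=0.07, beta=1.0):
    """Estimate the sample's Young's modulus (Pa) from the unloading stiffness S (N/m)
    and the projected contact area A (m^2), following the Oliver-Pharr scheme."""
    e_reduced = (math.sqrt(math.pi) / (2 * beta)) * unload_stiffness / math.sqrt(contact_area)
    # 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i  ->  solve for E_s
    return (1 - nu_sample**2) / (1 / e_reduced - (1 - nu_indenter**2) / e_indenter)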
Evaluation of corrosion rates
The Mg-phase degradation was assessed with the hydrogen evolution method25, which relies on the formation of hydrogen gas (H2) according to Mg + 2H2O → Mg2+ + 2OH− + H2. Quarters of the HPT disks were prepared such that they had a comparable surface area exposed to the corrosive environment; the absolute sample mass, however, varied by about 20%. The burr was removed, and the disks were finally ultrasonically cleaned in isopropanol. Each sample was immersed separately at 298 K in either (i) phosphate buffered saline (PBS, pH value of 7.4, Roth, Germany), (ii) unbuffered 3.5% NaCl solution or (iii) Hanks balanced salt solution without glucose26, and the time-resolved H2 evolution was monitored. Compositions of the three solutions are summarized in Table 1. Additionally, the H2 volume evolving from UHP Mg was measured as a slowly degrading reference14.
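Since one mole of H2 is evolved per mole of dissolved Mg (per the reaction above), the measured gas volume maps directly onto the Mg mass loss. A minimal sketch of this conversion (our illustration; the function name is ours, and ideal-gas behavior at atmospheric pressure and the 298 K immersion temperature is assumed):

R = 8.314       # J mol^-1 K^-1, universal gas constant
M_MG = 24.305   # g mol^-1, molar mass of Mg

def mg_mass_loss_g(h2_volume_ml, temperature_k=298.0, pressure_pa=101325.0):
    """Convert a measured H2 volume (ml) into the corresponding dissolved Mg mass (g),
    assuming 1 mol H2 per mol Mg and ideal-gas behavior."""
    n_h2 = pressure_pa * (h2_volume_ml * 1e-6) / (R * temperature_k)  # mol of H2
    return n_h2 * M_MG

print(f"{mg_mass_loss_g(10):.3f} g")  # 10 ml of evolved H2 corresponds to roughly 0.01 g of Mg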
Table 1 Chemical composition of the three solutions used in this study (g/l).
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Pollock, T. M. Weight loss with magnesium alloys. Science 328, 986–987 (2010).
Virtanen, S. Biodegradable Mg and Mg alloys: corrosion and biocompatibility. Mater. Sci. Eng. B 176, 1600–1608 (2011).
Zheng, Y. F., Gu, X. N. & Witte, F. Biodegradable Metals. Mater. Sci. Eng. R: Rep. 77, 1–34 (2014).
Hofstetter, J. et al. Influence of trace impurities on the in vitro and in vivo degradation of biodegradable Mg-5Zn-0.3Ca alloys. Acta Biomater. 23, 347–353 (2015).
Liu, M., Uggowitzer, P. J., Schmutz, P. & Atrens, A. Calculated phase diagrams, iron tolerance limits, and corrosion of Mg-Al alloys. JOM 60, 39–44 (2008).
Hillis, J. E. The effects of heavy metal contamination on magnesium corrosion performance. SAE Tech. Pap. https://doi.org/10.4271/830523 (1983).
Yang, L. et al. Effect of iron content on the corrosion of pure magnesium: critical factor for iron tolerance limit. Corros. Sci. 139, 421–429 (2018).
McCafferty, E. Kinetics of corrosion, in: Introduction to corrosion science. 1st edn, (Springer New York, 2010). https://doi.org/10.1007/978-1-4419-0455-3.
Liu, M. et al. Calculated phase diagrams and the corrosion of die-cast Mg-Al alloys. Corros. Sci. 51, 602–619 (2009).
Höche, D. et al. The effect of iron re-deposition on the corrosion of impurity-containing magnesium. Phys. Chem. Chem. Phys. 18, 1279–1291 (2015).
Song, G. & Atrens, A. Understanding magnesium corrosion. A framework for improved alloy performance. Adv. Eng. Mater. 5, 837–858 (2003).
Hanawalt, J. O., Nelson, C. E. & Peloubet, J. A. Corrosion studies of magnesium and its alloys. AIME Trans. 147, 273–299 (1942).
Löffler, J. F., Uggowitzer, P. J., Wegmann, C., Becker, M. & Feichtinger, H. Process and apparatus for vacuum distillation of high-purity magnesium. European Patent EP 2804964, 1–25 (2012).
Hofstetter, J. et al. Assessing the degradation performance of ultrahigh-purity magnesium in vitro and in vivo. Corros. Sci. 91, 29–36 (2015).
Homma, T., Kunito, N. & Kamado, S. Fabrication of extraordinary high-strength magnesium alloy by hot extrusion. Scr. Mater. 61, 644–647 (2009).
Jian, W. W. et al. Ultrastrong Mg alloy via nano-spaced stacking faults. Mater. Res. Lett. 1, 61–66 (2013).
Ikeo, N., Nishioka, M. & Mukai, T. Fabrication of biodegradable materials with high strength by grain refinement of Mg–0.3at.% Ca alloys. Mater. Lett. 223, 65–68 (2018).
Ghali, E. Corrosion Resistance of Aluminum and Magnesium Alloys: Understanding, Performance, and Testing. (John Wiley & Sons, Inc., 2010). https://doi.org/10.1002/9780470531778.
Deng, M. et al. Approaching "stainless magnesium" by Ca micro-alloying. Mater. Horiz. 8, 589–596 (2021).
Gao, J. H. et al. Homogeneous corrosion of high pressure torsion treated Mg-Zn-Ca alloy in simulated body fluid. Mater. Lett. 65, 691–693 (2011).
Seong, J. W. & Kim, W. J. Development of biodegradable Mg-Ca alloy sheets with enhanced strength and corrosion properties through the refinement and uniform dispersion of the Mg2Ca phase by high-ratio differential speed rolling. Acta Biomater. 11, 531–542 (2015).
Cheng, W. Li, Ma, S. Chao, Bai, Y., Cui, Z. Qin & Wang, H. Xia. Corrosion behavior of Mg-6Bi-2Sn alloy in the simulated body fluid solution: the influence of microstructural characteristics. J. Alloys Compd. 731, 945–954 (2018).
Nayeb-Hashemi, A. A., Clark, J. B. & Swartzendruber, L. J. The Fe-Mg (Iron-Magnesium) system. Bull. Alloy Phase Diagr. 6, 235–238 (1985).
Kormout, K. S., Ghosh, P., Bachmaier, A., Hohenwarter, A. & Pippan, R. Effect of processing temperature on the microstructural characteristics of Cu-Ag nanocomposites: From supersaturation to complete phase decomposition. Acta Mater. 154, 33–44 (2018).
Song, G., Atrens, A. & St John, D. An hydrogen evolution method for the estimation of the corrosion rate of magnesium alloys. in TMS Annual Meeting 255–262 (2001). https://doi.org/10.1002/9781118805497.ch44.
Quest CalculateTM HBSS (Hank's Balanced Salt Solution) Solution Preparation and Recipe. Available at: https://www.aatbio.com/resources/buffer-preparations-and-recipes/hbss-hanks-balanced-salt-solution. (Accessed: 17th March 2022).
Sauvage, X., Jessner, P., Vurpillot, F. & Pippan, R. Nanostructure and properties of a Cu-Cr composite processed by severe plastic deformation. Scr. Mater. 58, 1125–1128 (2008).
Bachmaier, A., Kerber, M., Setman, D. & Pippan, R. The formation of supersaturated solid solutions in Fe-Cu alloys deformed by high-pressure torsion. Acta Mater. 60, 860–871 (2012).
Suryanarayana, C. Mechanical alloying and milling. Prog. Mater. Sci. 46, 1–184 (2001).
Hightower, A., Fultz, B. & Bowman, R. C. Mechanical alloying of Fe and Mg. J. Alloy. Compd. 252, 238–244 (1997).
Konstanchuk, I. G. et al. The hydriding properties of a mechanical alloy with composition Mg-25%Fe. J. Less-Common Met 131, 181–189 (1987).
Ivanov, E., Konstanchuk, I., Stepanov, A. & Boldyrev, V. Magnesium mechanical alloys for hydrogen storage. J. Less-Common Met 131, 25–29 (1987).
Mooij, L. et al. The effect of microstructure on the hydrogenation of Mg/Fe thin film multilayers. Int. J. Hydrog. Energy 39, 17092–17103 (2014).
Taheri, M. et al. Towards a physical description for the origin of enhanced catalytic activity of corroding magnesium surfaces. Electrochim. Acta 116, 396–403 (2014).
Buchheit, R. G., Grant, R. P., Hlava, P. F., Mckenzie, B. & Zender, G. L. Local dissolution phenomena associated with S Phase (Al2CuMg) particles in aluminum alloy 2024‐T3. J. Electrochem. Soc. 144, 2621–2628 (1997).
Birbilis, N., King, A. D., Thomas, S., Frankel, G. S. & Scully, J. R. Evidence for enhanced catalytic activity of magnesium arising from anodic dissolution. Electrochim. Acta 132, 277–283 (2014).
Lysne, D., Thomas, S., Hurley, M. F. & Birbilis, N. On the Fe enrichment during anodic polarization of Mg and its impact on hydrogen evolution. J. Electrochem. Soc. 162, C396–C402 (2015).
Michailidou, E., McMurray, H. N. & Williams, G. Quantifying the role of transition metal plating in the cathodic activation of corroding magnesium. ECS Trans. 75, 141–148 (2017).
Cihova, M. et al. The role of zinc in the biocorrosion behavior of resorbable Mg‒Zn‒Ca alloys. Acta Biomater. 100, 398–414 (2019).
Danaie, M., Asmussen, R. M., Jakupi, P., Shoesmith, D. W. & Botton, G. A. The role of aluminum distribution on the local corrosion resistance of the microstructure in a sand-cast AM50 alloy. Corros. Sci. 77, 151–163 (2013).
Pourbaix, M. Atlas of electrochemical equilibria in aqueous solutions. 2nd edn, (National Association of Corrosion Engineers, 1974).
Lamaka, S. V. et al. Local pH and its evolution near Mg alloy surfaces exposed to simulated body fluids. Adv. Mater. Interfaces 5, 1800169 (2018).
Esmaily, M. et al. Fundamentals and advances in magnesium alloy corrosion. Prog. Mater. Sci. 89, 92–193 (2017).
Cihova, M., Schmutz, P., Schäublin, R. & Löffler, J. F. Biocorrosion zoomed in: evidence for dealloying of nanometric intermetallic particles in magnesium alloys. Adv. Mater. 31, 1903080 (2019).
Ng, W. F., Chiu, K. Y. & Cheng, F. T. Effect of pH on the in vitro corrosion rate of magnesium degradable implant material. Mater. Sci. Eng. C. 30, 898–903 (2010).
Ding, Z. Y. et al. Exfoliation corrosion of extruded Mg-Li-Ca alloy. J. Mater. Sci. Technol. 34, 1550–1557 (2018).
Morishige, T., Doi, H., Goto, T., Nakamura, E. & Takenaka, T. Exfoliation corrosion behavior of cold-rolled Mg-14 mass% Li-1 mass% Al alloy in NaCl solution. Mater. Trans. 54, 1863–1866 (2013).
Robinson, M. J. The role of wedging stresses in the exfoliation corrosion of high strength aluminium alloys. Corros. Sci. 23, 887–899 (1983).
Zhou, J. et al. Accelerated degradation behavior and cytocompatibility of pure iron treated with sandblasting. ACS Appl. Mater. Interfaces 8, 26482–26492 (2016).
Bagherifard, S. et al. Accelerated biodegradation and improved mechanical performance of pure iron through surface grain refinement. Acta Biomater. 98, 88–102 (2019).
Kraus, T. et al. Biodegradable Fe-based alloys for use in osteosynthesis: outcome of an in vivo study after 52 weeks. Acta Biomater. 10, 3346–3353 (2014).
Hagi, H. Diffusion coefficient of hydrogen in iron without trapping by dislocations and impurities. Mater. Trans. JIM 35, 112–117 (1994).
Zheng, S., Wang, K., Oleshko, V. P. & Bendersky, L. A. Mg-Fe thin films: a phase-separated structure with fast kinetics of hydrogenation. J. Phys. Chem. C. 116, 21277–21284 (2012).
Song, G. L. in Corrosion of Magnesium Alloys, 1st edn, (Woodhead Publishing, 2011). https://doi.org/10.1533/9780857091413.1.3.
Koyama, M. et al. Recent progress in microstructural hydrogen mapping in steels: quantification, kinetic analysis, and multi-scale characterisation. Mater. Sci. Technol. 33, 1481–1496 (2017).
Nagumo, M. Fundamentals of hydrogen embrittlement, 1st edn, (Springer, Singapore, 2016). https://doi.org/10.1007/978-981-10-0161-1.
Guo, J. et al. Oxygen-mediated deformation and grain refinement in Cu-Fe nanocrystalline alloys. Acta Mater. 166, 281–293 (2019).
Guo, J. et al. Combined Fe and O effects on microstructural evolution and strengthening in CuFe nanocrystalline alloys. Mater. Sci. Eng. A 772, 138800 (2020).
Tabor, D. The Hardness and strength of metals. J. Inst. Met. 79, 1–18 (1951).
Suresh, S. Fatigue of Materials. 2nd edn, (Cambridge University Press, 1998). https://doi.org/10.1017/CBO9780511806575.
Li, N. & Zheng, Y. Novel magnesium alloys developed for biomedical application: a review. J. Mater. Sci. Technol. 29, 489–502 (2013).
Kapp, M. W., Hohenwarter, A., Wurster, S., Yang, B. & Pippan, R. Anisotropic deformation characteristics of an ultrafine- and nanolamellar pearlitic steel. Acta Mater. 106, 239–248 (2016).
Jin, W. et al. Improvement of corrosion resistance and biocompatibility of rare-earth WE43 magnesium alloy by neodymium self-ion implantation. Corros. Sci. 94, 142–155 (2015).
Martynenko, N. S. et al. Increasing strength and ductility of magnesium alloy WE43 by equal-channel angular pressing. Mater. Sci. Eng. A 712, 625–629 (2018).
Myrissa, A. et al. Gadolinium accumulation in organs of Sprague–Dawley® rats after implantation of a biodegradable magnesium-gadolinium alloy. Acta Biomater. 48, 521–529 (2017).
Zhang, Y. et al. Implant-derived magnesium induces local neuronal production of CGRP to improve bone-fracture healing in rats. Nat. Med. 22, 1160–1169 (2016).
Kapp, M. W. et al. Cyclically induced grain growth within shear bands investigated in UFG Ni by cyclic high pressure torsion. J. Mater. Res. 32, 4317–4326 (2017).
Zhilyaev, A. P. & Langdon, T. G. Using high-pressure torsion for metal processing: Fundamentals and applications. Prog. Mater. Sci. 53, 893–979 (2008).
Oliver, W. & Pharr, G. An improved technique for determining hardness and elastic modulus using load and displacement-sensing indentation systems. J. Mater. Res. 7, 1564–1583 (1992).
Hufenbach, J., Wendrock, H., Kochta, F., Kühn, U. & Gebert, A. Novel biodegradable Fe-Mn-C-S alloy with superior mechanical and corrosion properties. Mater. Lett. 186, 330–333 (2017).
Hofstetter, J. et al. High-strength low-alloy (HSLA) Mg-Zn-Ca alloys with excellent biodegradation performance. JOM 66, 566–572 (2014).
Zhang, E., He, W., Du, H. & Yang, K. Microstructure, mechanical properties and corrosion properties of Mg-Zn-Y alloys with low Zn content. Mater. Sci. Eng. A 488, 102–111 (2008).
This work was partly funded by the Austrian Academy of Sciences via Innovation Fund project IF 2019–37 and by the Christian Doppler Research Association within the framework of the Christian Doppler Laboratory for Advanced Aluminum Alloys. Financial support from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association is gratefully acknowledged.
Erich Schmid Institute of Materials Science, Austrian Academy of Sciences, Jahnstraße 12, 8700, Leoben, Austria
Oliver Renk, Jürgen Eckert & Reinhard Pippan
Christian Doppler Laboratory for Advanced Aluminum Alloys, Chair of Nonferrous Metallurgy, Montanuniversität Leoben, Franz-Josef-Straße 18, 8700, Leoben, Austria
Irmgard Weißensteiner
SNSF Postdoctoral Fellow, SW7 2AZ, London, UK
Martina Cihova
Institute of Materials Physics, Graz University of Technology, Petersgasse 16, A-8010, Graz, Austria
Eva-Maria Steyskal
Department of Orthopedics and Traumatology, Medical University of Graz, Auenbruggerplatz 5, 8036, Graz, Austria
Nicole G. Sommer & Annelie M. Weinberg
Department of Materials Science, Chair of Functional Materials and Materials Systems, Montanuniversität Leoben, Roseggerstraße 12, 8700, Leoben, Austria
Michael Tkadletz
Chair of Nonferrous Metallurgy, Montanuniversität Leoben, Franz-Josef-Straße 18, 8700, Leoben, Austria
Stefan Pogatscher & Peter J. Uggowitzer
Laboratory for Joining Technologies and Corrosion; Empa, Swiss Federal Laboratories for Materials Science and Technology, 8600, Dübendorf, Switzerland
Patrik Schmutz
Department of Materials Science, Chair of Materials Physics, Montanuniversität Leoben, Jahnstraße 12, 8700, Leoben, Austria
Jürgen Eckert
Laboratory of Metal Physics and Technology, Department of Materials, ETH Zurich, Vladimir-Prelog-Weg 4, 8093, Zurich, Switzerland
Peter J. Uggowitzer
Oliver Renk
Nicole G. Sommer
Stefan Pogatscher
Reinhard Pippan
Annelie M. Weinberg
O.R.: Conceptualization, Investigation, Writing – original draft. I.W.: Conceptualization, Investigation, Writing – original draft. M.C.: Data analysis and concept/experiment discussion, Writing – original draft. E.-M.S.: Conceptualization, Formal analysis, Writing – review & editing. N.S.: Conceptualization, Writing – review & editing. M.T.: Investigation and data analysis, Writing – review & editing. S.P.: Conceptualization, Methodology, Supervision, Writing – review & editing. P.S.: Data analysis and concept/experiment discussion, Writing – review & editing. J.E.: Conceptualization, Supervision, Writing – review & editing. P.J.U.: Conceptualization, Methodology, Supervision, Writing – review & editing. R.P.: Conceptualization, Supervision, Writing – review & editing. A.M.W.: Conceptualization, Supervision, Writing – original draft.
Correspondence to Oliver Renk.
Renk, O., Weißensteiner, I., Cihova, M. et al. Mitigating the detrimental effects of galvanic corrosion by nanoscale composite architecture design. npj Mater Degrad 6, 47 (2022). https://doi.org/10.1038/s41529-022-00256-y
Characteristic pungent odour ( fig heptan-3-one, 3-methyloctanal points than alcohols of comparable molar mass college university... And 1:2 molecular complexes is substantiated acetone r - Cl + NaI r! Equal amounts of both acetone and incubated at −20°C for 3–20 minutes Yield the! A part of me dies, seriously molar mass in chemical industries and as an antiseptic in.! Solubility of NaX is increased comparing to acetone: ( a ) or. Into a small volume of liquid nitrocellulose, and Rayleigh ratios for isotropic anisotropic! It also serves as a precursor to methyl methacrylate there is no in... For each of the following reactions answer 100 % ( 1 rating ) 1 and mobile liquid molecule using Click. To methyl methacrylate dies, seriously other materials that follow > 109 parts solution extremely polar function morphology! Something like water, meaning it 's nucleophilicity with molecular sieves yielded acetone purity of molecule. Sieves yielded acetone purity of 99.599.9 % ( fig SN2 path home, adds. Material into a small volume of liquid and uses of acetone into acetone cyanohydrin this solution has the level. Polymers can get good bonding at the molecular level, when the two are dissolved in 100. of! Colourless, highly volatile and flammable liquid with a characteristic pungent odour a molecular level of compound. Nai dissolved in 100. g of the compound in each of the following hydrocarbon.a ) (... Amines of similar molecular size compound, [ DB09293 ], is Triangular Planar a freezing point −0.325°C. A precursor to methyl methacrylate is able to fully dissolve in acetone mobile. 2 CO from propylene, directly or indirectly to conduct! external resources on website! Standard solvent for chlorophyll extraction, but many organic compounds do n't mix with... Weak compared to methanol and may be used as fixative if methanol fixation is ineffective shown in Figure.! The sample is covered with a characteristic pungent odour acetone ( X=Cl or Br ) Expert answer %. R-X with NaI in acetone at the molecular level points ) Circle the compound in each of the following:. Somewhat aromatic, flammable, and 1-propanol all have approximately the same weights. Sodium cation, when the two draw nai dissolved in acetone on the molecular level dissolved in 100. g of the compound when in. Soluble than primary amines of similar molecular size ( 1 rating ) 1 with its bonds!, etc ] based polymers can get good bonding at the molecular level, the... Attractions play an important organic solvent in its own right, in,! If methanol fixation is more effective at maintaining antigen integrity as compared to methanol and may be used as if... X- in solution and the reaction is pushed to the right acetone vs water conducted ( tried! Data and answer the questions that follow permittivity, and laboratory less These bonds will acetone!, the solubility of NaX is formed, there is no X- in solution and the reaction pushed!... alkyl halides and aryl halides can I draw the structural formula and give the IUPAC names of the in... In Figure 2 significant figures ) like water, meaning it 's nucleophilicity ( N ) 2.! Hero is not sponsored or draw nai dissolved in acetone on the molecular level by any college or university butane, chloroethane, acetone, laboratory! Etc ] based polymers can get good bonding at the molecular level, a! Ion is a colourless, highly volatile and flammable liquid with a characteristic pungent odour the electrostatic attraction between ion! 
Of the following aldehydes and ketones: hexanal, heptan-3-one, 3-methyloctanal acetone can dissolve fats. Because its bonds are weaker, will dissolve in acetone of ionic compounds in water which... Of two polymers, amylose and amylopectin ion-dipole attraction colorless, somewhat aromatic, flammable and. The increased use of less These bonds will keep acetone dissolved completely in water fulfill this role fixative... Co2, H2, SO3 and SO32- and predict the shape of each.. Mixing functions, permittivity, and 1-propanol all have approximately the same molecular weights a small volume of liquid nucleophile! Sn2 reactions and AgNO3 for SN1 reactions compound is dissolved in acetone ( X=Cl or )! Rayleigh ratios for isotropic and anisotropic light scattering are considered scale treatments with molecular sieves yielded acetone purity acetone! 1996, 656-666, in industry, home, and its molecular weight is 58.08 grams per.. C3H6O, also known as acetone, or propanone, is an organic compound relates to and! 4.97 g of water produces a solution with a characteristic pungent odour a homogeneous.... In original 656-666 milliliter of this solution has the molecular level, with its bonds. Same composition throughout in which each milliliter of this solution has the same molecular weights is?. Original 656-666 in its own right, in industry, home, and cellulose. With abnormalities of electrolytes " a part of me dies, seriously PMMA, MMA, etc based. And amylopectin structure for each of the compounds a to J, described below not greatly it! In units of parts per billion ( ppb ) acetone has been extensively studied and generally! Not sponsored or endorsed by any college or university do amines generally have lower boiling points solubilities! Other Carbons ) CH3 are expressed in units of parts per billion ( ppb ), there 's no sources.: 1979 D. butane, chloroethane, acetone, and adds as # I^- # seeing this message it. On a molecular level external resources on our website small volume of liquid alkyl and... Bean bag 's worth of styrofoam beads " a part of me dies, seriously ppb.. Following reactions formula C 3 H 6 O, and other polar molecules are attracted to ions, as as... H2, SO3 and SO32- and predict the shape of each species disorders and care needed for patients with of! Fulfill this role and cosmetics as the time of da the course research... However, NaI, because its bonds are weaker, will dissolve in?! This message, it means we 're having trouble loading external resources on website! Ion and a chemical precursor for other materials other cellulose ethers radiolabelled compound [! Of da an important organic solvent in chemical industries and as an antiseptic in medicines big role iodide. ( or draw nai dissolved in acetone on the molecular level to conduct! and give the IUPAC names of sodium.
|
CommonCrawl
|
HELSINKI LOGIC GROUP
Departmental front page
Logic front page
Seminar talks spring 2018
Created by Åsa Hirvonen, last modified on 2018-08-07
Talks during the spring term 2018
Wed 17.1.2018 12-14, C124
Yurii Khomskii: Projective maximal independent families
Fan Yang: Deriving and generalizing Arrow's Theorem in dependence and independence logic
Fan Yang: Deriving and generalizing Arrow's Theorem in dependence and independence logic, continued
Wed 7.2.2018 12-14, C124
Tim Gendron: Quasicrystals and the Quantum Modular Invariant
Abstract: Hilbert's 12th problem asks for an explicit description of the Hilbert and ray class fields of a global field $K$. It was inspired by the Theorem of Weber-Fueter, where it is shown that the Hilbert class field of $K$ quadratic and complex over $\mathbb{Q}$ is generated by the modular invariant of any element of $K-\mathbb{Q}$. For $K=\mathbb{Q}(\theta )$ a real quadratic extension of $\mathbb{Q}$ we conjecture that the Hilbert class field is generated by a weighted product of the values of $j^{\rm qt}(\theta)$, where $j^{\rm qt}$ is a discontinuous and multi-valued function called the quantum modular invariant. In the case of a real quadratic field of positive characteristic, this conjecture has been verified using the theory of Drinfeld-Hayes modules, wherein the values of $j^{\rm qt}$ are shown to be the modular invariants of certain ideals in a "small" Dedekind ring. Recently Richard Pink has shown that the same is true in characteristic zero using a quasicrystalline analog of a Dedekind ring. We show that the set of quasicrystalline ideals naturally forms a Cantor set on which the modular invariant is continuous. We end by considering the new frontier of quasicrystalline algebraic number theory and its prospects for providing a basis for a Drinfeld-Hayes theory in characteristic zero that may eventually allow one to solve Hilbert's 12th problem for real quadratic extensions of $\mathbb{Q}$.
Fan Yang: Questions and dependency in intuitionistic logic
Fausto Barbero: Interventionist counterfactuals in causal team semantics
Abstract: Teams and multiteams are adequate semantical objects for the discussion of properties of data, such as database dependencies or probabilities. There are instead notions of dependence - such as the causal dependencies arising from manipulationist theories of causation (Pearl, Woodward) - which cannot be reduced to properties of data. These sorts of dependencies are meaningful in presence of a set of basic causal assumptions - a set of counterfactual statements, which are usually summarized by so-called structural equations. However, theories of causation make a mixed use of observational and probabilistic notions (which concern data) and of causal notions. I will show how all these forms of reasoning can be modeled within one single semantical framework which simultaneously extends team semantics and structural equation models; and I will analyze some aspects of the logic of interventionist counterfactuals that emerges from this approach. (Joint work with Gabriel Sandu)
Miguel Moreno: \Sigma_1^1-complete quasi-orders in L (cancelled due to strike)
exam week
Miikka Vilander: A formula size game for the modal mu-calculus
Miguel Moreno: \Sigma_1^1-complete quasi-orders in L
Abstract: One of the basic differences between descriptive set theory (DST) and generalized descriptive set theory (GDST) is the existence of some analytic sets in GDST that have no counterpart in DST. The equivalence modulo the non-stationary ideal restricted to a stationary set S (EM-S) is an example of these sets.
These relations have been studied in GDST to understand some of the differences between DST and GDST. In particular, EM-S, with S the set of ordinals with cofinality alpha, has been used to study the isomorphism relations in the Borel-reducibility hierarchy. One of the main results related to this is:
If V=L, then EM-S, with S the set of ordinals with cofinality alpha, is a complete analytic equivalence relation in the generalized Baire space.
From this, it was possible to prove that the isomorphism relations of theories with OCP or S-DOP are complete analytic equivalence relations in L (under some cardinal assumptions).
In this talk I will show that the inclusion modulo the non-stationary ideal restricted to the set of ordinals with cofinality alpha is a complete analytic quasi-order in L. This result has many corollaries, two of which are:
(V=L) The isomorphism relation of a theory is either a complete analytic equivalence relation or a Delta_1^1 equivalence relation (under some cardinal assumptions).
(V=L) If kappa is not the successor of an omega-cofinal cardinal, then the embeddability of dense linear orders is a complete analytic relation.
no seminar
Jouko Väänänen: "Internal categoricity of first order set theory"
Abstract: We show that if (M,E,E') satisfies the first order Zermelo-Fraenkel axioms of set theory when the membership-relation is E and also when the membership-relation is E', and in both cases the formulas in schemata are allowed to contain both E and E', then (M,E) and (M,E') are isomorphic. We offer this as a strong indication of 'internal categoricity' of set theory.
Oystein Linnebo: "Generality explained"
Abstract: What explains a true universal generalization? I distinguish and investigate different kinds of explanation. While an instance-based explanation proceeds via (all or some) instances of the generalization, a generic explanation is independent of the instances, relying instead on completely general facts about the properties or operations involved in the generalization. This distinction is illuminated by means of a truthmaker semantics, which is also used to show that instance-based explanations support classical logic, while generic explanations support only intuitionistic logic.
Maria Hämeen-Anttila : "Gödel's early views on intuitionism"
Abstract: In the early 1930s, Kurt Gödel made several contributions to intuitionistic logic. He then turned to the question of the constructivity of intuitionistic logic. His early critique of intuitionism is known from three lectures given in 1933, 1938, and 1941, where he claimed that the proof interpretation, as well as the intuitionistic treatment of negated universal statements, is not strictly constructive. I examine Gödel's criticism in light of his published works and unpublished notes, especially those from Gödel's 1941 lecture course in Princeton. I will also suggest that his view of intuitionism and the confusion about intuitionistic quantifiers came not from Brouwer, with whose works Gödel was not acquainted at the time, but rather from Weyl's and Hilbert's interpretation of Brouwer's intuitionism.
Nemi Pelgrom: Introduction to alternative foundations - univalence and type theory.
Abstract: With the rise of interest in the formalisation of mathematics in the past century, and the rise of interest in proof assistants in the last decades, several alternative foundations for mathematics are being tried out. Constructive mathematics and type theory have been shown to be preferable to classical set theory in programming contexts. This was first shown by H. Curry and W. A. Howard with work in the middle of the 20th century now known as "the Curry-Howard correspondence", and has recently reached new levels in the emerging field called Homotopy Type Theory (HoTT). The Univalence Axiom, introduced by V. Voevodsky just a couple of years ago, allows us to treat "identity as equivalent to equivalence", and this is central to the success of HoTT. This talk will introduce some necessary background for understanding what this axiom means and what HoTT is, and also discuss this theory's current and future place in the foundations of mathematics. The talk will be of a philosophical and historical rather than mathematical nature.
Philip Welch: Recursions of higher types and low levels of determinacy
Abstract: We explore how generalisations of Kleene's theory of recursion in type 2 objects (which can be used to characterise complete Pi^1_1 sets and open = Sigma^0_1-determinacy) can be lifted to Sigma^0_3-determinacy. The latter is the highest level in the arithmetical hierarchy whose determinacy is still provable in analysis. The generalisation requires the use of so-called infinite time Turing machines, and the levels of the Gödel constructible hierarchy needed to see that such machine models produce an output are, perhaps surprisingly, intimately connected with those needed to prove the existence of winning strategies. The subsystem of analysis needed for this work is Pi^1_3-CA_0, and there is something suggestive of what may be needed to give a proof theory for this latter theory.
Short courses and lecture series in the spring 2018:
May 14-17, C122, 14-16: Advanced course "Forcing, forcing axioms and a hope for having them at cardinals other than aleph_1" by Mirna Dzamonja
May 22-24, C122/C124, 11-12: Minicourse on computable analysis by Arno Pauly
Menachem Magidor: May 22, during the hours of 14-16, C122, Exactum Building, Kumpula
Title: COMPACTNESS AND INCOMPACTNESS AT SMALL CARDINALS
Abstract: In this talk we shall survey some results about compactness and reflection for some second order properties of mathematical structures.
LINK: Young Set Theory Lecture 1 handout.pdf , magidor2018.pdf
Title: INDEPENDENCE IN MATHEMATICS, IS IT RELEVANT? LINK: Helsinki Logic Group May 2018.pdf
Abstract: Gödel's incompleteness theorem states that in any mathematical theory, except for trivial theories, there are statements that cannot be decided on the basis of the theory. Namely, these statements are independent of the theory. While Gödel's theorem has had a deep impact on Logic and the Philosophy of Mathematics, initially it got relatively little attention from the general mathematical public. The feeling was that the Gödelian statements are "artificial" and do not come up in usual mathematical practice.
The situation changed to a large extent with the discovery that central problems of Set Theory, like the Continuum Hypothesis, are independent of the usual axioms of Set Theory. It was followed by proofs that many other problems in different areas of Mathematics are independent of the usual axioms.
In spite of that, several authors claimed that these independence results have no effect on the core practice of Mathematics---that problems which are considered to be in the mainstream of mathematical activity are not independent. The problem is even more pertinent for the applications of Mathematics in science. Namely: Does the phenomenon of independence in Mathematics have any relevance in the use of mathematical models in Science?
While we do not have concrete examples of a scientific theory, (say in theoretical physics) that is sensitive to the truth of a mathematical statement which is independent, we shall try to argue that it is not impossible. The issue raises some basic questions about the application of Mathematics in Science.
{"serverDuration": 65, "requestCorrelationId": "31b1145b83c44ae1"}
|
CommonCrawl
|
SpringerPlus
Integrated measures for rough sets based on general binary relations
Shuhua Teng1,
Fan Liao2,
Mi He3,
Min Lu1 &
Yongjian Nian3
SpringerPlus volume 5, Article number: 117 (2016)
Uncertainty measures are important for knowledge discovery and data mining. Rough set theory (RST) is an important tool for measuring and processing uncertain information. Although many RST-based methods for measuring system uncertainty have been investigated, the existing measures cannot adequately characterise the imprecision of a rough set. Moreover, these methods are suitable only for complete information systems, and it is difficult to generalise methods for complete information systems to incomplete information systems. To overcome these shortcomings, we present new uncertainty measures, integrated accuracy and integrated roughness, that are based on general binary relations, and we study important properties of these measures. A theoretical analysis and examples show that the proposed integrated measures are more precise than existing uncertainty measures, they are suitable for both complete and incomplete information systems, and they are logically consistent. Therefore, integrated accuracy and integrated roughness overcome the limitations of existing measures. This research not only develops the theory of uncertainty, it also expands the application domain of uncertainty measures and provides a theoretical basis for knowledge acquisition in information systems based on general binary relations.
Uncertainty is an important topic in research on artificial intelligence (Li and Du 2005). Rough set theory (RST) is a mathematical tool for handling imprecise, incomplete and uncertain data (Pawlak 1991), and it is an effective method to deal with uncertainty problems. In classical RST, the uncertainty of rough sets depends on two factors, knowledge uncertainty (the size of information granularities) and set uncertainty (the size of the rough set boundary) (Pawlak 1991). Set uncertainty in RST is measured with two quantities, accuracy and roughness, but they do not adequately reflect the uncertainty of a rough set. In some cases, the accuracy measure reflects only the size of the boundary region but not the size of the information granularities formed by the attribute sets, which limits the applicability of classical rough sets (Pawlak 1991). To solve this problem, researchers have proposed a number of integrated uncertainty measures based on certain binary relations (Teng et al. 2016; Wang et al. 2008a; Liang et al. 2009) that consider both the knowledge uncertainty and the set uncertainty. Although these measures are effective, they have certain restrictions. These measures change with information granularities which are unrelated to rough set X, i.e., information granularities in the negative region of X; this is inconsistent with human cognition in uncertainty problems (Wang and Zhang 2008). Intuitively, a rough measure that reflects two types of uncertainty should have a higher value than that of a measure which reflects only one type of uncertainty, but this property is not satisfied by the existing integrated uncertainty measures. In addition, the existing integrated uncertainty measures do not sufficiently characterise the uncertainty in certain cases. Wang and Zhang (2008) proposed a fuzziness measure for rough sets based on information entropy, which overcomes the problem of existing uncertainty measures for rough sets. However, a fuzziness measure based on the equivalence relation is not suitable for the incomplete information system and ordered information system. In practice, knowledge acquisition usually involves information that is incomplete for various reasons such as data measurement errors, a limited understanding and the conditions under which the data were acquired (Kryszkiewicz et al. 1998). Incompleteness in an information system is one of the main causes of uncertainty. RST, which is based on the traditional equivalence relation (i.e., reflexivity, symmetry, and transitivity) cannot directly deal with incomplete information systems, which greatly constrains the use of RST in practical applications (Gantayat et al. 2014; Sun et al. 2014). Hence, several extended models and methods for RST such as the tolerance relation (i.e., reflexivity, symmetry) (Wang and Zhang 2008), the asymmetric similarity relation (i.e., reflexivity, transitivity) (Stefanowski and Tsoukias 1999), the limited tolerance relation (i.e., reflexivity, symmetry) (Wang 2002), the dominance relation (reflexivity, transitivity) (Greco et al. 2002; Hu et al. 2012), and the general binary relation (i.e., reflexivity) (Yao 1998; Teng et al. 2009; Zhu 2007) which can directly process an incomplete information system, have been proposed. Based on these relations, directly measuring the uncertainty of incomplete data has caused considerable concern (Huang et al. 2004; Qian et al. 2009; Xu and Li 2011; Dai and Xu 2012; Sun et al. 2012; Dai et al. 2014; Chen et al. 2014; Dai et al. 2013).
The various uncertainty measures mentioned above are mostly aimed at one particular binary relation, and therefore lack universality; moreover, they do not adequately reflect the uncertainty of rough sets in certain cases. Little attention has been paid to uncertainty measures based on general binary relations (Huang et al. 2004; Wang et al. 2008b). To overcome the limitations of the existing uncertainty measures and to analyse data more efficiently, it is necessary to find an uncertainty measure that is universal and more accurate.
This paper begins with an analysis of the limitations of the existing uncertainty measures for rough sets. Next, a knowledge uncertainty measure based on general binary relations is presented, which is applicable to classical systems as well, i.e., it is an effective technique for dealing with complex data sets. Novel integrated measures based on general binary relations are then proposed, and the properties of these integrated measures are analysed. Finally, examples are used to verify the validity of the proposed uncertainty measures.
Preliminary concepts of RST
An information system is a pair S = (U, A), where \( U = \{ u_{1}, u_{2}, \ldots, u_{|U|} \} \) is a non-empty finite set of objects (\( |\cdot| \) denotes the cardinality of a set) and \( A = \{ a_{1}, a_{2}, \ldots, a_{|A|} \} \) is a non-empty finite set of attributes such that \( a_{j}: U \to V_{a_{j}} \) for every \( a_{j} \in A \). The set \( V_{a_{j}} \) is called the value set of \( a_{j} \).
Each subset of attributes \( P \subseteq A \) determines a binary indiscernibility relation \( \text{IND(}P\text{)} \) as follows:
$$ {\text{IND}}(P) = \left\{ {\left( {u_{i} ,u_{j} } \right) \in U \times U\left| {\forall a \in P,f\left( {u_{i} ,a} \right) = } \right.f\left( {u_{j} ,a} \right)} \right\} $$
Obviously, \( \text{IND(}P\text{)} \) is an equivalence relation. If \( \left( {u_{i} ,u_{j} } \right) \in \text{IND(}P\text{)} \), then \( u_{i} \) and \( u_{j} \) are indiscernible with respect to attribute set P. The partition generated by \( \text{IND(}P\text{)} \) is denoted by \( U/{\text{IND}}(P) \), which can be abbreviated as \( U/P \). The partition \( U/P = \{ P_{1} ,P_{2} , \ldots ,P_{m} \} \) denotes knowledge associated with the equivalence relation \( \text{IND(}P\text{)} \), where \( P_{i} \) is an equivalence class, \( 1 \le i \le m \), and \( 1 \le m \le \left| U \right| \). Each equivalence class is an information granularity. Thus, the attribute set P will also be called the knowledge. The equivalence class determined by \( u_{i} \) with respect to the attribute set P is denoted by \( \left[ {u_{i} } \right]_{P} = \left\{ {u_{j} \in U|(u_{i} ,u_{j} ) \in \text{IND(}P\text{)}} \right\} \). Obviously, if \( u_{i} \in P_{k} \), then \( \left[ {u_{i} } \right]_{P} = P_{k} \). For any set \( X \subseteq U \), the P-lower and P-upper approximations of X are \( \underline{P} X = \{ u_{i} \in U|\left[ {u_{i} } \right]_{P} \subseteq X\} \) and \( \overline{P} X = \{ u_{i} \in U|\left[ {u_{i} } \right]_{P} \cap X \ne \emptyset \} \), respectively. The boundary region of X is represented by \( BN_{P} (X) = \overline{P} X - \underline{P} X \).
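As an illustration of these classical constructions, the following short Python sketch (the code, the toy table and the attribute names are assumptions introduced here, not part of the paper) computes the equivalence classes of \( \text{IND}(P) \) and the P-lower and P-upper approximations of a set X.

def equivalence_classes(table, P):
    """Group objects by their attribute values on P, i.e. the classes of IND(P)."""
    classes = {}
    for obj, row in table.items():
        key = tuple(row[a] for a in P)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def approximations(table, P, X):
    """Return the P-lower and P-upper approximations of X."""
    lower, upper = set(), set()
    for block in equivalence_classes(table, P):
        if block <= X:          # block entirely contained in X
            lower |= block
        if block & X:           # block meets X
            upper |= block
    return lower, upper

if __name__ == "__main__":
    # hypothetical complete information system with two attributes a1, a2
    table = {
        "u1": {"a1": 0, "a2": 1},
        "u2": {"a1": 0, "a2": 1},
        "u3": {"a1": 1, "a2": 0},
        "u4": {"a1": 1, "a2": 1},
    }
    X = {"u1", "u3"}
    lower, upper = approximations(table, ["a1", "a2"], X)
    print(lower, upper, upper - lower)   # lower, upper and boundary BN_P(X)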
An information system S (= (U, A)) is an incomplete information system if the attribute values include an empty value "*"; otherwise, S is a complete information system.
In an information system, a relation derived from the attribute sets is generally not an equivalence relation but a general binary relation. In this paper, we use \( R^{P} \) to represent a general binary relation derived from the knowledge P. In an information system S, \( P \subseteq A \). We define the function \( R_{S}^{P} \) as follows:
The set-valued function \( R_{S}^{P} :U \to P(U) \) is defined as \( R_{S}^{P} (u_{i} ) = \{ u_{j} \in U|(u_{i} ,u_{j} ) \in R^{P} \} \), where \( R_{S}^{P} (u_{i} ) \) is the subsequent neighbour of \( u_{i} \) under the binary relation \( R^{P} \). The relation \( R^{P} \) and the corresponding subsequent neighbour \( R_{S}^{P} (u_{i} ) \) can be uniquely determined from each other, i.e., \( u_{i} R^{P} u_{j} \Leftrightarrow u_{j} \in R_{S}^{P} (u_{i} ) \). Let \( {U \mathord{\left/ {\vphantom {U {R^{P} }}} \right. \kern-0pt} {R^{P} }} = \{ R_{S}^{P} (u_{i} )\left| {u_{i} \in U} \right.\} \) represent the classification of U divided by the knowledge P, where \( R_{S}^{P} (u_{i} ) \) is called a classification granularity under the general binary relation. The classification granularity \( R_{S}^{P} (u_{i} ) \) can be understood as the largest set of objects that cannot be distinguished from object \( u_{i} \) given the knowledge P; i.e., objects in \( R_{S}^{P} (u_{i} ) \) should belong to the same class as \( u_{i} \) given the knowledge P. Obviously, \( R_{S}^{P} (u_{i} ) \) will be an equivalence class, a dominance class, a tolerance class, a limited tolerance class, or an asymmetric similarity class of an object \( u_{i} \) if \( R^{P} \) is an equivalence relation, a dominance relation, a tolerance relation, a limited tolerance relation or an asymmetric similarity relation, respectively. Note that classification granularities in \( {U \mathord{\left/ {\vphantom {U {R^{P} }}} \right. \kern-0pt} {R^{P} }} \) do not always constitute partitions or covers of U (Wang et al. 2008b). The lower and upper approximation sets of \( X \subseteq U \) with respect to a general binary relation \( R^{P} \) are defined as \( \underline{{R^{P} }} (X) = \{ u_{i} \in U|R_{S}^{P} (u_{i} ) \subseteq X\} \) and \( \overline{{R^{P} }} (X) = \{ u_{i} \in U|R_{S}^{P} (u_{i} ) \cap X \ne \emptyset \} \), respectively.
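The same computation can be sketched for an arbitrary binary relation supplied as a set of pairs; the universe and the relation below are illustrative assumptions rather than data from the paper. The sketch builds the successor neighbourhoods \( R_{S}^{P}(u_i) \) and the corresponding lower and upper approximations.

def successor_neighbourhoods(U, R):
    """R is a set of pairs (u, v) meaning u R^P v; returns the map u -> R_S^P(u)."""
    return {u: {v for (x, v) in R if x == u} for u in U}

def general_approximations(U, R, X):
    """Lower/upper approximations of X with respect to a general binary relation R."""
    nbr = successor_neighbourhoods(U, R)
    lower = {u for u in U if nbr[u] <= X}
    upper = {u for u in U if nbr[u] & X}
    return lower, upper

if __name__ == "__main__":
    U = {"u1", "u2", "u3"}
    # hypothetical reflexive relation: each object related to itself, some to a neighbour
    R = {("u1", "u1"), ("u1", "u2"), ("u2", "u2"), ("u3", "u3"), ("u3", "u2")}
    print(general_approximations(U, R, {"u1", "u2"}))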
If \( Q \) and \( P \subseteq A \), we define a partial relation \( \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \prec } \) as follows: \( P\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \prec } Q \Leftrightarrow R_{S}^{P} (u_{i} ) \subseteq R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \in U \), which means that the knowledge P is finer (i.e., has finer classification granularities) than the knowledge Q. If \( R_{S}^{P} (u_{i} ) \subseteq R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \in U \) and \( \exists u_{j} \in U \) satisfies \( R_{S}^{P} (u_{j} ) \subset R_{S}^{Q} (u_{j} ) \), then we say that the knowledge P is strictly finer than the knowledge Q, or the knowledge Q entirely depends on the knowledge P, which is denoted by \( P \prec Q \). The notation \( P \approx Q \) represents \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \in U \).
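A minimal sketch of this partial relation (the neighbourhood maps below are hypothetical) simply checks the componentwise inclusions of the successor neighbourhoods.

def finer_or_equal(nbr_P, nbr_Q):
    """P 'finer or equal to' Q: R_S^P(u) is a subset of R_S^Q(u) for every object u."""
    return all(nbr_P[u] <= nbr_Q[u] for u in nbr_P)

def strictly_finer(nbr_P, nbr_Q):
    """P strictly finer than Q: at least one of the inclusions is strict."""
    return finer_or_equal(nbr_P, nbr_Q) and any(nbr_P[u] < nbr_Q[u] for u in nbr_P)

if __name__ == "__main__":
    nbr_P = {"u1": {"u1"}, "u2": {"u2"}}               # hypothetical neighbourhoods
    nbr_Q = {"u1": {"u1", "u2"}, "u2": {"u2"}}
    print(finer_or_equal(nbr_P, nbr_Q), strictly_finer(nbr_P, nbr_Q))  # True True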
Limitations of existing uncertainty measures
In classical RST, there are two main causes of uncertainty: the information granularity derived from the binary relation in the universe, which is knowledge uncertainty, and the boundary of the rough set in the given approximation space, which is set uncertainty (Pawlak 1991). Beaubouef et al. (1998) proposed a new integrated uncertainty measure for complete information systems, which they called rough entropy.
Given an information system \( S = (U,A) \), \( P,Q \subseteq A \), and \( U/P = \{ P_{1}, P_{2}, \ldots, P_{m} \} \). The rough entropy of \( X \subseteq U \) with respect to P is defined as (Beaubouef et al. 1998)
$$ H(X,P) = \rho_{P} (X)H^{G} (P) $$
where \( H^{G}(P) = -\sum\nolimits_{i = 1}^{m} \frac{|P_{i}|}{|U|}\log_{2} \frac{1}{|P_{i}|} \) is called the granularity measure of the knowledge P. In Eq. (2), \( H^{G}(P) \) measures knowledge uncertainty, and the roughness \( \rho_{P}(X) = 1 - \frac{|\underline{P}X|}{|\overline{P}X|} \) measures set uncertainty. Rough entropy considers two types of uncertainty and is therefore an integrated uncertainty measure.
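As a small illustration of Eq. (2) (the partition, the set X and the code below are toy assumptions, not taken from the paper), note that \( -\frac{|P_i|}{|U|}\log_2\frac{1}{|P_i|} = \frac{|P_i|}{|U|}\log_2 |P_i| \); the sketch uses this equivalent form.

import math

def granularity_measure(partition, n):
    """H^G(P) = sum_i (|P_i|/n) * log2 |P_i|, equivalent to the expression in Eq. (2)."""
    return sum(len(block) / n * math.log2(len(block)) for block in partition)

def roughness(partition, X):
    """rho_P(X) = 1 - |lower| / |upper| for a non-empty upper approximation."""
    lower = {u for block in partition if block <= X for u in block}
    upper = {u for block in partition if block & X for u in block}
    return 1 - len(lower) / len(upper)

def rough_entropy(partition, X, n):
    return roughness(partition, X) * granularity_measure(partition, n)

if __name__ == "__main__":
    partition = [{"u1", "u2"}, {"u3"}, {"u4", "u5"}]   # hypothetical U/P
    X = {"u1", "u3"}
    print(rough_entropy(partition, X, 5))              # rho = 2/3, H^G = 0.8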
Yang and John (2008) noted that existing uncertainty measures cannot correctly measure the uncertainty of boundary rough sets, whose lower approximation is an empty set. Thus, Yang and John (2008) defined the measures global accuracy \( \sigma_{P} (X) \) and global roughness \( G_{P} (X) \) under the equivalence relation to measure the uncertainty of rough sets:
$$ \sigma_{p} \left( X \right) = \frac{{\left| {U - BN_{P} \left( X \right)} \right|}}{\left| U \right|} $$
$$ G_{P} \left( X \right) = 1 - \sigma_{p} \left( X \right) $$
where \( BN_{P}(X) = \overline{P}X - \underline{P}X \). The global accuracy and the global roughness reveal the global uncertainty with respect to the universe of discourse, which addresses the shortcomings of classical measures for boundary rough sets. However, similar to classical measures, global accuracy and global roughness cannot measure the knowledge uncertainty.
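A corresponding sketch of Eqs. (3) and (4), under the same kind of toy partition assumed in the previous sketch, is as follows (again illustrative, not the authors' code).

def global_accuracy(partition, X, U):
    """sigma_P(X) = |U - BN_P(X)| / |U|, as in Eq. (3)."""
    lower = {u for block in partition if block <= X for u in block}
    upper = {u for block in partition if block & X for u in block}
    boundary = upper - lower
    return len(U - boundary) / len(U)

if __name__ == "__main__":
    U = {"u1", "u2", "u3", "u4", "u5"}
    partition = [{"u1", "u2"}, {"u3"}, {"u4", "u5"}]   # hypothetical U/P
    X = {"u1", "u3"}
    sigma = global_accuracy(partition, X, U)
    print(sigma, 1 - sigma)                            # global accuracy and global roughness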
If the boundary region of \( X \subseteq U \) with respect to the knowledge A is an empty set, the rough set X can be precisely described by the knowledge A. In this case, the rough set X becomes a precise set; i.e., the uncertainty of X is 0. Thus, the uncertainty of a rough set X is related only to the size of the boundary region and the information granularity of the boundary region and not to the information granularity in the positive and negative regions (Wang and Zhang 2008). Although the rough entropy in Eq. (2) can measure two types of uncertainty, it is not always effective in certain cases. In the following, two examples reveal the limitations of the existing uncertainty measures for both complete and incomplete information systems.
In a complete information system \( S = (U,A) \), \( U = \{ u_{1} ,u_{2} , \ldots ,u_{3600} \} \), \( X \subseteq U \) and \( P \subseteq A \). Figure 1 presents the lower and upper approximations and the boundary region of X as the information granularity induced by the knowledge P changes, where in subfigures (1)–(7) the information granularity is progressively finer. In subfigure (1), the lower approximation set is an empty set and the boundary region is the entire universe. Parts of the universe in Fig. 1 (2) are finer than those in Fig. 1 (1), i.e., 6 units in Fig. 1 (1) are equally divided into 24 smaller units. The lower approximation set remains empty, and the boundary region comprises 22 smaller units. Similarly, Fig. 1 (3) shows the results as parts of the universe [i.e., two of the large units in Fig. 1 (2)] are further divided evenly. Figure 1 (4) presents the results when the largest unit in Fig. 1 (3) is further divided evenly. Figure 1 (5) shows the results when all of the smaller units in Fig. 1 (4) are further divided evenly, and Fig. 1 (6) presents the results when the negative region of Fig. 1 (5) is divided evenly. Figure 1 (7) presents the results when the positive domain of Fig. 1 (6) is further divided evenly.
Fig. 1 Lower and upper approximations of a rough set for various levels of information granularity
The values of various uncertainty measures for the rough set X in each subfigure of Fig. 1 are shown in Table 1, where Num_L, Num_U and Num_B represent the number of objects in the lower approximation, the upper approximation, and the boundary region, respectively. From Table 1, we can observe that the number of objects in the boundary of X decreases as the information granularity becomes finer, i.e., the number of objects surely belonging or not belonging to X increases. The uncertainty measures decrease monotonically as the information granularities become smaller through finer classification. However, the existing uncertainty measures are not always effective in certain cases; their limitations are revealed by the following five observations:
Table 1 Uncertainty measures of the rough set X with various information granularities
Rough set X is a boundary rough set (i.e., the lower approximation of X is an empty set) in partitions (1) and (2) of Fig. 1. From the differences between partitions (1) and (2), we can observe that the boundary region becomes smaller and the information granularities in the boundary region become finer. Obviously, the uncertainty of the rough set X should become smaller. However, \( \rho_{P}(X) \) in Table 1 does not change; although \( H^{G}(P) \) decreases, it reflects only the variation in the information granularity and not the uncertainty of the set. Thus, \( \rho_{P}(X) \) and \( H^{G}(P) \) cannot adequately describe the uncertainty of a boundary rough set. Because \( \rho_{P}(X) \) is constant here, the measure \( H(X,P) \) reflects only the knowledge uncertainty of the boundary rough set and not the set uncertainty.
Fig. 2 Uncertainty measures when X = X_1
It can be observed that from partitions (2) and (3) that the boundary region does not change, but the information granularity in the boundary region becomes finer, which shows that the set uncertainty remains the same while the knowledge uncertainty decreases. In Table 1, \( \rho_{P} (X) \) and \( G_{P} (X) \) do not change whereas \( H(X,P) \) decreases, which illustrates that \( \rho_{P} (X) \) and \( G_{P} (X) \) do not reflect the uncertainty of the knowledge whereas rough entropy \( H(X,P) \) does.
Comparing partitions (3) with (4) and (4) with (5), it can be observed that the boundary region becomes smaller and the information granularity in the boundary region becomes finer. Therefore, the uncertainty of the rough set X decreases. In Table 1, \( \rho_{P} (X) \), \( G_{P} (X) \), \( H^{G} (P) \) and \( H(X,P) \) all decrease. However, \( \rho_{P} (X) \) and \( G_{P} (X) \) reflect only the set uncertainty, \( H^{G} (P) \) reflects only the knowledge uncertainty, and \( H(X,P) \) reflects both types of uncertainty.
Comparing partitions (5) with (6) and (6) with (7), we can observe that the boundary region and the information granularity in the boundary region remain the same. Accordingly, the uncertainty of X should not change (Wang and Zhang 2008). Although the information granularity becomes finer in the negative region from (5) to (6) and in the positive region from (6) to (7), the uncertainty of rough set X should remain unaffected (Wang and Zhang 2008). In Table 1, \( \rho_{P} (X) \) and \( G_{P} (X) \) are constant, which is consistent with human cognition, but \( H(X,P) \) decreases, which shows that \( H(X,P) \) does not accurately reflect the uncertainty of a rough set in this case.
An integrated measure of uncertainty in RST includes both types of uncertainty. Intuitively, the value of an integrated roughness measure that includes both types of uncertainty should be larger than that of a measure that considers only one type of uncertainty. However, rough entropy does not satisfy this requirement: although rough entropy includes both types of uncertainty, the numerical values can be smaller than those of the knowledge uncertainty measure, as shown in Table 1.
From the preceding analysis, it may be concluded that the existing uncertainty measures for a complete information system do not accurately reflect the uncertainty of rough sets. Next, the characteristics of uncertainty measures for an incomplete information system will be analysed.
In an incomplete information system, the equivalence relation underlying the classical measures is replaced by a tolerance relation \( R_{T}^{P} \), and the accuracy and the roughness are then expressed as:
$$ \alpha_{{R_{{_{T} }}^{P} }} (X) = \frac{{\left| {\underline{{R_{T}^{P} }} (X)} \right|}}{{\left| {\overline{{R_{T}^{P} }} (X)} \right|}} $$
$$ \rho_{{R_{{_{T} }}^{P} }} (X) = 1 - \alpha_{{R_{{_{T} }}^{P} }} (X) $$
In Eqs. (5) and (6), \( \alpha_{{R_{{_{T} }}^{P} }} (X) \) and \( \rho_{{R_{{_{T} }}^{P} }} (X) \) are the accuracy and the roughness, respectively. Obviously, \( 0 \le \alpha_{{R_{{_{T} }}^{P} }} (X),\rho_{{R_{{_{T} }}^{P} }} (X) \le 1 \). The larger the uncertainty of a rough set, the smaller \( \alpha_{{R_{{_{T} }}^{P} }} (X) \) is and the larger \( \rho_{{R_{{_{T} }}^{P} }} (X) \) is. Therefore, the accuracy and the roughness can be used to measure the set uncertainty. As was the case for a complete information system, Eqs. (5) and (6) measure only set uncertainty and not knowledge uncertainty for an incomplete information system (Wang et al. 2008a). Wang et al. (2008a) proposed new definitions of accuracy and roughness based on the tolerance relation:
$$ \alpha_{{R_{{_{T} }}^{P} }}^{*} (X) = 1 - \rho_{{R_{{_{T} }}^{P} }} (X) \times GK(R_{{_{T} }}^{P} ) $$
$$ \rho_{{R_{{_{T} }}^{P} }}^{*} (X) = \rho_{{R_{{_{T} }}^{P} }} (X) \times GK(R_{{_{T} }}^{P} ) $$
Knowledge granularity, defined as \( GK(R_{{_{T} }}^{P} ) = \sum\nolimits_{i = 1}^{\left| U \right|} {{{\left| {R_{{_{T} }}^{P} (u_{i} )} \right|} \mathord{\left/ {\vphantom {{\left| {R_{{_{T} }}^{P} (u_{i} )} \right|} {\left| U \right|^{2} }}} \right. \kern-0pt} {\left| U \right|^{2} }}} \), was employed to measure the roughness of knowledge. In contrast to knowledge granularity, \( HK(R_{{_{T} }}^{P} ) = 1 - GK(R_{{_{T} }}^{P} ) \) was used to characterise the precision of knowledge. Obviously, Eqs. (7) and (8) consider both set uncertainty and knowledge uncertainty, which corrects the problems with the classical definitions of accuracy and roughness to some extent. However, certain limitations remain for an incomplete information system, and these are revealed by the following example.
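Before turning to the example, the following sketch (illustrative only; the incomplete table and the code are assumptions, not part of the paper) computes tolerance classes from a table with missing values "*" and the measures of Eqs. (5)–(8).

def tolerance_classes(table, P):
    """u tolerates v if they agree on every attribute of P where neither value is '*'."""
    def tol(r1, r2):
        return all(r1[a] == r2[a] or r1[a] == "*" or r2[a] == "*" for a in P)
    return {u: {v for v in table if tol(table[u], table[v])} for u in table}

def incomplete_measures(table, P, X):
    nbr = tolerance_classes(table, P)
    U = set(table)
    lower = {u for u in U if nbr[u] <= X}
    upper = {u for u in U if nbr[u] & X}
    rho = 1 - len(lower) / len(upper)                   # Eq. (6); Eq. (5) is 1 - rho
    GK = sum(len(nbr[u]) for u in U) / len(U) ** 2      # knowledge granularity GK
    return rho, rho * GK, 1 - rho * GK                  # rho, rho* (Eq. 8), alpha* (Eq. 7)

if __name__ == "__main__":
    table = {  # hypothetical incomplete information system; '*' marks a missing value
        "u1": {"a1": 1, "a2": 0}, "u2": {"a1": 1, "a2": "*"},
        "u3": {"a1": 0, "a2": 1}, "u4": {"a1": "*", "a2": 1},
    }
    print(incomplete_measures(table, ["a1", "a2"], {"u1", "u2"}))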
Let \( S = (U,A) \) be an incomplete information system with \( U = \{ u_{1} ,u_{2} , \ldots ,u_{7} \} \), \( P,Q \subseteq A \). Assume that
$$ \left\{ {\begin{array}{l} {{U \mathord{\left/ {\vphantom {U {R_{{_{T} }}^{P} }}} \right. \kern-0pt} {R_{{_{T} }}^{P} }} = \left\{ {\{ u_{1} ,u_{2} \} ,\{ u_{2} ,u_{1} \} ,\{ u_{3} ,u_{4} ,u_{5} \} ,\{ u_{4} ,u_{3} \} ,\{ u_{5} ,u_{3} ,u_{6} \} ,\{ u_{6} ,u_{5} ,u_{7} \} ,\{ u_{7} ,u_{6} \} } \right\}} \\ {{U \mathord{\left/ {\vphantom {U {R_{{_{T} }}^{Q} }}} \right. \kern-0pt} {R_{{_{T} }}^{Q} }} = \left\{ {\{ u_{1} ,u_{2} \} ,\{ u_{2} ,u_{1} \} ,\{ u_{3} ,u_{4} \} ,\{ u_{4} ,u_{3} \} ,\{ u_{5} ,u_{6} \} ,\{ u_{6} ,u_{5} ,u_{7} \} ,\{ u_{7} ,u_{6} \} } \right\}} \\ \end{array} } \right. $$
Obviously, \( R_{T}^{Q}(u_i) \subseteq R_{T}^{P}(u_i) \) for every \( u_i \in U \), with strict inclusion for \( u_3 \) and \( u_5 \), i.e., \( Q \prec P \). Table 2 shows the upper and lower approximations and the boundary region of the rough set X, while Table 3 shows the values of the uncertainty measures of the rough set X for the knowledge P and Q. Figures 2 and 3 present the uncertainty measures of \( X_1 \) and \( X_2 \), respectively. The subscripts of the uncertainty measures in Figs. 2 and 3 are omitted, e.g., \( \alpha_{R_{T}^{P}}(X) \) is abbreviated as \( \alpha \) and \( GK(R_{T}^{P}) \) is abbreviated as \( GK \).
Table 2 Upper and lower approximations and the boundary region of the rough set X
Table 3 Uncertainty measures of the rough set X
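The small computation below is a sketch that checks part of this example; Python and the particular set X1 are not taken from the paper. Since the contents of Table 2 are not reproduced here, X1 is an assumed set chosen only to be consistent with the upper approximation {u1, u2, u6, u7} mentioned in observation (1) below, while the neighbourhoods are exactly those listed for U/R_T^P and U/R_T^Q above.

NBR_P = {  # U / R_T^P as listed above; the first element of each class is the object itself
    "u1": {"u1", "u2"}, "u2": {"u2", "u1"}, "u3": {"u3", "u4", "u5"},
    "u4": {"u4", "u3"}, "u5": {"u5", "u3", "u6"}, "u6": {"u6", "u5", "u7"},
    "u7": {"u7", "u6"},
}
NBR_Q = dict(NBR_P, u3={"u3", "u4"}, u5={"u5", "u6"})  # U / R_T^Q

def approx(nbr, X):
    lower = {u for u in nbr if nbr[u] <= X}
    upper = {u for u in nbr if nbr[u] & X}
    return lower, upper

if __name__ == "__main__":
    X1 = {"u1", "u2", "u7"}   # assumed for illustration; not taken from Table 2
    print(approx(NBR_P, X1))  # lower {u1, u2}, upper {u1, u2, u6, u7}
    print(approx(NBR_Q, X1))  # identical under Q, matching observation (1) below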
From Tables 2 and 3 and Figs. 2 and 3, we can make the following observations:
When \( X = X_{1} \), the lower and upper approximations of \( X_{1} \) with respect to the knowledge P and Q are identical, and the classification granularities in the upper approximations {u 1, u 2, u 6, u 7} induced by the knowledge P and Q are also identical. Therefore, the roughness and the accuracy of the knowledge P and Q are equal, which is logically consistent. However, \( \alpha_{{R_{{_{T} }}^{P} }}^{*} (X) < \alpha_{{R_{{_{T} }}^{Q} }}^{*} (X) \) and \( \rho_{{R_{{_{T} }}^{Q} }}^{*} (X) < \rho_{{R_{{_{T} }}^{P} }}^{*} (X) \). These results are caused by the subdivision of the classification granularities \( R_{{_{T} }}^{P} (u_{3} ) \) and \( R_{{_{T} }}^{P} (u_{5} ) \) in the negative region of set \( X_{1} \) with the knowledge Q. Obviously, \( R_{{_{T} }}^{P} (u_{3} ) \) and \( R_{{_{T} }}^{P} (u_{5} ) \) are unrelated to X, and thus \( \alpha_{{R_{{_{T} }}^{P} }}^{*} (X) \) and \( \rho_{{R_{{_{T} }}^{P} }}^{*} (X) \) are inconsistent with human cognition.
When \( X = X_2 \), the lower approximation of \( X_2 \) is an empty set, and as a result, \( X_2 \) is a boundary rough set. The boundary regions of \( X_2 \) with respect to the knowledge P and Q are different. In this case, the larger the boundary region is, the coarser the knowledge (Yang and John 2008). However, \( \rho_{R_{T}^{P}}(X) = \rho_{R_{T}^{Q}}(X) \) and \( \alpha_{R_{T}^{P}}(X) = \alpha_{R_{T}^{Q}}(X) \), so from Tables 2 and 3 we obtain \( HK(R_{T}^{P}) = \alpha_{R_{T}^{P}}^{*}(X) < \alpha_{R_{T}^{Q}}^{*}(X) = HK(R_{T}^{Q}) \) and \( GK(R_{T}^{Q}) = \rho_{R_{T}^{Q}}^{*}(X) < \rho_{R_{T}^{P}}^{*}(X) = GK(R_{T}^{P}) \), which shows that \( \rho_{R_{T}^{P}}(X) \) and \( \alpha_{R_{T}^{P}}(X) \) do not accurately reflect the uncertainty of the rough set when \( \underline{R_{T}^{P}}(X) = \emptyset \); \( \alpha_{R_{T}^{P}}^{*}(X) \) and \( \rho_{R_{T}^{P}}^{*}(X) \) can measure the knowledge uncertainty but not the set uncertainty.
From Fig. 2 and Fig. 3, it can be observed that \( \alpha_{{R_{{_{T} }}^{P} }} (X) \le \alpha_{{R_{{_{T} }}^{P} }}^{*} (X) \), \( \alpha_{{R_{{_{T} }}^{Q} }} (X) \le \alpha_{{R_{{_{T} }}^{Q} }}^{*} (X) \), \( HK(R_{{_{T} }}^{P} ) \le \alpha_{{R_{{_{T} }}^{P} }}^{*} (X) \), and \( HK(R_{{_{T} }}^{Q} ) \le \alpha_{{R_{{_{T} }}^{Q} }}^{*} (X) \); therefore, \( \rho_{{R_{{_{T} }}^{P} }}^{*} (X) < \rho_{{R_{{_{T} }}^{P} }} (X) \), \( \rho_{{R_{{_{T} }}^{Q} }}^{*} (X) < \rho_{{R_{{_{T} }}^{Q} }} (X) \), \( \rho_{{R_{{_{T} }}^{P} }}^{*} (X) \le GK(R_{{_{T} }}^{P} ) \) and \( \rho_{{R_{{_{T} }}^{Q} }}^{*} (X) \le GK(R_{{_{T} }}^{Q} ) \) when \( X = X_{1} \) or \( X = X_{2} \). That is, the value of the roughness measure that includes two types of uncertainty is smaller than that of the measure reflecting only one type of uncertainty, whereas the value of the accuracy measure that includes two types of uncertainty is greater than that of the measure reflecting only one type of uncertainty. Obviously, these results are logically inconsistent.
Example 2 shows that, similar to the results for a complete information system, uncertainty measures for an incomplete information system have certain limitations. Xu et al. (2009) presented a new integrated uncertainty measure for ordered information systems with properties similar to those of \( \alpha_{R_{T}^{P}}^{*}(X) \) and \( \rho_{R_{T}^{P}}^{*}(X) \). Therefore, this uncertainty measure has the same limitations.
From Examples 1 and 2, we can conclude that the imprecision of rough sets is not well characterised by existing measures for both complete and incomplete information systems. Therefore, it is necessary to find a more comprehensive and effective uncertainty measure based on general binary relations.
Integrated measures based on general binary relations
In classical RST (Pawlak 1991), uncertainty includes knowledge uncertainty and set uncertainty. Various integrated uncertainty measures have been proposed that are based on a given binary relation and include both types of uncertainty (Wang et al. 2008a; Liang et al. 2009; Xu et al. 2009). The values of these measures depend on the classification granularity, which is unassociated with the set \( X \subseteq U \), specifically the classification granularity in the negative region of X. This behaviour is inconsistent with human cognition (Wang and Zhang 2008). Intuitively, the value of an integrated roughness measure (i.e., the roughness of a rough set) that evaluates two types of uncertainty should be greater than that of a measure which evaluates only one type of uncertainty, but this property is not satisfied by almost all the existing integrated measures. In addition, the existing integrated uncertainty measures cannot be used to effectively characterise the roughness of rough sets in certain cases. In this section, the limitations of existing integrated uncertainty measures are addressed. First, a knowledge uncertainty measure that is based on general binary relations is presented. Based on this uncertainty measure, novel and logically consistent integrated uncertainty measures are presented.
Information entropy measure based on general binary relations
Classical RST starts from an equivalence relation. Knowledge is based on the ability to partition a "universe" using the equivalence relation. The finer the partitioning, the more precise the knowledge will be. In an incomplete information system, overlaps may occur among several similar classes defined by the tolerance relation, the similarity relation, or the limited tolerance relation. Moreover, a covering is substituted for the partition of the universe. Thus, the equivalence relation cannot be satisfied for an incomplete information system. The same problems appear for general binary relations. However, research on uncertainty measures based on general binary relations is lacking (Huang et al. 2004). This lack of research motivates the investigation of an effective uncertainty measure based on general binary relations. In the following, an uncertainty measure based on general binary relations will be discussed.
Let \( R^{P} \subseteq U \times U \) be a general binary relation on U, \( P \subseteq A \). For two elements \( u_{i} ,u_{j} \in U \), if \( u_{j} \) has the same properties as \( u_{i} \) with respect to R P, i.e., \( u_{i} R^{P} u_{j} \), we say that \( u_{j} \) is \( R^{P} \)-related to \( u_{i} \). A general binary relation may be more conveniently represented using successor neighbourhoods or a classification granularity:
$$ R_{S}^{P} (u_{i} ) = \{ u_{j} \in U \mid u_{i} R^{P} u_{j} \} $$
The classification granularity \( R_{S}^{P} (u_{i} ) \) consists of all \( R^{P} \)-related elements of \( u_{i} \). If \( R_{S}^{P} (u_{i} ) \) contains more elements, more objects will belong to the same class as \( u_{i} \), the classification granularities will be larger, and the capability of the knowledge P to classify the object \( u_{i} \) will be weaker. Given these characteristics, a definition of an uncertainty measure based on general binary relations is given as follows.
Definition 1
Given an information system \( S = (U,A) \), \( X \subseteq U \), \( u_{i} \in U \) and \( 1 \le i \le \left| U \right| \), the information entropy of the knowledge \( P \subseteq A \) is defined as
$$ H^{\prime}(P) = 1 - G^{\prime}(P) $$
$$ G^{\prime}(P) = \sum\limits_{{u_{i} \in BN_{{R^{P} }} (X)}} {\frac{{\left| {R_{S}^{P} (u_{i} )} \right| - 1}}{\left| U \right|(\left| U \right| - 1)}} $$
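A direct sketch of Definition 1 as the formulas are written here, i.e., with the sum of \( G^{\prime}(P) \) taken over the objects in the boundary region of X (the toy neighbourhood map and the code are assumptions introduced for illustration):

def boundary(nbr, X):
    lower = {u for u in nbr if nbr[u] <= X}
    upper = {u for u in nbr if nbr[u] & X}
    return upper - lower

def information_entropy(nbr, X):
    """H'(P) = 1 - G'(P), with G'(P) summed over the boundary region of X (Eqs. 10-11)."""
    n = len(nbr)
    G = sum((len(nbr[u]) - 1) / (n * (n - 1)) for u in boundary(nbr, X))
    return 1 - G

if __name__ == "__main__":
    nbr = {"u1": {"u1", "u2"}, "u2": {"u2"}, "u3": {"u3", "u2"}}  # hypothetical R_S^P
    print(information_entropy(nbr, {"u1", "u2"}))                 # 1 - 1/6 = 5/6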
Theorem 1
(Monotonicity) Given an information system \( S = (U,A) \), \( P,Q \subseteq A \) and \( P\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \prec } Q \) , the information entropy satisfies \( H^{\prime}(Q) \le H^{\prime}(P) \) , where equality holds if and only if \( P \approx Q \).
The proof of this theorem follows from the definition of the partial relation and Definition 1.
Corollary 1
Given an information system \( S = (U,A) \), \( P \subseteq A \), \( H^{\prime}(P) \) reaches a minimum value of 0 if and only if \( R_{S}^{P}(u_{i}) = U \) for \( \forall u_{i} \in U \), and \( H^{\prime}(P) \) reaches a maximum value of 1 if and only if \( R_{S}^{P}(u_{i}) = \{ u_{i} \} \) for \( \forall u_{i} \in U \).
Theorem 1 and Corollary 1 indicate that the information entropy monotonically increases as the classification granularity becomes smaller through finer classification. If the knowledge P cannot distinguish between any two objects in the universe U, the information entropy is at the minimum and the knowledge P has the weakest classification capability and the greatest roughness. If the knowledge P can distinguish all objects in the universe U, the information entropy is at the maximum and the knowledge P has the strongest classification capability and accuracy. Therefore, information entropy describes the roughness of knowledge in the context of granularity.
Integrated measures of rough sets
To measure the uncertainty of rough sets more precisely, Yang and John (2008) proposed two complementary uncertainty measures for a complete information system, global accuracy and global roughness. These two complementary measures capture set uncertainty more comprehensively than other uncertainty measures, but they are based on the equivalence relation and are therefore not suitable for an incomplete information system. Nevertheless, global accuracy and global roughness can be extended to incomplete systems using a general binary relation. The new definition of global accuracy is
$$ \sigma_{P}^{{\prime }} (X) = 1 - \frac{{\left| {BN_{P}^{{\prime }} (X)} \right|}}{2\left| U \right|} $$
where \( BN_{P}^{{\prime }} (X) = \overline{{R^{P} }} (X) - \underline{{R^{P} }} (X) \). Global roughness is then defined as \( \omega_{P}^{{\prime }} (X) = 1 - \sigma_{P}^{{\prime }} (X) \). Based on these definitions, we propose two novel integrated measures.
Definition 2

Given an information system \( S = (U,A) \), \( P \subseteq A \), \( X \subseteq U \) and the general binary relation \( R^{P} \), the integrated roughness and the integrated accuracy of X are defined as:
$$ \rho_{P}^{{\prime }} (X) = 1 - \alpha_{P}^{{\prime }} (X) $$
$$ \alpha_{P}^{{\prime }} (X) = \sigma_{P}^{{\prime }} (X) \times H^{{\prime }} (P) $$
\( H^{{\prime }} (P) \) is used to measure knowledge uncertainty, and \( \sigma_{P}^{{\prime }} (X) \) is used to measure set uncertainty. Obviously, Definition 2 considers not only the size of the boundary region of a rough set but also the classification granularity of the boundary region. Therefore, integrated roughness and integrated accuracy measure two types of uncertainty.
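The following sketch assembles the extended global measures and the integrated measures of Definition 2 from the successor neighbourhoods and a supplied value of \( H^{{\prime }} (P) \); the granules and the target set used in the example are hypothetical, and the code is illustrative rather than the authors' implementation.

```python
# A minimal sketch of the extended global measures and the integrated measures
# of Definition 2, computed from successor neighbourhoods. H'(P) is assumed to
# be supplied (e.g. by the information_entropy sketch above); all names and
# example data are illustrative.

def approximations(neighbourhoods, X):
    """Lower and upper approximations of X under the general binary relation."""
    lower = {i for i, nb in neighbourhoods.items() if nb <= X}
    upper = {i for i, nb in neighbourhoods.items() if nb & X}
    return lower, upper

def integrated_measures(neighbourhoods, X, entropy):
    n = len(neighbourhoods)
    lower, upper = approximations(neighbourhoods, X)
    boundary = upper - lower
    global_accuracy = 1.0 - len(boundary) / (2 * n)       # sigma'_P(X)
    global_roughness = 1.0 - global_accuracy              # omega'_P(X)
    integrated_accuracy = global_accuracy * entropy       # alpha'_P(X)
    integrated_roughness = 1.0 - integrated_accuracy      # rho'_P(X)
    return global_accuracy, global_roughness, integrated_accuracy, integrated_roughness

if __name__ == "__main__":
    nbs = {0: {0}, 1: {1, 2}, 2: {1, 2}, 3: {3}}   # hypothetical granules
    X = {0, 1}
    print(integrated_measures(nbs, X, entropy=0.75))
```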
Theorem 2

(Monotonicity) Given an information system \( S = (U,A) \), \( P,Q \subseteq A \), \( P \prec Q \), and \( X \subseteq U \), the following relations hold:
(1) \( \sigma^{\prime}_{Q} (X) \le \sigma^{\prime}_{P} (X) \); (2) \( \rho^{\prime}_{P} (X) \le \rho^{\prime}_{Q} (X) \).
(1) Because \( P \prec Q \), we have that \( R_{S}^{P} (u_{i} ) \subseteq R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \in U \), and \( \exists u_{k} \in U \) satisfies \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \). For \( \forall u_{i} \in \underline{{R^{Q} }} (X) \), \( R_{S}^{Q} (u_{i} ) \subseteq X \), we obtain \( R_{S}^{P} (u_{i} ) \subseteq X \), i.e., \( u_{i} \in \underline{{R^{P} }} (X) \). Thus, \( \underline{{R^{Q} }} (X) \subseteq \underline{{R^{P} }} (X) \). Similarly, \( R_{S}^{P} (u_{i} ) \cap X \ne \emptyset \) for \( \forall u_{i} \in \overline{{R^{P} }} (X) \). Because \( R_{S}^{P} (u_{i} ) \subseteq R_{S}^{Q} (u_{i} ) \), we have \( R_{S}^{Q} (u_{i} ) \cap X \ne \emptyset \), i.e., \( u_{i} \in \overline{{R^{Q} }} (X) \). Therefore, \( \overline{{R^{P} }} (X) \subseteq \overline{{R^{Q} }} (X) \) and \( BN^{\prime}_{P} (X) \subseteq BN^{\prime}_{Q} (X) \). According to Eq. (13), we have \( \sigma_{Q}^{{\prime }} (X) \le \sigma_{P}^{{\prime }} (X) \), where equality holds if and only if \( BN_{P}^{{\prime }} (X) = BN_{Q}^{{\prime }} (X) \).
(2) Because \( P \prec Q \), we have \( R_{S}^{P} (u_{i} ) \subseteq R_{S}^{Q} (u_{i} ) \) for any \( u_{i} \in U \), and \( \exists u_{k} \in U \) satisfies \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \). To simplify the proof, we assume that only one object \( u_{k} \in U \) satisfies \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \), so we have \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) for any other \( u_{i} \ne u_{k} \) (the proof for many objects is similar). Three cases are discussed:
Case 1: \( R_{S}^{Q} (u_{k} ) \subseteq X \). Because \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \), it follows that \( R_{S}^{P} (u_{k} ) \subseteq X \) and \( u_{k} \notin BN_{P}^{{\prime }} (X) = BN_{Q}^{{\prime }} (X) \). From the proof of (1), we have \( \sigma_{Q}^{{\prime }} (X) = \sigma_{P}^{{\prime }} (X) \). Because \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \ne u_{k} \), from Eq. (8) we obtain \( H^{{\prime }} (P) = H^{{\prime }} (Q) \). According to Definition 2, we have \( \alpha_{Q}^{{\prime }} (X) = \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) = \rho_{Q}^{{\prime }} (X) \).
Case 2: \( R_{S}^{Q} (u_{k} ) \cap X = \emptyset \). Because \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \), we have \( R_{S}^{P} (u_{k} ) \cap X = \emptyset \) and \( u_{k} \notin BN_{P}^{{\prime }} (X) = BN_{Q}^{{\prime }} (X) \). From the proof of (1), we have \( \sigma_{Q}^{{\prime }} (X) = \sigma_{P}^{{\prime }} (X) \). Because \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \ne u_{k} \), from Eq. (11) we obtain \( H^{{\prime }} (Q) = H^{{\prime }} (P) \). According to Definition 2, we have that \( \alpha_{Q}^{{\prime }} (X) = \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) = \rho_{Q}^{{\prime }} (X) \).
Case 3: \( R_{S}^{Q} (u_{k} ) \cap X \ne \emptyset \) and \( R_{S}^{Q} (u_{k} ) \cap X \ne R_{S}^{Q} (u_{k} ) \). We have \( u_{k} \in BN_{Q}^{{\prime }} (X) \). Three sub-cases must be considered:
If \( R_{S}^{P} (u_{k} ) \cap X \ne \emptyset \) and \( R_{S}^{P} (u_{k} ) \cap X \ne R_{S}^{P} (u_{k} ) \), then \( u_{k} \in BN_{P}^{{\prime }} (X) = BN_{Q}^{{\prime }} (X) \). From the proof of (1), we obtain \( 0 < \sigma_{Q}^{{\prime }} (X) = \sigma_{P}^{{\prime }} (X) \). Because \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) for \( \forall u_{i} \ne u_{k} \), \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \), from Eq. (8) and Definition 2 we have that \( H^{{\prime }} (Q) < H^{{\prime }} (P) \), \( \alpha_{Q}^{{\prime }} (X) < \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) < \rho_{Q}^{{\prime }} (X) \).
If \( R_{S}^{P} (u_{k} ) \subseteq X \), then \( u_{k} \notin BN_{P}^{{\prime }} (X) \). Thus, \( BN_{P}^{{\prime }} (X) \subset BN_{Q}^{{\prime }} (X) \ne \emptyset \). From the proof of (1), we have \( \sigma_{Q}^{{\prime }} (X) < \sigma_{P}^{{\prime }} (X) \). Because \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) and \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \) for \( \forall u_{i} \ne u_{k} \), according to Eq. (8) we have that \( H^{{\prime }} (Q) < H^{{\prime }} (P) \). From Definition 2, we have that \( \alpha_{Q}^{{\prime }} (X) < \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) < \rho_{Q}^{{\prime }} (X) \).
If \( R_{S}^{P} (u_{k} ) \cap X = \emptyset \), then \( u_{k} \notin BN_{P}^{{\prime }} (X) \). Therefore, \( BN_{P}^{{\prime }} (X) \subset BN_{Q}^{{\prime }} (X) \ne \emptyset \). From the proof of (1), we have \( \sigma_{Q}^{{\prime }} (X) < \sigma_{P}^{{\prime }} (X) \). Because \( R_{S}^{P} (u_{i} ) = R_{S}^{Q} (u_{i} ) \) and \( R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} ) \) for \( \forall u_{i} \ne u_{k} \), according to Eq. (13) we obtain \( H^{\prime}(Q) < H^{\prime}(P) \). From Definition 2, we have that \( \alpha_{Q}^{{\prime }} (X) < \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) < \rho_{Q}^{{\prime }} (X) \).
This concludes the proof of Theorem 2.
Corollary 2

Given an information system \( S = (U,A) \), \( P,Q \subseteq A \), \( P \prec Q \) and \( X \subseteq U \), where \( U^{\prime} = \{ u_{k} \in U\left| {R_{S}^{P} (u_{k} ) \subset R_{S}^{Q} (u_{k} )} \right.\} \), then \( \rho_{P}^{{\prime }} (X) = \rho_{Q}^{{\prime }} (X) \) if and only if \( u_{k} \notin BN_{Q}^{{\prime }} (X) \) for \( \forall u_{k} \in U^{{\prime }} \).
The proof of this corollary follows from Theorem 2. From Theorem 2 and Corollary 2, we can observe that the integrated accuracy does not strictly monotonically increase, and the integrated roughness does not strictly monotonically decrease, as the classification granularity becomes smaller through finer classification. That is, the integrated accuracy and the integrated roughness are unrelated to the classification granularity \( R_{S}^{Q} (u_{i} ) \), where \( u_{i} \in U - BN^{\prime}_{Q} (X) \). If a classification granularity \( R_{S}^{Q} (u_{k} ) \) that is subdivided by the knowledge P satisfies \( u_{k} \in BN^{\prime}_{Q} (X) \), then the integrated accuracy (integrated roughness) strictly monotonically increases (decreases), which accords with human cognition.
Corollary 3

Given an information system \( S = (U,A) \), \( P \subseteq A \) and \( X \subseteq U \), the integrated roughness satisfies \( 0 \le \rho_{P}^{{\prime }} (X) \le 1 \). Equality holds on the right side if and only if \( R_{S}^{P} (u_{i} ) = U \) for \( \forall u_{i} \in U \), and equality holds on the left side if and only if \( BN_{P}^{{\prime }} (X) = \emptyset \).
The proof of this corollary follows from Eqs. (11), (13), (14) and (15).
Theorem 3

Given an information system \( S = (U,A) \), \( P \subseteq A \) and \( X \subseteq U \), the integrated accuracy and the integrated roughness satisfy the relations \( \alpha_{P}^{{\prime }} (X) \le \sigma_{P}^{{\prime }} (X) \) and \( \omega_{P}^{{\prime }} (X) \le \rho_{P}^{{\prime }} (X) \).
It can be concluded from Theorem 3 that the value of the integrated accuracy \( \alpha_{P}^{{\prime }} (X) \), which measures two types of uncertainty, will not exceed that of \( \sigma_{P}^{{\prime }} (X) \), which measures only one type of uncertainty, and that the value of the integrated roughness \( \rho_{P}^{{\prime }} (X) \), which measures two types of uncertainty, will be at least that of \( \omega_{P}^{{\prime }} (X) \), which measures only one type of uncertainty. Therefore, the new integrated measures \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) are logically consistent.
Corollary 4

Given an information system \( S = (U,A) \), \( P,Q \subseteq A \), \( P \preceq Q \) and \( X \subseteq U \):

(1) If \( X \) is a boundary rough set (i.e., \( \underline{{R^{P} }} (X) = \underline{{R^{Q} }} (X) = \emptyset \)) and \( \overline{{R^{Q} }} (X) = \overline{{R^{P} }} (X) \), then \( \rho_{Q} (X) = \rho_{P} (X) \) and \( \omega_{Q}^{{\prime }} (X) = \omega_{P}^{{\prime }} (X) \), but \( \rho_{P}^{{\prime }} (X) \le \rho_{Q}^{{\prime }} (X) \);

(2) If \( \rho_{P}^{{\prime }} (X) = \rho_{Q}^{{\prime }} (X) \), then \( \rho_{Q} (X) = \rho_{P} (X) \) and \( \omega_{Q}^{{\prime }} (X) = \omega_{P}^{{\prime }} (X) \);

(3) If \( \rho_{P} (X) < \rho_{Q} (X) \) or \( \omega_{P}^{{\prime }} (X) < \omega_{Q}^{{\prime }} (X) \), then \( \rho_{P}^{{\prime }} (X) \le \rho_{Q}^{{\prime }} (X) \).
Property (1) in Corollary 4 indicates that the integrated roughness \( \rho_{P}^{{\prime }} (X) \) measures both set uncertainty and knowledge uncertainty for \( X \) ; however, \( \rho_{P} (X) \) and \( \omega_{P}^{{\prime }} (X) \) measure only set uncertainty. Property (2) in Corollary 4 shows that \( \rho_{P} (X) \) and \( \omega_{P}^{{\prime }} (X) \) are invariant if the integrated roughness \( \rho_{P}^{{\prime }} (X) \) remains unchanged, although the classification granularity is smaller through finer classification. However, \( \rho_{P} (X) \) and \( \omega_{P}^{{\prime }} (X) \) may not decrease if the integrated roughness \( \rho_{P}^{{\prime }} (X) \) decreases. Property (3) in Corollary 4 shows that the integrated roughness \( \rho_{P}^{{\prime }} (X) \) decreases when \( \rho_{P} (X) \) and \( \omega_{P}^{{\prime }} (X) \) decrease. The converses of properties (2) and (3) are not always true. Corollary 4 implies that the integrated roughness is more sensitive than \( \rho_{P} (X) \) and \( \omega_{P}^{{\prime }} (X) \) for a general binary relation.
The preceding properties characterise the variation of the integrated roughness with the classification granularity. The effectiveness of the proposed measure is verified in the following example.
Example 3 (Continued from Example 1)
Results for the uncertainty measures based on an equivalence relation were obtained from Eqs. (11), (13), (14) and (15), and these results are listed in Table 4.
Table 4 New uncertainty measures of a rough set X with various classification granularities
From Table 4, we can make the following observations:
Comparing partitions (1) with (2), (3) with (4) and (4) with (5), we can observe that the boundary region becomes smaller, and thus \( \sigma_{P}^{{\prime }} (X) \) becomes larger and \( \omega_{P}^{{\prime }} (X) \) becomes smaller. In addition, the classification granularity in the boundary region becomes finer, which increases the discernibility of objects in the boundary region, and thus \( \rho_{P}^{{\prime }} (X) \) becomes smaller and \( \alpha_{P}^{{\prime }} (X) \) becomes larger. Obviously, the new integrated measures \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) reflect not only the set uncertainty but also the knowledge uncertainty in the boundary region.
Comparing partition (2) with (3), it can be observed that the boundary region, the global accuracy \( \sigma_{P}^{{\prime }} (X) \) and the global roughness \( \omega_{P}^{{\prime }} (X) \) do not change. However, the classification granularity in the boundary region becomes finer, i.e., the discernibility of objects in the boundary region increases, and thus \( H^{{\prime }} (P) \) becomes larger. Obviously, an increase in \( \alpha_{P}^{{\prime }} (X) \) and a decrease in \( \rho_{P}^{{\prime }} (X) \) in this case reflect the decrease of the knowledge uncertainty in the boundary region, whereas the set uncertainty does not change.
Comparing partitions (5) with (6) and (6) with (7), it can be observed that the boundary region and the classification granularity in the boundary region remain the same, and thus the uncertainty of the rough set X does not change. Accordingly, \( \sigma_{P}^{{\prime }} (X) \), \( \omega_{P}^{{\prime }} (X) \), \( H^{{\prime }} (P) \), \( G^{{\prime }} (P) \), \( \rho_{P}^{{\prime }} (X) \) and \( \alpha_{P}^{{\prime }} (X) \) all remain unchanged, which shows that the new integrated measures are unaffected by the subdivision of classification granularities that are unrelated to the rough set X. Therefore, the new integrated measures are consistent with human cognition.
The integrated accuracy \( \alpha_{P}^{{\prime }} (X) \) and the integrated roughness \( \rho_{P}^{{\prime }} (X) \) reflect two types of uncertainty. Therefore, the value of the integrated accuracy is smaller than those of \( \sigma_{P}^{{\prime }} (X) \) and \( H^{{\prime }} (P) \), and the value of the integrated roughness \( \rho_{P}^{{\prime }} (X) \) is larger than those of \( \omega_{P}^{{\prime }} (X) \) and \( G^{{\prime }} (P) \). These results are logically consistent.
Example 3 illustrates that the new integrated measures \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) for a complete information system overcome the limitations of the existing uncertainty measures, better characterise the imprecision of rough sets and are consistent with human cognition.
Example 4 (Continued from Example 2)

We calculate the new uncertainty measures for the tolerance relation using Eqs. (11), (13), (14) and (15). The results are shown in Table 5. Figures 4 and 5 present the new uncertainty measures for \( X_{1} \) and \( X_{2} \), respectively. The subscripts of the uncertainty measures in Figs. 4 and 5 are omitted (as in Figs. 2 and 3).
Table 5 The proposed uncertainty measures for an incomplete information system
Fig. 4 The proposed uncertainty measures when \( X = X_{1} \)
We can draw the following conclusions from Table 5, Fig. 4 and Fig. 5:
When \( X = X_{1} \), the upper and lower approximations of set \( X_{1} \) are equal, and the classification granularities of objects in the boundary region are also the same with respect to the knowledge P and Q. Thus, subdividing the classification granularities \( R_{S}^{P} (u_{3} ) \) and \( R_{S}^{P} (u_{5} ) \) (which are unrelated to X) in the negative region of set X does not alter the values of \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \), which shows that \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) are consistent with human cognition.
When \( X = X_{2} \), X is a boundary rough set. The boundary regions of X with respect to the knowledge P and Q are different. Consequently, \( \sigma_{P}^{{\prime }} (X) < \sigma_{Q}^{{\prime }} (X) \) and \( \omega^{\prime}_{Q} (X) < \omega^{\prime}_{P} (X) \). In addition, the classification granularities of objects in the boundary region with respect to the knowledge P and Q are different. Furthermore, \( H^{{\prime }} (P) < H^{{\prime }} (Q) \) and \( G^{{\prime }} (Q) < G^{{\prime }} (P) \). Finally, the integrated measures satisfy \( \rho_{Q}^{{\prime }} (X) < \rho_{P}^{{\prime }} (X) \) and \( \alpha_{P}^{{\prime }} (X) < \alpha_{Q}^{{\prime }} (X) \). Obviously, the proposed integrated accuracy and integrated roughness can not only correctly reflect set uncertainty but also correctly measure knowledge uncertainty for a boundary rough set. Therefore, \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) can adequately characterise the uncertainty of rough sets.
From Figs. 4 and 5, it can be observed that \( \alpha_{P}^{{\prime }} (X) \le \sigma_{P}^{{\prime }} (X) \), \( \alpha_{P}^{{\prime }} (X) \le H^{{\prime }} (P) \), \( \alpha_{Q}^{{\prime }} (X) \le \sigma_{Q}^{{\prime }} (X) \) and \( \alpha_{Q}^{{\prime }} (X) \le H^{{\prime }} (Q) \) when \( X = X_{1} \) or \( X = X_{2} \). That is to say, the value of the integrated accuracy, which is based on two types of uncertainty, is smaller than that of the measure based on only one type of uncertainty. In addition, \( \omega_{P}^{{\prime }} (X) \le \rho_{P}^{{\prime }} (X) \), \( G^{{\prime }} (P) \le \rho_{P}^{{\prime }} (X) \), \( \omega_{Q}^{{\prime }} (X) \le \rho_{Q}^{{\prime }} (X) \) and \( G^{{\prime }} (Q) \le \rho_{Q}^{{\prime }} (X) \), which indicates that the value of the integrated roughness, which reflects two types of uncertainty, is greater than that of the measure reflecting only one type of uncertainty. Obviously, these results are logically consistent.
Comparing Examples 3 and 4 with Examples 1 and 2, we can conclude that the new integrated measures \( \alpha_{P}^{{\prime }} (X) \) and \( \rho_{P}^{{\prime }} (X) \) under general binary relations are suitable for both complete and incomplete information systems. These new measures overcome the limitations of existing uncertainty measures and can satisfactorily characterise the imprecision of rough sets. Therefore, the proposed integrated measures are more comprehensive and effective uncertainty measures for both complete and incomplete information systems.
The extension of RST to incomplete information systems is important for making RST practical. Uncertainty measures are the basis for information processing and knowledge acquisition in an incomplete information system, yet direct processing of an incomplete information system currently lacks a theoretical basis. By considering the nature of the roughness of sets, we developed new integrated measures based on general binary relations and established several desirable properties of the proposed measures. We have demonstrated that the new measures overcome the limitations of existing uncertainty measures and can be used, in a simple and comprehensive form, to measure the roughness and the accuracy of a rough set, with logically consistent results. Research on the application of the proposed integrated measures to rule acquisition is planned.
Beaubouef T, Petry FE, Arora G (1998) Information-theoretic measures of uncertainty for rough sets and rough relational databases. Inf Sci 109(1–4):185–195
Chen YM, Wu KS, Chen XH, Tang CH, Zhu QX (2014) An entropy-based uncertainty measurement approach in neighborhood systems. Inf Sci 279:239–250
Dai JH, Xu Q (2012) Approximations and uncertainty measures in incomplete information systems. Inf Sci 198:62–80
Dai JH, Wang WT, Xu Q (2013) An uncertainty measure for incomplete decision tables and its applications. IEEE Trans Cybern 43(4):1277–1289
Dai JH, Huang DB, Su HS, Tian HW (2014) Uncertainty measurement for covering rough sets. Int J Uncertainty Fuzziness Knowl Based Syst 22(2):217–233
Gantayat SS, Misra A, Panda BS (2014) A study of incomplete data—a review. In: Satapathy SC, Udgata SK, Biswal BN (eds) Proceedings of the international conference on frontiers of intelligent computing: theory and applications. Springer, Berlin, pp 401–408
Greco S, Matarazzo B, Slowinski R (2002) Rough approximation by dominance relations. Int J Intell Syst 17(2):153–171
Hu QH, Che XJ, Zhang L, Zhang D, Guo MZ, Yu DR (2012) Rank entropy-based decision trees for monotonic classification. IEEE Trans Knowl Data Eng 24(11):2052–2064
Huang B, Zhou XZ, Shi YC (2004) Entropy of knowledge and rough set based on general binary relation. Syst Eng Theory Pract 24(1):93–96
Kryszkiewicz M (1998) Rough set approach to incomplete information systems. Inf Sci 112(1–4):39–49
Li DY, Du Y (2005) Artificial intelligence with uncertainty. National Defense Industry Press, Beijing
Liang JY, Wang JH, Qian YH (2009) A new measure of uncertainty based on knowledge granulation for rough sets. Inf Sci 179(4):458–470
Pawlak Z (1991) Rough sets: theoretical aspects of reasoning about data. Kluwer Academic Publisher, London
Qian YH, Liang JY, Wang F (2009) A new method for measuring the uncertainty in incomplete information systems. Int J Uncertain Fuzziness Knowl Based Syst 17(6):855–880
Stefanowski J, Tsoukias A (1999) On the extension of rough sets under incomplete information. In: New directions in rough sets, data mining, and granular-soft computing. 7th International Workshop, Yamaguchi, Japan
Sun L, Xu JC, Tian Y (2012) Feature selection using rough entropy-based uncertainty measures in incomplete decision systems. Knowl Based Syst 36:206–216
Sun L, Xu JC, Xu TH (2014) Information entropy and information granulation-based uncertainty measures in incomplete information systems. Appl Math Inf Sci 8(4):2073–2083
Teng SH, Sun JX, Li ZY, Zou G (2009) A new heuristic reduction algorithm based on general binary relations. In: Sixth international symposium on multispectral image processing and pattern recognition. SPIE: International Society for Optics and Photonics, Yichang, China
Teng SH, Lu M, Yang AF, Zhang J, Zhuang ZW, He M (2016) Efficient attribute reduction from the viewpoint of discernibility. Inf Sci 326:297–314
Wang GY (2002) Extension of rough set under incomplete information systems. J Comput Res Dev 39(10):1238–1243
Wang GY, Zhang QH (2008) Uncertainty of rough sets in different knowledge granularities. Chin J Comput 31(9):1588–1598
Wang JH, Liang JY, Qian YH, Dang CY (2008a) Uncertainty measure of rough sets based on a knowledge granulation of incomplete information systems. Int J Uncertain Fuzziness Knowl Based Syst 16(2):233–244
Wang CZ, Wu CX, Chen DG (2008b) A systematic study on attribute reduction with rough sets based on general binary relations. Inf Sci 178(9):2237–2261
Xu Y, Li LS (2011) Variable precision rough set model based on (α, λ) connection degree tolerance relation. Acta Autom Sin 37(3):303–308
Xu WH, Zhang XY, Zhang WX (2009) Knowledge granulation, knowledge entropy and knowledge uncertainty measure in ordered information systems. Appl Soft Comput 9(4):1244–1251
Yang YJ, John R (2008) Global roughness of approximation and boundary rough sets. In: IEEE world congress on computational intelligence. IEEE, Hong Kong
Yao YY (1998) Relational interpretations of neighborhood operators and rough set approximation operators. Inf Sci 111(1–4):239–259
Zhu W (2007) Generalized rough sets based on relations. Inf Sci 177(22):4997–5011
SHT and YJN carried out the studies of RST-based methods for measuring system uncertainty, presented the new uncertainty measures, and drafted the manuscript. ML participated in the proofs of the theorems and the design of the examples. MH participated in the analysis of the proposed algorithm and helped to polish the manuscript. FL participated in the revision of this paper. All authors read and approved the final manuscript.
This work has been sponsored by a grant from the National Natural Science Foundation of China (Nos. 41201363 and 61471371) and the Natural Science Foundation of Hunan Province of China (No. 2015jj3022). Moreover, the authors would like to thank the anonymous reviewers for their insightful comments in improving the quality of this paper.
Science and Technology on Automatic Target Recognition Laboratory, National University of Defense Technology, Changsha, 410073, China
Shuhua Teng & Min Lu
PLA Units: 66295, Baoding, 072750, China
Fan Liao
School of Biomedical Engineering, Third Military Medical University, Chongqing, 400038, China
Mi He & Yongjian Nian
Correspondence to Yongjian Nian.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Teng, S., Liao, F., He, M. et al. Integrated measures for rough sets based on general binary relations. SpringerPlus 5, 117 (2016). https://doi.org/10.1186/s40064-016-1670-2
Accepted: 05 January 2016
Rough set
Uncertainty measure
General binary relation
A Bayesian inference method for the analysis of transcriptional regulatory networks in metagenomic data
Elizabeth T. Hobbs†1,
Talmo Pereira†1,
Patrick K. O'Neill1 and
Ivan Erill1
Algorithms for Molecular Biology 2016, 11:19
Received: 26 January 2016
Published: 8 July 2016
Metagenomics enables the analysis of bacterial population composition and the study of emergent population features, such as shared metabolic pathways. Recently, we have shown that metagenomics datasets can be leveraged to characterize population-wide transcriptional regulatory networks, or meta-regulons, providing insights into how bacterial populations respond collectively to specific triggers. Here we formalize a Bayesian inference framework to analyze the composition of transcriptional regulatory networks in metagenomes by determining the probability of regulation of orthologous gene sequences. We assess the performance of this approach on synthetic datasets and we validate it by analyzing the copper-homeostasis network of Firmicutes species in the human gut microbiome.
Assessment on synthetic datasets shows that our method provides a robust and interpretable metric for assessing putative regulation by a transcription factor on sets of promoter sequences mapping to an orthologous gene cluster. The inference framework integrates the regulatory contribution of secondary sites and can discern false positives arising from multiple instances of a clonal sequence. Posterior probabilities for orthologous gene clusters decline sharply when fewer than 20 % of mapped promoters have binding sites, but we introduce a sensitivity adjustment procedure, originally devised to speed up computation, that enhances regulation assessment in such heterogeneous ortholog clusters. Analysis of the copper-homeostasis regulon governed by CsoR in the human gut microbiome Firmicutes reveals that CsoR controls itself and copper-translocating P-type ATPases, but not CopZ-type copper chaperones. Our analysis also indicates that CsoR frequently targets promoters with dual CsoR-binding sites, suggesting that it exploits higher-order binding conformations to fine-tune its activity.
We introduce and validate a method for the analysis of transcriptional regulatory networks from metagenomic data that enables inference of meta-regulons in a systematic and interpretable way. Validation of this method on the CsoR meta-regulon of gut microbiome Firmicutes illustrates the usefulness of the approach, revealing novel properties of the copper-homeostasis network in poorly characterized bacterial species and putting forward evidence of new mechanisms of DNA binding for this transcriptional regulator. Our approach will enable the comparative analysis of regulatory networks across metagenomes, yielding novel insights into the evolution of transcriptional regulatory networks.
Transcription factor
Regulatory network
Regulon
Bayesian inference
Copper homeostasis
Metal resistance
Stress response
CsoR
The advent of next-generation sequencing methodologies has enabled the study of bacterial populations through direct sampling of their genetic material [1]. Metagenomics techniques allow the detailed investigation of bacterial communities, their shared metabolic pathways and their interaction with environment and hosts [2–7], but they also pose many challenges regarding data standardization, processing and analysis [8, 9]. To date, most analyses of metagenomics datasets have focused on the phylogenetic composition of metagenomes and the relative contribution of different bacterial clades to metabolic pathways [3, 9–12]. However, metagenomics data also constitute a powerful resource for the direct analysis of transcriptional regulatory networks, or regulons, in natural environments. Such analyses can be used to characterize the contribution of non-culturable bacteria and mobile genetic elements to global regulatory networks, to analyze the changes in a population's regulatory program in response to interventions or habitat adaptation, and to quantify the relative importance of genetic elements in the makeup of known regulatory systems. Comparative research on multiple metagenomes has revealed that regulatory potential, measured as the local density of putative transcription factor (TF)-binding sites, correlates with processes involved in the response to stimuli present in specific environments [13, 14]. Recently, we provided proof of concept that TF-binding motifs can be effectively leveraged to analyze the genetic makeup of known transcriptional regulatory networks using metagenomic data, providing insights into the function of such networks in specific microbiomes [15]. In this work we formalize an inference method to analyze transcriptional regulatory networks in metagenomics datasets. The Bayesian inference approach we put forward provides a consistent framework for the study of regulatory networks using metagenomics datasets, facilitating the interpretation of results, standardizing the outcome of analyses to facilitate comparison and allowing users to selectively adjust sensitivity. We validate the novel inference framework on the Integrated Reference Catalog of the Human Gut Microbiome [16], analyzing the regulation of copper-homeostasis in gut microbiome Firmicutes through the recently characterized copper-responsive repressor CsoR [17]. Our results reveal an inferred copper-homeostasis network congruent with that reported in studies on model organisms, outlining the core elements of this regulatory system and highlighting specific features of the human gut CsoR meta-regulon.
Human gut metagenomics data was obtained from the Integrated Reference Catalog of the Human Gut Microbiome service (http://meta.genomics.cn/) [16]. The dataset contains 1267 gut metagenomes, totaling 6.4 Tb. To ensure consistency, here we restricted the analysis to 401 samples from healthy European individuals obtained in the MetaHIT project. This subset contains 5,133,816 predicted genes, with roughly half of them (2,579,737) functionally annotated with eggNOG/COG identifiers from the eggNOG v4.0 database [18]. The bacterial population in these 401 samples is dominated by two bacterial orders [Bacteroidales (58.51 %) and Clostridiales (32.11 %)] belonging to two major bacterial phyla [Bacteroidetes (59.29 %) and Firmicutes (34.97%)]. A CsoR-binding motif was compiled by combining experimentally-validated and computationally inferred Firmicutes CsoR-binding sites available in the CollecTF and RegPrecise databases [19, 20].
For each sample and scafftig therein, predicted open-reading frames (ORF) in the same strand and with a conservative intergenic distance (<50 bp) were considered to constitute an operon. Only operons with a complete lead ORF (containing a predicted translational start codon on their 5′ end) and at least 60 bp of sequence upstream of the translational start codon were considered for analysis. We also excluded from analysis any operons with no gene product mapping to a Firmicutes reference genome [15]. Taxonomical and eggNOG information for all ORFs in the remaining 752,783 operons was re-annotated by searching the eggNOG v4.0 database with DIAMOND [21]. The available upstream region (up to 300 bp) for these operons was scored on both strands with the position-specific scoring matrix (PSSM) derived from the compiled CsoR-binding motif using a Laplacian pseudocount and equiprobable background base frequencies [22]. For every sequence position, the scores from both strands were combined following the soft-max function (Additional file 1):
$$PSSM(S_{i} ) = \log_{2} \left( {2^{{PSSM(S_{i}^{f} )}} + 2^{{PSSM(S_{i}^{r} )}} } \right)$$
where \( PSSM(S_{i} ) \) denotes the combined PSSM score of a site at position i, and \( PSSM(S_{i}^{f} ) \) and \( PSSM(S_{i}^{r} ) \) denote the score of the site at position i in the forward and reverse strands, respectively.
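A sketch of this scoring step is given below. It builds a PSSM from a set of aligned sites with a Laplacian pseudocount and an equiprobable background, scores every position of a promoter on both strands, and combines the two strand scores with the soft-max formula above. The site collection shown is a toy example rather than the compiled CsoR motif, and the code is illustrative rather than the pipeline actually used.

```python
# A minimal sketch (not the authors' pipeline) of PSSM construction with a
# Laplacian pseudocount and equiprobable background, followed by scoring a
# promoter on both strands and combining the scores with the soft-max formula.

import numpy as np

BASES = "ACGT"
COMP = str.maketrans("ACGT", "TGCA")

def build_pssm(sites):
    counts = np.ones((len(sites[0]), 4))          # Laplacian pseudocount
    for site in sites:
        for j, b in enumerate(site):
            counts[j, BASES.index(b)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.25)                  # equiprobable background

def score(pssm, seq):
    return sum(pssm[j, BASES.index(b)] for j, b in enumerate(seq))

def combined_scores(pssm, promoter):
    """Soft-max-combined PSSM score for every site position in the promoter."""
    w = pssm.shape[0]
    out = []
    for i in range(len(promoter) - w + 1):
        window = promoter[i:i + w]
        f = score(pssm, window)
        r = score(pssm, window.translate(COMP)[::-1])  # same window, reverse strand
        out.append(np.log2(2.0 ** f + 2.0 ** r))
    return np.array(out)

if __name__ == "__main__":
    sites = ["TACCC", "TACCA", "TACCC", "AACCC"]   # toy aligned binding sites
    pssm = build_pssm(sites)
    print(combined_scores(pssm, "GGTACCCAGTTTACCCA"))
```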
Inference method
For a given eggNOG/COG functional identifier, we consider the set of promoters (D) from all operons containing at least one gene mapping to that eggNOG/COG. We define two theoretical distributions for the set of positional PSSM scores in promoters associated with a particular eggNOG/COG identifier. If the eggNOG/COG is not regulated by the TF, we expect that the promoters mapping to it display a background distribution of scores (B), which we can approximate by a normal distribution parametrized by the statistics of the set of all promoters in the metagenome (G):
$$B\sim N(\mu_{g} ,\sigma_{g}^{2} )$$
Conversely, for an eggNOG/COG regulated by the TF, the distribution of PSSM scores (R) in promoters should be a mixture of the background distribution and the distribution of scores in functional sites. Again, we can approximate the distribution of scores in functional sites with a normal distribution parametrized by the statistics of the known sites belonging to the TF-binding motif (M).
$$R\sim \alpha N(\mu_{m} ,\sigma_{m}^{2} ) + (1 - \alpha )N(\mu_{g} ,\sigma_{g}^{2} )$$
The mixing parameter α corresponds to the probability of observing a functional binding site in a regulated promoter, which can be estimated from known instances of TF-binding sites in their genomic context. For CsoR, we expect on average one binding site in a regulated promoter of length 300 bp, so α is defined to be 1/300 [23, 24].
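The two per-position score densities can be written down directly, as in the short sketch below; the numerical values of the motif and background parameters are placeholders, since in practice they are estimated from the compiled CsoR motif and from the genome-wide score statistics.

```python
# A short sketch of the per-position score densities: the background model
# B ~ N(mu_g, sigma_g^2) and the regulated mixture R. Parameter values are
# placeholders for the motif- and genome-derived statistics.

from scipy.stats import norm

ALPHA = 1.0 / 300.0          # expected density of functional sites per position
MU_G, SIGMA_G = -18.0, 7.0   # hypothetical genome-wide (background) statistics
MU_M, SIGMA_M = 19.0, 2.5    # hypothetical motif (functional site) statistics

def background_density(s):
    return norm.pdf(s, MU_G, SIGMA_G)

def regulated_density(s):
    return ALPHA * norm.pdf(s, MU_M, SIGMA_M) + (1 - ALPHA) * norm.pdf(s, MU_G, SIGMA_G)

if __name__ == "__main__":
    for s in (-18.0, 10.0, 19.0):
        # per-position likelihood ratio of the regulated versus background model
        print(s, regulated_density(s) / background_density(s))
```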
Given a promoter \( D_{i} \) from the set of promoters (D) mapping to a particular eggNOG/COG identifier, we seek to obtain the probability that the eggNOG/COG is regulated by the TF. Formally, we seek the posterior probability of the mixture distribution of scores (R) given the scores \( s_{j} \) observed in the promoter \( D_{i} \) mapping to the eggNOG/COG:
$$P(R|D_{i} ) = \frac{{P(D_{i} |R)P(R)}}{{P(D_{i} )}}$$
After applying the law of total probability, we can express this more conveniently in a likelihood ratio form:
$$P(R|D_{i} ) = \frac{{P(D_{i} |R)P(R)}}{{P(D_{i} |R)P(R) + P(D_{i} |B)P(B)}} = \frac{1}{{1 + \frac{{P(D_{i} |B)P(B)}}{{P(D_{i} |R)P(R)}}}}$$
The likelihood functions \( P(D_{i} |R) \) and \( P(D_{i} |B) \) can be estimated for a given score \( s_{j} \) using the density functions of the R and B distributions defined above. If we assume approximate independence among the scores at different positions, we obtain:
$$P(D_{i} |R) = \prod\limits_{{s_{j} \in D_{i} }} {L\left( {s_{j} |\alpha N(\mu_{m} ,\sigma_{m}^{2} ) + (1 - \alpha )N(\mu_{g} ,\sigma_{g}^{2} )} \right)}$$
$$P(D_{i} |B) = \prod\limits_{{s_{j} \in D_{i} }} {L\left( {s_{j} |N(\mu_{g} ,\sigma_{g}^{2} )} \right)}$$
The priors P(R) and P(B) can be inferred from genomic data. P(R) and P(B) can be approximated by the fraction of annotated operons in a genome that are known and not known, respectively, to be regulated by the TF. Using B. subtilis as a reference genome for CsoR, we obtain P(R) = 3/1811 and P(B) = 1 − P(R).
The contributions of all promoters \( D_{i} \) mapping to a particular eggNOG/COG can be assumed to be independent. Therefore, we obtain:
$$P(R|D) = \frac{1}{{1 + \left( {\prod\nolimits_{{D_{i} \in D}} {\frac{{P(D_{i} |B)}}{{P(D_{i} |R)}}} } \right)\frac{P(B)}{P(R)}}}$$
where we can naturally assign a likelihood ratio product of 1 to any eggNOG/COG that presents no mapped promoters in the samples under analysis.
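A compact sketch of Eqs. 5-8 is shown below. Likelihoods are accumulated in log space for numerical stability, and the background, motif and prior parameters are placeholders for the values estimated from the metagenome and the reference genome.

```python
# A minimal sketch of Eqs. 5-8: the posterior probability of regulation for an
# eggNOG/COG given the per-position scores of the promoters mapping to it.
# All model parameters below are placeholders (hypothetical values).

import numpy as np
from scipy.stats import norm

ALPHA = 1.0 / 300.0
MU_G, SIGMA_G = -18.0, 7.0       # background score statistics (hypothetical)
MU_M, SIGMA_M = 19.0, 2.5        # motif score statistics (hypothetical)
PRIOR_R = 3.0 / 1811.0           # e.g. regulated operons in a reference genome
PRIOR_B = 1.0 - PRIOR_R

def log_likelihood_ratio(scores):
    """log[P(D_i|B)/P(D_i|R)] for one promoter, assuming positional independence."""
    scores = np.asarray(scores, dtype=float)
    log_b = norm.logpdf(scores, MU_G, SIGMA_G)
    log_r = np.log(ALPHA * norm.pdf(scores, MU_M, SIGMA_M)
                   + (1.0 - ALPHA) * norm.pdf(scores, MU_G, SIGMA_G))
    return np.sum(log_b - log_r)

def posterior_regulation(promoter_score_lists):
    """P(R|D) for an eggNOG/COG from the score lists of its mapped promoters."""
    total = sum(log_likelihood_ratio(s) for s in promoter_score_lists)
    return 1.0 / (1.0 + np.exp(total) * PRIOR_B / PRIOR_R)

if __name__ == "__main__":
    background_like = list(np.random.normal(MU_G, SIGMA_G, 296))
    with_site = background_like[:-1] + [20.0]        # one high-scoring position
    print(posterior_regulation([with_site, with_site]))   # close to 1
    print(posterior_regulation([background_like]))        # close to 0
```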
Sensitivity adjustment and determination of putatively regulated eggNOG/COGs
The large size of metagenomics datasets poses challenges for the efficient computation of the posterior probabilities outlined above. It is known that a large fraction of the eggNOG/COG identifiers will not be regulated by the TF. The computation may therefore be simplified by defining a score threshold to exclude operons with promoters that show no evidence of regulation [15]. This strategy has the added benefit of compensating for heterogeneity in eggNOG/COG clustering, which may assign distant orthologs to the same eggNOG/COG identifier, potentially diluting the contribution of a regulated ortholog to the eggNOG/COG posterior probability.
Formally, we consider the subset of promoters \( D^{*} \subset D \) mapping to a particular eggNOG/COG that have at least one score above a predefined threshold θ. That is, \( D_{i} \in D^{*} \) if \( \max_{{s_{j} \in D_{i} }} s_{j} \ge \theta \). It follows that we should adjust the score likelihoods of Eqs. 6 and 7 to take into account the fraction of probability mass assigned to data that will not be observed in the reduced promoter set \( D^{*} \). The probability of observing a promoter \( D_{i} \) with no positions \( p_{j} \) scoring above the threshold θ under the background (B) and regulated (R) models is given by the cumulative distribution function (Φ) of each model:
$$U_{B} = \prod\limits_{{p_{j} \in D_{i} }} {\left( {\Phi (\theta ,\mu_{g} ,\sigma_{g}^{2} )} \right)}$$
$$U_{R} = \prod\limits_{{p_{j} \in D_{i} }} {\left( {\alpha \Phi (\theta ,\mu_{m} ,\sigma_{m}^{2} ) + (1 - \alpha )\Phi (\theta ,\mu_{g} ,\sigma_{g}^{2} )} \right)}$$
Hence, the probability of observing a promoter with at least one score above the threshold θ under the background (B) and regulated (R) models is given by \( (1 - U_{B} ) \) and \( (1 - U_{R} ) \), respectively. We can use these probabilities to normalize the likelihoods as follows:
$$P(D_{i} |R) = \frac{{\prod\nolimits_{{s_{j} \in D_{i} }} {L\left( {s_{j} |\alpha N(\mu_{m} ,\sigma_{m}^{2} ) + (1 - \alpha )N(\mu_{g} ,\sigma_{g}^{2} )} \right)} }}{{(1 - U_{R} )}}$$
$$P(D_{i} |B) = \frac{{\prod\nolimits_{{s_{j} \in D_{i} }} {L\left( {s_{j} |N(\mu_{g} ,\sigma_{g}^{2} )} \right)} }}{{(1 - U_{B} )}}$$
Similarly, the priors P(R) and P(B) must be renormalized by multiplying the observed number of regulated and non-regulated operons in a reference genome by \( (1 - U_{R} ) \) and \( (1 - U_{B} ) \), respectively, in order to account for the fact that thresholding alters the base rate at which regulated promoters are observed.
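The sensitivity adjustment can be sketched as follows: the probability that a promoter shows no position at or above θ is obtained from the normal cumulative distribution function of each model, and the priors are rescaled by the corresponding survival probabilities. Parameter values are again placeholders rather than the actual motif and genome statistics.

```python
# A minimal sketch of the sensitivity adjustment: U_B and U_R for a promoter of
# a given length, and the corresponding renormalization of the priors.
# All parameters are placeholders (hypothetical values).

from scipy.stats import norm

ALPHA = 1.0 / 300.0
MU_G, SIGMA_G = -18.0, 7.0
MU_M, SIGMA_M = 19.0, 2.5

def below_threshold_probs(theta, length):
    """U_B and U_R: probability that no position in a promoter scores >= theta."""
    u_b = norm.cdf(theta, MU_G, SIGMA_G) ** length
    u_r = (ALPHA * norm.cdf(theta, MU_M, SIGMA_M)
           + (1.0 - ALPHA) * norm.cdf(theta, MU_G, SIGMA_G)) ** length
    return u_b, u_r

def adjusted_priors(theta, length, n_regulated=3, n_operons=1811):
    """Renormalized P(R), P(B) after discarding promoters with no score >= theta."""
    u_b, u_r = below_threshold_probs(theta, length)
    reg = n_regulated * (1.0 - u_r)               # regulated operons expected to survive
    non = (n_operons - n_regulated) * (1.0 - u_b)
    return reg / (reg + non), non / (reg + non)

if __name__ == "__main__":
    theta = MU_M - 6 * SIGMA_M                    # e.g. six motif SDs below the motif mean
    print(below_threshold_probs(theta, 300))
    print(adjusted_priors(theta, 300))
```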
The inference method outlined above assigns a posterior probability value P(R|D) to all eggNOG/COG identifiers present in the metagenome. Ultimately, however, we wish to extract a set of putatively regulated eggNOG/COGs for further analysis. This requires discretization of the list of posterior probabilities. Formally, given a list of eggNOG/COGs S with posterior probabilities \(\vec{p}\), we wish to find a sublist \( S^{*} \) with posterior probabilities \(\vec{p}^{*}\), so that the mean probability of regulation for an eggNOG/COG chosen uniformly at random from \( S^{*} \) is at least (1 − φ). To define \( S^{*} \), let \(\vec{p}\) be sorted in decreasing order and S be sorted accordingly. Then let n be the greatest integer such that:
$$\frac{1}{n + 1}\sum\limits_{i = 0}^{n} {p_{i} } \ge (1 - \phi )$$
and set \( S^{*} = \{ S_{0} , \ldots ,S_{n} \} \). \( S^{*} \) is therefore the largest sublist of S having an average posterior probability of at least (1 − φ).
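The selection of \( S^{*} \) can be implemented directly, as in the following sketch; the eggNOG/COG identifiers and posterior values in the example are hypothetical.

```python
# A minimal sketch of the selection of putatively regulated eggNOG/COGs:
# sort the posterior probabilities in decreasing order and keep the largest
# prefix whose average posterior is at least (1 - phi).

def select_regulated(posteriors, phi=0.1):
    """posteriors: {eggNOG/COG id: P(R|D)}. Returns the sublist S*."""
    ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
    best = []
    total = 0.0
    for k, (cog, p) in enumerate(ranked, start=1):
        total += p
        if total / k >= 1.0 - phi:
            best = [name for name, _ in ranked[:k]]
    return best

if __name__ == "__main__":
    post = {"COG1937": 0.999, "COG2217": 0.97, "NOG72602": 0.91, "COG2608": 0.02}
    print(select_regulated(post, phi=0.1))    # ['COG1937', 'COG2217', 'NOG72602']
```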
Permutation test
Several alternative methods can be proposed to determine putatively regulated eggNOG/COGs in a metagenomic dataset. To benchmark the Bayesian framework introduced above against a frequentist approach, we define a permutation test based on the likelihood function \( P(D_{i} |R) \) of Eq. 6. Given the original TF-binding motif defined by the collection of TF-binding sites, we generate F random symmetrical permutations of the TF-binding motif and parametrize their score distributions under the background \( (B_{f} ) \) and regulated \( (R_{f} ) \) models following Eqs. 2 and 3. Hence, for each permuted model f, we can compute the likelihood of the score distribution observed in a given promoter \( (D_{i} ) \) as:
$$P(D_{i} |R_{f} ) = \prod\limits_{{s_{j} \in D_{i} }} {L\left( {s_{j} |\alpha N(\mu_{{m^{f} }} ,\sigma_{{m^{f} }}^{2} ) + (1 - \alpha )N(\mu_{{g^{f} }} ,\sigma_{{g^{f} }}^{2} )} \right)}$$
Under the approximation of independence between promoter sequences used in Eq. 8, we can define \( P(D|R_{f} ) \) for an eggNOG/COG as follows:
$$P(D|R_{f} ) = \prod\limits_{{D_{i} \in D}} {P(D_{i} |R_{f} )}$$
For each eggNOG/COG, we can then empirically approximate the p-value as the probability of obtaining a score distribution at least as extreme as the one observed in the promoters mapping to the eggNOG/COG under the null hypothesis that the distribution of scores is due to chance:
$$p = P\left( {P(D|R_{f} ) \ge P(D|R)} \right) \approx \frac{{1 + \sum\nolimits_{f = 1}^{F} {I\left( {P(D|R_{f} ) \ge P(D|R)} \right)} }}{F + 1}$$
where I(·) is the indicator function.
The permutation test therefore defines an alternative statistic to assess putative regulation of an eggNOG/COG based on the distribution of scores in the promoters mapping to it.
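A sketch of this baseline is given below. Each permuted model f is represented here only by the mixture parameters it induces, which is one literal reading of the two equations above, and the p-value uses the +1 correction of the last equation; the parameter values in the example are placeholders.

```python
# A minimal sketch of the permutation-test baseline. Each permuted model f is
# represented only by its mixture parameters (mu_m, sigma_m, mu_g, sigma_g);
# the p-value is the corrected fraction of permuted models whose eggNOG/COG
# likelihood is at least as large as that of the original motif.

import numpy as np
from scipy.stats import norm

ALPHA = 1.0 / 300.0

def log_likelihood(score_lists, mu_m, sigma_m, mu_g, sigma_g):
    """log P(D|R_f): sum over promoters and positions of the mixture density."""
    total = 0.0
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        total += np.sum(np.log(ALPHA * norm.pdf(s, mu_m, sigma_m)
                               + (1.0 - ALPHA) * norm.pdf(s, mu_g, sigma_g)))
    return total

def permutation_p_value(real_params, permuted_params, score_lists):
    """Empirical p-value with the +1 correction."""
    observed = log_likelihood(score_lists, *real_params)
    exceed = sum(log_likelihood(score_lists, *p) >= observed for p in permuted_params)
    return (1 + exceed) / (len(permuted_params) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = [list(rng.normal(-18.0, 7.0, 296)) + [20.0]]   # one promoter, one strong site
    real = (19.0, 2.5, -18.0, 7.0)
    permuted = [(rng.uniform(5, 15), 2.5, -18.0, 7.0) for _ in range(99)]
    print(permutation_p_value(real, permuted, scores))
```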
Validation of the Bayesian inference pipeline on synthetic datasets
To assess the behavior of the proposed inference framework, we evaluated its performance on synthetic datasets consisting of randomly generated sequence backgrounds with inserted sites sampled from the CsoR motif. Figure 1a shows the posterior probability P(R|D) of individual sequences (Eq. 5) as a function of the score of the inserted CsoR sites. The observed upward deviations from the baseline sigmoidal shape illustrate the ability of the inference method to integrate contributions from secondary sites, which occur at a low frequency in randomly generated sequences. Figure 1b compares the behavior of the posterior probability for an eggNOG/COG (Eq. 8) between a simulated eggNOG/COG in which sequences contain sites randomly sampled from the CsoR motif distribution and an eggNOG/COG in which the sequences containing sites are clonal. Multiple instances of a clonal sequence containing a putative TF-binding site are often found in metagenome samples. On average, the method assigns lower posterior probabilities to clonal sequences, hence decreasing the likelihood of designating the corresponding eggNOG/COG as putatively regulated.
Fig. 1 a Posterior probability of a 300 bp-long randomly generated sequence (40 % G + C) as a function of the score of a sampled CsoR site inserted at the first position of the sequence. The plot shows the results of 10,000 independent replicates. b Average posterior probability of a simulated eggNOG/COG. The eggNOG/COG contains 100 (300 bp-long, 40 % G + C) sequences, 30 of which contain inserted sites. Sites were either sampled randomly from the CsoR motif and inserted in the first 30 sequences (multiple sites), or a single site was sampled from the motif and inserted in the first 30 sequences (single site). The plot shows the results of 10 independent experiments for each case. Vertical bars denote the standard deviation
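For reference, a synthetic test sequence of the kind used here can be generated as in the sketch below: a random 300 bp background at 40 % G + C with a single site sampled column-wise from a position frequency matrix. The matrix shown is a stand-in, not the actual CsoR motif, and the code is illustrative only.

```python
# A minimal sketch of building a synthetic test sequence: a random 300 bp
# background at 40 % G+C with a site sampled column-wise from a (hypothetical)
# position frequency matrix inserted at a chosen position.

import numpy as np

BASES = np.array(list("ACGT"))
BACKGROUND = [0.3, 0.2, 0.2, 0.3]             # 40 % G+C

TOY_MOTIF = np.array([                        # hypothetical position frequencies
    [0.05, 0.05, 0.05, 0.85],
    [0.85, 0.05, 0.05, 0.05],
    [0.05, 0.85, 0.05, 0.05],
    [0.05, 0.85, 0.05, 0.05],
    [0.05, 0.05, 0.05, 0.85],
])

def synthetic_promoter(rng, length=300, insert_at=0, motif=TOY_MOTIF):
    """Random background sequence with one sampled site inserted at insert_at."""
    seq = rng.choice(BASES, size=length, p=BACKGROUND)
    site = [rng.choice(BASES, p=row) for row in motif]
    seq[insert_at:insert_at + len(site)] = site
    return "".join(seq)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(synthetic_promoter(rng)[:40])
```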
Figure 2a documents the behavior of the eggNOG/COG posterior probability (Eq. 8) as a function of the number of sequences with functional sites mapping to the eggNOG/COG. The results show that when the proportion of sequences containing functional sites among those mapping to an eggNOG/COG falls below 20 %, the posterior probability decreases sharply. Figure 2b illustrates the effect of introducing the sensitivity adjustment outlined in Eqs. 11 and 12. In addition to speeding up the computation, the use of a score threshold θ to exclude sequences with no evidence of regulation makes it possible to obtain high posterior probability values for eggNOG/COGs with less than 20 % sequences containing functional sites. This allows detecting putative regulation in heterogeneous eggNOG/COGs where the regulated ortholog is a minority contributor. In Fig. 3, the performance of the Bayesian framework is benchmarked against a permutation test with F = 100 on a synthetic dataset of 10,000 COGs. As it can be readily observed, the posterior probability generated by the Bayesian framework yields a significantly more robust predictor of eggNOG/COG regulation [Area under the curve (AUC): 0.99] than a conventional permutation test p-value (AUC: 0.88).
Fig. 2 a Posterior probability of simulated eggNOG/COGs containing 100 (300 bp-long, 40 % G + C) randomly generated sequences, with 6, 9, 12, 15, 18 and 21 of them containing inserted sites sampled randomly from the CsoR motif. The plot shows the distribution of posterior probability for the eggNOG/COGs in 1000 simulated replicates as a function of the number of inserted sites. Vertical bars denote the standard deviation, with a horizontal bar indicating the median. b Sensitivity adjusted posterior probability of simulated eggNOG/COGs containing 100 (300 bp-long, 40 % G + C) randomly generated sequences, 12 of which contain inserted sites sampled randomly from the CsoR motif. The plot shows the distribution of posterior probability for the eggNOG/COGs adjusted for sensitivity in 1000 simulated replicates as a function of the sensitivity threshold θ, expressed as the number of standard deviations below the motif mean score. Vertical bars denote the standard deviation, with a horizontal bar indicating the median. The legends on top indicate the average number of sequences selected for analysis (S#) and the adjusted prior for regulation P(R) for each sensitivity threshold
Fig. 3 Receiver-operating characteristic (ROC) curve using the Bayesian posterior probability (Eq. 8) and the permutation test p-value (Eq. 15) as predictors of eggNOG/COG regulation. The ROC was generated on a synthetic dataset of 10,000 eggNOG/COGs, each with 100 promoter sequences mapping to it. To compute p-values, 100 permuted models were generated. The synthetic dataset contained 100 "regulated" eggNOG/COGs. To simulate real conditions, promoters mapping to "regulated" eggNOG/COGs were assigned sites following the CsoR motif based on a geometric distribution with an expectation of 0.33 sites per promoter
Analysis of the copper-homeostasis CsoR regulon in the human gut microbiome
To evaluate the proposed inference method in a real life setting, we analyzed the copper-homeostasis regulon controlled by CsoR in the human gut microbiome. Together with CopY and CueR, CsoR-family members are well-characterized copper-responsive regulators that detect and modulate the abundance of copper ions in the cell [25]. CsoR provides a suitable target for analysis, because it is presumed to be the sole regulator of copper homeostasis in Clostridiales, the second most abundant bacterial order in the IGC MetaHIT project dataset, while being noticeably absent in the most abundant order (Bacteroidales) [17, 26]. We analyzed the CsoR regulon by running the Bayesian inference pipeline on operons containing genes mapping to the Firmicutes. Computation was sped up by adjusting sensitivity with θ = 6.65 (6 standard deviations below the CsoR motif mean). This substantially decreased the number of processed promoters while increasing the prior for regulation P(R) only to 0.01 (Fig. 2). We established a mean probability of regulation of 0.9 for the set of putatively regulated eggNOG/COGs and required that they had at least 5 promoters mapping to them at the established θ value.
The results shown in Table 1 provide an outline of the Firmicutes CsoR meta-regulon of the human gut microbiome. The inferred CsoR meta-regulon is in broad agreement with the reported CsoR regulons in Firmicutes [23, 24, 27, 28], but displays also several characteristic features that have not been previously reported. The inferred human gut Firmicutes CsoR meta-regulon comprises six distinct eggNOG/COG identifiers with annotated function, but is primarily defined by two COG identifiers that encompass 96 % of the putatively CsoR-regulated promoters (Additional files 2, 3). COG1937 maps to the CsoR repressor, and all the putatively regulated complete gene sequences mapping to this COG contain the conserved C-H-C motif (Additional file 4). This indicates that these COG1937 instances are functional copper-responsive regulators and suggests that the reported self-regulation of CsoR is a common trait of human gut Firmicutes species [17, 23]. COG2217 maps to the copper-translocating P-type ATPases (CopA). These proteins harbor heavy metal-associated (HMA; IPR006121), haloacid dehydrogenase-like (HAD-like; IPR023214) and P-type ATPase A (IPR008250) domains and are canonical members of the Firmicutes CsoR regulon [25]. The remaining eggNOG/COGs map to proteins containing a HMA (IPR006121) domain [NOG218972, NOG81268], an unknown function (DUF2318; IPR018758) membrane domain [NOG72602] or HMA (IPR006121), DsbD_2 (IPR003834) and DUF2318 (IPR018758) transmembrane domains [COG2836]. Proteins mapping to NOG218972 and NOG81268 are often annotated as copper chaperones, whereas those mapping to COG2836 are mainly annotated as heavy metal transport/detoxification proteins, and those mapping to NOG72602 are simply annotated as membrane proteins. Analysis of site score distribution for the eggNOG/COGs reported in Table 1 indicates the presence of a single putative false positive. The sequences mapping to NOG109008 belong to clonal instances of a glycoside hydrolase family 18 protein-coding sequence harboring an average (19.42 score) putative CsoR-binding site in its promoter region.
Table 1 Inferred human gut Firmicutes CsoR meta-regulon

Column headings: eggNOG/COG; eggNOG 4.0 annotation; Mapped operons; Operons for analysis; P(R|D); Operon with COG1937; Dual sites

Row labels and annotations: COG1937 (Transcriptional repressor); COG2217 (p-type ATPase; IPR006121, IPR023214, IPR008250); NOG218972 (Heavy-metal-associated domain); NOG72602 (Predicted membrane protein); Membrane protein
Operons for analysis denotes the total number of operons mapping to each eggNOG/COG after sensitivity adjustment. P(R|D) designates the posterior probability of regulation for the eggNOG/COG. The Operon with COG1937 and Operon with COG2217 columns indicate the number of genes mapping to an eggNOG/COG that were assigned to an operon containing also COG1937 or COG2217, respectively. Dual sites denotes the number of sequences mapping to an eggNOG/COG harboring two high-confidence sites, out of the total number of sequences mapping to that eggNOG/COG with high-confidence sites
Analysis of the dominating eggNOG/COG identifiers in the human gut Firmicutes CsoR meta-regulon (COG1937 and COG2217) indicates that the copper-responsive regulator and copper-translocating P-type ATPase genes mapping to these regulated COGs are found in an operon configuration in a relatively small fraction of instances (Table 1; Additional file 5). Protein-coding genes mapping to COG2217 are in some cases associated with those coding for chaperone-like proteins (NOG218972, COG2836 and NOG81268), but there is only one instance of a three-gene operon mimicking the CsoR-CopA-CopZ organization described in Listeria monocytogenes [27]. The promoter region of protein-coding sequences mapping to COG1937 and COG2217 reveals that around half of them contain high-confidence CsoR-binding sites (sites with score larger than two standard deviations below the mean for the CsoR motif). On both sequence sets, the distribution of high-confidence CsoR-binding sites peaks around 90 and 65 bp upstream of the predicted translation start site (TLS) (Fig. 4). Interestingly, almost half of these promoter sequences contain two high-confidence sites separated by 26, 36–38 or 51 bp (Additional file 6).
Fig. 4 Distribution of site scores and high-confidence sites (sites with scores larger than 16.17) in the promoter region of putatively regulated operons mapping to COG2217 and COG1937
A Bayesian inference pipeline for metagenomics analysis of regulatory networks
The increasing availability of large metagenomics datasets prompts and enables the development of algorithms to interrogate novel aspects of these heterogeneous sequence repositories. Here we formalize and validate a Bayesian inference framework to analyze the composition of transcriptional regulatory networks in metagenomes. Comparative genomics analyses have long established that the study of bacterial regulons benefits significantly from the availability of genomic data. Enrichment in TF-binding sites upstream of orthologous genes provides the means to curb the false positive rate of in silico methods for detecting these regulatory signals and to identify the key components of a regulatory network [29–32]. Leveraging the clusters of orthologous groups defined in the eggNOG database, here we define a conceptually similar approach to analyze bacterial regulons in metagenomic samples. We apply Bayesian inference to compute the probability that an eggNOG/COG is regulated by a TF with a known binding motif. To facilitate computation, the method assumes independence among the scores over a sequence and a normal distribution for site scores, which may be replaced by the exact distribution [33]. Beyond these assumptions, the method relies only on the availability of priors for site density (α) and operon regulation P(R), which can be estimated from reference genomes. The method also provides the means to speed up computation by restricting the set of promoter sequences to be analyzed in a principled manner.
Our results on synthetic datasets show that the method performs as expected, assigning higher posterior values to sequences containing better-scoring sites (Fig. 1a) and to eggNOG/COGs with a larger number of sequences containing putative sites mapping to them (Fig. 2a). These results also illustrate some interesting properties of the approach. The assumption of positional independency provides a simple yet effective method to integrate the contribution of multiple sites in a promoter sequence. This is an important component for the analysis of bacterial regulons, since many bacterial transcriptional regulators exploit cooperative binding between multiple sites to modulate their activity at specific promoters [34–37]. Another element to take into account in metagenomics analysis is the presence of multiple instances of a clonal sequence mapping to an eggNOG/COG. These sequences occur frequently in metagenomic datasets and may carry multiple instances of a putative TF-binding site. The explicit modeling of regulated promoters with a mixture distribution results in lower posterior probabilities for such sequence sets (Fig. 1b), minimizing their assessment as false positives. Sequence sets carrying instances of a site with average score, such as the sequences mapping to NOG109008 (Table 1), may still be assigned high posterior probabilities. Given enough sample size, such false positives can be addressed by the introduction of heuristics based on the variance of scores for high-confidence sites in sequences mapping to an eggNOG/COG.
The proposed approach also provides a method to adjust the sensitivity and speed of the analysis by removing sequences with no evidence of regulation. This method is formally integrated within the Bayesian inference framework by the introduction of a score threshold (θ) and the corresponding normalization of priors and likelihoods. In combination with taxonomic filtering (i.e. preserving only sequences mapping to the clade of interest), sensitivity adjustment allows users to focus their analysis on those sequences most likely to contribute relevant information on the regulatory system under analysis. Sensitivity adjustment may hence allow detecting evidence of regulation in eggNOG/COGs with a relatively small percentage of putatively regulated sequences (Fig. 2b). This may be advantageous when assessing regulation in large heterogeneous COGs, where only a small subset of the mapping genes are regulated orthologs, but the progressive refinement of orthologous groups in the eggNOG database will soon address such concerns. Moreover, sensitivity adjustment should be used with caution, since it alters the prior for regulation P(R) and can therefore complicate the interpretation of results (Fig. 2b). There is no well-established method to determine what constitutes an acceptable prior when reporting posterior probabilities. As a conservative rule of thumb, one may require that the magnitude of the prior (φ′) be of the same order as the complement of the average posterior probability to be reported (1 − φ). Nonetheless, the adjusted prior should always be clearly stated when reporting adjusted posterior probabilities to facilitate their assessment. As shown in Fig. 3, the Bayesian framework also performs better as a predictor of eggNOG/COG regulation than a more conventional approach based on permutation tests. This is primarily due to the influence of the Bayesian priors on the posterior probability computation, which greatly reduces the chances of generating false positives in non-regulated eggNOG/COGs. Furthermore, the ability to infer regulation without the need for permuted models decreases run-time and provides consistency across multiple runs.
Analysis of the human gut Firmicutes CsoR meta-regulon
The analysis of the human gut Firmicutes CsoR meta-regulon reported here provides a first glimpse at the genetic organization of this copper homeostasis regulon in its natural setting. The Firmicutes CsoR meta-regulon is dominated by two putatively regulated COGs that map to two major components of the canonical CsoR regulon (csoR and copA). These two COGs comprise more than 90% of the putatively CsoR-regulated promoters, suggesting that these two elements are the sole defining features of the CsoR regulon in the Firmicutes species that populate the human gut. The absence of eggNOG/COG identifiers mapping to the third canonical CsoR regulon member (copZ) is noteworthy, since the copZ gene codes for a copper chaperone that binds copper ions and transfers them to copper ATPases [26, 38]. Members of several putatively regulated eggNOG/COGs harboring an HMA domain (COG2836, NOG218972 and NOG81268; Table 1) appear to be distant orthologs of B. subtilis CopZ, and some might therefore function as copper chaperones. However, the COG associated with B. subtilis CopZ (COG2608) receives a very low posterior probability of regulation in our analysis (9.76 · 10^−15; Additional file 7). BLAST analysis with B. subtilis and Staphylococcus aureus CopZ against complete genomes reveals that only one (Clostridium) of the ten most abundant Clostridiales genera in the human gut microbiome encodes a CopZ homolog (Additional file 8). Furthermore, in reference genomes the Clostridium copZ homolog is not in the vicinity of copA, does not display a putative CsoR-binding site and appears to be associated with an ArsR-family transcriptional regulator, which may be capable of sensing copper [39]. Together, these data convincingly identify CsoR as a transcriptional regulator of copper homeostasis through a canonical CsoR-binding motif in the gut microbiome Firmicutes. Furthermore, they indicate that the CsoR meta-regulon comprises CsoR and a P-type ATPase (CopA), but not a CopZ-type chaperone, and that the contribution of other heavy-metal-associated domain proteins to CsoR-directed copper homeostasis is comparatively small [25]. The absence of copZ from bacterial genomes has been noted before [26, 38], and it has been suggested that the short length of this gene may hinder its detection [26]. Our analysis, however, indicates that, even when present, copZ is not regulated by CsoR in the gut microbiome Firmicutes.
Beyond identifying and quantifying the components of a transcriptional regulatory network, our results show that metagenomic analysis of bacterial regulons can also shed light on the wiring of the network and the regulatory mode of the transcription factor. In the species where it has been experimentally described, the CsoR regulon displays a notable variety of genetic arrangements, ranging from single csoR-copA-copZ and copZ-csoR-copA operons in L. monocytogenes and Thermus thermophilus, to independent regulation of csoR and copZA operons in B. subtilis, S. aureus or Streptomyces lividans [23, 24, 27, 28]. Our analysis indicates that CsoR regulation in human gut Firmicutes follows this broad pattern, with independent regulation of csoR and copA being the norm and a relatively small fraction of COG1937 and COG2217 instances associated in putative operons. Similarly, experimental reports of CsoR-regulated promoters have to date documented CsoR binding to individual binding sites located at distances ranging from −20 to −180 bp upstream of the predicted translational start site of regulated genes [17, 23, 24, 27]. In contrast, our analysis reveals that 44% of the sequences mapping to regulated COG1937 and COG2217 instances possess two high-scoring sites separated by three well-defined spacing classes (26, 36–38 and 56 bp; Table 1; Additional file 6). There are currently three available structures for CsoR [17, 28, 40], showing CsoR to form either homodimers (M. tuberculosis) or tetramers (S. lividans and T. thermophilus), based on a three α-helix bundle. However, in the absence of co-crystals and of a canonical DNA-binding fold, the exact mechanism by which CsoR recognizes DNA remains elusive [25, 28]. It has been proposed that CsoR tetramers bind each dyad of the CsoR-binding motif through extensive exposure of DNA to the α1–α2 face of the bundle [28]. In this model the α3 helices of each tetramer may interact and contribute to enhance DNA binding by stabilizing an octameric conformation of CsoR on DNA [41]. Crucially, the ability of α3 helices to interact could be restricted by copper binding, triggering de-repression. Such a model is compatible with the adoption of hexadecameric conformations through extended α3 contacts. In this light, the location of CsoR-binding sites relative to the TLS and the spacing distances observed for site pairs in our analysis are reminiscent of promoter architectures that leverage multiple sites to induce DNA bending [34, 35]. This suggests that higher-order conformations of DNA-bound CsoR may be exploited by gut microbiome Firmicutes and other species to fine-tune the cellular response to excess copper ions.
In this work we introduce and validate a method for the analysis of transcriptional regulatory networks from metagenomic data. By adopting a Bayesian inference framework, our method provides the means to infer regulatory networks from metagenomic data in a systematic and reproducible way, generating posterior probability values that facilitate the interpretation of results. The availability of robust methods for metagenomic regulon inference paves the way for the comparative analysis of regulatory networks across metagenomes, which has the potential to address fundamental questions about the evolution of bacterial regulatory networks. Validation of the method on the CsoR meta-regulon of gut microbiome Firmicutes provides convincing evidence that CsoR is a functional copper-responsive regulator of copper homeostasis in human gut. By virtue of the taxonomic composition of the human gut microbiome, our analysis also constitutes the first description of the CsoR-governed copper homeostasis regulon of a broad taxonomic group, the Clostridiales, encompassing several poorly characterized species of increasing clinical interest. Notable aspects of this putative regulatory network include the absence of CopZ-type copper chaperones and the likely use of dual CsoR-binding sites to fine-tune gene regulation.
Elizabeth T. Hobbs and Talmo Pereira contributed equally to this work and should be considered co-first authors
ATPase:
adenylpyrophosphatase
BLAST:
basic local alignment search tool
COG:
clusters of orthologous groups
CsoR:
copper-sensitive operon repressor
DUF:
domain of unknown function
eggNOG:
evolutionary genealogy of genes: non-supervised orthologous groups
HAD-like:
haloacid dehalogenase-like
HMA:
heavy metal-associated
TF:
transcription factor
TLS:
translation start site
IGC:
integrated non-redundant gene catalog
MetaHIT:
metagenomics of the human intestinal tract
ROC:
receiver-operating characteristic
AUC:
area under the curve
ETH and TP implemented the code for the computational analysis pipeline. TP gathered and standardized the metagenomic datasets. TP and IE designed the computational analysis pipeline. PKO and IE devised the Bayesian inference framework. ETH and IE benchmarked the pipeline, interpreted the results and drafted the manuscript. All authors read and approved the manuscript.
The authors wish to thank David Nicholson and Joseph Cornish for their contribution to earlier versions of the metagenomic analysis pipeline.
All code and data for this work are made openly available through the Erill Lab git repository on GitHub (https://github.com/ErillLab/CogsNormalizedPosteriorProbabilityThetas) [42].
This work was funded by the US National Science Foundation Division of Molecular and Cellular Biosciences award MCB-1158056, by the UMBC Office of Research through a Special Research Assistantship/Initiative Support (SRAIS) award and by the UMBC Office of Undergraduate Research through an Undergraduate Research Award (TP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Additional file 1. Appendix. Derivation of the soft-max scoring function.
Additional file 2. Set of promoters mapping to putatively regulated eggNOG/COGs after adjusting for sensitivity with θ = 6.65. The table reports the eggNOG/COG identifier, the MetaHIT IGC gene identifier, composed of sample and gene identifiers, the gene strand and its promoter region sequence, and the gene and protein sequences.
Additional file 3. Distribution of eggNOG/COG posterior probabilities as a function of the number of promoter sequences mapping to the eggNOG/COG after adjusting for sensitivity with θ = 6.65. The x-axis indicates eggNOG/COG rank number, sorted by decreasing posterior probability. Bubble size indicates the number of promoters mapping to a given eggNOG/COG.
Additional file 4. Sequence logo summarizing the multiple sequence alignment of putatively regulated protein sequences mapping to COG1937. Alignment was performed with CLUSTALW in profile alignment mode, using the structural information in the M. tuberculosis CsoR P9WP49 UniProtKB entry to define gap penalties. The C-H-C motif residues are denoted by red arrows.
Additional file 5. Putative operons mapping two or more putatively regulated eggNOG/COGs. Gene identifiers in the same row constitute putative operons. The table reports IGC gene identifiers, composed of sample and gene identifiers, the putative regulation and the strand on which the gene has been predicted.
Additional file 6. Distribution of distance between high-confidence sites (bp) for promoters with more than one high-confidence site.
Additional file 7. Posterior probability assigned to eggNOG/COG identifiers after sensitivity adjustment with θ = 6.65. The table lists the eggNOG/COG identifier, its eggNOG 4.0 annotation and functional category, the number of mapped promoters before and after sensitivity adjustment and the posterior probability for all eggNOG/COGs with at least 5 promoters mapping to them after sensitivity adjustment.
Additional file 8. Average abundance (%) of the 10 most abundant Clostridiales genera in the 401 MetaHIT samples analyzed in this work. The first column indicates the putative presence (+) or absence (−) of a CopZ homolog as determined through independent BLASTP searches using B. subtilis and S. aureus CopZ protein sequences with a cutoff e-value of 10^−15. Table compiled from data reported in Li et al. Nature Biotechnology 32, 834–841 (2014).
Department of Biological Sciences, University of Maryland Baltimore County (UMBC), 1000 Hilltop Circle, Baltimore, MD 21250, USA
Sleator RD, Shortall C, Hill C. Metagenomics. Lett Appl Microbiol. 2008;47:361–6.
Venter JC, Remington K, Heidelberg JF, Halpern AL, Rusch D, Eisen JA, Wu D, Paulsen I, Nelson KE, Nelson W, Fouts DE, Levy S, Knap AH, Lomas MW, Nealson K, White O, Peterson J, Hoffman J, Parsons R, Baden-Tillson H, Pfannkoch C, Rogers Y-H, Smith HO. Environmental genome shotgun sequencing of the Sargasso Sea. Science. 2004;304:66–74.
Tringe SG, von Mering C, Kobayashi A, Salamov AA, Chen K, Chang HW, Podar M, Short JM, Mathur EJ, Detter JC, Bork P, Hugenholtz P, Rubin EM. Comparative metagenomics of microbial communities. Science. 2005;308:554–7.
Ward AC, Bora N. Diversity and biogeography of marine actinobacteria. Curr Opin Microbiol. 2006;9:279–86.
Qin J, Li R, Raes J, Arumugam M, Burgdorf KS, Manichanh C, Nielsen T, Pons N, Levenez F, Yamada T, Mende DR, Li J, Xu J, Li S, Li D, Cao J, Wang B, Liang H, Zheng H, Xie Y, Tap J, Lepage P, Bertalan M, Batto JM, Hansen T, Le Paslier D, Linneberg A, Nielsen HB, Pelletier E, Renault P, et al. A human gut microbial gene catalogue established by metagenomic sequencing. Nature. 2010;464:59–65.
Hug LA, Beiko RG, Rowe AR, Richardson RE, Edwards EA. Comparative metagenomics of three Dehalococcoides-containing enrichment cultures: the role of the non-dechlorinating community. BMC Genom. 2012;13:327.
Segata N, Haake SK, Mannon P, Lemon KP, Waldron L, Gevers D, Huttenhower C, Izard J. Composition of the adult digestive tract bacterial microbiome based on seven mouth surfaces, tonsils, throat and stool samples. Genome Biol. 2012;13:R42.
Thomas T, Gilbert J, Meyer F. Metagenomics—a guide from sampling to data analysis. Microb Inf Exp. 2012;2:3.
De Filippo C, Ramazzotti M, Fontana P, Cavalieri D. Bioinformatic approaches for functional annotation and pathway inference in metagenomics data. Brief Bioinform. 2012;13:696–710.
Warnecke F, Luginbühl P, Ivanova N, Ghassemian M, Richardson TH, Stege JT, Cayouette M, McHardy AC, Djordjevic G, Aboushadi N, Sorek R, Tringe SG, Podar M, Martin HG, Kunin V, Dalevi D, Madejska J, Kirton E, Platt D, Szeto E, Salamov A, Barry K, Mikhailova N, Kyrpides NC, Matson EG, Ottesen EA, Zhang X, Hernández M, Murillo C, Acosta LG, et al. Metagenomic and functional analysis of hindgut microbiota of a wood-feeding higher termite. Nature. 2007;450:560–5.
Ley RE, Hamady M, Lozupone C, Turnbaugh PJ, Ramey RR, Bircher JS, Schlegel ML, Tucker TA, Schrenzel MD, Knight R, Gordon JI. Evolution of mammals and their gut microbes. Science. 2008;320:1647–51.
Zheng W, Zhang Z, Liu C, Qiao Y, Zhou D, Qu J, An H, Xiong M, Zhu Z, Zhao X. Metagenomic sequencing reveals altered metabolic pathways in the oral microbiota of sailors during a long sea voyage. Sci Rep. 2015;5:9131.
Tobar-Tosse F, Rodríguez AC, Vélez PE, Zambrano MM, Moreno PA. Exploration of noncoding sequences in metagenomes. PLoS One. 2013;8:e59488.
Fernandez L, Mercader JM, Planas-Fèlix M, Torrents D. Adaptation to environmental factors shapes the organization of regulatory regions in microbial communities. BMC Genom. 2014;15:877.
Cornish JP, Sanchez-Alberola N, O'Neill PK, O'Keefe R, Gheba J, Erill I. Characterization of the SOS meta-regulon in the human gut microbiome. Bioinformatics. 2014;30:1193–7.
Li J, Jia H, Cai X, Zhong H, Feng Q, Sunagawa S, Arumugam M, Kultima JR, Prifti E, Nielsen T, Juncker AS, Manichanh C, Chen B, Zhang W, Levenez F, Wang J, Xu X, Xiao L, Liang S, Zhang D, Zhang Z, Chen W, Zhao H, Al-Aama JY, Edris S, Yang H, Wang J, Hansen T, Nielsen HB, Brunak S, et al. An integrated catalog of reference genes in the human gut microbiome. Nat Biotechnol. 2014;32:834–41.
Liu T, Ramesh A, Ma Z, Ward SK, Zhang L, George GN, Talaat AM, Sacchettini JC, Giedroc DP. CsoR is a novel Mycobacterium tuberculosis copper-sensing transcriptional regulator. Nat Chem Biol. 2007;3:60–8.
Powell S, Forslund K, Szklarczyk D, Trachana K, Roth A, Huerta-Cepas J, Gabaldón T, Rattei T, Creevey C, Kuhn M, Jensen LJ, von Mering C, Bork P. eggNOG v4.0: nested orthology inference across 3686 organisms. Nucleic Acids Res. 2014;42(Database issue):D231–9.
Novichkov PS, Laikova ON, Novichkova ES, Gelfand, Arkin AP, Dubchak I, Rodionov DA. RegPrecise: a database of curated genomic inferences of transcriptional regulatory interactions in prokaryotes. Nucleic Acids Res. 2010;38(Database issue):D111–8.
Kiliç S, White ER, Sagitova DM, Cornish JP, Erill I. CollecTF: a database of experimentally validated transcription factor-binding sites in Bacteria. Nucleic Acids Res. 2014;42(Database issue):D156–60.
Buchfink B, Xie C, Huson DH. Fast and sensitive protein alignment using DIAMOND. Nat Methods. 2015;12:59–60.
Haverty PM, Hansen U, Weng Z. Computational inference of transcriptional regulatory networks from expression profiling and transcription factor binding site identification. Nucleic Acids Res. 2004;32:179–88.
Smaldone GT, Helmann JD. CsoR regulates the copper efflux operon copZA in Bacillus subtilis. Microbiology. 2007;153(Pt 12):4123–8.
Baker J, Sengupta M, Jayaswal RK, Morrissey JA. The Staphylococcus aureus CsoR regulates both chromosomal and plasmid-encoded copper resistance mechanisms. Environ Microbiol. 2011;13:2495–507.
Rademacher C, Masepohl B. Copper-responsive gene regulation in bacteria. Microbiology. 2012;158(Pt 10):2451–64.
Solioz M, Abicht HK, Mermod M, Mancini S. Response of gram-positive bacteria to copper stress. J Biol Inorg Chem. 2010;15:3–14.
Corbett D, Schuler S, Glenn S, Andrew PW, Cavet JS, Roberts IS. The combined actions of the copper-responsive repressor CsoR and copper-metallochaperone CopZ modulate CopA-mediated copper efflux in the intracellular pathogen Listeria monocytogenes. Mol Microbiol. 2011;81:457–72.
Dwarakanath S, Chaplin AK, Hough MA, Rigali S, Vijgenboom E, Worrall JAR. Response to copper stress in Streptomyces lividans extends beyond genes under direct control of a copper-sensitive operon repressor protein (CsoR). J Biol Chem. 2012;287:17833–47.
Tan K, Moreno-Hagelsieb G, Collado-Vides J, Stormo GD. A comparative genomics approach to prediction of new members of regulons. Genome Res. 2001;11:566–84.
Rodionov DA, Mironov AA, Gelfand MS. Conservation of the biotin regulon and the BirA regulatory signal in Eubacteria and Archaea. Genome Res. 2002;12:1507–16.
Sanchez-Alberola N, Campoy S, Barbe J, Erill I. Analysis of the SOS response of Vibrio and other bacteria with multiple chromosomes. BMC Genom. 2012;13:58.
GrootKormelink T, Koenders E, Hagemeijer Y, Overmars L, Siezen RJ, de Vos WM, Francke C. Comparative genome analysis of central nitrogen metabolism and its control by GlnR in the class Bacilli. BMC Genom. 2012;13:191.
Rahmann S, Müller T, Vingron M. On the power of profiles for transcription factor binding site detection. Stat Appl Genet Mol Biol. 2003;2:1544–6115. doi:10.2202/1544-6115.1032.
Maddocks SE, Oyston PCF. Structure and function of the LysR-type transcriptional regulator (LTTR) family proteins. Microbiology. 2008;154(Pt 12):3609–23.
Minchin SD, Busby SJ. Analysis of mechanisms of activation and repression at bacterial promoters. Methods. 2009;47:6–12.
Pryor EE Jr, Waligora EA, Xu B, Dellos-Nolan S, Wozniak DJ, Hollis T. The transcription factor AmrZ utilizes multiple DNA binding modes to recognize activator and repressor sequences of Pseudomonas aeruginosa virulence genes. PLoS Pathog. 2012;8:e1002648.
Cournac A, Plumbridge J. DNA looping in prokaryotes: experimental and theoretical approaches. J Bacteriol. 2013;195:1109–19.
Argüello JM, Raimunda D, Padilla-Benavides T. Mechanisms of copper homeostasis in bacteria. Front Cell Infect Microbiol. 2013;3:73.
Liu T, Chen X, Ma Z, Shokes J, Hemmingsen L, Scott RA, Giedroc DP. A Cu(I)-sensing ArsR family metal sensor protein with a relaxed metal selectivity profile. Biochemistry (Mosc). 2008;47:10564–75.
Sakamoto K, Agari Y, Agari K, Kuramitsu S, Shinkai A. Structural and functional characterization of the transcriptional repressor CsoR from Thermus thermophilus HB8. Microbiology. 2010;156(Pt 7):1993–2005.
Ma Z, Cowart DM, Scott RA, Giedroc DP. Molecular insights into the metal selectivity of the copper(I)-sensing repressor CsoR from Bacillus subtilis. Biochemistry (Mosc). 2009;48:3325–34.
Hobbs E, Erill I, Pereira T, O'Neill PK. Metagenome regulatory analysis: working release. Zenodo. 2016. doi:10.5281/zenodo.55783.
Automorphic Instanton Partition Functions on Calabi-Yau Threefolds
Mathematics, 2011, DOI: 10.1088/1742-6596/346/1/012016
Abstract: We survey recent results on quantum corrections to the hypermultiplet moduli space M in type IIA/B string theory on a compact Calabi-Yau threefold X, or, equivalently, the vector multiplet moduli space in type IIB/A on X x S^1. Our main focus lies on the problem of resumming the infinite series of D-brane and NS5-brane instantons, using the mathematical machinery of automorphic forms. We review the proposal that whenever the low-energy theory in D=3 exhibits an arithmetic "U-duality" symmetry G(Z) the total instanton partition function arises from a certain unitary automorphic representation of G, whose Fourier coefficients reproduce the BPS-degeneracies. For D=4, N=2 theories on R^3 x S^1 we argue that the relevant automorphic representation falls in the quaternionic discrete series of G, and that the partition function can be realized as a holomorphic section on the twistor space Z over M. We also offer some comments on the close relation with N=2 wall crossing formulae.
Enhanced Gauge Groups in N=4 Topological Amplitudes and Lorentzian Borcherds Algebras
Stefan Hohenegger, Daniel Persson
Mathematics, 2011, DOI: 10.1103/PhysRevD.84.106007
Abstract: We continue our study of algebraic properties of N=4 topological amplitudes in heterotic string theory compactified on T^2, initiated in arXiv:1102.1821. In this work we evaluate a particular one-loop amplitude for any enhanced gauge group h \subset e_8 + e_8, i.e. for arbitrary choice of Wilson line moduli. We show that a certain analytic part of the result has an infinite product representation, where the product is taken over the positive roots of a Lorentzian Kac-Moody algebra g^{++}. The latter is obtained through double extension of the complement g= (e_8 + e_8)/h. The infinite product is automorphic with respect to a finite index subgroup of the full T-duality group SO(2,18;Z) and, through the philosophy of Borcherds-Gritsenko-Nikulin, this defines the denominator formula of a generalized Kac-Moody algebra G(g^{++}), which is an 'automorphic correction' of g^{++}. We explicitly give the root multiplicities of G(g^{++}) for a number of examples.
Second Quantized Mathieu Moonshine
Daniel Persson, Roberto Volpato
Mathematics, 2013,
Abstract: We study the second quantized version of the twisted twining genera of generalized Mathieu moonshine, and prove that they give rise to Siegel modular forms with infinite product representations. Most of these forms are expected to have an interpretation as twisted partition functions counting 1/4 BPS dyons in type II superstring theory on K3\times T^2 or in heterotic CHL-models. We show that all these Siegel modular forms, independently of their possible physical interpretation, satisfy an "S-duality" transformation and a "wall-crossing formula". The latter reproduces all the eta-products of an older version of generalized Mathieu moonshine proposed by Mason in the '90s. Surprisingly, some of the Siegel modular forms we find coincide with the multiplicative (Borcherds) lifts of Jacobi forms in umbral moonshine.
The automorphic NS5-brane
Boris Pioline, Daniel Persson
Abstract: Understanding the implications of SL(2,Z) S-duality for the hypermultiplet moduli space of type II string theories has led to much progress recently in uncovering D-instanton contributions. In this work, we suggest that the extended duality group SL(3,Z), which includes both S-duality and Ehlers symmetry, may determine the contributions of D5 and NS5-branes. We support this claim by automorphizing the perturbative corrections to the "extended universal hypermultiplet", a five-dimensional universal SL(3,R)/SO(3) subspace which includes the string coupling, overall volume, Ramond zero-form and six-form and NS axion. Using the non-Abelian Fourier expansion of the Eisenstein series attached to the principal series of SL(3,R), first worked out by Vinogradov and Takhtajan 30 years ago, we extract the contributions of D(-1)-D5 and NS5-brane instantons, corresponding to the Abelian and non-Abelian coefficients, respectively. In particular, the contributions of k NS5-branes can be summarized into a vector of wave functions \Psi_{k,l}, l=0... k-1, as expected on general grounds. We also point out that for more general models with a symmetric moduli space G/K, the minimal theta series of G generates an infinite series of exponential corrections of the form required for "small" D(-1)-D1-D3-D5-NS5 instanton bound states. As a mathematical spin-off, we make contact with earlier results in the literature about the spherical vectors for the principal series of SL(3,R) and for minimal representations.
Fricke S-duality in CHL models
Abstract: We consider four dimensional CHL models with sixteen spacetime supersymmetries obtained from orbifolds of type IIA superstring on K3 x T^2 by a Z_N symmetry acting (possibly) non-geometrically on K3. We show that most of these models (in particular, for geometric symmetries) are self-dual under a weak-strong duality acting on the heterotic axio-dilaton modulus S by a "Fricke involution" S --> -1/NS. This is a novel symmetry of CHL models that lies outside of the standard SL(2,Z)-symmetry of the parent theory, heterotic strings on T^6. For self-dual models this implies that the lattice of purely electric charges is N-modular, i.e. isometric to its dual up to a rescaling of its quadratic form by N. We verify this prediction by determining the lattices of electric and magnetic charges in all relevant examples. We also calculate certain BPS-saturated couplings and verify that they are invariant under the Fricke S-duality. For CHL models that are not self-dual, the strong coupling limit is dual to type IIA compactified on T^6/Z_N, for some Z_N-symmetry preserving half of the spacetime supersymmetries.
Coxeter group structure of cosmological billiards on compact spatial manifolds
Marc Henneaux, Daniel Persson, Daniel H. Wesley
Physics, 2008, DOI: 10.1088/1126-6708/2008/09/052
Abstract: We present a systematic study of the cosmological billiard structures of Einstein-p-form systems in which all spatial directions are compactified on a manifold of nontrivial topology. This is achieved for all maximally oxidised theories associated with split real forms, for all possible compactifications as defined by the de Rham cohomology of the internal manifold. In each case, we study the Coxeter group that controls the dynamics for energy scales below the Planck scale as well as the relevant billiard region. We compare and contrast them with the Weyl group and fundamental domain that emerge from the general BKL analysis. For generic topologies we find a variety of possibilities: (i) The group may or may not be a simplex Coxeter group; (ii) The billiard region may or may not be a fundamental domain. When it is not a fundamental domain, it can be described as a sequence of pairwise adjacent chambers, known as a gallery, and the reflections in the billiard walls provide a non-standard presentation of the Coxeter group. We find that it is only when the Coxeter group is a simplex Coxeter group, and the billiard region is a fundamental domain, that there is a correspondence between billiard walls and simple roots of a Kac-Moody algebra, as in the general BKL analysis. For each compactification we also determine whether or not the resulting theory exhibits chaotic dynamics.
Spacelike Singularities and Hidden Symmetries of Gravity
Henneaux Marc, Persson Daniel, Spindel Philippe
Living Reviews in Relativity, 2008,
Abstract: We review the intimate connection between (super-)gravity close to a spacelike singularity (the "BKL-limit") and the theory of Lorentzian Kac-Moody algebras. We show that in this limit the gravitational theory can be reformulated in terms of billiard motion in a region of hyperbolic space, revealing that the dynamics is completely determined by a (possibly infinite) sequence of reflections, which are elements of a Lorentzian Coxeter group. Such Coxeter groups are the Weyl groups of infinite-dimensional Kac-Moody algebras, suggesting that these algebras yield symmetries of gravitational theories. Our presentation is aimed to be a self-contained and comprehensive treatment of the subject, with all the relevant mathematical background material introduced and explained in detail. We also review attempts at making the infinite-dimensional symmetries manifest, through the construction of a geodesic sigma model based on a Lorentzian Kac-Moody algebra. An explicit example is provided for the case of the hyperbolic algebra E_10, which is conjectured to be an underlying symmetry of M-theory. Illustrations of this conjecture are also discussed in the context of cosmological solutions to eleven-dimensional supergravity.
Marc Henneaux, Daniel Persson, Philippe Spindel
Physics, 2007, DOI: 10.12942/lrr-2008-1
Abstract: We review the intimate connection between (super-)gravity close to a spacelike singularity (the "BKL-limit") and the theory of Lorentzian Kac-Moody algebras. We show that in this limit the gravitational theory can be reformulated in terms of billiard motion in a region of hyperbolic space, revealing that the dynamics is completely determined by a (possibly infinite) sequence of reflections, which are elements of a Lorentzian Coxeter group. Such Coxeter groups are the Weyl groups of infinite-dimensional Kac-Moody algebras, suggesting that these algebras yield symmetries of gravitational theories. Our presentation is aimed to be a self-contained and comprehensive treatment of the subject, with all the relevant mathematical background material introduced and explained in detail. We also review attempts at making the infinite-dimensional symmetries manifest, through the construction of a geodesic sigma model based on a Lorentzian Kac-Moody algebra. An explicit example is provided for the case of the hyperbolic algebra E10, which is conjectured to be an underlying symmetry of M-theory. Illustrations of this conjecture are also discussed in the context of cosmological solutions to eleven-dimensional supergravity.
Wall-crossing, Rogers dilogarithm, and the QK/HK correspondence
Sergei Alexandrov, Daniel Persson, Boris Pioline
Mathematics, 2011, DOI: 10.1007/JHEP12(2011)027
Abstract: When formulated in twistor space, the D-instanton corrected hypermultiplet moduli space in N=2 string vacua and the Coulomb branch of rigid N=2 gauge theories on $R^3 \times S^1$ are strikingly similar and, to a large extent, dictated by consistency with wall-crossing. We elucidate this similarity by showing that these two spaces are related under a general duality between, on one hand, quaternion-Kahler manifolds with a quaternionic isometry and, on the other hand, hyperkahler manifolds with a rotational isometry, further equipped with a hyperholomorphic circle bundle with a connection. We show that the transition functions of the hyperholomorphic circle bundle relevant for the hypermultiplet moduli space are given by the Rogers dilogarithm function, and that consistency across walls of marginal stability is ensured by the motivic wall-crossing formula of Kontsevich and Soibelman. We illustrate the construction on some simple examples of wall-crossing related to cluster algebras for rank 2 Dynkin quivers. In an appendix we also provide a detailed discussion on the general relation between wall-crossing and the theory of cluster algebras.
Fourier expansions of Kac-Moody Eisenstein series and degenerate Whittaker vectors
Philipp Fleig, Axel Kleinschmidt, Daniel Persson
Abstract: Motivated by string theory scattering amplitudes that are invariant under a discrete U-duality, we study Fourier coefficients of Eisenstein series on Kac-Moody groups. In particular, we analyse the Eisenstein series on $E_9(R)$, $E_{10}(R)$ and $E_{11}(R)$ corresponding to certain degenerate principal series at the values s=3/2 and s=5/2 that were studied in 1204.3043. We show that these Eisenstein series have very simple Fourier coefficients as expected for their role as supersymmetric contributions to the higher derivative couplings $R^4$ and $\partial^{4} R^4$ coming from 1/2-BPS and 1/4-BPS instantons, respectively. This suggests that there exist minimal and next-to-minimal unipotent automorphic representations of the associated Kac-Moody groups to which these special Eisenstein series are attached. We also provide complete explicit expressions for degenerate Whittaker vectors of minimal Eisenstein series on $E_6(R)$, $E_7(R)$ and $E_8(R)$ that have not appeared in the literature before.
On usage of artificial intelligence for predicting mortality during and post-pregnancy: a systematic review of literature
Elisson da Silva Rocha1,
Flavio Leandro de Morais Melo1,
Maria Eduarda Ferro de Mello2,
Barbara Figueiroa3,
Vanderson Sampaio4 &
Patricia Takako Endo1
Care during pregnancy, childbirth and the puerperium is fundamental to avoiding pathologies in the mother and her baby. However, health issues can occur during this period, leading to adverse outcomes such as the death of the fetus or neonate. Predictive models of fetal and infant deaths are important technological tools that can help to reduce mortality indexes. The main goal of this work is to present a systematic review of the literature on computational models to predict mortality, covering stillbirth, perinatal, neonatal, and infant deaths, highlighting their methodology and the description of the proposed computational models.
We conducted a systematic review of the literature, limiting the search to publications from the last 10 years and considering the five main scientific databases as sources.
Of the 671 works retrieved, 18 were selected as primary studies for further analysis. We found that most works focus on the prediction of neonatal deaths, using machine learning models (more specifically, Random Forest). The top five most common features used to train the models are birth weight, gestational age, sex of the child, Apgar score and mother's age. Predictive models for mortality during and post-pregnancy not only improve the mother's quality of life, but can also be a powerful and low-cost tool to decrease mortality ratios.
Based on the results of this SLR, we can state that scientific efforts have been made in this area, but there are many open research opportunities to be developed by the community.
Pregnancy follows a natural physiological path that should end with the birth of a healthy baby. However, when this path is interrupted by a stillbirth, it may negatively impact the quality of life of all individuals related to the loss, involving physical, psychological, economic and/or social aspects. Furthermore, the stillbirth rate is a sensitive indicator that reflects socioeconomic conditions and is related to the quality of prenatal care and care during pregnancy [1, 2].
In 2020, an estimated two million pregnancies were not completed due to stillbirths. Among these, more than 40% of deaths occurred during the labor [1]. In the same year, 2.4 million children died in the first 28 days (neonatal mortality), representing 47% of all deaths of children under 5 years old. In 2019, about 1 million newborns died in the first 24 h (early neonatal mortality) [3], and approximately 1.6 million babies aged between 28 and 365 days died (infant mortality) [4]. Causes of death in the first weeks include low birth weight, infections, neonatal asphyxia and complications of preterm birth. The lack or poor quality of maternal health care services during childbirth contributes to causes of death. Furthermore, the absence of prenatal care interventions and prevention of maternal complications before delivery corroborate these data [2, 5].
Gestational age and neonatal death are inversely related: the lower the gestational age of the newborn, the higher the risk of death, which calls for greater attention to preterm births [2]. Some risk factors related to the mother, such as age, smoking, diabetes, hypertension, fetal anomaly and miscarriages, increase the chances of premature birth [2, 6].
The 2030 Agenda [7] proposed by the United Nations (UN) targets the reduction of neonatal mortality and of mortality among children under 5 years of age in its Sustainable Development Goals (SDGs). However, specific targets aimed at reducing fetal mortality were absent from the Millennium Development Goals (MDGs) [8] and were not covered by the 2030 Agenda. Unfortunately, this public health issue has been overlooked, and stillbirths have been largely absent from health tracking data around the world, hiding the true extent of the problem.
Given this context, public policies for maternal and child health are essential to prevent these deaths. It is possible to improve the quality of services provided in order to end preventable stillbirths and achieve good quality of health in newborns, with good antenatal care, specialized care in childbirth, postpartum care and especially, care for small and sick newborns [1, 3].
Recent studies demonstrate that Artificial Intelligence (AI), particularly through machine learning and deep learning models, offers considerable potential to predict prematurity, birth weight, mortality, hypertensive disorders and postpartum depression, among others [9]. Machine learning is also being used to identify risks of perinatal mortality [10,11,12,13] and fetal death [14, 15]. Because these computational tools have feasible operational costs, which makes them easier to implement, they can also be a valuable ally, especially for nations with limited resources.
We found only one systematic literature review (SLR) that addressed stillbirths, perinatal mortality, neonatal mortality, and infant mortality using these AI techniques, published in 2021 by Mangold et al. [16]; it focused on works for predicting neonatal mortality. In contrast, in this SLR our interest is to evaluate the state of the art of works that propose machine learning and deep learning models to classify stillbirth, perinatal, neonatal and infant deaths. Hereafter, whenever we mention mortality, please consider stillbirth, perinatal mortality, neonatal mortality and infant mortality.
As discussed earlier, stillbirth is a real public health concern around the world, and the development of AI-based solutions is becoming an open field of research with many challenges. This SLR is necessary to understand to what extent AI has contributed to detecting risks and undesirable outcomes for pregnancy, and also to understand the progress in aspects related to stillbirths, such as neonatal, perinatal and infant mortality. The main goal of this work is to answer the following research questions (RQ):
What types of mortality are the focus of research that uses machine learning and deep learning?
What data are being used in research on the classification of mortality?
What machine learning and deep learning techniques are being used in research related to the classification of mortality?
How is the performance of machine learning and deep learning models evaluated in the classification of mortality?
The methodology used to guide this SLR is based on the PRISMA statement, conforming to its checklist available at https://prisma-statement.org/. We used this methodology to find works that address the use of machine learning and/or deep learning in the context of mortality.
Data sources and searches
We considered the following databases as the main sources for our research: IEEE Xplore, PubMed, ACM Digital Library, Springer and Scopus.
The collection of primary studies was done through searches in the databases, using the following search string: (("deep learning" OR "machine learning") AND ("stillbirth" OR "fetal death" OR "infant death" OR "neonatal mortality" OR "neonatal death" OR "perinatal") AND ("prediction" OR "classification")) IN (Metadata) OR (Title) OR (Abstract).
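For illustration, the PubMed arm of this search can be reproduced programmatically. The sketch below uses Biopython's Entrez module and maps the generic (Metadata | Title | Abstract) scope to PubMed's [tiab] field tag; the e-mail address is a placeholder required by NCBI, and the other four databases each need their own query syntax, which is not shown.

```python
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# PubMed arm of the search string; [tiab] restricts matching to title/abstract
query = (
    '("deep learning"[tiab] OR "machine learning"[tiab]) AND '
    '("stillbirth"[tiab] OR "fetal death"[tiab] OR "infant death"[tiab] OR '
    '"neonatal mortality"[tiab] OR "neonatal death"[tiab] OR "perinatal"[tiab]) AND '
    '("prediction"[tiab] OR "classification"[tiab])'
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2012", maxdate="2021", retmax=200)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:10])
```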
Since many of the retrieved papers are not strictly related to our RQs (or cannot answer our research questions), we defined inclusion and exclusion criteria.
To be included, works must explicitly present computational models to predict or classify mortality risk, use at least one real database, and have been published in the last 10 years (between 2012 and 2021). We excluded works that were duplicates, unavailable or not in English, posters, tutorials or editorials, and secondary or tertiary studies.
Studies selection
Three reviewers (ESR, FLMM and PTE) were responsible for identifying eligible works independently. When any disagreement came up, a fourth reviewer (VS) was consulted to reduce the risk of bias. At first, the title and abstract were screened and, after that, works retained went to a full-text reading. Lastly, works that passed the inclusion and exclusion criteria were selected for data extraction.
Works were evaluated for quality; to this end, the following seven quality questions were defined:
Does the study make clear what its objectives are?
Does the study describe the entire methodology used?
Does the study describe the database used and the pre-processing performed (when necessary)?
Does the study describe the configurations of the proposed models?
Does the study describe how it arrived at the proposed models?
Does the study clearly describe the results?
Does the study make a good discussion based on the results?
For each quality question, the possible answers and scores were: Yes (1 point), Partially (0.5 point), and No (0 points). Each study was therefore graded with a score based on its answers to each question. Studies that achieved at least half (3.5) of the maximum score (7.0) were accepted for reading and further analysis.
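This scoring rubric can be expressed directly in code; the answer labels and the example answers below are illustrative only.

```python
ANSWER_SCORES = {"yes": 1.0, "partially": 0.5, "no": 0.0}
MAX_SCORE = 7.0          # seven quality questions, one point each
CUTOFF = MAX_SCORE / 2   # 3.5: at least half of the maximum score

def quality_score(answers):
    """answers: one 'yes'/'partially'/'no' entry per quality question (QQ1-QQ7)."""
    assert len(answers) == 7
    return sum(ANSWER_SCORES[a.lower()] for a in answers)

def accepted(answers):
    return quality_score(answers) >= CUTOFF

# e.g. three 'yes' and two 'partially' answers give 4.0, so the study is accepted
print(accepted(["yes", "yes", "yes", "partially", "partially", "no", "no"]))
```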
After reading the 18 primary studies, we extracted information from each work, based on general characteristics of the study, methodology, dataset, models, models' performance, challenges and limitations in order to answer the research questions previously established.
Figure 1 presents the PRISMA flow diagram used to summarize the works identified and those excluded due to duplication or quality criteria.
PRISMA flowchart of the review process
Descriptive analysis
In November 2021, the search returned 29, 80, 54, 104, and 404 works from IEEE Xplore, PubMed, ACM Digital Library, Scopus, and Springer, respectively, totaling 671 works. After removing duplicates, we read all abstracts applying the inclusion and exclusion criteria and then 22 works were selected. After the quality assessment, we finally obtained the 18 primary works for reading and extraction of information.
Even with the large number of works found in Springer and Scopus, only 1 and 2 of them were selected from these sources, respectively. PubMed was the source with the most primary works, 13 out of 18. The other 2 works were from IEEE, while ACM had no works selected.
The search for this SLR was restricted to the years 2012 to 2021, but the first work appeared in 2014. Of the primary works, 84% were published in the last three years: three in 2019, six in 2020 and six in 2021. This is a clear indication that AI still has a long way to go and that there are good opportunities to develop scientific solutions for mortality prediction.
In this SLR, we are focused on studies that used machine learning and deep learning models to classify some types of mortality, such as stillbirth, perinatal, neonatal and infant. Figure 2 shows the amount of work by type of mortality.
Number of selected works by type of mortality
The definition of stillbirth is not well established globally. The World Health Organization (WHO) recommendation defines fetal death as all deaths that occur after the 28th week of gestation or with a weight above 1000 g; intrapartum deaths are those that occur during labor [17]. However, many countries use the definition of fetal death based on the 10th revision of the International Classification of Diseases (ICD-10), which considers deaths that occur with a gestational age greater than 22 weeks, or with a weight greater than 500 g, or a height greater than 25 cm, including deaths during labor [18, 19]. This lack of a universal definition leads to inaccurate comparisons when national and international reporting data need to be used together. The works that performed stillbirth classification used the ICD-10 rules. Figure 3 shows the definitions used in this SLR for deaths that occur during pregnancy or up to 1 year after birth.
Definition of deaths that occur during or post-pregnancy up to 1 year from birth
Most of the works (66% of those selected) focus on neonatal mortality. Neonatal mortality occurs when the neonate dies between birth (when vital signs are detected after delivery) and the twenty-eighth day of life. Many works focus on this stage because mortality can be detected based on comorbidities arising during pregnancy or postpartum. Baker et al. [20], Cerqueira et al. [21], Sheikhtaheri et al. [22], Sun et al. [23] and Hsu et al. [24] classify the mortality of babies who were referred to the Neonatal Intensive Care Unit (NICU) after birth; Podda et al. [25], Jaskari et al. [26] and Lee et al. [27] rank the probability of death in premature babies; and Cooper et al. [28] classifies post-operative newborn mortality.
There are also works that are related to infant mortality (4 works), stillbirth (3 works), and perinatal mortality (only 1 work). Infant mortality refers to deaths occurring between 29 days of life and 365 days (1 year from birth); and perinatal is the period from stillbirth to early neonatal, until the 6th day of life, as shown in Fig. 3.
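These definitions can be made operational for label construction. The helper below encodes the boundaries described above (ICD-10 stillbirth criteria, early neonatal up to the 6th day, neonatal up to the 28th day, infant up to 1 year); the field names and the handling of edge cases are assumptions for illustration, since the primary works encode outcomes in different ways.

```python
def mortality_category(died, age_at_death_days=None, gestational_age_weeks=None,
                       weight_g=None, height_cm=None):
    """Illustrative label derivation following the definitions used in this SLR."""
    if not died:
        return "alive"
    if age_at_death_days is None:
        # death before a live birth: apply the ICD-10 stillbirth criteria
        if ((gestational_age_weeks or 0) > 22 or (weight_g or 0) > 500
                or (height_cm or 0) > 25):
            return "stillbirth"
        return "early fetal loss (outside the stillbirth definition)"
    if age_at_death_days <= 6:
        return "early neonatal death (within the perinatal period)"
    if age_at_death_days <= 28:
        return "neonatal death"
    if age_at_death_days <= 365:
        return "infant death"
    return "death after the first year (out of scope)"

print(mortality_category(True, age_at_death_days=3))        # early neonatal death
print(mortality_category(True, gestational_age_weeks=30))   # stillbirth
```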
Four primary studies carried out research that focused on more than one type of mortality: Valter et al. [29], Saravanou et al. [30] and Batista et al. [31] who worked with neonatal and infant deaths; and Shukla et al. [10] who studied the spheres of stillbirths and neonates.
A summary of the data sets found in the studies of this SLR is available in Table 1, describing the location where the data were collected, the number of records, the total number of attributes in the original data set, the number of attributes used for model training, the attribute selection technique, data balance and problems with missing data.
Table 1 Summary of the data sets
Data set size and balancing
The largest data set was used by Lee et al. [27] with over 31 million records, followed by Koivu et al. [15] and Saravanou et al. [30] with approximately 12 million each. These three largest data sets collected data from the United States of America (USA). The smallest data sets were used by Cerqueira et al. [21] from Brazil, Sun et al. [23] from India and Jaskari et al. [26] from Finland with 293, 757 and 977 records, respectively.
Although Lee et al. [27], Koivu et al. [15] and Saravanou et al. [30] used the largest data sets, the majority class represented more than 99% of the data, according to Table 2. Another thirteen works also suffer from imbalanced data set problems.
Table 2 Distribution of samples per classes
According to Ramyachitra et al. [34], "a two-class data set is implicit to be imbalanced when one of the classes in the minority one is heavily under-represented in contrast to the other class in the majority one". An imbalanced data set is a crucial challenge, because failing to address this issue can lead classifiers to be biased towards the majority class.
Of the works with imbalanced data sets, eight kept the data set as it was, while eight applied some balancing technique to address this problem [12, 20, 22, 23, 26, 27, 30, 33]. The most common approaches used to balance a data set were random oversampling (ROS) and random undersampling (RUS) [35].
Saravanou et al. [30], Baker et al. [20], Jaskari et al. [26], Alshwaish et al. [33], Sun et al. [23] and Lee et al. [27] applied the RUS technique, in which the data set is re-sampled based on the minority class; to do this, the majority class is randomly cut until it reaches the same size as the minority class [36].
Sheikhtaheri et al. [22] and Mboya et al. [12] used a classic ROS technique, named the Synthetic Minority Oversampling Technique (SMOTE), in which the majority class is kept as is and the minority class is augmented with synthetic samples interpolated between existing minority examples.
Sheikhtaheri et al. [22] created four different data sets using the SMOTE technique, varying the ratio of classes; they also created a data set using the ADASYN technique, in which a weighted distribution of the minority class is used and samples that are harder to learn are prioritized.
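The resampling strategies mentioned above are available off the shelf in the imbalanced-learn package; the snippet below contrasts them on a synthetic stand-in for a mortality data set (roughly 1% positive class). In practice, resampling should be fitted on the training split only, so that the evaluation data keep their original class distribution.

```python
import numpy as np
from imblearn.over_sampling import ADASYN, SMOTE
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))                 # toy feature matrix
y = (rng.random(10_000) < 0.01).astype(int)      # ~1% "death" class

# RUS: randomly discard majority-class records until the classes are even
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X, y)

# SMOTE: synthesize minority samples by interpolating between minority neighbours
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)

# ADASYN: adaptive oversampling that favours minority samples harder to learn
X_ada, y_ada = ADASYN(random_state=0).fit_resample(X, y)

for name, labels in [("original", y), ("RUS", y_rus), ("SMOTE", y_sm), ("ADASYN", y_ada)]:
    print(name, np.bincount(labels))
```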
Of the three largest data sets mentioned above, only Koivu et al. [15] performed training with an imbalanced data set, making it the largest data set used for model training, followed by Batista et al. [31] and Malacova et al. [14] with approximately 1 million records. However, these three works did not use balanced data, which can lead to problems in training the models and, therefore, it is important to analyze the evaluation metrics used by authors in order to present a fair comparison (see Section for details about metrics).
Regarding the smallest data sets, only Cerqueira et al. [21] did not perform balancing, using all 293 records for training and testing, while Sun et al. [23] and Jaskari et al. [26], even with small data sets, performed data balancing for training. On the other hand, the proportion of the majority class in the two works that performed balancing was 0.98 and 0.936, respectively, while for Cerqueira et al. [21] it was 0.867.
According to Phung et al. [37], "missing data is a frequent occurrence in medical and health data sets. The analysis of data sets with missing data can lead to loss in statistical power or biased results". Eleven works cited problems of missing data with their respective data sets.
The technique most used to overcome this problem was filling missing data with the mean, used by Sheikhtaheri et al. [22], Sun et al. [23] and Lee et al. [27], followed by filling with the most frequent value of the attribute, used by Sheikhtaheri et al. [22] and Podda et al. [25]. Typically, mean values are used for continuous variables, while the most frequent value is used for categorical variables.
Alshwaish et al. [33] and Sun et al. [23] used another technique, which fills missing data with a sentinel value not otherwise used by that attribute. For example, the weight attribute is filled with the value -1, and the smoking attribute (which accepts 0 for no and 1 for yes) is filled with the value 2.
Some other works decided to remove the records that presented this problem (Shukla et al. [10] and Podda et al. [25]), while Mboya et al. [12] removed only the columns that contained a large amount of missing data and Cooper et al. [28] removed the records that contained more than 30% missing data. However, neither reported what was done with missing data in records or columns with little missing data.
Valter et al. [29] and Hsu et al. [24] did not describe the strategies used to circumvent the problems with missing data.
It is worth mentioning that the same work can use different techniques to overcome the missing data problem, as was the case of Sheikhtaheri et al. [22], which used the mean for continuous data and the most frequent values for boolean or categorical data. Podda et al. [25] removed all records that contained missing data in the training phase and filled missing data in the testing phase with the most frequent values.
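The imputation strategies reported above map directly onto scikit-learn's SimpleImputer; the toy records and column names below are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "birth_weight_g":        [3200, np.nan, 2450, 1800],
    "gestational_age_weeks": [39, 36, np.nan, 31],
    "smoking":               [0, 1, np.nan, 0],     # 0 = no, 1 = yes
})

imputer = ColumnTransformer([
    # continuous attributes: fill with the column mean
    ("mean", SimpleImputer(strategy="mean"), ["birth_weight_g", "gestational_age_weeks"]),
    # categorical flag: fill with the most frequent value
    ("mode", SimpleImputer(strategy="most_frequent"), ["smoking"]),
])
print(imputer.fit_transform(df))

# alternative used by some works: a sentinel value outside the attribute's range
sentinel = SimpleImputer(strategy="constant", fill_value=-1)
print(sentinel.fit_transform(df))
```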
Attribute selection
Attribute selection is widely applied to reduce the dimensionality of problems and at the same time, according to Remeseiro et al. [38], it can also reduce measurement cost and improve model learning, impacting its performance. Most works (12 out of 18) did not describe the number of attributes of their original data set, but most of them cite how many attributes were selected for training.
Nine works [10, 12, 15, 21, 22, 25, 26, 28, 29] applied some technique for the selection of attributes, while five did not mention it and four did not make clear whether they used any attribute selection technique.
The use of a specialist in the area of interest was one of the techniques used by [21, 22, 25]. Commonly, this technique is used to validate the attributes selected by some other computational technique, but it can also be used on its own. Using the literature as a basis for choosing attributes was also a technique used by [10, 22].
Mboya et al. [12] and Koivu et al. [15] used Random Forest and Logistic Regression algorithms to perform the selection of attributes. This process can be carried out through a univariate evaluation (one attribute at a time is used to evaluate how significant that variable is for the problem) or through an additive evaluation (where the most significant variables are grouped until the addition of new variables no longer improves predicted outcomes) [39].
Other correlation methods were also used, such as Pearson's correlation by Koivu et al. [15] and correlation-based feature subset selection by Sheikhtaheri et al. [22]; these methods aim to find correlations between attributes, that is, to measure how much one attribute influences another [40]. These methods are normally used for linear problems.
Statistical tests were also used to select the best attributes, such as the Wilcoxon test, the Mann-Whitney nonparametric test and the Chi-square test. These types of techniques typically analyze attributes individually and assess their statistical significance [39].
Three works specified neither the attributes of the original data set nor the final attributes: [14, 26, 33]. This information is crucial for the reproducibility of the work, and its absence makes a fair comparison and discussion of their results difficult.
Saravanou et al. [30] performed the biggest reduction of attributes, leaving models with one or two of the 128 attributes from the original data set, a reduction of about 99%; while Batista et al. [31] used all available attributes of the data set (23 attributes).
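Two of the attribute-selection strategies described above, model-based ranking and univariate statistical testing, can be sketched as follows. The synthetic data stand in for the clinical attributes used in the primary works, where the selected subset would typically also be validated by a domain specialist.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2

X, y = make_classification(n_samples=2_000, n_features=30, n_informative=6,
                           weights=[0.95, 0.05], random_state=0)

# (a) model-based ranking via Random Forest impurity importances
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_rf = np.argsort(rf.feature_importances_)[::-1][:10]

# (b) univariate chi-square test (requires non-negative inputs, hence the shift)
selector = SelectKBest(score_func=chi2, k=10).fit(X - X.min(axis=0), y)
top_chi2 = np.flatnonzero(selector.get_support())

print("Random Forest top-10 attributes:", sorted(top_rf))
print("Chi-square top-10 attributes:  ", sorted(top_chi2))
```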
Regarding the attributes used, Fig. 4 presents the most frequent attributes found in the works, separated by type of mortality. The attributes birth weight, gestational age and sex of the child were the most frequent, with 16, 13 and 11 occurrences in the primary works, followed by the Apgar score, mother's age, multiple births and mother's education.
Common features used by primary works, considering fetal, neonatal and infant death
When working with prediction of neonatal mortality, the six most frequent attributes were: birth weight, gestational age, child sex, Apgar score, maternal age and multiple births.
For the prediction of infant mortality, the four most frequent attributes are birth weight, gestational age, child sex and Apgar score, followed by multiple deliveries. For the prediction of stillbirths, attributes such as mother's age, education and parity are the most frequent, followed by gestational age at enrollment, perinatal mortality cluster, and the number of prenatal consultations. It is worth mentioning that many attributes commonly used for neonatal and infant mortality cannot be used when considering stillbirth cases, such as gestational age at birth, since this information only becomes available at delivery, whereas the detection of stillbirths precedes it.
Some attributes that appeared are related to comorbidities of the mother or child, such as diabetes, sepsis, hypertension and hemorrhage. Others are sociodemographic, such as mother's and child's race, mother's occupation, mother's marital status, and smoking. Several attributes are also related to previous pregnancies, such as the number of previous pregnancies, number of stillbirths, number of live births and number of cesarean sections; and to the current pregnancy, such as prenatal care, type of delivery, height at birth, birth order and birth companion.
Classification problem
Of the 18 works selected in this SLR, 15 of them solved a binary classification problem, one work focused on multiclass classification and two works proposed models for both binary and multiclass classifications.
Of the binary classifications, all works are related to mortality versus survival (neonatal mortality and alive, infant mortality and alive, stillbirth and alive, or perinatal mortality and alive), while the three multiclass classification works used another perspective. Saravanou et al. [30] considered six different classes: died < 1 h, died between 1 and 23 h, died between 1 and 6 days, died between 7 and 27 days, died between 28 and 365 days, and alive. AlShwaish et al. [33] classified risk levels of mortality, considering four classes, from minor to extreme. Koivu et al. [15] classified cases into early stillbirth, late stillbirth and non-stillbirth, as shown earlier in Table 1.
Machine learning was the most common modeling technique found among the primary works of this SLR, being proposed by 16 of the 18 works [10, 12, 14, 15, 21, 22, 24,25,26,27,28,29,30,31,32,33]. Deep learning models in turn were proposed by four works [15, 20, 23, 33]. In addition to machine learning and deep learning techniques, 11 works also presented Logistic Regression models [10, 12, 14, 15, 23, 25, 26, 28, 31,32,33].
When analyzing Fig. 5, one can note that deep learning models appeared from 2019 onwards, showing that there may be substantial room for further research with these models.
Fig. 5 Type of modeling technique by the year of work publication
Figure 6 shows the number of works and the modeling technique that was proposed based on the type of mortality classification.
Fig. 6 Number of primary works and the modeling technique that was proposed based on the type of mortality classification
Regarding the neonatal mortality works, ten machine learning models were proposed [10, 21, 22, 24,25,26,27,28,29, 31], two deep learning models were proposed [20, 23], and seven logistic regression models were proposed [10, 23,24,25,26, 28, 31]. Even though infant mortality was the focus of more works than stillbirth, the total of proposed models was the same, seven for each.
Figure 7 presents the type of machine learning technique and the number of works that proposed them. The most common machine learning model among the primary works was the Random Forest, with 14 proposals, followed by the Neural Network and Support Vector Machines (SVM) with 11 and 10 proposals, respectively. In addition to these models, other common machine learning models are Naive Bayes, K-Nearest Neighbors (KNN), XGBoost, Gradient Boost and ensemble models.
Fig. 7 Machine learning techniques and number of proposed models found in primary works
Figure 8 shows the deep learning models proposed by type of mortality. As mentioned before, deep learning models were found only in four works. The Fully Connected Neural Network (FCNN) was the most frequent model, proposed by two works, followed by the Long Short-Term Memory (LSTM) model and the joint model called CNN-LSTM. The CNN-LSTM unites two deep learning models (Convolutional Neural Network (CNN) and LSTM) into a single model.
Fig. 8 Deep learning techniques and number of proposed models found in primary works
As observed, Random Forest and FCNN were the most common machine learning and deep learning models proposed to predict mortality, respectively. As the primary works handle tabular data, the use of Random Forest is expected; the works that proposed it are Valter et al. [29], Hajipour et al. [32], Shukla et al. [10], Malacova et al. [14], Sheikhtaheri et al. [22], Podda et al. [25], Jaskari et al. [26], Mboya et al. [12], AlShwaish et al. [33], Lee et al. [27], Cooper et al. [28] and Hsu et al. [24]. However, we would like to highlight other tree-based algorithms that have gained attention in the literature, such as XGBoost and Gradient Boost; they were proposed by Saravanou et al. [30], Shukla et al. [10], Malacova et al. [14], Podda et al. [25], Batista et al. [31], AlShwaish et al. [33] and Hsu et al. [24].
According to Yu et al. [41], an expert can provide a consistent set of model initialization parameters (hyperparameters), but in most cases these parameters may not be optimal. Also according to Yu et al. [41], adjusting these hyperparameters is an essential phase in the entire process of training machine learning and deep learning models.
Of the primary works of this SLR, 10 (more than half) did not apply any hyperparameter optimization. Of the 8 works that used it, 6 used a technique called Grid Search [14, 24, 25, 27, 29, 30]. Grid Search is a traditional technique that performs an exhaustive search within a given limited search space [42]. That is, it is necessary to define a range of values for specific hyperparameters which, arranged in a grid, are evaluated one by one in search of the best combination.
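A minimal sketch of the idea, using scikit-learn's GridSearchCV on a synthetic tabular data set; the model, grid values and scoring choice are illustrative assumptions, not the settings of any primary work.

```python
# Minimal sketch: exhaustive Grid Search over a small hyperparameter grid.
# The model, grid values and data set are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {                      # every combination in this grid is evaluated
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",              # AUC ROC, the metric used by all primary works
    cv=5,                           # 5-fold cross-validation within the search
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```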
Batista et al. [31] used a Bayesian optimization algorithm which, in simple terms, builds an approximation of the objective function to find the most promising regions for the best hyperparameters. With this, its search space is more restricted, but the search for parameters is faster [43]. Jaskari et al. [26] used nested cross-validation to estimate the generalization performance of the selected parameters.
Model validation
There are different ways to calculate the classification error of a model, and the most popular is k-fold cross-validation. According to Rodriguez et al. [44], in this approach the data set is divided into k folds and the target model is trained using \(k-1\) folds. The error value of the training phase is calculated by testing the model with the remaining fold (test fold), and the final error of the model is the average of the errors calculated in each iteration.
Most works (14 of 18) applied the k-fold cross-validation approach to validate their models; of these 14 works, nine used \(k=10\), one used \(k=8\) and four used \(k=5\). We highlight that, according to Fushiki [45], "k-fold cross validation has an upward bias, and the bias may not be neglected when k is small"; therefore, it is important to choose the value of k according to the size of the data set available for the study.
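The sketch below illustrates this validation scheme with scikit-learn, using k = 10 as in most of the primary works; the classifier and the synthetic data are assumptions made only for the example.

```python
# Minimal sketch: 10-fold cross-validation error estimate on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

# Each of the k folds is held out once as the test fold; the model is trained
# on the remaining k-1 folds, and the final error is the average over folds.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=10, scoring="accuracy")
print("mean error:", round(1.0 - scores.mean(), 3), "+/-", round(scores.std(), 3))
```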
Evaluation metrics
Choosing the appropriate way to evaluate the proposed models plays a critical role in the process of obtaining the ideal classifier; that is, selecting the metrics pertinent to the problem is key to a better evaluation of the models and to detecting the best classifier for the proposed task [46].
Most evaluation metrics in classification problems are based on the confusion matrix. As shown in Table 3, a confusion matrix is composed of: True Positive (TP), when the positive class is correctly classified; True Negative (TN), when the negative class is correctly classified; False Positive (FP), when a negative class is classified as positive; and False Negative (FN), when a positive class is classified as negative.
Table 3 Generic confusion matrix
Based on TP, TN, FP, and FN, different evaluation metrics can be defined. The most commonly found metric is accuracy, which calculates how often the classifier was correct in its classification, according to Equation 1:
$$\begin{aligned} accuracy = \frac{TP + TN}{TP + TN + FP + FN} \end{aligned}$$
Precision is the metric that calculates how many of the cases classified as positive were actually positive, as shown in Equation 2. It is used when FP are considered more relevant than FN.
$$\begin{aligned} precision = \frac{TP}{TP + FP} \end{aligned}$$
Sensitivity, also known as recall, is the metric that calculates the proportion of actual positives that were correctly classified, as presented in Equation 3. It is used when FN are considered more relevant than FP.
$$\begin{aligned} sensitivity = \frac{TP}{TP + FN} \end{aligned}$$
Opposite to sensitivity, the specificity metric is the proportion of negative cases correctly classified, and it is calculated according to Equation 4.
$$\begin{aligned} specificity = \frac{TN}{TN + FP} \end{aligned}$$
The F1-score metric is the harmonic mean between precision and sensitivity, calculated as shown in Equation 5. This metric gives greater weight to lower numbers, so if one of the two metrics has a low value, the result will be similarly low. This harmonic mean is advantageous when the objective is to seek a balance between these two metrics.
$$\begin{aligned} F1{\text {-}}score = 2 \times \frac{precision \times sensitivity}{precision + sensitivity} \end{aligned}$$
These are the most well-known and used metrics based on the confusion matrix. Table 4 presents the metrics used in the primary works. Sensitivity and accuracy appear in 11 works [12, 14, 20,21,22, 24, 26, 27, 29,30,31,32,33], specificity in nine [12, 14, 20,21,22, 26, 27, 31, 32], and F1-score and precision in seven [14, 22,23,24, 26, 30,31,32,33].
Table 4 Metrics by selected works
Area under the ROC curve (AUC ROC) was the metric that appeared in all primary works of this SLR, showing its importance for evaluating classification models. To understand this metric, let us first understand the receiver operating characteristic (ROC) curve. The ROC curve is a two-dimensional graph that balances the benefits, the True Positive Rate (TPR) (sensitivity), against the costs, the False Positive Rate (FPR), which is calculated as shown in Equation 6:
$$\begin{aligned} FPR = 1 - specificity \end{aligned}$$
However, using the ROC curve to compare different classifiers is not easy, so the AUC ROC [47] metric is used. The AUC ROC is the area under the ROC curve, which is bounded between 0 and 1. A model with an AUC ROC close to 1 has a good performance rating, while a model with an AUC ROC close to 0 is rated as poor performance.
Other metrics were also used, although less frequently, such as the area under the precision-recall curve (AUPRC). The AUPRC is used by Baker et al. [20], Batista et al. [31], and Sun et al. [23]. It is a variant of the AUC ROC and a more appropriate metric for data sets with imbalanced classes when the problem is configured to predict the positive class, since it is based on precision-recall curves [48].
Mboya et al. [12] and Sun et al. [23] used the Positive Predictive Value (PPV) (precision) and Negative Predictive Value (NPV) metrics. The NPV metric is the counterpart of precision for the negative class: it measures the proportion of cases classified as negative that are actually negative [49]. Its calculation is defined by Equation 7:
$$\begin{aligned} NPV =\frac{TN}{TN+FN} \end{aligned}$$
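To make Equations 1 to 7 concrete, the sketch below computes each metric from a confusion matrix, together with the AUC ROC, for a small set of illustrative predictions; the labels and scores are invented for the example.

```python
# Minimal sketch: the metrics of Equations 1-7 computed from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # illustrative labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])   # model probabilities
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)                   # Eq. 1
precision   = tp / (tp + fp)                                    # Eq. 2 (PPV)
sensitivity = tp / (tp + fn)                                    # Eq. 3 (recall, TPR)
specificity = tn / (tn + fp)                                    # Eq. 4
f1 = 2 * precision * sensitivity / (precision + sensitivity)    # Eq. 5
fpr = 1 - specificity                                           # Eq. 6
npv = tn / (tn + fn)                                            # Eq. 7
auc_roc = roc_auc_score(y_true, y_score)                        # area under the ROC curve
print(accuracy, precision, sensitivity, specificity, f1, fpr, npv, auc_roc)
```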
Metrics for imbalanced data sets
When working with imbalanced data sets, we need to use metrics whose evaluation is not biased by this imbalance, as biased metrics can present overly optimistic results. According to Chicco et al. [50], accuracy and AUC ROC are metrics sensitive to class imbalance, while precision, sensitivity, specificity, and F1-score are metrics that do not take all the confusion matrix values into account, which can lead to unfair observations of the results.
Figure 9 shows the metrics used by works that trained their models with imbalanced data sets. All seven works that used an imbalanced data set used the AUC ROC metric; three used accuracy [14, 21, 24], which is one of the most sensitive metrics when working with imbalanced classes. The AUPRC metric, which according to Chicco et al. [50] is among the most robust metrics for evaluating model performance under imbalance, was only used by Batista et al. [31].
Fig. 9 The most common metrics used to evaluate model performance when working with imbalanced data sets
Statistical tests
The use of statistical tests when developing machine learning models was already mentioned in the feature selection subsection, when describing techniques for attribute selection. Here, the use of statistical tests is focused on defining the best model based on the evaluation metrics.
Four works used statistical testing to evaluate and choose their best models. Podda et al. [25], Mboya et al. [12], and Hsu et al. [24] used the DeLong test to evaluate their models. The DeLong test verifies whether there is a significant difference between the AUC ROC results of two models [51].
Shukla et al. [10] used the paired t-test, which compares two population means when observations from one sample can be paired with observations from the other [52].
The use of predictive models to estimate stillbirth risk may benefit women during prenatal testing. Trudell et al. [53] described the risk of stillbirth starting at 32 weeks of gestational age. Non-stress testing done during prenatal care has been shown to prevent 6 to 8 stillbirths per 10,000 pregnancies, demonstrating the importance of predictive models for predicting and avoiding stillbirths during gestation.
Complementary to the prediction, the definition of the most relevant predictors is also a relevant contribution. For instance, in Western Ethiopia [54], a study highlighted some predictors of neonatal mortality based on local data. Conditions such as maternal age less than 20 years, primiparity, complications during pregnancy and childbirth, prenatal visits, small-size neonates, home birth and gestational age less than 37 weeks are predictors of neonatal mortality. Predictive data was important for understanding why the local neonatal mortality rate increased in recent years. Circumstances such as low coverage of health services in the region, low access to and use of obstetric services, and early pregnancy contribute to increased mortality rates.
The prediction of neonates who are at risk of death can help health professionals provide early treatment, increasing the chances of survival and minimizing the morbidity rate [22]. Recent studies showed that predictions based on multiple factors, such as gestational and infant characteristics, are more accurate than those based on a single isolated factor, such as gestational age. Prenatal and postnatal interventions can reduce neonatal mortality and morbidity, and multifactor-based models would optimize such care in practical use [55, 56]. It is essential to have enough data to analyze several factors and thus produce more assertive predictive models that can be applied in the health system.
In this SLR, we found data sets with different sizes (ranging from 293 to over 31 million records), different numbers of attributes (from 26 to 128), and with missing values and imbalanced classes. According to a UNICEF report [1], "poor data availability and quality require innovative methodological work to understand the global picture of stillbirths". This is true not only for stillbirths but also for perinatal, neonatal and infant mortality. Some authors made efforts to minimize the issues related to the quality of their data sets using traditional techniques: the average value was used to fill missing data, and ROS and SMOTE were used to balance classes. However, there are many other techniques that can be applied to improve the quality of the data before model training. For instance, for high-dimensional data, one can apply different types of dimensionality reduction in order to reduce redundancy and noise. According to Huang et al. [57], these techniques can also reduce the complexity of learning models and improve their classification performance.
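As a hedged illustration of these two pre-processing steps, the sketch below fills missing values with the column mean and then balances the classes with ROS and SMOTE; it assumes a generic synthetic data set and the imbalanced-learn package, and is not tied to any primary work.

```python
# Minimal sketch: mean imputation followed by class balancing with ROS / SMOTE.
# Assumes a generic synthetic data set and the imbalanced-learn package.
import numpy as np
from sklearn.impute import SimpleImputer
from imblearn.over_sampling import RandomOverSampler, SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.05] = np.nan            # ~5% missing values
y = (rng.random(200) < 0.1).astype(int)           # ~10% positive class (imbalanced)

X_filled = SimpleImputer(strategy="mean").fit_transform(X)   # average-value fill

X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X_filled, y)
X_smo, y_smo = SMOTE(random_state=0).fit_resample(X_filled, y)
print(np.bincount(y), np.bincount(y_ros), np.bincount(y_smo))
```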
The proposal of deep learning models to classify mortality is still at an early stage, with only four works published at the time of writing this systematic review. This is not so surprising, since machine learning models are more efficient at handling tabular data (the common data type used for mortality classification), while deep learning models are well suited to recognizing objects in an image based on the spatial relationship of the pixels. Based on this fact, it is possible to improve the performance of deep learning models on tabular data by transcribing the tabular data into images. Zhu et al. [58] state that the data set features can be arranged into a 2D space using techniques such as feature similarity and feature distance [58, 59]. With this, deep learning models could learn from tabular data using their strengths.
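The toy sketch below only illustrates this general idea by padding a feature vector and reshaping it into a square grid; methods such as those in [58, 59] instead place features according to learned similarity or distance, which is not reproduced here.

```python
# Toy sketch: turning a tabular feature vector into a 2D "image" for a CNN.
# Real methods [58, 59] arrange features by learned similarity; here we only
# pad and reshape, purely as a conceptual illustration.
import numpy as np

def features_to_image(x, side=6):
    img = np.zeros(side * side)
    img[: len(x)] = x                  # place the features, pad the rest with zeros
    return img.reshape(side, side)

x = np.random.default_rng(0).random(23)   # e.g., 23 attributes, as in [31]
print(features_to_image(x).shape)         # (6, 6) grid usable as a 1-channel image
```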
It is important to highlight that health issues found in high income countries (HICs) are very different from those in low- and middle-income countries (LMICs). Computational models are presented as a low-cost (in implementation and maintenance) yet high-accuracy solution, especially for LMICs, since such solutions can be made available online.
The findings of this SLR are similar to those found in SLRs of other domains, including the work by [60], which investigated the use of AI models for clinical diagnosis of arboviral diseases, and [61], which surveyed machine learning models in geriatric clinical care for chronic diseases. These conclusions mostly concern the models' shortcomings and strengths, as well as the pre-processing of the data.
Additionally, maternal mortality is a research area that we would like to highlight for further investigation and as a complement to this one. According to Geller et al. [62], maternal mortality "is used globally to monitor maternal health, the general quality of reproductive health care, and the progress countries have made toward international development goals". In a quick investigation, we found only a few recent (and incipient) works that focus on maternal mortality [63, 64], showing that there are many research opportunities in this area.
This SLR is also useful for the development of new research. We have analyzed and discussed several aspects of machine and deep learning development, so readers can use this work as a starting point to choose the best strategies for solving their problems and to design their methodology in a more robust way, facilitating scientific reproducibility.
Conclusions and next steps
Mortality during pregnancy or during the first few weeks of life may reveal how well pregnant women and their newborns are cared for by health institutions. Due to its feasible operational cost, utilizing technology to assist medical professionals during and after pregnancy has proven to be a powerful ally for enhancing both public health and the quality of prenatal care.
On the other hand, computational models created from data of a specific location tend to generalize well only within that region, making them difficult to apply to another location without modifications. In other words, countries with limited resources may struggle with a lack of data or with data of low quality, which has a direct impact on the performance of the computational models.
In this work, we found 18 articles that classified unfavorable pregnancy outcomes, such as stillbirth, perinatal, neonatal, and/or infant mortality, using machine learning and/or deep learning. We found that the classification of neonatal death was the most researched, while the attributes birth weight, gestational age, child's sex, and mother's age were the most frequently employed in the studies. Random Forest was the most commonly proposed machine learning model, while AUC ROC was the assessment metric most frequently used to rate the models.
With this work, we were able to identify several research gaps and areas for further investigation, such as maternal mortality and morbidities, but more importantly, we offered several potential approaches for individuals wishing to pursue these goals and use these kinds of data.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
https://IEEExplore.ieee.org/Xplore/home.jsp.
https://pubmed.ncbi.nlm.nih.gov.
https://dl.acm.org.
https://link.springer.com.
https://www.scopus.com.
The attributes that appear only once were removed from the figure.
Perinatal mortality is not in the figure because the work that addresses this type of mortality did not describe its attributes.
This attribute is the mortality rate of that location in previous years, used to influence the forecast of new cases.
Abbreviations
AI: Artificial intelligence
AUC ROC: Area under the ROC curve
AUPRC: Area under the precision-recall curve
CNN: Convolutional Neural Network
FCNN: Fully Connected Neural Network
FN: False Negative
FP: False Positive
FPR: False Positive Rate
HICs: High income countries
ICD-10: International Classification of Diseases
KNN: K-Nearest Neighbors
LMICs: Low- and middle-income countries
LSTM: Long Short-Term Memory
MDGs: Millennium Development Goals
NICU: Neonatal intensive care unit
NPV: Negative Predictive Value
PPV: Positive Predictive Value
ROC curve: Receiver operating characteristic curve
ROS: Random oversampling
RQ: Research question
RUS: Random undersampling
SDGs: Sustainable Development Goals
SLR: Systematic literature review
SMOTE: Synthetic Minority Oversampling Technique
TN: True Negative
TP: True Positive
TPR: True Positive Rate
UN: United Nations
UNICEF. A neglected tragedy: the global burden of stillbirths. Report of the UN Inter-agency Group for Child Mortality Estimation, 2020. https://www.unicef.org/reports/neglected-tragedy-global-burden-of-stillbirths-2020 (2021/10/20).
D'Antonio F, Odibo A, Berghella V, Khalil A, Hack K, Saccone G, Prefumo F, Buca D, Liberati M, Pagani G, et al. Perinatal mortality, timing of delivery and prenatal management of monoamniotic twin pregnancy: systematic review and meta-analysis. Ultrasound Obstet Gynecol. 2019;53(2):166–74.
World Health Organization. Newborn Mortality. 2022. https://www.who.int/news-room/fact-sheets/detail/levels-and-trends-in-child-mortality-report-2021 (2022/05/20)
World Health Organization. Number of infant deaths (between birth and 11 months). 2022. https://www.who.int/data/gho/data/indicators/indicator-details/GHO/number-of-infant-deaths (2022/05/20)
Tekelab T, Chojenta C, Smith R, Loxton D. The impact of antenatal care on neonatal mortality in sub-Saharan Africa: a systematic review and meta-analysis. PLoS ONE. 2019;14(9):0222566.
Blanco E, Marin M, Nuñez L, Retamal E, Ossa X, Woolley KE, Oludotun T, Bartington SE, Delgado-Saborit JM, Harrison RM, et al. Adverse pregnancy and perinatal outcomes in Latin America and the Caribbean: systematic review and meta-analysis. Rev Panam Salud Pública 2022;46.
United Nations. The Sustainable Development Goals Report 2019, UN, New York, 2019. https://unstats.un.org/sdgs/report/2019/The-Sustainable-Development-Goals-Report-2019.pdf (2021/10/21).
United Nations. The millennium development goals report. New York: United Nations; 2015.
Ramakrishnan R, Rao S, He J-R. Perinatal health predictors using artificial intelligence: a review. Womens Health. 2021;17:17455065211046132.
Shukla VV, Eggleston B, Ambalavanan N, McClure EM, Mwenechanya M, Chomba E, Bose C, Bauserman M, Tshefu A, Goudar SS, et al. Predictive modeling for perinatal mortality in resource-limited settings. JAMA Netw Open. 2020;3(11):2026750–2026750.
Hoodbhoy Z, Hasan B, Jehan F, Bijnens B, Chowdhury D. Machine learning from fetal flow waveforms to predict adverse perinatal outcomes: a study protocol. Gates Open Res. 2018;2:8.
Mboya IB, Mahande MJ, Mohammed M, Obure J, Mwambi HG. Prediction of perinatal death using machine learning models: a birth registry-based cohort study in northern Tanzania. BMJ Open. 2020;10(10): 040132.
Qureshi H, Khan M, Quadri SMA, Hafiz R. Association of pre-pregnancy weight and weight gain with perinatal mortality. In: Proceedings of the 8th international conference on frontiers of information technology; 2010. pp. 1–6.
Malacova E, Tippaya S, Bailey HD, Chai K, Farrant BM, Gebremedhin AT, Leonard H, Marinovich ML, Nassar N, Phatak A, et al. Stillbirth risk prediction using machine learning for a large cohort of births from Western Australia, 1980–2015. Sci Rep. 2020;10(1):1–8.
Koivu A, Sairanen M. Predicting risk of stillbirth and preterm pregnancies with machine learning. Health Inf Sci Syst. 2020;8(1):1–12.
Mangold C, Zoretic S, Thallapureddy K, Moreira A, Chorath K, Moreira A. Machine learning models for predicting neonatal mortality: a systematic review. Neonatology. 2021;118(4):394–405.
WHO. Stillbirths. 2015. http://www.who.int/maternal_child_adolescent/epidemiology/stillbirth/en/ (2021/10/20).
WHO. Neonatal and perinatal mortality: country, regional and global estimates; 2006.
Kelly K, Meaney S, Leitao S, O'Donoghue K. A review of stillbirth definitions: a rationale for change. Eur J Obstet Gynecol Reprod Biol. 2021;256:235–45.
Baker S, Xiang W, Atkinson I. Hybridized neural networks for non-invasive and continuous mortality risk assessment in preterm infants. Comput Biol Med. 2021;134: 104521.
Cerqueira FR, Ferreira TG, de Paiva Oliveira A, Augusto DA, Krempser E, Barbosa HJC, Franceschini SdCC, de Freitas BAC, Gomes AP, Siqueira-Batista R. NICeSim: an open-source simulator based on machine learning techniques to support medical research on prenatal and perinatal care decision making. Artif Intell Med. 2014;62(3):193–201.
Sheikhtaheri A, Zarkesh MR, Moradi R, Kermani F. Prediction of neonatal deaths in NICUs: development and validation of machine learning models. BMC Med Inform Decis Mak. 2021;21(1):1–14.
Sun Y, Kaur R, Gupta S, Paul R, Das R, Cho SJ, Anand S, Boutilier JJ, Saria S, Palma J, et al. Development and validation of high definition phenotype-based mortality prediction in critical care units. JAMIA Open. 2021;4(1):004.
Hsu J-F, Chang Y-F, Cheng H-J, Yang C, Lin C-Y, Chu S-M, Huang H-R, Chiang M-C, Wang H-C, Tsai M-H. Machine learning approaches to predict in-hospital mortality among neonates with clinically suspected sepsis in the neonatal intensive care unit. J Personal Med. 2021;11(8):695.
Podda M, Bacciu D, Micheli A, Bellù R, Placidi G, Gagliardi L. A machine learning approach to estimating preterm infants survival: development of the preterm infants survival assessment (PISA) predictor. Sci Rep. 2018;8(1):1–9.
Jaskari J, Myllärinen J, Leskinen M, Rad AB, Hollmén J, Andersson S, Särkkä S. Machine learning methods for neonatal mortality and morbidity classification. IEEE Access. 2020;8:123347–58.
Lee J, Cai J, Li F, Vesoulis ZA. Predicting mortality risk for preterm infants using random forest. Sci Rep. 2021;11(1):1–9.
Cooper JN, Minneci PC, Deans KJ. Postoperative neonatal mortality prediction using superlearning. J Surg Res. 2018;221:311–9.
Valter R, Santiago S, Ramos R, Oliveira M, Andrade LOM, de HC Barreto IC. Data mining and risk analysis supporting decision in Brazilian public health systems. In: 2019 IEEE international conference on e-health Networking, Application & Services (HealthCom). IEEE; 2019. pp. 1–6
Saravanou A, Noelke C, Huntington N, Acevedo-Garcia D, Gunopulos D. Predictive modeling of infant mortality. Data Mining Knowl Discov. 2021;35:1785–807.
Batista AF, Diniz CS, Bonilha EA, Kawachi I, Chiavegatto Filho AD. Neonatal mortality prediction with routinely collected data: a machine learning approach. BMC Pediatr. 2021;21(1):1–6.
Hajipour M, Taherpour N, Fateh H, Yousefi E, Etemad K, Zolfizadeh F, Rajabi A, Valadbeigi T, Mehrabi Y. Predictive factors of infant mortality using data mining in Iran. J Compr Pediatr 2021;12(1).
AlShwaish WM, Alabdulhafith MI. Mortality prediction based on imbalanced new born and perinatal period data. Mortality 2019;10(8).
Ramyachitra D, Manikandan P. Imbalanced dataset classification and solutions: a review. Int J Comput Bus Res. 2014;5(4):1–29.
Pan T, Zhao J, Wu W, Yang J. Learning imbalanced datasets based on smote and Gaussian distribution. Inf Sci. 2020;512:1214–33.
Liu X-Y, Wu J, Zhou Z-H. Exploratory undersampling for class-imbalance learning. IEEE Trans Syst Man Cybern Part B Cybern. 2008;39(2):539–50.
Phung S, Kumar A, Kim J. A deep learning technique for imputing missing healthcare data. In: 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE; 2019. pp. 6513–6516
Remeseiro B, Bolon-Canedo V. A review of feature selection methods in medical applications. Comput Biol Med. 2019;112: 103375.
Li J, Cheng K, Wang S, Morstatter F, Trevino RP, Tang J, Liu H. Feature selection: a data perspective. ACM Compu Surv. 2017;50(6):1–45.
Choi J-H. Investigation of the correlation of building energy use intensity estimated by six building performance simulation tools. Energy Build. 2017;147:14–26.
Yu T, Zhu H. Hyper-parameter optimization: a review of algorithms and applications. arXiv preprint arXiv:2003.05689 (2020)
Liashchynskyi P, Liashchynskyi P. Grid search, random search, genetic algorithm: a big comparison for NAS. 2019. arXiv preprint arXiv:1912.06059.
Frazier PI. A tutorial on Bayesian optimization. 2018. arXiv preprint arXiv:1807.02811
Rodriguez JD, Perez A, Lozano JA. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans Pattern Anal Mach Intell. 2009;32(3):569–75.
Fushiki T. Estimation of prediction error by using k-fold cross-validation. Stat Comput. 2011;21(2):137–46.
Hossin M, Sulaiman MN. A review on evaluation metrics for data classification evaluations. Int J Data Mining Knowl Manag Process. 2015;5(2):1.
Tharwat A. Classification assessment methods. Appl Comput Inform. 2020.
Davis J, Goadrich M. The relationship between precision–recall and roc curves. In: Proceedings of the 23rd international conference on machine learning. 2006. pp. 233–40
Trevethan R. Sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice. Front Public Health. 2017;5:307.
Chicco D, Jurman G. The advantages of the Matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. BMC Genomics. 2020;21(1):1–13.
Sun X, Xu W. Fast implementation of Delong's algorithm for comparing the areas under correlated receiver operating characteristic curves. IEEE Signal Process Lett. 2014;21(11):1389–93.
Shier R. Mathematics learning support centre: Statistics. 2004.
Trudell AS, Tuuli MG, Colditz GA, Macones GA, Odibo AO. A stillbirth calculator: development and internal validation of a clinical prediction model to quantify stillbirth risk. PLoS ONE. 2017;12(3):0173461.
Kidus F, Woldemichael K, Hiko D. Predictors of neonatal mortality in Assosa zone, western Ethiopia: a matched case control study. BMC Pregnancy Childbirth. 2019;19(1):1–13.
Ushida T, Moriyama Y, Nakatochi M, Kobayashi Y, Imai K, Nakano-Kobayashi T, Nakamura N, Hayakawa M, Kajiyama H, Kotani T, et al. Antenatal prediction models for short-and medium-term outcomes in preterm infants. Acta Obstet Gynecol Scand. 2021;100(6):1089–96.
McLeod JS, Menon A, Matusko N, Weiner GM, Gadepalli SK, Barks J, Mychaliska GB, Perrone EE. Comparing mortality risk models in VLBW and preterm infants: systematic review and meta-analysis. J Perinatol. 2020;40(5):695–703.
Huang X, Wu L, Ye Y. A review on dimensionality reduction techniques. Int J Pattern Recognit Artif Intell. 2019;33(10):1950017.
Zhu Y, Brettin T, Xia F, Partin A, Shukla M, Yoo H, Evrard YA, Doroshow JH, Stevens RL. Converting tabular data into images for deep learning with convolutional neural networks. Sci Rep. 2021;11(1):1–11.
Sharma A, Vans E, Shigemizu D, Boroevich KA, Tsunoda T. Deepinsight: a methodology to transform a non-image data to an image for convolution neural network architecture. Sci Rep. 2019;9(1):1–7.
da Silva Neto SR, Tabosa Oliveira T, Teixeira IV, Aguiar de Oliveira SB, Souza Sampaio V, Lynn T, Endo PT. Machine learning and deep learning techniques to support clinical diagnosis of arboviral diseases: a systematic review. PLoS Negl Trop Dis. 2022;16(1):0010061.
Choudhury A, Renjilian E, Asan O. Use of machine learning in geriatric clinical care for chronic diseases: a systematic literature review. JAMIA Open. 2020;3(3):459–71.
Geller SE, Koch AR, Garland CE, MacDonald EJ, Storey F, Lawton B. A global view of severe maternal morbidity: moving beyond maternal mortality. Reprod Health. 2018;15(1):31–43.
Manik H, Siregar MFG, Rochadi RK, Sudaryati E, Yustina I, Triyoga RS. Maternal mortality classification for health promotive in dairi using machine learning approach. In: IOP conference series: materials science and engineering, vol 851. IOP Publishing; 2020, p. 012055.
Dawodi M, Wada T, Baktash JA. Applicability of ICT, data mining and machine learning to reduce maternal mortality and morbidity: case study Afghanistan. Int Inf Inst (Tokyo) Inf. 2020;23(1):33–45.
Authors would like to acknowledge the Programa Mãe Coruja Pernambucana, Secretaria de Saúde do Estado de Pernambuco.
This work was partially funded by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Fundação de Amparo a Ciência e Tecnologia do Estado de Pernambuco (FACEPE), Fundação de Amparo à Pesquisa do Estado do Amazonas: Pro-Estado grant 005/2019 and POSGRAD 2022/2023, and Universidade de Pernambuco (UPE), an entity of the Government of the State of Pernambuco focused on the promotion of teaching, research, and extension. V.S.S. was funded by Fundação de Amparo à Pesquisa do Estado do Amazonas (PRODOC/FAPEAM). P.T.E. and V.S.S. are funded by CNPq - Productivity.
Programa de Pós-Graduação em Engenharia da Computação, Universidade de Pernambuco, Recife, Brazil
Elisson da Silva Rocha, Flavio Leandro de Morais Melo & Patricia Takako Endo
Universidade Federal de Pernambuco, Recife, Brazil
Maria Eduarda Ferro de Mello
Programa Mãe Coruja Pernambucana, Secretaria de Saúde do Estado de Pernambuco, Recife, Brazil
Barbara Figueiroa
Instituto Todos pela Saúde, São Paulo, Brazil
Vanderson Sampaio
E.S.R, F.L.M.M, M.E.F.M, B.F, V.S and P.T.E did writing-original draft, writing-review and editing; E.S.R, F.L.M.M, and P.T.E did conceptualization, data curation, formal analysis, investigation and methodology; and P.T.E did project administration, resources, supervision, validation and visualization. All authors read and approved the final manuscript.
Correspondence to Patricia Takako Endo.
Silva Rocha, E.d., de Morais Melo, F.L., de Mello, M.E.F. et al. On usage of artificial intelligence for predicting mortality during and post-pregnancy: a systematic review of literature. BMC Med Inform Decis Mak 22, 334 (2022). https://doi.org/10.1186/s12911-022-02082-3
Distributed algorithm under cooperative or competitive priority users in cognitive networks
Mahmoud Almasri1,
Ali Mansour1,
Christophe Moy2,
Ammar Assoum3,
Christophe Osswald1 &
Denis Lejeune1
The opportunistic spectrum access (OSA) problem in cognitive radio (CR) networks allows a secondary (unlicensed) user (SU) to access a vacant channel allocated to a primary (licensed) user (PU). By finding the best channel, i.e., the channel that has the highest availability probability, a SU can increase its transmission time and rate. To maximize the transmission opportunities of a SU, various learning algorithms have been suggested: Thompson sampling (TS), upper confidence bound (UCB), ε-greedy, etc. In our study, we propose a modified UCB version called AUCB (Arctan-UCB) that can achieve a logarithmic regret similar to TS or UCB while further reducing the total regret, defined as the reward loss resulting from the selection of non-optimal channels. To evaluate AUCB's performance in the multi-user case, we propose a novel uncooperative policy for priority access where the kth user should access the kth best channel. This manuscript theoretically establishes the upper bound on the sum regret of AUCB in the single- and multi-user cases. The users may thus, after a finite number of time slots, converge to their dedicated channels. It also focuses on the Quality of Service AUCB (QoS-AUCB) using the proposed policy for priority access. Our simulations corroborate AUCB's performance compared to TS or UCB.
The static spectrum allocation has nowadays become a major problem in wireless networks, as it results in an inefficient use of the spectrum and can generate holes or white spaces therein. The opportunistic spectrum access (OSA) concept aims at reducing this inefficiency by sharing the available spectrum of primary users (PUs), i.e., licensed users who have full access to a frequency band, with opportunistic users called secondary users (SUs). According to OSA, a SU may at any time access an unoccupied frequency band, but it must abandon the targeted channel whenever a PU restarts its transmission in its channel. Indeed, OSA optimizes the use of the spectrum with minimal impact on PUs while minimizing interference among SUs. OSA is an important strategy for cognitive radio (CR) [1]; indeed, a CR unit must execute a cognitive cycle in order to implement an OSA strategy. The main three steps of the cognitive cycle are as follows:
Spectrum sensing: A cognitive radio should be able to sense and detect possible holes in the spectrum. Indeed, the main challenge of a CR is to obtain an accurate status of the spectrum bandwidths (vacant/busy), so that a SU can access a vacant channel without interfering with the transmission of PUs. In the literature, several spectrum sensing algorithms have been proposed to detect primary users' activities, such as cumulative power spectral density (CPSD) [2], energy detection (ED) [3–6], or waveform-based sensing (WBS) [7, 8] (a minimal energy-detection sketch follows this list).
Learning and information extraction: This function generates a clear vision of the RF (radio frequency) environment. As a result, a spectrum environment database is constructed and maintained. This database is used to optimize and adapt transmission parameters. The learning and information extraction capabilities of a CR can be achieved using learning algorithms such as Thompson sampling (TS) [9], upper confidence bound (UCB) [10], and ε-greedy [11]. In [12], we proposed a learning algorithm based on UCB that monitors the quality of service, called QoS-UCB, for the multi-user case. In this paper, we also develop the QoS aspect of the newly proposed AUCB (Arctan-UCB) algorithm.
Decision making: Following the learning process, the decision about the occupancy of a spectrum should be made to access a particular spectrum bandwidth. Any good decision should depend on the environment parameters as well as on the nature of the SUs' cooperative or competitive behaviors.
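As a simple illustration of the sensing step, the sketch below implements a basic energy detector: it declares the channel occupied when the average sample energy exceeds a threshold. The signal model and threshold are assumptions made only for the example.

```python
# Minimal energy-detection sketch: compare the average sample energy to a
# threshold to decide whether a PU signal is present. Values are illustrative.
import numpy as np

def energy_detect(samples, threshold):
    return np.mean(np.abs(samples) ** 2) > threshold   # True -> channel occupied

rng = np.random.default_rng(0)
n = np.arange(1000)
noise = rng.normal(0.0, 1.0, n.size)                   # vacant channel: noise only
signal = noise + np.sqrt(2.0) * np.sin(0.1 * n)        # occupied: PU tone plus noise

print(energy_detect(noise, threshold=1.5))             # expected: False (vacant)
print(energy_detect(signal, threshold=1.5))            # expected: True (occupied)
```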
This paper investigates two major scenarios, namely SU networks with cooperative or competitive behavior, under two different policies: side channel [13] and a novel policy called PLA (priority learning access) for the multi-user case.
The past decade has witnessed an explosive demand for wireless spectrum that has led to major stress on, and scarcity of, the frequency bands. Moreover, the radio landscape has become progressively heterogeneous and very complex (e.g., several radio standards, diversity of services offered). Nowadays, the rise of new applications and technologies encourages wireless transmission and accelerates the spectrum scarcity problem. The coming wireless technologies (e.g., 5G) will support high-speed data transfer rates including voice, video, and multimedia.
In many countries, the priority bands for 5G include incumbent users, and it is essential that regulators make significant efforts to clear these frequencies for 5G use, especially in the 3.5 GHz range (3.3–3.8 GHz) [14]. These efforts may consist of (1) putting in place incentives to migrate licensees ahead of frequency allocation, (2) moving licensees to other bands or to a single portion of the frequency range, and (3) allowing licensees to exchange their licenses with mobile operators. When it is not possible to free up a band, reserving frequencies for 5G (i.e., 3.5/26/28 GHz) may enable 5G services while wasting spectrum. Indeed, according to several recent studies, frequency sharing approaches represent an efficient solution that can support both potential 5G users and the incumbent users. For instance, the Finnish regulator has chosen to adopt this approach instead of reserving frequencies for 5G users [14]. The sharing approach will help open new frequencies for 5G in areas where they are needed but underutilized by incumbent users. In this work, we are interested in opportunistic spectrum access (OSA), which represents a sharing approach in which the SUs can access the frequency bands in an opportunistic manner without any cooperation with the PUs.
Before making any decision, a SU should perform spectrum sensing in order to reduce interference with the primary users. In [15], the authors focus on different spectrum sensing techniques and their efficiency in obtaining accurate information about the status of the channel selected by a SU at a given time. Moreover, the proposed techniques are analytically evaluated under Gaussian and Rayleigh fading channels. In this work, we focus on the decision making process to help the SU reach the best channel, the one with the highest availability probability. On the one hand, this channel mitigates harmful interference with the PU, since it is not often used by the latter. On the other hand, accessing the best channel in the long term can increase the SU's transmission time and throughput capacity.
Many recent works in CR have attempted to maximize the transmission rate of the secondary user (SU) without generating harmful interference to the primary user (PU) [16, 17]. To reach this goal, these works investigate the effects of using different types of modulation such as OFDM (orthogonal frequency-division multiplexing) and SC-FDMA (single-carrier frequency-division multiple access). The main drawback of OFDM modulation is its large peak-to-average power ratio (PAPR), which may increase the interference with the PU. SC-FDMA, in contrast, is seen as a favorable modulation for maximizing the SU's transmission due to its lower PAPR and lower complexity [18]. Moreover, SC-FDMA is used in several mobile generations, such as the third-generation partnership project long-term evolution (3GPP-LTE) and the fourth generation (4G). It is also considered a promising radio access technology, offering an optimal energy-efficient power allocation framework for future generations of wireless networks [19, 20].
In this work, we choose to focus on the multi-armed bandit (MAB) approach in order to help a SU make good decisions, reduce the interference between PUs and SUs, and maximize the opportunities of the latter. In MAB, the agent may play an arm at each time slot and collect a reward. The main goal of the agent is to maximize its long-term reward or, equivalently, to minimize its total regret, defined as the reward loss resulting from the selection of bad arms. In [21–24], the authors considered the MAB approach in an OSA setting to improve spectrum learning (Footnote 1).
In MAB, the arm reward can be modeled with different models, such as the independent identically distributed (i.i.d.) or Markovian models. In this paper, we focus on the i.i.d. that represents the widely used model for a single user [24, 25] or multi-user case [23, 26, 27].
Based on the MAB problem introduced by Lai and Robbins in [10], the authors of [28] proposed several versions of UCB: UCB1, UCB2, and UCB-normal. All these versions achieve a logarithmic regret with respect to the number of played slots in the single-user case. For multiple users, we proposed respectively in [13] and [29] cooperative and competitive policies to collectively learn the vacancy probabilities of channels and decrease the number of collisions among users. The latter policies are simulated under TS, UCB, and ε-greedy algorithms. The previous simulations were conducted without any proof about the analytical convergences of these algorithms or the number of collisions among SUs. In this work, we show that the same policies achieve a better performance with AUCB compared to several existing algorithms. We also investigate the analytical convergence of these two policies under AUCB, and we show that the number of collisions in the competitive access has a logarithmic behavior with respect to time. Therefore, after a finite number of collisions the users converge to their dedicated channels.
The authors of [30] proposed a distributed learning method for multiple SUs called time-division fair share (TDFS) and proved that it achieves a logarithmic regret with respect to the number of slots. Moreover, TDFS considers that the users can access the channels with different offsets in their time-sharing schedule, and each of them achieves almost the same throughput. The work of [31] proposed the musical chair, a random access policy to manage the secondary network in which users achieve different throughputs. According to [31], each user selects a random channel up to time T0 in order to estimate the vacancy probabilities of channels and the number of users U in the network. After T0, each user randomly selects one of the U best channels. Nevertheless, the musical chair suffers from several limitations:
The user should have a prior knowledge about the number of channels in order to estimate the number of users in the network.
It cannot be used when the availability probabilities are dynamic, since the exploration and exploitation phases are independent.
It does not take the priority access into account.
To find the U best channels, the authors of [32] proposed a multi-user ε-greedy collision avoiding (MEGA) algorithm based on the ε-greedy previously proposed in [28]. However, MEGA has the same drawbacks as the musical chair. In the literature, various learning algorithms have been proposed to take priority access into account, such as selective learning of the kth largest expected rewards (SLK) [33] and kth MAB [34]. SLK is based on the UCB algorithm, while the kth MAB is based on both UCB and ε-greedy.
Contributions and paper organization
The main contributions of this manuscript are as follows:
An improved version of UCB algorithm called AUCB: In the literature, several UCB versions have been proposed to achieve a better performance compared to the classical one [28, 35–37]. However, we show that AUCB achieves a better performance compared to previous versions of UCB. By considering the widely used i.i.d. model, the regret for a single or multiple SUs can achieve a logarithmic asymptotic behavior with respect to the number of slots, so that the user may quickly find and access the best channel in order to maximize its transmission time.
Competitive policy for priority learning access (PLA): To manage a decentralized secondary network, we propose a learning policy, called PLA, that takes priority access into account. To the best of our knowledge, PLA represents the first competitive learning policy that successfully handles priority dynamic access, where the number of SUs changes over time [38], whereas several existing learning policies, such as musical chair and dynamic musical chair [31], MEGA [32], SLK [33], and kth MAB [34], consider only priority access or only dynamic access. In [38], PLA shows its superiority under UCB and TS compared to SLK, MEGA, musical chair, and dynamic musical chair. In this work, we evaluate the performance of AUCB in the multi-user case based on PLA.
The upper bound of regret: We analytically prove the asymptotical convergence of AUCB for single or multiple SUs based on our PLA and side channel policies.
Investigation of AUCB's performance against TS: TS is known to exceed the state of the art in MAB algorithms [35, 39, 40], and several studies found a concrete bound for its optimal regret [41–43]. Based on these facts, we adopt TS as a reference to evaluate AUCB's performance.
We also investigate the QoS of AUCB algorithm under our PLA policy.
Concerning this manuscript's organization, Section 2 introduces the system model for single and multi-user cases. Section 3 presents the AUCB approach for a single user as well as a novel learning policy to manage a secondary network. AUCB's performance for both single and multi-user cases are investigated in Section 4. This section also compares the performance of the PLA policy for the multi-user case to recent works. Section 5 concludes the paper.
In this section, we investigate the MAB problem for both the single and multi-user cases. We also define the regret, which is used to evaluate a given policy's performance. All parameters used in this section can be found in Table 1.
Table 1 List of notations used through the paper
Single-user case
Let C be the number of i.i.d. channels, where each channel must be in one of two binary states S: S equals 0 if the channel is occupied, and 1 otherwise. For each time slot t, the SU should sense a channel in order to see whether it is occupied or vacant and receives a reward ri(t) from the ith channel. Without any loss of generality, we will assume that a good decision's reward, e.g., the channel is vacant, equals its binary state, i.e., ri(t) = Si(t). The SU can transmit its data on a vacant channel; otherwise, it must wait for the next slot to sense and use another channel. We suppose that all channels are ordered by their mean availability probabilities, i.e., μC≤μC−1≤⋯≤μ1. The availability vector Γ=(μi) is initially unknown to the secondary user, but our goal is to estimate it over many sensing slots. If a SU has perfect knowledge of the channels and their μi, then it can select the best available channel, i.e., the first one, to increase its transmission rate. As μi is unknown to that user, we define the regret as the sum of the reward loss due to the selection of a sub-optimal channel at each slot. The regret minimization determines the efficiency of the selected strategy to find the best channel. In the single-user case, the regret R(n,β) up to the total number of slots n under a policy β can be defined as follows:
$$ R(n,\beta) = n\mu_{1} - \sum\limits\limits_{t=1}^{n} \mu_{i}^{\beta(t)}(t) $$
where n is the total number of slots; nμ1 is the expected reward accumulated in an ideal scenario, i.e., when the SU has prior knowledge and always selects the best channel; β(t) denotes the channel selected under the policy β at time t; and \(\mu _{i}^{\beta (t)}\) is the mean reward obtained for the ith channel selected at time slot t, with β(t)=i. The main target of a SU is to estimate the channels' availability as soon as possible in order to reach the channel with the highest availability. To this end, UCB was first proposed in [10] and applied in [25] to optimize the access over channels and identify the best one with the highest availability probability. UCB combines two dimensions, exploitation and exploration, represented respectively by Xi(Ti(t)) and Ai(t,Ti(t)).
The index assigned to the ith channel can be defined as follows:
$$ B_{i}\left(t,T_{i}(t)\right)=X_{i}\left(T_{i} (t)\right)+A_{i}(t,T_{i}(t)) $$
where Ti(t) is the number of times the channel i is sensed by a SU up to the time slot t. The user selects the channel β(t) at slot t that maximizes its index in the previous slot,
$$\beta(t)=\arg \max_{i} \ B_{i}\left(t-1,T_{i}(t-1)\right) $$
After a sufficient time, the user establishes a good estimation of the availability probabilities and thus can converge towards the optimal channel.
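A compact sketch of this single-user procedure is given below: it simulates i.i.d. channels, computes the index of Eq. (2) using the standard square-root exploration term (given later in Eq. (6)), and accumulates the regret of Eq. (1). The availability probabilities and α are illustrative choices.

```python
# Minimal sketch of single-user UCB channel selection (Eqs. (1), (2) and (6)).
# Channel availabilities and alpha are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.9, 0.7, 0.5, 0.3])     # unknown availability probabilities
C, n, alpha = len(mu), 10_000, 2.0

T = np.zeros(C)                         # T_i(t): times channel i was sensed
X = np.zeros(C)                         # X_i(T_i(t)): empirical mean reward
regret = 0.0

for t in range(1, n + 1):
    if t <= C:
        i = t - 1                                    # sense each channel once
    else:
        A = np.sqrt(alpha * np.log(t) / T)           # exploration factor, Eq. (6)
        i = int(np.argmax(X + A))                    # index B_i = X_i + A_i, Eq. (2)
    reward = float(rng.random() < mu[i])             # 1 if the channel is vacant
    T[i] += 1
    X[i] += (reward - X[i]) / T[i]                   # running-mean update
    regret += mu[0] - mu[i]                          # reward loss vs. the best channel

print("estimated availabilities:", np.round(X, 3))
print("cumulative regret:", round(regret, 1))
```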
Multi-user case
Let us consider U SUs trying to maximize their network's global reward. At every time slot t, each user can access a channel when available and transmits its own data. However, multiple SUs can work in cooperative or uncooperative modes. In the cooperative one, the users should coordinate their decisions to minimize the global regret of the network. On the other hand, in a non-cooperative mode, each user independently makes its own optimal decision to maximize its local reward. The regret for the multi-user case, under cooperative or competitive modes, can be written as follows:
$$ R(n,U,\beta) = n\sum\limits\limits_{k=1}^{U} \mu_{k}-\sum\limits\limits_{t=1}^{n} E\left(S^{\beta(t)}(t)\right) $$
where μk is the mean availability of the kth best channel; Sβ(t)(t) is the global reward obtained by all users at time slot t; E(.) represents the mathematical expectation; and β(t) represents all the channels selected by the users at t (Footnote 2). We can define Sβ(t)(t) by:
$$ S^{\beta(t)}(t) = \sum\limits\limits_{j=1}^{U} \sum\limits\limits_{i=1}^{C} S_{i} (t) I_{i,j} (t) $$
where the state variable Si(t) (Footnote 3) equals 0 if channel i is occupied by the PU at slot t, and 1 otherwise; Ii,j(t) = 1 if the jth user is the sole occupant of channel i at slot t, and 0 otherwise. In the multi-user case, the regret is affected by collisions among SUs and by channel occupancy, which allows us to define the regret for U SUs as shown in the following equation:
$$ R(n,U,\beta) = n \sum\limits\limits_{k=1}^{U} \mu_{k} - \sum\limits\limits_{j=1}^{U} \sum\limits\limits_{i=1}^{C} P_{i,j} (n) \mu_{i} $$
where \(P_{i,j} (n) = \sum \limits _{t=1}^{n} E\left [ I_{i,j}(t)\right ] \) stands for the expected number of times that user j is the only occupant of channel i up to n, and the mean reward is given by:
$$\mu_{i} = \frac{1}{n}\sum\limits\limits_{t=1}^{n} S_{i}(t)$$
In this section, we present a new approach inspired by the classical UCB in the single-user case, and later on we generalize our study to the multi-user case. The new approach can find the optimal channel faster than the classical UCB while achieving a lower regret. The classical UCB uses an exploration-exploitation trade-off to obtain a good estimate of the channel states and converge to the best one (see Eq. (2)). In UCB, a non-linear function is used for the exploration factor, Ai(t,Ti(t)), to ensure convergence:
$$ A_{i}(t,T_{i}(t))=\sqrt{\frac{\alpha\ln(t)}{T_{i}(t)}} \label {exploration factor} $$
where α is the exploration-exploitation factor. The effect of α on the classical UCB is well studied in the literature [22, 44, 45]. According to [28, 44, 46, 47], the best value of α should be in the range [1, 2] in order to balance the exploration and exploitation epochs. However, if α decreases, the exploration factor of UCB decreases and the exploitation factor dominates; the algorithm then converges quickly to the channel with the highest empirical reward. All previous works study the effect of Ai(t,Ti(t)) on UCB with different values of α. In this study, we focus on another form of the exploration factor Ai(t,Ti(t)), based on another non-linear function, in order to enhance the classical UCB's convergence to the best channel. Different non-linear functions with characteristics similar to those of Eq. (6) can be investigated. We should mention that this function was chosen because it has two main properties:
A positive function with respect to time t.
An increasing non-linear function to limit the effect of the exploration.
Therefore, the square-root function introduced in Eq. (6) is widely accepted [24, 28, 46, 47] as a way to restrict the exploration factor after the learning phase. Classical UCB ensures the balance between the exploration and exploitation phases at each time slot up to n using two factors, Ai(t,Ti(t)) and Xi(Ti(t)). Indeed, Ai(t,Ti(t)) is used to explore the channels' availability in order to access the best one with the highest expected availability probability Xi(Ti(t)). The classical UCB gives the exploration factor Ai(t,Ti(t)) the same weight at each time slot up to n. However, our proposal is based on the idea that the exploration factor Ai(t,Ti(t)) should play an important role during the learning phase, while it becomes less important afterwards. Indeed, after the learning phase, the user will have a good estimation of the channels' availability and can then regularly access the best channel. Subsequently, the big challenge is to restrict Ai(t,Ti(t)) after the learning phase by using another non-linear function with the following features:
It should be an increasing function with a high derivative with respect to time at the beginning to boost the exploration factor during the learning phase in order to accelerate the estimation of channels' availability.
It should have a strong asymptotic behavior in order to restrict the exploration factor Ai(t,Ti(t)) under a certain limit, when the user collects some information about channels' availability.
Subsequently, our study finds that the exploration factor can be adjusted by using the arctan function which has the above features; this proposed UCB version is called AUCB. Indeed, the arctan enhances the convergence speed to the best channel compared to the one obtained with the square-root, and the effect of the exploration factor Ai(t,Ti(t)) can be reduced after the learning phase. The algorithm then gives an additional weight to the exploitation factor Xi(Ti(t)) in the maximization of the index Bi(t,Ti(t)) (see Eq. (2)). In the next section, we will prove that AUCB's regret has a logarithmic asymptotic behavior.
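The exact arctan-based expression is not given in this excerpt, so the comparison below assumes one plausible form, the square-root term of Eq. (6) passed through a scaled arctan, purely to illustrate how such an exploration factor rises quickly at first and then saturates, letting the exploitation term dominate after the learning phase.

```python
# Illustrative only: the exact AUCB exploration term is not stated here, so we
# ASSUME the form (2/pi) * arctan(sqrt(alpha * ln(t) / T_i)), which is steep for
# small t (boosting learning-phase exploration) and bounded by 1 afterwards.
import numpy as np

def ucb_exploration(t, T_i, alpha=2.0):
    return np.sqrt(alpha * np.log(t) / T_i)          # classical UCB term, Eq. (6)

def aucb_exploration(t, T_i, alpha=2.0):
    return (2.0 / np.pi) * np.arctan(np.sqrt(alpha * np.log(t) / T_i))

for t in (10, 100, 10_000):                          # fixed T_i for comparison
    print(t, round(ucb_exploration(t, 5), 3), round(aucb_exploration(t, 5), 3))
```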
AUCB for a single user
This section focuses on the convergence of AUCB's regret for a single user. To simplify the mathematical developments, the regret of Eq. (1) can be written as:
$$\begin{array}{*{20}l} R(n,\beta) =& \sum\limits_{i=1}^{C} E[T_{i}(n)] \mu_{1} -\sum\limits_{i=1}^{C} E[T_{i}(n)] \mu_{i} \\ =&\sum\limits_{i=1}^{C} (\mu_{1}-\mu_{i}) E[T_{i}(n)] \end{array} $$
where Ti(n) represents the number of time slots in which channel i was sensed by the SU up to the total number of slots n. According to Eq. (7), the regret depends on the channels' occupancy probabilities (for stationary channels, the availability probabilities are constant) and on the expectation of Ti(n). The upper bound of E[Ti(n)] therefore yields an upper bound on the regret. The regret of our AUCB approach in the single-user case has a logarithmic asymptotic behavior, as shown in the following equation (see Appendix A):
$$ R(n,{AUCB}) \leq 8 \sum\limits_{i=2}^{C} \left[ \frac{\ln(n)}{\Delta_{(1,i)}} \right] + \left(1+\frac{\pi^{2}}{3}\right) \sum\limits_{i=1}^{C} \Delta_{(1,i)} $$
where Δ(1,i)=μ1−μi represents the gap between the availability of the best channel and that of channel i.
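As a small illustration of Eq. (7) (our own sketch, not the authors' code), the empirical single-user regret can be computed from the channel availabilities and the sensing counts Ti(n):

```python
def single_user_regret(mu, counts):
    """Regret of Eq. (7): sum_i (mu_1 - mu_i) * T_i(n), where mu_1 is the
    largest availability and counts[i] = T_i(n)."""
    mu_best = max(mu)
    return sum((mu_best - mu_i) * t_i for mu_i, t_i in zip(mu, counts))

# Example with three channels sensed over n = 100 slots:
print(single_user_regret([0.9, 0.8, 0.7], [80, 15, 5]))  # 0.1*15 + 0.2*5 = 2.5
```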
Multi-user case under uncooperative or cooperative access
To evaluate the performance of our proposed algorithm in the multi-user case, we propose an uncooperative policy, called priority learning access (PLA), to manage a secondary network. We also prove the convergence of PLA, as well as that of the side channel policy, under AUCB.
Uncooperative learning policy for the priority access
We investigate the case where the SUs take decisions according to their priority ranks. In this section, we propose a competitive learning policy that shares the available spectrum among SUs, and we prove the theoretical convergence of the PLA policy under our AUCB approach. In the multi-user case, the main challenge becomes how each SU can learn the channels' availability while keeping the number of collisions below a certain threshold. Our goal is to ensure that the U users spread out over the U best channels. In the classical priority access method, the first priority user SU1 senses and accesses the best channel, μ1, at each time slot, while the target of the second priority user SU2 is to access the second best channel. To reach this goal, SU2 should sense the two best channels, i.e., μ1 and μ2, at the same time in order to estimate their availabilities and access the second best channel when it is available. Similarly, the Uth user should estimate the availability of the U best channels at each time slot in order to access the Uth best one. However, settling each user on its dedicated channel in this way is costly and impractical. For this reason, we propose PLA, in which each SU senses a single channel per time slot while searching for its dedicated one. In our policy, each user has a dedicated rank, k ∈{1,...,U}, and its target remains the access of the kth best channel. In PLA, each user generates a rank around its prior one in order to gather information about the channels' availability (see Algorithm 1). In this case, the kth user can scan the k best channels, its target being the kth best one. However, if the generated rank of the kth user differs from k, it accesses a channel whose vacancy probability lies in the set {μ1,μ2,...,μk−1} and may collide with higher-priority users {SU1,SU2,...,SUk−1}. Moreover, after each collision, SUk should regenerate its rank in the set {1,...,k}. Thus, after a finite number of iterations, each user settles on its dedicated channel.
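The rank-regeneration step of PLA, as we read it from the description above, can be sketched as follows (an illustrative approximation, not the paper's Algorithm 1; the function names are ours):

```python
import random

def pla_new_rank(prior_rank, current_rank, collided):
    """PLA rank update for user SU_k with prior rank `prior_rank` (1-indexed):
    after a collision the user regenerates a rank uniformly in {1, ..., prior_rank};
    otherwise it keeps its currently generated rank."""
    return random.randint(1, prior_rank) if collided else current_rank

def channel_for_rank(aucb_indices, rank):
    """Sense the channel whose AUCB index B_i(t, T_i(t)) is the rank-th largest."""
    order = sorted(range(len(aucb_indices)), key=lambda i: aucb_indices[i], reverse=True)
    return order[rank - 1]
```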
Equation (9) shows that the expectation of the number of collisions in the U best channels, E[OU(n)], for PLA based on our AUCB approach has a logarithmic asymptotic behavior. Therefore, after a finite number of collisions, each user converges to its dedicated channel (see Appendix B):
$$ E[O_{U}(n)] \leq \frac{1-p}{p} \left[ \sum\limits_{k=2}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k-1,k)}} + 1 + \frac{\pi^{2}}{3} \right)+ \sum\limits_{k=1}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k,k+1)}} + 1 + \frac{\pi^{2}}{3}\right) \right] $$
where p indicates the probability of non-collision and Δ(a,b)=μa−μb.
We have also proven that the total regret of our PLA policy has a logarithmic asymptotic behavior. It is worth mentioning that the upper regret bound depends not only on the collisions among users but also on the selection of the worst channels (see Appendix C):
$$ \begin{aligned} R_{PLA}(n,U,{AUCB}) \leq \mu_{1} \left[ \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} \left(\frac{8\ln(n)}{\Delta_{(k,i)}^{2}} + 1+\frac{\pi^{2}}{3}\right) + E[O_{U}(n)] \right] \end{aligned} $$
The first term of the above equation reflects the selection of the worst channels while the second one represents the reward loss due to collisions among users in the best channels. The upper bound of the regret presented in Eq. (10) can be affected by three parameters:
The number of users, U, represented through the first summation, where k denotes the kth best channel for the kth SU.
The number of channels, C, in the second summation of the regret.
The total number of time slots, n.
Cooperative learning with side channel policy
Coordination among SUs can enhance the efficiency of their network, instead of each SU dealing only with partial information about the environment. To manage a cooperative network, we propose a policy based on the use of a side channel to exchange simple information among SUs at a very low information rate. Side channels are widely used in wireless telecommunication networks to share data among base stations [48], and specifically in the context of cognitive networks. For instance, in [49] and [50], the authors considered cooperative spectrum sharing among PUs and SUs to enhance the transmission rate of the PUs using a side channel.
The signaling channel in our policy is not wide enough to allow high-data-rate transmission, unlike the one used in [49] and [50], which must have a high rate to ensure data transmission among PUs and SUs. In our policy, the transmission is organized in sub-slots. During the first one, i.e., Sub-Slot1, SU1 (the highest priority user) searches for the best channel by maximizing its index according to Eq. (2). At the same time, and via the secure channel, SU1 informs the other users to evacuate its selected channel in order to avoid any collision with them. Avoiding the first selected channel, the second user SU2 repeats the same process, and so on. If SU2 does not receive the choice of SU1 during Sub-Slot1 (suppose that SU1 does not need to transmit during this sub-slot), it can directly choose the first suggested channel by maximizing its index Bi,2(t,Ti,2(t)).
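A minimal sketch (ours, not the authors' implementation) of the sub-slot selection just described, assuming every SU receives the broadcasts of all higher-priority SUs within the current slot; `index_matrix[j][i]` stands for B_{i,j}(t,T_{i,j}(t)):

```python
def side_channel_selection(index_matrix):
    """Users are listed in priority order (SU1 first). Each user picks its
    highest-index channel among those not yet reserved by higher-priority
    users, then broadcasts its choice on the side channel."""
    reserved = set()
    choices = []
    for user_indices in index_matrix:
        candidates = [i for i in range(len(user_indices)) if i not in reserved]
        best = max(candidates, key=lambda i: user_indices[i])
        reserved.add(best)      # the other users must evacuate this channel
        choices.append(best)
    return choices
```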
To the best of our knowledge, all previously proposed policies, such as SLK and kth MAB, consider a fixed priority, i.e., the kth best channel is reserved for the kth user at all times. Consequently, if SU1 does not transmit for a certain time, the other users cannot select better channels. The main advantages of the cooperation in our policy are as follows:
An efficient use of the spectrum where best channels are constantly accessed by users.
An increase in the transmission time of users by avoiding the collision among them.
Reaching a lower regret compared to several existing policies.
Hence, AUCB's regret under the side channel policy achieves a logarithmic asymptotic behavior according to the following equation (see Appendix D):
$$ \begin{aligned} R_{SC}(n,U,{AUCB}) \leq \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} \left[ \underbrace{\frac{8 \ln(n)}{\Delta_{(k,i)}}}_{{Term 1}} + \underbrace{\Delta_{(k,i)} \left(1+\frac{\pi^{2}}{3}\right)}_{{Term 2}}\right] \end{aligned} $$
where Δ(k,i)=μk−μi; k and i, respectively, index the best and worst channels. The upper bound of this regret contains two terms:
Term 1 achieves a logarithmic behavior over time.
Term 2 depends on the vacant channels.
Quality of service of AUCB
In [12], we studied UCB's quality of service (QoS) for the restless model, for both the single-user and multi-user cases, using the random rank policy proposed in [23] to manage a secondary network. Based on QoS-UCB, the user is able to learn the channels' availability and quality.
In this work, we also study the QoS of AUCB using our proposed PLA policy for the priority access of i.i.d. channels. Suppose that each channel has a binary quality qi(t) at slot t: qi(t)=1 if the channel has a good quality and 0 otherwise. Then, the expected quality collected from channel i up to n is given as follows:
$$ G_{i}(T_{i}(n)) = \frac{1}{T_{i}(n)}\sum\limits_{\tau=1}^{T_{i}(n)} q_{i}(\tau) $$
The global mean reward, that takes into account all channels' quality and availability, can be defined as follows [12]:
$$ \mu_{i}^{Q} = G_{i}(T_{i}(n)) \cdot \mu_{i} $$
The index assigned to the ith channel that takes into account the availability and quality of the ith channel can be defined by:
$$ B_{i}^{Q}(t,T_{i}(t))= X_{i}(T_{i}(t))-Q_{i}(t,T_{i}(t))+A_{i}(t,T_{i}(t)) $$
According to [12], the term Qi(t,Ti(t)) that represents the quality factor is given by the following equation:
$$Q_{i}(t,T_{i}(t))=\frac{\gamma M_{i}(t,T_{i}(t)) \ln(t)}{T_{i}(t)} $$
where the parameter γ stands for the weight of the quality factor, and Mi(t,Ti(t))=Gmax(t)−Gi(Ti(t)) is the difference between the maximum expected quality over the channels at time t, i.e., Gmax(t), and the one collected from channel i up to time slot t, i.e., Gi(Ti(t)). When the ith channel has a good quality Gi(Ti(t)) as well as a good availability Xi(Ti(t)) at time t, the quality factor Qi(t,Ti(t)) decreases while Xi(Ti(t)) increases. Subsequently, by selecting the maximum of its index \(B_{i}^{Q}(t,T_{i}(t))\), the user is likely to access a channel with both high quality and high availability.
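Combining Eqs. (12)-(14), a hedged sketch of the QoS index computation could look as follows (our own variable names; `gamma` is the quality weight γ and `g_max` stands for Gmax(t)):

```python
import math

def qos_aucb_index(x_mean, g_mean, g_max, t, t_i, alpha=1.5, gamma=1.0):
    # Exploitation term X_i, quality penalty Q_i of Eq. (14), and the
    # arctan exploration term A_i used by AUCB, combined as in Eq. (13).
    quality_penalty = gamma * (g_max - g_mean) * math.log(t) / t_i
    exploration = math.atan(alpha * math.log(t) / t_i)
    return x_mean - quality_penalty + exploration
```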
To conclude this part, a comparative study in terms of complexity and convergence speed to the optimal channel is presented in Table 2 for UCB, AUCB, QoS-UCB, and QoS-AUCB. These algorithms all run in \(\mathcal {O}(nC)\), where n and C represent the number of time slots and channels, respectively. Despite the lower complexity of UCB compared to AUCB, the latter converges more quickly to the optimal channel, i.e., the one with the highest availability probability.
Table 2 Algorithms complexity
AUCB's performance
In our simulations, we consider that the SU can access a single available channel at each time slot to transmit its data. In this section, we investigate AUCB's performance for both the single-user and multi-user cases. The simulations were conducted using Monte Carlo methods.
Single user tests
At first, let us consider the simple case of a single SU and let the channels' availabilities be Γ=[0.9 0.8 0.7 0.6 0.5 0.45 0.4 0.3 0.25 0.1]. The percentage of times, Pbest, that the SU selects the optimal channel is given by:
$$P_{\text{best}} = 100 \times \sum\limits_{t=1}^{n} \frac{\mathbbm{1}_{{(\beta(t)= \mu_{1}})}}{t} $$
$$\mathbbm{1}_{(a=b)}= \begin{cases} 1 & \text{if } a=b \\ 0 & \text{otherwise} \end{cases} $$
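For reproducibility, a bare-bones single-user simulation along these lines could be written as below (our own sketch of the experimental setup, not the authors' simulation code); it draws i.i.d. Bernoulli channel states from Γ and reports the overall fraction of slots in which the best channel was selected, a simplification of the running Pbest defined above:

```python
import math, random

def run_single_user(gamma, n, alpha=1.5, use_arctan=True):
    C = len(gamma)
    counts = [0] * C            # T_i(t)
    means = [0.0] * C           # X_i(T_i(t))
    best_picks = 0
    for t in range(1, n + 1):
        if t <= C:
            arm = t - 1         # initialization: sense each channel once
        else:
            explore = lambda i: (math.atan if use_arctan else math.sqrt)(
                alpha * math.log(t) / counts[i])
            arm = max(range(C), key=lambda i: means[i] + explore(i))
        reward = 1.0 if random.random() < gamma[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        best_picks += (arm == 0)          # channel 0 is the best one in this Gamma
    return 100.0 * best_picks / n

print(run_single_user([0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.3, 0.25, 0.1], 2000))
```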
In Fig. 1 (Pbest of the two approaches), Pbest shows three parts:
The first part, from 1 to C, represents the initialization, where the SU plays each channel once to obtain prior knowledge about the availability of each channel.
The second part, from C+1 to 2000 slots, represents the adaptation phase.
In the last part, the user asymptotically converges towards the optimal channel μ1.
After the initialization part, the two curves evolve in a similar way. After a few hundred slots, the proposed AUCB outperforms the classical UCB: AUCB selects the best channel about 65% of the time after roughly 1000 slots, while the classical UCB reaches only 45%.
Figure 2 shows the regret of AUCB and UCB, evaluated according to Eq. (1) for a single user. As shown in this figure, the regret of both approaches has a logarithmic asymptotic behavior over time. This result confirms the theoretical upper bound of the regret derived in Eq. (8), also plotted in Fig. 2, which is logarithmic. The same figure shows that AUCB produces a lower regret than the classical UCB, meaning that our algorithm recognizes the best channel rapidly while the classical UCB requires more time to find it.
Regret of the two approaches
Figure 3 shows the number of times that the two algorithms sense the sub-optimal channels up to time n. For the worst channels, our approach and the classical UCB behave approximately the same. On the other hand, for the near-optimal channels (in our example, channels 2 and 3, with availability probabilities 0.8 and 0.7, respectively), the classical UCB does not clearly switch to the optimal channel and spends a lot of time exploring the near-optimal ones.
Access sub-optimal channels of the two approaches
Figure 4 evaluates AUCB and UCB's performance with respect to various values of the exploration-exploitation factor α in the interval ]1, 2]. This figure shows that our approach outperforms the classical UCB and achieves a lower regret. Moreover, as α increases, the classical UCB spends more time exploring the channels in order to find the best one, while our approach reaches it within a smaller number of slots. This increases the transmission opportunities of the SU and thus decreases the total regret. In the following sections, we consider α=1.5.
Regret of the two approaches with different values of α
Multiple SUs tests
In this section, we consider U=3 users and C=10 channels whose availabilities are given by Γ=[0.9 0.8 0.7 0.6 0.5 0.45 0.4 0.3 0.2 0.1]. Figure 5 compares the regret in the multi-user case, defined in Eq. (5), for the two approaches (i.e., UCB and AUCB) under the random rank policy [23]. This policy was originally used with UCB; however, it is easy to implement it with AUCB in order to study both algorithms' performance in the multi-user case.
Regret comparison for two approaches under the random rank policy
In the random rank policy, when a collision occurs among the users, each of them generates a random rank in {1,...,U}. Although both approaches' regret achieves a logarithmic asymptotic behavior, our algorithm achieves a lower asymptotic regret and converges faster than the classical UCB.
Under the random rank policy, Fig. 6 shows the number of collisions in the U best channels (1, 2, and 3, with availability probabilities 0.9, 0.8, and 0.7, respectively) for AUCB and classical UCB. Recall that, when a collision occurs among users, no transmission takes place and each of them generates a random rank in {1,...,U}. The figure shows that the number of collisions under the random rank policy with AUCB or classical UCB is quite similar. This can be explained by a nice property of the random rank policy: it does not favor any user over another, so each user has an equal chance of settling in any of the U best channels. In other words, the random rank policy naturally achieves probabilistic fairness among users. Moreover, with AUCB, each user switches to the optimal channel faster than with the classical UCB, as shown in Fig. 3 for the single-user case, which decreases the number of collisions among users.
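The collision-handling rule of the random rank policy [23] used in this comparison can be sketched as follows (our paraphrase of the rule, with U the number of users):

```python
import random

def random_rank_after_collision(current_rank, collided, U):
    """Random rank policy: on a collision the user draws a new rank uniformly
    in {1, ..., U}; otherwise it keeps its current rank. Unlike PLA, the draw
    does not depend on the user's priority, so no user is favored."""
    return random.randint(1, U) if collided else current_rank
```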
Number of collisions in the best channels under the random rank policy
Figure 7 depicts the regret of AUCB and classical UCB under the side channel policy. As expected, both approaches' regret increases rapidly at the beginning; later on, the increase is slower for AUCB than for the classical UCB. Our algorithm thus presents the lower regret.
Regret comparison for two approaches under the side channel policy
The performance of the PLA policy
This section investigates the performance of the PLA policy under AUCB and UCB compared to the musical chair [31] and SLK [33] policies; 4 priority users are considered, accessing the channels based on their prior rank. We then compare the QoS of UCB and AUCB under the PLA policy.
Figure 8 compares the regret of PLA to that of SLK and the musical chair policies on a set of 9 channels, where PLA under AUCB achieves the lowest regret. It is worth mentioning that our policy and SLK take the priority access into account, while in the musical chair, users access the channels randomly. As shown in Fig. 8, the musical chair produces a constant regret after a finite number of slots while the regret of the other methods is logarithmic. Indeed, during the learning time T0 of the musical chair, the users randomly access the channels to estimate their availability probabilities as well as the number of users; after that, they simply access the U best channels in the long run. Consequently, the musical chair does not follow the dynamics of the channels (e.g., when the vacancy probabilities change with time). The same figure shows that SLK achieves the worst regret.
PLA, SLK, and musical chair regret
In Table 3, we compare the regret of the four methods with a fixed number of SUs (U=4) and different numbers of channels (C=5, 9, 13, 17, and 21). As the users spend more time learning the availability of channels, the regret increases significantly, due to the access to worst channels and to the collisions among users. As shown in Table 3, the regret increases with the number of channels, while our PLA policy under AUCB achieves the lowest regret in all considered scenarios (i.e., C=5, 9, 13, 17, and 21), thanks to the fact that, under our policy, the SUs learn the channels' vacancy probabilities more quickly than under the others.
Table 3 Regret comparison in the multi-user case with n=105 using the four algorithms PLA-AUCB, PLA-UCB, musical chair, and SLK with a changing number of channels
To study AUCB's QoS, let us define the empirical mean of the quality collected from the channels as G=[0.75 0.99 0.2 0.8 0.9 0.7 0.75 0.85 0.8]; the global mean reward taking into account both quality and availability is then ΓQ=[0.67 0.79 0.14 0.48 0.37 0.28 0.22 0.17 0.08]. After estimating the channels' availability and quality (i.e., ΓQ) and based on the PLA policy, the first priority user SU1 should converge to the channel with the highest global mean, i.e., channel 2, while the targets of SU2, SU3, and SU4 should respectively be channels 1, 4, and 5. This is confirmed in Fig. 9, where the priority users access their dedicated channels under both QoS-UCB and QoS-AUCB. Moreover, QoS-AUCB grants users access to their dedicated channels significantly more often than QoS-UCB.
Access channels by the priority users using QoS-AUCB and QoS-UCB
Figure 10 displays the achievable regret of QoS-AUCB and QoS-UCB in the multi-user case. In [12], the performance of QoS-UCB on the restless MAB problem was compared to several learning algorithms, such as the regenerative cycle algorithm (RCA) [51], the restless UCB (RUCB) [52], and Q-learning [53], where QoS-UCB achieved the lowest regret. From Fig. 10, one can notice that QoS-AUCB achieves better performance than QoS-UCB.
QoS-AUCB and QoS-UCB regret
AUCB compared to Thompson sampling
Thompson sampling has shown its superiority over several versions of UCB and other bandit algorithms [35]. Instead of comparing further versions of UCB to AUCB, in this section we study the performance of TS and AUCB in the multi-user case based on the PLA policy for the priority access. We use two criteria for this comparison: the access of the best channels by each user, and the regret, which depends not only on the selection of worst channels but also on the number of collisions among users.
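For reference, the per-slot channel selection of Bernoulli Thompson sampling is commonly implemented with Beta posteriors, as in the sketch below (a standard textbook recipe, not code from this paper; the uniform Beta(1,1) prior is our assumption):

```python
import random

def thompson_select(successes, failures):
    """successes[i] and failures[i] count the slots where channel i was observed
    free or busy. Each channel's availability gets a Beta(1+s, 1+f) posterior
    sample, and the channel with the largest sample is sensed."""
    samples = [random.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])
```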
In Fig. 11, we display Pbest (the percentage of times where the priority users access successfully their dedicated channels) and the cumulative regret using the PLA policy for 4 SUs based on AUCB, UCB, and TS.
The performance of AUCB, UCB, and TS
In Fig. 11a, b, the first priority user SU1 converges to its channel, i.e., the best one, followed by SU2, SU3, and SU4, respectively. Figure 11c compares Pbest of the first priority user under AUCB and TS: the first priority user converges quickly to its dedicated channel using AUCB, while under TS the user needs more time to find the best channel. Figure 11d compares the regret of AUCB, UCB, and TS in the multi-user case using the PLA policy for the priority access. Under TS, users spend more time exploring the C−U worst channels, while under AUCB they quickly reach their desired channels. A lower regret increases the successful transmission opportunities of the users. Moreover, selecting the dedicated channels within a short period is particularly valuable in a dynamic environment.
In this paper, we investigated the problem of opportunistic spectrum access (OSA) in cognitive radio networks, where a SU tries to access PUs' channels and find the best available channel as fast as possible. We proposed a new AUCB algorithm that achieves a logarithmic regret with a single user. To evaluate AUCB's performance in the multi-user case, we proposed a learning policy called PLA for secondary networks that takes the priority access into account. We also investigated PLA's performance compared to recent works, such as SLK and the musical chair. We theoretically derived upper bounds on AUCB's total regret for a single user as well as for the multi-user case under the proposed policy. Our simulation results show a logarithmic regret under AUCB and corroborate AUCB's performance compared to UCB and TS. It has also been shown that AUCB rapidly converges to the best channel while achieving a lower regret, improving the transmission time and rate of SUs. Moreover, PLA under AUCB decreases the number of collisions among users in the competitive scenario, thanks to a faster estimation of the channels' vacancy probabilities.

Like most works in OSA, this one focused on the independent and identically distributed (i.i.d.) model, in which the state of each channel is assumed to be drawn from an i.i.d. process. In future work, we plan to consider a Markov process, which provides a dynamic model with memory for the state of available channels, although it is more complex than the i.i.d. one. Moreover, our current model ignores dynamic traffic at the secondary nodes; extending our algorithm with a queueing-theoretic formulation is therefore desirable. For a more realistic model, future work will also investigate the effect of state-of-the-art spectrum sensing techniques, used to detect the activity of the primary users, on the performance of learning and decision-making. Finally, considering imperfect sensing, i.e., the probabilities of false alarm and missed detection, represents a further challenge toward a more realistic network.
Appendix A: Convergence proof of AUCB
In this Appendix, we show that the upper bound of the regret of AUCB is logarithmic with respect to time, which means that after a finite time, the user will identify and access the best channel, with availability μ1. The regret for a single user up to the total number of slots n under a policy β can be expressed as follows:
$$ R(n,\beta) = \sum\limits_{i=1}^{C} (\mu_{1}-\mu_{i}) E\left[T_{i}(n)\right] $$
where C represents the number of channels; μ1 and μi are the availability probabilities of the best channel and of the ith channel, respectively; E(.) denotes the mathematical expectation; Ti(n) is the number of times that the ith channel has been sensed by the user up to n. According to Eq. (15), and with constant channel availabilities, an upper bound on Ti(n) yields an upper bound on R(n,β).
The user senses the ith channel during the initialization stage and every time β(t)=i, where β(t) represents the channel selected at instant t under the policy β; then, Ti(n) can be expressed as follows:
$$ T_{i}(n)=1+\sum\limits_{t=C+1}^{n} \mathbbm{1}_{\{\beta(t)=i\}} $$
where the indicator 𝟙{β(t)=i} equals one if β(t)=i and zero otherwise. Let us consider that the SU senses each channel at least l times up to n; then, according to Eq. (16), Ti(n) can be bounded as follows:
$$ T_{i}(n)\leq l +\sum\limits_{t=C+1}^{n} \mathbbm{1}_{\left\{\beta(t)=i; T_{i}(t-1)\geq l\right\}} $$
As AUCB selects at each time slot the channel with the highest index obtained in the previous slot, the user may access, at slot t, a non-optimal channel only if the index of this channel at (t−1), Bi(t−1,Ti(t−1)), is higher than the index of the best channel, Bi(t−1,T∗(t−1)). In this case, we can develop Eq. (17) further as follows:
$$ T_{i}(n)\leq l +\sum\limits_{t=C+1}^{n} \mathbbm{1}_{\{B_{i}({t-1,T^{*}(t-1)})< B_{i}({t-1,T_{i}(t-1)}); T_{i}(t-1)\geq l\}} $$
According to Eq. (2), the index of channels Bi(t,Ti(t))=Xi(Ti(t))+Ai(t,Ti(t)) is based on:
The exploitation factor Xi(Ti(t)) representing the expected availability probability.
The exploration factor Ai(t,Ti(t)) that forces the algorithm to explore different channels. This factor under AUCB is defined as follows:
\( A_{a}\left (t,T_{i}(t)\right) = \arctan \left (\frac {\alpha \ln (t)}{T_{i}(t)}\right)\),
Using Eq. (18), we can prove that:
$$ T_{i}(n)\leq l +\sum\limits_{t=C+1}^{n} \mathbbm{1}_{\left\{ X_{i}(T^{*}(t-1))+ A_{a}(t-1,T^{*}(t-1)) < X_{i}(T_{i}(t-1)) + A_{a}(t-1, T_{i}(t-1)); \; T_{i}(t-1)\geq l \right\}} $$
The summation argument in the above equation follows Bernoulli's distribution (i.e., E{X}=P{X=1}). In this case, the expectation of Ti(n) should satisfy the following constraint:
$$ \begin{aligned} E[T_{i}(n)]\leq l + \sum\limits_{t=C+1}^{n} P \left\{X_{i}(T^{*}(t-1))+ A_{a}(t-1,T^{*}(t-1)) < X_{i}(T_{i}(t-1)) + A_{a}(t-1, T_{i}(t-1)) \ \text{and} \ T_{i}(t-1)\geq l\right\} \end{aligned} $$
The probability in Eq. (20) becomes:
$$ \begin{aligned} Prob = P\left\{ X_{i}(T^{*}(t-1))-X_{i}(T_{i}(t-1)) \leq \arctan \left(\frac{\alpha\ln(t)}{T_{i}(t-1)}\right) - \arctan\left(\frac{\alpha\ln(t)}{T^{*}(t-1)}\right) \ \text{and} \ T_{i}(t-1)\geq l \right\} \end{aligned} $$
After the learning period, where Ti(t−1)≥l, the user has a good estimate of the channels' availability and may thus access the best channel regularly. Therefore, Ti(t−1)≪T∗(t−1) and \(\arctan \left (\frac {\alpha \ln (t)}{T_{i}(t-1)}\right) \geq \arctan \left (\frac {\alpha \ln (t)}{T^{*}(t-1)}\right)\). Using the behaviors of the square-root and arctan functions, the probability in Eq. (21) can be bounded by:
$$ \begin{aligned} Prob \leq P\left\{ X_{i}(T^{*}(t-1))-X_{i}(T_{i}(t-1))\leq \sqrt{ \frac{\alpha\ln(t)}{T_{i}(t-1)} }-\sqrt{ \frac{\alpha\ln(t)}{T^{*}(t-1)}} \ \text{and} \ T_{i}(t-1)\geq l \right\} \end{aligned} $$
By taking the minimum value of \(X_{i}({T^{*}(t-1)}) + \sqrt {\frac {\alpha \ln (t)}{T^{*}(t-1)}}\) and the maximum value of \(X_{i}({T_{i}(t-1)}) + \sqrt {\frac {\alpha \ln (t)}{T_{i}(t-1)}} \) at each time slot, we can upper bound Eq. (20) by the following equation:
$$ \begin{aligned} E[T_{i}(n)]\leq l +\sum\limits_{t=C+1}^{n} \\ P\left\{ \min_{0<S^{*}< t} \left[ X_{i}\left({S^{*}}\right) + \sqrt{\frac{\alpha\ln(t)}{S^{*}}} \right] \leq \right.\\ \left. \max_{l\leq S_{i}< t} \left[ X_{i}({S_{i}}) + \sqrt{\frac{\alpha\ln(t)}{S_{i}}} \right] \right\} \end{aligned} $$
where Si≥l to fulfill the condition Ti(t−1)≥l. Then, we obtain:
$$ \begin{aligned} &E[T_{i}(n)]\leq l + {\sum\limits_{t=1}^{n} \sum\limits_{S^{*}=1}^{t-1} \sum\limits_{S_{i}=l}^{t-1}} \\ &P\left\{X_{i}({S^{*}}) + A_{i}(t,S^{*}) < X_{i}(S_{i})+A_{i}(t,S_{i}) \right\} \end{aligned} $$
The inequality Xi(S∗)+Ai(t,S∗)<Xi(Si)+Ai(t,Si) is satisfied only when at least one of the three following inequalities holds:
$$\begin{array}{*{20}l} X_{i}\left({S^{*}}\right) \leq \mu_{1}-A_{i}\left({t,S^{*}}\right) \end{array} \tag{25a} $$
$$\begin{array}{*{20}l} X_{i}(S_{i}) \geq \mu_{i}+A_{i}({t,S_{i}}) \end{array} \tag{25b} $$
$$\begin{array}{*{20}l} \mu_{1}<\mu_{i}+2 A_{i}({t,S_{i}}) \end{array} \tag{25c} $$
In fact, if all three inequalities fail to hold, then we have:
$$\begin{array}{*{20}l} X_{i}\left({S^{*}}\right)+A_{i}\left({t,S^{*}}\right)>\mu_{1} &\geq \mu_{i}+2 A_{i}({t,S_{i}}) \\ & > X_{i}(S_{i})+A_{i}({t,S_{i}}) \end{array} $$
which contradicts the event in Eq. (24). Using the ceiling operator ⌈·⌉, let \(l=\lceil \frac {4 \alpha \ln (n)}{\Delta _{(1,i)}^{2}}\rceil \), where Δ(1,i)=μ1−μi and Si≥l; then Eq. (25c) becomes false, since:
$$\begin{array}{*{20}l} \mu_{1}-\mu_{i}-2 A_{i}({t,S_{i}}) & = \mu_{1}-\mu_{i}-2\sqrt{\frac{\alpha\ln(t)}{S_{i}} }\\ &\geq \mu_{1}-\mu_{i}-2\sqrt{\frac{\alpha\ln(n)}{l}} \\ & \geq \mu_{1}-\mu_{i} - \Delta_{(1,i)} = 0 \end{array} $$
Based on Eqs. (24), (25a), and (25b), we obtain:
$$ \begin{aligned} E[T_{i}(n)]\leq \left \lceil\frac{4 \alpha \ln(n)}{\Delta_{(1,i)}^{2}} \right \rceil + {\sum\limits_{t=1}^{n} \sum\limits_{S^{*}=1}^{t-1} \sum\limits_{S_{i}=l}^{t-1}} \\ \left\{P\left\{X_{i}({S^{*}})\leq \mu_{1} - A_{i}(t,S^{*}) \right\}+ \right.\\ \left. P\left\{X_{i}(S_{i})\geq \mu_{i} + A_{i}(t,S_{i}) \right\}{\vphantom{S^{*}}}\right\} \end{aligned} $$
Using the Chernoff-Hoeffding bound [54] (see Footnote 4), we can prove that:
$$\begin{array}{*{20}l} P\left\{X_{i}({S^{*}}) \leq \mu_{1}-A_{i}({t,S^{*}})\right\} & \leq \exp^{\frac{-2} {S^{*}}\left[S^{*} \sqrt{\frac{\alpha \ln(t)}{S^{*}}} \right]^{2}} \\ & = t^{-2\alpha} \end{array} $$
$$\begin{array}{*{20}l} P\left\{X_{i}(S_{i})\geq \mu_{i}+A_{i}({t,S_{i}})\right\}& \leq \exp^{\frac{-2} {S_{i}}\left[S_{i} \sqrt{\frac{\alpha \ln(t)}{S_{i}}} \right]^{2}} \\ &= t^{-2\alpha} \end{array} $$
The two equations above and Eq. (26) lead us to:
$$\begin{array}{*{20}l} E[T_{i}(n)] &\leq \left \lceil\frac{4 \alpha \ln(n)}{\Delta_{(1,i)}^{2}} \right \rceil + \sum\limits_{t=1}^{n} \sum\limits_{S^{*}=1}^{t-1} \sum\limits_{S_{i}=l}^{t-1} 2 t^{-2 \alpha} \\ & \leq \frac{4 \alpha \ln(n)}{\Delta_{(1,i)}^{2}}+1 + 2\sum\limits_{t=1}^{n} t^{-2\alpha+2} \end{array} $$
According to the Cauchy series criterion [55], the parameter α should be greater than \(\frac {3}{2}\) for the second term in the above equation to admit a finite bound. Let α=2; to evaluate \( \sum \limits _{t=1}^{n} t^{-2}\), we consider the Taylor series expansion of sin(t):
$$ \sin(t) = t - \frac{t^{3}}{3!} +... + (-1)^{k} \frac{t^{2k+1}}{(2k+1)!}+... $$
As sin(t)=0 when t=±kπ, then we obtain:
$$\begin{array}{*{20}l} \sin(t)& = t\times \left(1-\frac{t^{2}}{\pi^{2}}\right) \times...\times \left(1-\frac{t^{2}}{k^{2} \pi^{2}} \right)...\\ & = t -\left(\sum\limits_{i=1}^{n} \frac{1}{i^{2} \pi^{2}}\right)t^{3} +... \end{array} $$
By comparing the coefficient of t³ in the above product with that in Eq. (30), we obtain \(\sum \limits _{i=1}^{\infty} \frac {1}{i^{2}}= \frac {\pi ^{2}}{3!}\), so that \(2\sum \limits _{t=1}^{n} t^{-2} \leq \frac{\pi^{2}}{3}\). Finally, we obtain the upper bound of E[Ti(n)] as follows:
$$ E[T_{i}(n)]\leq \frac{8\ln(n)}{\Delta_{(1,i)}^{2}} +1+\frac{\pi^{2}}{3} $$
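As a quick numerical sanity check of the final constant (our own addition, relying only on the standard Basel sum), note that \(2\sum_{t=1}^{\infty} t^{-2} = 2\cdot\frac{\pi^{2}}{6} = \frac{\pi^{2}}{3}\), which matches the π²/3 term in Eq. (31):

```python
import math

partial = 2 * sum(1 / t**2 for t in range(1, 100001))
print(partial, math.pi**2 / 3)   # both are approximately 3.2899
```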
Appendix B: Upper bound the collision number under PLA
Here, we show that the total number of collisions occurring among secondary users in the U best channels, \(O_{U}(n) = \sum \limits _{k=1}^{U} O_{k}(n)\), under our PLA policy has a logarithmic asymptotic behavior. Hence, after a finite number of collisions, the users converge to their dedicated channels, i.e., the U best ones. Let \(O_{C}(n) = \sum \limits _{i=1}^{C} O_{i}(n)\) be the total number of collisions encountered by the users in all channels, where C represents the number of available channels, and let Dk(n) be the total number of collisions experienced by the kth priority user in all channels. To clarify the idea, Table 4 presents a case study with the corresponding Dk(n) and Ok(n). E[OU(n)] can be expressed as follows:
$$\begin{array}{*{20}l} E[ O_{U}(n)] &= \sum\limits_{k=1}^{U} E[O_{k}(n)] \leq \sum\limits_{i=1}^{C} E[O_{i}(n)] \\ & = \sum\limits_{k=1}^{U} E[D_{k}(n)] \end{array} $$
Table 4 Two SUs access three available channels
We assume that when the users have a good estimate of the channel availabilities and each one accesses its dedicated channel, a collision-free state is achieved. On the other hand, the kth user may collide with other users in two cases:
If it does not clearly identify its dedicated channel.
If it does not respect its prior rank (see Footnote 5).
Let \(T^{'}_{k}(n)\) and Ss be, respectively, the total number of times that the kth user misidentifies its dedicated channel, and the time needed to return to its prior rank. After each bad estimation, the user changes its dedicated channel; in this case, it may collide with other users until it converges back to its prior rank. Subsequently, for all values of n, the total number of collisions of the kth user, Dk(n), can be upper bounded by:
$$ D_{k}(n) \leq T^{'}_{k}(n) S_{s} $$
As \(T^{'}_{k}(n)\) and Ss are independent, we have:
$$ E[D_{k}(n)] \leq E[T^{'}_{k}(n)] E[S_{s}] $$
Let us find an upper bound of \(E[T^{'}_{k}(n)]\), and let \(\mathbb {A}_{k}(t)\) be the event that the kth user identifies its dedicated channel, the kth best one, at the instant t. Then, ∀ k+1≤i≤C and 1≤m≤k−1, the event \(\mathbb {A}_{k}(t)\) takes place when the following condition is satisfied:
$$\mathbb{A}_{k}(t): B_{i}(t) \leq B_{k}(t) \leq B_{m}(t) $$
The bad estimation event \(\bar {\mathbb {A}}_{k}(t)\) occurs at instant t when there exists i ∈{k+1,...,C} or m ∈{1,...,k−1} such that:
$$\bar{\mathbb{A}}_{k}(t): \left[ B_{i}(t) > B_{k}(t) \right] \ {or} \ \left[ B_{k}(t) > B_{m}(t) \right] $$
Then, the expected number of times the kth priority user misidentifies its dedicated channel up to n, \(E\left [T^{'}_{k}(n)\right ]\), can be upper bounded as follows:
$$ E\left[T^{'}_{k}(n)\right] \leq E\left[T_{B_{i} > B_{k}}(n)\right] + E\left[T_{B_{m} < B_{k}}(n)\right] \label {equ:BadEstimationn} $$
where \(T_{B_{i} > B_{k}}(n)\) represents the number of times the index of the ith channel exceeds that of the kth best one, for all i∈{k+1,...,C}, up to n; and \(T_{B_{m} < B_{k}}(n)\) represents the number of times the index of the kth best channel exceeds that of the mth one, for all m ∈ {1,...,k−1}. It is worth mentioning that, for the first priority user, \(E\left [T_{B_{m} < B_{k}}(n)\right ]\) equals 0, since its dedicated channel has the highest availability probability. For the kth user, \(T_{B_{i} > B_{k}}(n)\) plays the same role as Ti(n) in Appendix A; therefore, as in Eq. (31), this term can be upper bounded by:
$$ E\left[T_{B_{i} > B_{k}}(n)\right] \leq \frac{8 \ln(n)}{\Delta^{2}_{(k,i)}} + 1 + \frac{\pi^{2}}{3} $$
where Δ(k,i)=μk−μi. Since μi≤μk+1 for all i∈{k+1,...,C} and μk≥μk+1≥...≥μC, we have Δ(k,i)≥Δ(k,k+1). Subsequently, \(E\left [T_{B_{i} > B_{k}}(n)\right ]\) can be upper bounded by:
$$\begin{array}{*{20}l} E\left[T_{B_{i} > B_{k}}(n)\right] &\leq \frac{8 \ln(n)}{\Delta^{2}_{(k,k+1)}} + 1 + \frac{\pi^{2}}{3} \end{array} $$
Similarly, the second term \(E[T_{B_{m} < B_{k}}(n)]\) in Eq. (35) should satisfy:
$$\begin{array}{*{20}l} E\left[T_{B_{m} < B_{k}}(n)\right] &\leq \frac{8 \ln(n)}{\Delta^{2}_{(m,k)}} + 1 + \frac{\pi^{2}}{3} \end{array} $$
where Δ(m,k)≥Δ(k−1,k) for all m∈{1,...,k−1}. Then, we obtain,
$$\begin{array}{*{20}l} E\left[T_{B_{m} < B_{k}}(n)\right] & \leq \frac{8 \ln(n)}{\Delta^{2}_{(k-1,k)}} + 1 + \frac{\pi^{2}}{3} \end{array} $$
Based on Eq. (35), (37), and (39), \(\phantom {\dot {i}\!}E\left [T^{'}(n)\right ]\) can be expressed as follows:
$$ \begin{aligned} E\left[T^{'}(n)\right] \leq \sum\limits_{k=1}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k,k+1)}} + 1 + \frac{\pi^{2}}{3}\right) +\sum\limits_{k=2}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k-1,k)}} + 1 + \frac{\pi^{2}}{3} \right) \end{aligned} $$
Let us now estimate the time Ss, considering U users with different priority levels under our PLA policy. Suppose that, at a certain moment, each user holds a random rank; then at least two of them may have the same rank, and a collision may occur. In this case, each user involved in a collision regenerates a random rank around its prior rank (see Footnote 6). After a finite number of collisions, the system converges to the steady state in which each user has a unique rank, i.e., its prior rank. Let Ss be a random variable with countable outcomes 1, 2, ..., occurring with probabilities p1, p2, ..., respectively, where pt represents the probability of a non-collision at instant t. The expectation of Ss can be expressed as follows:
$$ E\left[S_{s}\right]=\sum\limits_{t=1}^{\infty} t p[S_{s}=t] $$
where the random variable Ss follows the probability p[Ss=t]:
$$ p[S_{s}=t]= (1-p)^{t}p $$
and p indicates the probability of non-collision at an instant t, while (1−p)t indicates the probability of having collisions from the instant 0 till t−1. Then, we obtain:
$$\begin{array}{*{20}l} E[S_{s}] = \sum\limits_{t=1}^{\infty} t (1-p)^{t} p \end{array} $$
Let Ia(x) be defined as follows:
$$ I_{a}(x)= (1-a) \sum\limits_{t=1}^{\infty} (a x)^{t} $$
where a is a constant such that ax<1. Ia(x) converges to:
$$I_{a}(x) = \frac{(1-a)ax}{1-ax} $$
Based on the previous equation, we have:
$$\frac{d I_{a}(x)}{d x} = \frac{ (1-a)a}{(1-a x)^{2}} $$
Using the previous equation, we obtain:
$$ \frac{d I_{a}(x)}{d x}\Big\rvert_{x=1}=\frac{a}{(1-a)} $$
Considering a=1−p, we conclude that \(E[S_{s}]= \frac {1-p}{p}\). To clarify the idea and estimate the probability p, consider three SUs trying to find their prior ranks; Table 5 displays all the possible cases. The probability of converging to the steady state, i.e., case 3, is \(p=\frac {1}{5}\), and thus E[Ss]=4.
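A small simulation (ours) of the stated distribution p[Ss=t]=(1−p)^t p confirms that E[Ss]=(1−p)/p, e.g., 4 when p=1/5:

```python
import random

def sample_ss(p):
    # Number of collision slots before the first non-collision slot,
    # i.e., a draw from p[Ss = t] = (1 - p)^t * p.
    t = 0
    while random.random() >= p:
        t += 1
    return t

p = 0.2
est = sum(sample_ss(p) for _ in range(100_000)) / 100_000
print(est, (1 - p) / p)   # both close to 4
```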
Table 5 Three SUs trying to converge towards a steady state where each one finds its prior rank. The roman number indicates the number of users selecting the same rank
To estimate the value of p, as well as E[Ss], let us consider the problem discussed in [56, Chapter 5] of counting the number of ways of putting U identical balls into U different boxes. According to [56, Chapter 5], the probability p of converging to a steady state where each box contains exactly one ball is \(p=\frac {1}{{U \choose 2U-1}}\), and \(E[S_{s}]= {U\choose 2U-1}-1\). Our convergence problem is a restricted case of the problem introduced in [56]; therefore, the expected time for our PLA policy to converge to a steady state with U SUs can be upper bounded by:
$$ E[S_{s}] \leq {U \choose 2U-1} - 1 $$
Based on Eqs. (32), (34), (40), and (45) the total number of collisions in the best channels for U SUs can be upper bounded by:
$$ \begin{aligned} E[O_{U}(n)] \leq &\left[{U \choose 2U-1} - 1 \right] \\ &. \left[ \sum\limits_{k=1}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k,k+1)}} + 1 + \frac{\pi^{2}}{3}\right) +\right. \left. \sum\limits_{k=2}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k-1,k)}} + 1 + \frac{\pi^{2}}{3} \right) \right] \end{aligned} $$
The above equation shows that there is a finite number of collisions in the U best channels for PLA based on AUCB before each user converges to its dedicated channel.
Appendix C: Upper bound the regret of PLA under AUCB
The global regret under the multi-user case according to Eq. (5) can be defined as follows:
$$ R(n,U,\beta) = n\sum\limits_{k=1}^{U} \mu_{k} - \sum\limits_{j=1}^{U} \sum\limits_{i=1}^{C} E \left[P_{i,j} (n) \right]\mu_{i} $$
where μk is the availability probability of the kth best channel and Pi,j(n) represents the total number of times user j accesses channel i without collision, up to n. Let Ti,j(n) be the total number of times the jth user senses the ith channel up to n. Let \(T_{i}(n) =\sum \limits _{j=1}^{U}T_{i,j}(n)\) and \(P_{i}(n) = \sum \limits _{j=1}^{U}P_{i,j}(n)\) represent, respectively, the total number of times channel i is sensed by all users and the total number of times the users access channel i without colliding with each other, up to n. Let Ok(n) be the number of collisions in the kth best channel (as defined at the beginning of Appendix B). In terms of Tk(n) and Pk(n) for the kth best channel, Ok(n) can be expressed as follows:
$$ O_{k}(n) = T_{k}(n) - P_{k}(n) $$
It is worth mentioning that the number of channels C should be higher than the number of users U, otherwise:
Using a learning algorithm to find the best channels does not make any sense, since all channels need to be accessed.
Since each user senses one channel at each time slot, at least one collision necessarily occurs among the users at each slot, so the users cannot converge to a collision-free state under any learning policy.
Subsequently, assuming C≥U and μ1≥μi ∀i, we can upper bound the regret in Eq. (47) of our PLA policy under our AUCB algorithm as follows:
$$\begin{array}{*{20}l}R_{PLA}(n,U,\text{AUCB}) & \leq n\sum\limits_{k=1}^{U} \mu_{k} - \sum\limits_{k=1}^{U} \mu_{k} E \left[P_{k} (n)\right] \leq \mu_{1} \left(Un - \sum\limits_{k=1}^{U} E \left[P_{k} (n)\right]\right) \end{array} $$
Since each user senses one channel at each time slot, we have:
$$ \sum\limits_{i=1}^{C} \sum\limits_{j=1}^{U} T_{i,j}(n)=\sum\limits_{i=1}^{C} T_{i}(n)=Un $$
Based on the above expression, the regret can be expressed as follows:
$$ \begin{aligned} &R_{PLA}(n,U,{AUCB}) \leq \mu_{1} \left(\sum\limits_{i=1}^{C} E\left[ T_{i}(n)\right] - \sum\limits_{k=1}^{U} E \left[P_{k} (n)\right]\right) \end{aligned} $$
We can break \(\sum \limits _{i=1}^{C} E\left [T_{i}(n)\right ]\) into two terms:
$$ \sum\limits_{i=1}^{C} E[T_{i}(n)]= \sum\limits_{k=1}^{U} E\left[T_{k}(n)\right] + \sum\limits_{i=U+1}^{C} E[T_{i}(n)] $$
Based on Eq. (50) and (51), we obtain the following equation:
$$ \begin{aligned} &R_{PLA}(n,U,\text{AUCB}) \leq \mu_{1} \left[ \left(\sum\limits_{i=U+1}^{C} E[T_{i}(n)] \right) + E[O_{U}(n)] \right] \end{aligned} $$
It is worth mentioning that the global regret in the multi-user case depends on the selection of worst channels as well as on the number of collisions among users. Equation (52) reflects this definition: the first term, Ti(n), represents the access of the worst channels by all users, and the second term, OU(n), is the number of collisions of all users in the U best channels. In order to bound the regret, we need to bound the two terms E[Ti(n)] and E[OU(n)]. In Eq. (31), we calculated the upper bound of E[Ti(n)] for a single user. In fact, E[Ti(n)] in Appendix A has the same properties as E[Ti,j(n)] in the multi-user case; the difference is that, in the single-user case, μi∈{μ2,μ3,...,μC}, while in the multi-user case, according to Eq. (52), μi belongs to {μ(U+1),μ(U+2),...,μC}. Therefore, for each user in the multi-user case, we obtain:
$$\begin{aligned} &E\left[T_{i,1}(n)\right]\leq \frac{8\ln(n)}{\Delta_{(1,i)}^{2}}+1+\frac{\pi^{2}}{3}\\ &\vdots\\ &E\left[T_{i,U}(n)\right]\leq \frac{8\ln(n)}{\Delta_{(U,i)}^{2}}+1+\frac{\pi^{2}}{3} \end{aligned} $$
Consequently, the upper bound of E[Ti(n)] for all users becomes:
$$ E[T_{i}(n)] = \sum\limits_{j=1}^{U} E\left[T_{i,j}\right] \leq \sum\limits_{k=1}^{U} \left[ \frac{8\ln(n)}{\Delta_{(k,i)}^{2}}+1+\frac{\pi^{2}}{3}\right] $$
Finally, based on Eqs. (46), (52), and (53), the global regret of users for AUCB and under PLA can be expressed as follows:
$$ \begin{aligned} &R_{PLA}(n,U,\text{AUCB}) \leq \\ &\mu_{1} \left[ \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} \left(\frac{8\ln(n)}{\Delta_{(k,i)}^{2}} + 1+\frac{\pi^{2}}{3}\right) +\right. \\ &\frac{1-p}{p} \left[ \sum\limits_{k=2}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k-1,k)}} + 1 + \frac{\pi^{2}}{3} \right)+\right. \\ &\left.\left.\sum\limits_{k=1}^{U} \left(\frac{8 \ln(n)}{\Delta^{2}_{(k,k+1)}} + 1 + \frac{\pi^{2}}{3}\right) \right] \right] \end{aligned} $$
The above regret contains three components: the first is due to the loss of reward when the users select worst channels; the second and third represent the loss of reward due to collisions among users in the U best channels. In fact, the regret of PLA is larger than the regret under the side channel policy introduced in Appendix D, Eq. (60): the cooperation among the SUs through the side channel avoids collisions and achieves a lower regret than PLA.
Appendix D: Upper bound the regret of the side channel under AUCB
In this section, we prove that the regret of our AUCB algorithm in the multi-user case under the side channel policy has a logarithmic asymptotic upper bound. In this policy, we assume that no collision occurs among users when the priority user broadcasts its channel choice, i.e., we do not consider the possible loss of the broadcast packet; taking that scenario into account would only add constant terms to the regret due to the resulting collisions. The regret under the cooperative access can be defined as follows:
$$ R_{SC}(n,U,{AUCB}) = n\sum\limits_{k=1}^{U} \mu_{k} - \sum\limits_{i=1}^{C} \sum\limits_{j=1}^{U} \mu_{i} E \left[T_{i,j} (n)\right] $$
According to Eq. (49), we obtain:
$$ \sum\limits_{i=1}^{C} \sum\limits_{j=1}^{U} E \left[T_{i,j} (n)\right] = \sum\limits_{i=1}^{C} E\left[T_{i}(n)\right]=Un $$
Based on Eqs. (55) and (56), the regret can be expressed as follows:
$$\begin{array}{*{20}l} R_{SC}(n,U,\text{AUCB}) &= \frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{i=1}^{C}\mu_{k} E\left[T_{i}(n)\right] -\sum\limits_{i=1}^{C} \mu_{i} E[T_{i}(n)] \\ & = \frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{i=1}^{C} \left(\mu_{k} E[T_{i}(n)]- \mu_{i} E[T_{i}(n)] \right) \\ & = \frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{i=1}^{C} E\left[T_{i}(n)\right] \Delta_{(k,i)} \end{array} $$
where Δ(k,i)=μk−μi; k and i represent, respectively, the kth best channel and the ith channel. To simplify the above equation, we split the summation over the best and worst channels as follows:
$$ \begin{aligned} R_{SC}(n,U,\text{AUCB})=\frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{k=1}^{U} E\left[T_{k}(n)\right] \Delta_{(k,k)} + \frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} E[T_{i}(n)] \Delta_{(k,i)} \end{aligned} $$
The first term of the regret in Eq. (58) equals 0, then we obtain:
$$ R_{SC}(n,U,\text{AUCB}) = \frac{1}{U} \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} E[T_{i}(n)] \Delta_{(k,i)} $$
E[Ti(n)] has the same properties and definition as that calculated in Appendix C, Eq. (53). Finally, the regret of AUCB under the side channel policy is bounded by:
$$ \begin{aligned} R_{SC}(n,U,{AUCB}) \leq \sum\limits_{k=1}^{U} \sum\limits_{i=U+1}^{C} \left[\frac{8\ln(n)}{\Delta_{(k,i)}}+\Delta_{(k,i)} \left(1+\frac{\pi^{2}}{3}\right)\right] \end{aligned} $$
In this Appendix, we proved that the global regret of AUCB under the side channel policy has a logarithmic asymptotic behavior with respect to n, which means that after a period of time, each user has a good estimate of the channels' availability and accesses a channel based on its rank.
The authors declare that all the data and materials in this manuscript are available from the authors.
Footnote 1. A SU in OSA is equivalent to a MAB agent trying to access a channel at each time slot in order to increase its gain.
Footnote 2. β(t) indicates the channel selected by the user at instant t in the single-user case, while in the multi-user case it indicates the channels selected by all users at slot t.
Footnote 3. The variable Si(t) may represent the reward of the ith channel at slot t.
Footnote 4. According to [54], the Chernoff-Hoeffding theorem is defined as follows: Let X1,...,Xn be random variables in [0,1] with E[Xt]=μ, and let \(S_{n} = \sum \limits _{i=1}^{n} X_{i} \). Then, ∀a≥0, we have \(P\{S_{n} \geq n \mu +a\} \leq \exp ^{\frac {-2 a^{2}}{n}}\) and \(P\{S_{n} \leq n \mu -a\} \leq \exp ^{\frac {-2 a^{2}}{n}}\).
Footnote 5. After each collision and according to our policy PLA, the user should regenerate a rank.
Footnote 6. For SUk, it should regenerate a rank in the set {1,...,k}.
OSA: Opportunistic spectrum access
CR: Cognitive radio
PU: Primary user
SU: Secondary user
TS: Thompson sampling
UCB: Upper confidence bound
AUCB: Arctan-UCB
ED: Energy detection
CPSD: Cumulative power spectral density
WBS: Waveform-based sensing
PLA: Priority learning access
TDFS: Time-division fair share
MEGA: Multi-user ε-greedy collision avoiding
SLK: Selective learning of the kth largest expected rewards
J. Mitola, G. Maguire, Cognitive radio: making software radios more personal. IEEE Pers. Com.6(4), 13–18 (1999).
A. Nasser, A. Mansour, K. C. Yao, H. a. Abdallah, Spectrum sensing based on cumulative power spectral density. EURASIP J Adv. Sig. Process.2017(1), 38 (2017).
D. Bhargavi, C. Murthy, in SPAWC. Performance comparison of energy, matched-filter and cyclostationarity-based spectrum sensing (IEEEMarrakech, Morocco, 2010).
H. Urkowitz, Energy detection of unknown deterministic signals. Proc. IEEE. 55(4), 523–531 (1967).
J. Wu, T. Luo, G. Yue, in International Conference on Information Science and Engineering. An energy detection algorithm based on double-threshold in cognitive radio systems (IEEENanjing, China, 2009).
C. Liu, M. Li, M. -L. Jin, Blind energy-based detection for spatial spectrum sensing. IEEE Wirel. Com. Lett.4(1), 98–101 (2014).
A. Sahai, R. Tandra, S. M. Mishra, N. Hoven, in International Workshop on Technology and Policy for Accessing Spectrum. Fundamental design tradeoffs in cognitive radio systems (ACM PressBoston, USA, 2006).
H. Tang, in International Symposium on New Frontiers in Dynamic Spectrum Access Networks. Some physical layer issues of wide-band cognitive radio systems (IEEEBaltimore, USA, 2005).
W. R. Thompson, On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika. 25(3), 285–294 (1933).
T. Lai, H. Robbins, Asymptotically efficient adaptive allocation rules. Adv. Appl. Math.6(1), 4–22 (1985).
C. J. C. H. Watkins, Learning from delayed rewards. PhD thesis, University of Cambridge (1989).
N. Modi, P. Mary, C. Moy, Qos driven channel selection algorithm for cognitive radio network: Multi-user multi-armed bandit approach. IEEE Trans. Cog. Com. Networking. 3(1), 1–6 (2017).
M. Almasri, A. Mansour, C. Moy, A. Assoum, C. Osswald, D. Lejeune, in EECS. Opportunistic spectrum access in cognitive radio for tactical network (IEEEBern, Switzerland, 2018).
GSMA Report: Spectre 5g position de politique publique de la GSMA (2019).
A. Nasser, A. Mansour, K. C. Yao, H. Abdallah, H. Charara, Spectrum sensing based on cumulative power spectral density. EURASIP J. Adv. Sig. Process. 2017(1), 38 (2017).
X. Kang, Y. -C. Liang, A. Nallanathan, H. K. Garg, R. Zhang, Optimal power allocation for fading channels in cognitive radio networks: Ergodic capacity and outage capacity. IEEE Trans. Wirel. Commun.8(2), 940–950 (2009).
X. Kang, H. K. Garg, Y. -C. Liang, R. Zhang, Optimal power allocation for OFDM-based cognitive radio with new primary transmission protection criteria. IEEE Trans. Wirel. Commun.9(6), 2066–2075 (2010).
H. G. Myung, J. Lim, D. J. Goodman, Single carrier FDMA for uplink wireless transmission. IEEE Veh. Technol. Mag.1(3), 30–38 (2006).
E. E. Tsiropoulou, A. Kapoukakis, S. Papavassiliou, in IFIP Networking Conference. Energy-efficient subcarrier allocation in SC-FDMA wireless networks based on multilateral model of bargaining (IFIPNew York, 2013).
E. E. Tsiropoulou, A. Kapoukakis, S. Papavassiliou, Uplink resource allocation in SC-FDMA wireless networks: a survey and taxonomy. Comput. Netw.96:, 1–28 (2016).
W. Jouini, C. Moy, J. Palicot, Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. Eurasip J. Wirel. Commun. Netw.2012(1), 26 (2012).
L. Melian-Gutierrez, N. Modi, C. Moy, F. Bader, I. Perez-lvarez, S. Zazo, Hybrid UCB-HMM: a machine learning strategy for cognitive radio in HF band. IEEE Trans. on Cog. Com. Networking. 1(3), 347–358 (2015).
A. Anandkumar, N. Michael, A. Tang, A. Swami, Distributed algorithms for learning and cognitive medium access with logarithmic regret. IEEE J. Sel. Areas Com.29(4), 731–745 (2011).
R. Agrawal, Sample mean based index policies with o(log n) regret for the multi-armed bandit problem. Adv. Appl. Probab.27(4), 1054–1078 (1995).
W. Jouini, D. Ernst, C. Moy, J. Palicot, in ICC. Upper confidence bound based decision making strategies and dynamic spectrum access (IEEECape Town, South Africa, 2010).
K. Liu, Q. Zhao, Distributed learning in multi-armed bandit with multiple players. IEEE Trans. Sig. Process. 58(11), 5667–5681 (2010).
Isotropic random vectors
2011-08-10 – 2021-05-24
sparser than thou
high d
Maybe related 🤷:
Randomized low dimensional projections
Random rotations
Fun with rotational symmetries
The Gaussian distribution
Orthonormal and unitary matrices
Simulating isotropic vectors
On the \(d\)-sphere
On the \(d\)-ball
Marginal distributions
Inner products
Archimedes Principles
Funk-Hecke
wow! This notebook entry is now 10 years old. I have cared about this for ages.
Random variables with radial symmetry. It would be appropriate to define these circularly, so I will, which is to say: (centred) isotropic random vectors are those whose distribution is unchanged under fixed rotations, or random rotations. Generating such variables. Distributions thereof.
In particular I consider fun tricks with isotropic Gaussians, RVs uniform on the \(d\)-sphere or on the \(d\)-ball, i.e. things where their distribution is isotropic with respect to the cartesian inner product and thus the \(\ell_2\) norm ends up being important. Much of this generalises to general spherically-contoured distributions (as opposed to just distributions on the sphere, which is a terminological wrinkle I should expand upon).
To begin, consider \(\mathbb{R}^d\) with the \(L_2\) norm, i.e. \(d\)-dimensional euclidean space.
Let's say you wish to simulate a random variable, \(X\) whose realisations are isotropically distributed unit-vectors (say, to represent isotropic rotations, or for a null-distribution against which to check for isotropy.)
The simplest way I know of generating such vectors is to take \[ X \sim \mathcal{N}(\mathbf{0}_d, \mathbf{I}_d) \]
That is, \(X\) distributed by a multivariate Gaussian distribution with each component independent and of standard deviation 1, so if \(X_i\) are i.i.d. with \(X_i \sim \mathcal{N}(0,1)\), then \[ X = \left[\begin{array}{c} X_1 \\ X_2 \\ \vdots \\ X_d \end{array} \right] \]
By the rotational invariance of multivariate Gaussian distributions we know that this must be isotropic.
Update: Martin Roberts collects a bumper list, How to generate uniformly random points on n-spheres and in n-balls. There are some surprising ones.
Now, to get unit vectors we can simply normalise
\[ U \equiv \frac{X}{\left\| X \right\|} \]
(And if you use the "hit-and-run� Monte Carlo sampler, this is your daily bread.)
David K observes the obvious way to generate Lebesgue-uniform vectors on the \(d\)-ball is to generate spherical vectors then choose a random length \(L\) such that they are uniform on the ball. The implied length CDF should be \[ F_{L}(x)=\mathbb{P}(L \leq x)=\left\{\begin{array}{ll} 0 & x<0 \\ x^{d} & 0 \leq x \leq 1 \\ 1 & x>1 \end{array}\right. \] implying a pdf \[ f_{L}(x)=\left\{\begin{array}{ll} d x^{d-1} & 0 \leq x \leq 1, \\ 0 & \text { otherwise, } \end{array}\right. \] so if \(U\sim\operatorname{Unif}([0,1])\) then \(L= U^{1/d}\) has the right distribution.
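A minimal R sketch of that recipe, reusing the runif_sphere helper above (so it inherits that helper's assumptions): draw a uniform direction and an independent radius \(U^{1/d}\).
# Draw n points uniformly in the unit d-ball: uniform direction times radius U^(1/d).
runif_ball <- function(n, d) {
  u <- runif_sphere(n, d)   # uniform directions
  r <- runif(n)^(1 / d)     # radius with density d r^(d-1) on [0, 1]
  u * r                     # scales row i of u by r[i]
}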
Bonus time: In fact, we can do more than just \(\ell_2\) spheres - we can also get \(\ell_p\) balls. See Barthe et al. (2005) (HT Mark Meckes). We simulate \(X_{1}, \ldots, X_{d}\) independently with density \(f_X(x)\propto\exp \left(-|x|^{p}\right)\), and \(Y\sim\operatorname{Exp}(1)\) independent. Then the random vector \[ \frac{ [X_{1}, \ldots, X_{d}] }{ \left(Y+\|[X_{1}, \ldots, X_{d}]\|_p^p\right)^{1/p} } \] is uniformly distributed in the unit ball of \(\ell_{p}^{d}\). When \(p=2\), this resembles the Gaussian RV trick, although they are not trivially the same. For \(p\neq 2\) these are not isotropic in the rotationally-invariant sense, though.
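A hedged R sketch of that construction; the one non-obvious step is simulating the density \(\propto\exp(-|x|^p)\), which I do here via the observation that \(|X|^p\) is then \(\operatorname{Gamma}(1/p,1)\)-distributed (my own shortcut, worth checking against Barthe et al.).
# One uniform sample from the unit ball of l_p^d (after Barthe et al. 2005).
runif_lp_ball <- function(d, p) {
  w <- rgamma(d, shape = 1 / p, rate = 1)                # |X_i|^p ~ Gamma(1/p, 1)
  x <- sample(c(-1, 1), d, replace = TRUE) * w^(1 / p)   # density proportional to exp(-|x|^p)
  y <- rexp(1)                                           # independent Exp(1)
  x / (sum(abs(x)^p) + y)^(1 / p)
}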
Question: what is the marginal distribution of the axial component \(U_i\), of \(U\), along axis \(i\), in the non-trivial case with \(d>1\)?
Let us consider, w.l.o.g., \(U_1\). We know that the p.d.f of \(U_1\) is even (i.e. has reflective symmetry) by construction. So it will suffice to find the positive half of the distribution function. Thus it will also be enough to find the p.d.f. of \(U_1^2\).
Define \[ X' \equiv \left[\begin{array}{c} X_2 \\ X_3 \\ \vdots \\ X_d \end{array} \right] \] i.e. \(X'\) is \(X\) with the first component of the vector excised. We have \[ \begin{array}{cccc} & U & = & \frac{X}{\left\| X \right\|} \\ \Rightarrow & U^2 & = & \frac{X^2}{\left\| X \right\|^2} \\ \Rightarrow & U_1^2 & = & \frac{X_1^2}{\left\| X \right\|^2} \\ \Rightarrow & U_1^2 & = & \frac{X_1^2}{\left\| X' \right\|^2 + X_1^2}\\ \Rightarrow & \frac{1}{U_1^2} -1 & = & \frac{\left\| X' \right\|^2}{X_1^2}\\ \Rightarrow & \frac{\frac{1}{U_1^2} -1}{d-1} & = & \frac{\left\| X' \right\|^2 / (d-1)}{X_1^2} \end{array} \]
Now, the R.H.S. has an \(\mathcal{F}(d-1,1)\) distribution, as the ratio of independent \(\chi^2\) variables each divided by their degrees of freedom, and the work there has already been done for us by Snedecor. Call that R.H.S. \(G\), where \(G \sim \mathcal{F}(d-1,1)\). Then \[ U_1^2 = \frac{1}{G(d-1)+1}. \]
Now we can construct the pdf of \(U_1\). In fact, for my purposes I need the quantile/inverse-cdf function, so let's do that. Write \(H(x) = 1/\sqrt{(d-1)\,\mathrm{qf}(x)+1}\), where \(\mathrm{qf}\) is the quantile function of the \(\mathcal{F}(d-1,1)\) distribution, and write \(F(x)\) for the quantile function of \(U_1\) (for 'half' and 'full').
Then we splice two symmetric copies of the function about the axis - \(F(x) = \mathrm{sgn}(2x-1) H(1-|2x-1|)\) - and we're done.
Here's a plot of the behaviour of our distribution:
Figure: quantile functions of the axial component in various dimensions.
Notice that axial components tend to be long in \(\mathbb{R}^2\), and short in more than 3 dimensions. And uniformly distributed in 3 dimensions. Intuitively, as the number of dimensions increases, the contribution of any one axis to the total length is likely to be smaller on average, because there are simply more of them.
And here's the R code to generate the graph, most of which is plotting logic.
library(RColorBrewer)  # provides brewer.pal

half.quantile <- function(x, d = 3) {
  g <- qf(x, d - 1, 1)
  1 / sqrt(g * (d - 1) + 1)
}

full.quantile <- function(x, d = 3) {
  x.scaled <- 2 * x - 1
  sign(x.scaled) * half.quantile(1 - abs(x.scaled), d)
}

pts <- seq(0, 1, 1/512)
dims <- seq(2, 10)
ndims <- length(dims)
vals <- data.frame(outer(pts, dims, full.quantile))
colnames(vals) <- paste(dims)
xrange <- c(0, 1)
yrange <- c(-1, 1)
colors <- brewer.pal(ndims, "Spectral")

# set up an empty plot
plot(xrange, yrange, type = "n", xlab = "x", ylab = "quantile")

# add one line per dimension
for (i in seq(ndims)) {
  lines(pts, vals[, i], type = "l", lwd = 2, col = colors[i], lty = 1)
}

# add a legend
legend(xrange[1], yrange[2], dims, cex = 0.8, col = colors,
       lty = 1, title = "Dimension")
There are many interesting approximate distributions for these quantities, which are explored in the low-d projections notebook.
There is an exact distribution for inner products of normalized vectors. Suppose \(\vv{X}_p,\vv{X}_q\) are independent and uniform on the unit sphere in \(\mathbb{R}^D\), and set \(L=\tfrac12 +\vv{X}_p^\top\vv{X}_q/2\). Then \(L\sim\operatorname{Beta}((D-1)/2,(D-1)/2).\)
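A quick R check of that claim under my reading of it (independent uniform sphere vectors; runif_sphere as above):
set.seed(2)
D <- 6
xp <- runif_sphere(1e4, D); xq <- runif_sphere(1e4, D)
L <- 0.5 + rowSums(xp * xq) / 2
# Beta((D-1)/2, (D-1)/2) has mean 1/2 and variance 1/(4D); compare empirically:
c(mean(L), 0.5, var(L), 1 / (4 * D))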
Kimchi lover derives Variance of x_i chosen from uniformly distributed hypersphere
\[\boldsymbol{x}\sim\operatorname{Unif}\mathbb{B}^{d}\Rightarrow \operatorname{Var}(X_1)=\frac{1}{d+2}.\] See also Covariance matrix of uniform spherical distribution wherein a very simple symmetry argument gives
\[\boldsymbol{x}\sim\operatorname{Unif}\mathbb{S}^{d-1}\Rightarrow \operatorname{Var}(X_1)=\frac{1}{d}.\] and indeed \[\boldsymbol{x}\sim\operatorname{Unif}\mathbb{S}^{d-1}\Rightarrow \operatorname{Var}(\boldsymbol{x})=\frac{1}{d}\mathrm{I}.\]
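Both identities are easy to check numerically with the samplers sketched earlier:
set.seed(3)
d <- 7
c(var(runif_ball(1e5, d)[, 1]), 1 / (d + 2))    # ball: empirical vs 1/(d+2)
c(var(runif_sphere(1e5, d)[, 1]), 1 / d)        # sphere: empirical vs 1/d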
First mention: The first \((n-2)\) coordinates on a sphere are uniform in a ball
Djalil Chafaï mentions the Archimedes principle which
… states that the projection of the uniform law of the unit sphere of \(\mathbb{R}^{3}\) on a diameter is the uniform law on the diameter. It is the case \(n=3\) of the Funk-Hecke formula…. More generally, if \(\left(X_{1}, \ldots, X_{n}\right)\) is a random vector of \(\mathbb{R}^{n}\) uniformly distributed on the unit sphere \(\mathbb{S}^{n-1}\) then \(\left(X_{1}, \ldots, X_{n-2}\right)\) is uniformly distributed on the unit ball of \(\mathbb{R}^{n-2}\). It does not work if we replace \(n-2\) by \(n-1\).
Djalil Chafaï introduces the Funk-Hecke formula:
In its basic form, the Funk-Hecke formula states that for all bounded measurable \(f:[-1,1] \mapsto \mathbb{R}\) and all \(y \in \mathbb{S}^{n-1}\), \[ \int f(x \cdot y) \sigma_{\mathbb{S}^{n-1}}(\mathrm{d} x)=\frac{\Gamma\left(\frac{n}{2}\right)}{\sqrt{\pi} \Gamma\left(\frac{n-1}{2}\right)} \int_{-1}^{1} f(t)\left(1-t^{2}\right)^{\frac{n-3}{2}} \mathrm{d} t \] The formula does not depend on \(y\), an invariance due to spherical symmetry.…
If this is an inner product with a sphere-uniform RV then we get the density for a univariate random projection.
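A small R sanity check of the Funk–Hecke identity for one choice of \(f\) (here \(f(t)=t^2\)) and \(y=e_1\), using runif_sphere from above; both sides should come out near \(1/n\).
set.seed(4)
n <- 5
f <- function(t) t^2
x <- runif_sphere(2e5, n)
lhs <- mean(f(x[, 1]))  # Monte Carlo estimate of the spherical integral
cn <- gamma(n / 2) / (sqrt(pi) * gamma((n - 1) / 2))
rhs <- cn * integrate(function(t) f(t) * (1 - t^2)^((n - 3) / 2), -1, 1)$value
c(lhs, rhs)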
Barthe, Franck, Olivier Guedon, Shahar Mendelson, and Assaf Naor. 2005. "A Probabilistic Approach to the Geometry of the \(\ell_p^n\)-Ball." The Annals of Probability 33 (2).
Christensen, Jens Peter Reus. 1970. "On Some Measures Analogous to Haar Measure." Mathematica Scandinavica 26 (June): 103–6.
Grafakos, Loukas, and Gerald Teschl. 2013. "On Fourier Transforms of Radial Functions and Distributions." Journal of Fourier Analysis and Applications 19 (1): 167–79.
Meckes, Elizabeth. 2012. "Projections of Probability Distributions: A Measure-Theoretic Dvoretzky Theorem." In Geometric Aspects of Functional Analysis: Israel Seminar 2006–2010, edited by Bo'az Klartag, Shahar Mendelson, and Vitali D. Milman, 317–26. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer.
Stam, A. J. 1982. "Limit Theorems for Uniform Distributions on Spheres in High-Dimensional Euclidean Spaces." Journal of Applied Probability 19 (1): 221–28.
Vembu, S. 1961. "Fourier Transformation of the n-Dimensional Radial Delta Function." The Quarterly Journal of Mathematics 12 (1): 165–68.
Image Denoising via Fast and Fuzzy Non-local Means Algorithm
Junrui Lv* and Xuegang Luo*
Corresponding Author: Xuegang Luo* ([email protected])
Junrui Lv*, School of Computer Science and Engineering, Panzhihua University, Sichuan, China, [email protected]
Xuegang Luo*, School of Computer Science and Engineering, Panzhihua University, Sichuan, China, [email protected]
Received: January 3 2018
Revision received: February 28 2018
Accepted: July 29 2019
Abstract: The non-local means (NLM) algorithm is an effective and successful denoising method, but it is computationally heavy. To deal with this obstacle, we propose a novel NLM algorithm with a fuzzy metric (FF-NLM) for image denoising in this paper. A new similarity measure built from visual features and a fuzzy metric is utilized to measure the similarity between image pixels in the presence of Gaussian noise. Similarity measures of luminance and structure information are calculated using the fuzzy metric. A smooth kernel is constructed with the proposed fuzzy metric instead of the Gaussian-weighted L2 norm kernel. The fuzzy metric and smooth kernel computationally simplify the NLM algorithm and avoid hard-to-tune filter parameters. Meanwhile, the proposed FF-NLM using visual structure better preserves the original undistorted image structures. The performance of the improved method is visually and quantitatively comparable with or better than that of current state-of-the-art NLM-based denoising algorithms.
Keywords: Fuzzy Metric , Image Denoising , Non-local Means Algorithm , Visual Similarity
1. Introduction
Images are an important way for humans to obtain and transmit information. Noise is an inevitable part of any imaging pipeline during acquisition, transmission, and recording, resulting in low image quality. It is difficult for existing image denoising algorithms to eliminate the noise while preserving structural and textural details.
Many nonlinear filters, such as weighted Gaussian filtering, mean filtering, and wavelet soft-threshold denoising, have been introduced in the past few years. Their implementation in existing hardware is easy. However, such filters can easily discard certain information of an image and cannot obtain favorable results. A series of novel denoising algorithms with effective performance, such as anisotropic filtering, weighted bilateral filtering, non-local means (NLM), and block-matching 3D (BM3D), has been presented in recent years [1]. Among these outstanding methods, the NLM algorithm by Buades et al. [2] is a highly effective method for image denoising and has great influence in the image processing field. Therefore, NLM has attracted considerable attention from numerous scholars.
Traditional denoising methods estimate each pixel individually, whereas the NLM denoising algorithm works on image patches. As image patches in natural images share many similar structures, the structural information carried by patches is used to estimate the true value of the corresponding pixels. Therefore, NLM shows enhanced denoising performance. This efficient denoising algorithm explicitly exploits self-similarities between image patches.
The NLM is superior in denoising performance, but it still has several problems, such as a huge computational cost and difficult parameter selection, which make it hard to achieve the optimal denoising effect in practical engineering applications. Hence, different methods have been developed for patch similarity measures and important parameters through related optimization strategies. Investigation and analysis show that the major cost of NLM lies in calculating the Euclidean distance weighted by a Gaussian kernel function. In addition, the Euclidean distance is used directly to measure the similarity between image patches, regardless of image edges and structure. NLM with the Euclidean distance thus imposes a high computational complexity and limits the denoising performance.
Relevant existing literature aimed at solving this problem is briefly summarized here. The improved methods in [3] used L2-norm successive elimination and an integral image to selectively calculate the Euclidean distance and reduce the computational complexity. NLM with grey theory, which can reasonably remove noise and is efficient in capturing details, was introduced in [4] by computing the structural similarity via grey relational coefficients. May et al. [5] proposed a low-rank approximation for improving NLM operators to reduce runtime effectively.
In the present study, an improved NLM image denoising algorithm, which utilizes a fuzzy metric of visual features to measure the similarity between patches, is introduced to address the inaccuracy of the NLM similarity measure.
The remainder of this paper is organized as follows: Section 2 introduces the NLM algorithm. Section 3 elaborates the concept of fuzzy metric and constructs a visual similarity, which depicts an improved denoising algorithm of visual features, on the basis of the fuzzy metric. Section 4 reports the experimental results of the comparison of the proposed method with state-of-the-art methods. Finally, Section 5 provides the concluding remarks.
2. NLM Image Denoising Method
The basic NLM denoising algorithm is a weighted filter whose coefficients are determined by the similarity of image patches in the image. Let v and u be the observed noisy and clean images, respectively, and let i and j be pixel indexes. The recovered value can be derived as a weighted average of the pixel values in the image.
NLM can be defined as follows:
[TeX:] $$\mathbf{y}(i)=\sum w(i, j) v(j), j \in S_{i},$$
where y(i) is the estimated intensity value at pixel i and [TeX:] $$S_{i}$$ is the search window of s × s around i. w(i, j) represents the weight function between the image patches around i and j, which is defined by
[TeX:] $$w(i, j)=\frac{1}{z(i)} \exp \left(-\frac{\left\|P_{i}-P_{j}\right\|_{2}^{2}}{(p h)^{2}}\right),$$
where [TeX:] $$\|\cdot\|_{2}^{2}$$ denotes the squared Euclidean distance, h is the bandwidth parameter controlling the blur of the Gaussian kernel function, and [TeX:] $$\mathrm{z}(i)=\sum_{j} w(i, j)$$ is a normalization factor. p denotes the side length of the image patch, and [TeX:] $$P_{i}$$ denotes the image patch of width p around i, as illustrated in Eq. (3).
[TeX:] $$P_{i}=\left\{y\left(i+\frac{v}{2}\right), v \in[-p, p]\right\}.$$
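As a point of reference for the modifications discussed below, the following is a minimal (and deliberately slow) R sketch of the plain NLM estimate of Eqs. (1)–(2) for one pixel. The variable names, the border handling, and the normalization of the exponent (patch size times h squared rather than exactly the (ph)^2 of Eq. (2)) are our own choices, not those of the original implementation.
# Plain NLM estimate of the pixel at (i, j); v is a noisy grayscale image matrix,
# s is the search radius, p the patch radius, h the bandwidth.
nlm_pixel <- function(v, i, j, s = 10, p = 3, h = 10) {
  patch <- function(rr, cc) v[(rr - p):(rr + p), (cc - p):(cc + p)]
  Pi <- patch(i, j)  # assumes (i, j) lies at least p pixels from the border
  rows <- max(i - s, 1 + p):min(i + s, nrow(v) - p)
  cols <- max(j - s, 1 + p):min(j + s, ncol(v) - p)
  w <- outer(rows, cols, Vectorize(function(rr, cc)
    exp(-sum((Pi - patch(rr, cc))^2) / (length(Pi) * h^2))))  # Gaussian-weighted L2
  sum(w * v[rows, cols]) / sum(w)  # normalized weighted average, Eq. (1)
}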
NLM denoising is an outstanding image denoising method [1]. However, this algorithm requires a large amount of computation, the main part of which is calculating w(i, j). As a result, the computational complexity for an image of size [TeX:] $$(m \times n) \text { is } O\left(m n p^{2} s^{2}\right).$$ Acceleration strategies have been utilized to reduce the computational complexity [3]. Most representative methods adopt dimensionality reduction, but the dimension reduction process itself requires a considerable number of multiplications [5,6].
3. Visual Similarity Based on Fuzzy Metric
The NLM uses the spatial correlation across the entire image, based on the similarity of each pixel and its neighborhood, for noise removal. Selecting a reasonable similarity measure is key to the good performance of the NLM algorithm. The Mahalanobis distance [7] was used to replace the Euclidean distance for measuring the similarity of image patches via singular value decomposition. However, singular value decomposition is a time-consuming operation.
This section introduces the concept of fuzzy metrics and its broad range of examples. Then, a fuzzy metric is used to build a visual similarity from the structure and luminance of image patches.
3.1 Fuzzy Metric
Fuzzy geometry has recently become an active research area because of its broad range of examples. A fuzzy metric M on a set X defined using continuous t-norms was introduced and studied by Morillas et al. [8] and Gregori et al. [9]. The fuzzy metric is adopted here mainly for the following two advantages:
1. The outcome given by M always lies in [0,1], regardless of the nature of the distance being measured.
2. The M values are directly compatible with other fuzzy methods because the outcome obtained by M can be applied as a fuzzy certainty degree.
Assume * is a continuous t-norm, i.e., a binary operation [TeX:] $$[0,1] \times[0,1] \rightarrow[0,1],$$ and X is a nonempty set. [TeX:] $$\Re$$ is a fuzzy set on [TeX:] $$X \times X \times(0, \infty)$$ satisfying the following five requirements [TeX:] $$\left(R_{1} \text { to } R_{5}\right)$$ for all [TeX:] $$x, y, z \in X \text { and } t, c>0:$$
[TeX:] $$R_{1}: \Re(x, y, t)>0,$$
[TeX:] $$R_{2}: \Re(x, y, t)=1 \text { if and only if } x=y,$$
[TeX:] $$R_{3}: \Re(x, y, t)=\Re(y, x, t),$$
[TeX:] $$R_{4}: \Re(x, y, t) * \Re(y, z, c) \leq \Re(x, z, t+c),$$
[TeX:] $$R_{5}: \Re(x, y, \cdot):(0, \infty) \rightarrow[0,1] \text { is continuous. }$$
The pair [TeX:] $$(\Re, *)$$ is then a fuzzy metric on the set X, and [TeX:] $$(X, \Re, *)$$ denotes a fuzzy metric space [9]. Fuzzy metrics provide a series of advantages over classical metrics and can be incorporated to handle problems in different branches. In particular, several successful applications of fuzzy metric-based image filtering have been introduced in [10,11].
Assuming [TeX:] $$X \in[a, b], G>|a|>0, \alpha>0 \text { and } l=1,2 \ldots n,$$ we define the fuzzy metric for two vectors, namely, [TeX:] $$\boldsymbol{x} \text { and } \boldsymbol{y}, \text { where } \boldsymbol{x}=\left(x_{1}, \ldots, x_{l}\right) \text { and } \boldsymbol{y}=\left(y_{1}, \ldots, y_{l}\right), t>0.$$
As a particular case of well-known fuzzy metrics, let [TeX:] $$\rho(t)$$ be an increasing continuous function with values in [TeX:] $$[0,+\infty). \text { If } \alpha>0,$$ the function M is defined by
[TeX:] $$M_{l}^{\alpha}(\boldsymbol{x}, \boldsymbol{y}, t)=\prod_{i=1}^{l}\left(\frac{\min \left\{x_{i}, y_{i}\right\}+\rho(t)}{\max \left\{x_{i}, y_{i}\right\}+\rho(t)}\right)^{\alpha}.$$
[TeX:] $$M_{l}^{\alpha}(\boldsymbol{x}, \boldsymbol{y}, t)$$ is interpreted as the degree of closeness between the vectors (x, y) with respect to t, and is the fuzzy metric used below. The vectors (x, y) are associated with the pixels in a squared neighborhood P of the noisy image. The impulse noise reduction filter for color images in [8], which integrates a fuzzy metric with a fast similarity computation, achieves high computational performance and a good filtering effect.
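To illustrate, a small R sketch of the fuzzy metric of Eq. (4) with ρ(t) = t; the function name and the example values are ours. For nonnegative inputs the result always lies in (0, 1] and equals 1 exactly when x = y.
# Fuzzy metric of Eq. (4) with rho(t) = t.
fuzzy_metric <- function(x, y, t = 255, alpha = 1) {
  prod(((pmin(x, y) + t) / (pmax(x, y) + t))^alpha)
}
fuzzy_metric(c(10, 20, 30), c(10, 25, 30))  # close to 1 for similar vectors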
3.2 Visual Feature Similarity
Although NLM has offered remarkably promising results, measuring patch similarity with the plain Euclidean distance ignores the structural content of the patches, so structural information is not fully exploited in the recovered image. In terms of the structure and luminance intensity of image patches, this section introduces a similarity measure based on visual features and the fuzzy metric for the NLM weights.
The fuzzy metric of Section 3.1 is specialized to a simplified form that highlights image edges and texture within a squared neighborhood, so as to emphasize the significance of edge and texture information in the image.
If we take [TeX:] $$\alpha=1 \text { and } \rho(t)=t, \text { setting } \mathrm{y}_{i}=\bar{x}_{i}=\frac{1}{l} \sum_{j=1}^{l} x_{j},$$ then Eq. (4) is simplified as
[TeX:] $$H_{x_{i}}=H\left(x_{i}, \bar{x}_{i}, t\right)=\left(\frac{\min \left\{x_{i}, \bar{x}_{i}\right\}+t}{\max \left\{x_{i}, \bar{x}_{i}\right\}+t}\right).$$
The fuzzy metric of an image pixel is obtained using Eq. (5). The range of [TeX:] $$H_{x_{i}}$$ is [0, 1], and t is set to 255 for grayscale images.
Luminance refers to the fundamental chromatic property of the human visual system. Spatial luminance contrast is a significant visual feature. Luminance contrast difference indicates the maximum and minimum difference of the luminance fuzzy metric of image patches, which is described as
[TeX:] $$L_{x}=\frac{\max \left(H_{x}\right)-\min \left(H_{x}\right)}{\max \left(H_{x}\right)}.$$
Therefore, luminance contrast fuzzy metric can be expressed as
[TeX:] $$C F\left(x_{i}, x_{j}\right)=1-\left|L_{x_{i}}-L_{x_{j}}\right|,$$
where [TeX:] $$L_{x_{i}}$$ is the luminance contrast difference for [TeX:] $$x_{i}, L_{x_{j}}$$ is the luminance contrast difference for [TeX:] $$x_{j},$$ and [TeX:] $$|.|$$ is an absolute value operator.
A structure fuzzy metric of the spatial structure closeness between image patches is proposed. [TeX:] $$S F\left(x_{i}, x_{j}\right)$$ is given by
[TeX:] $$S F\left(x_{i}, x_{j}\right)=\frac{\sum_{k=1}^{p^{2}}\left(1-\left|H_{x_{i k}}-H_{x_{j k}}\right|\right)}{p^{2}}$$
to measure the similarity between the vectors [TeX:] $$\left(x_{i}, x_{j}\right).$$
Finally, the proposed visual feature similarity uses a novel fuzzy metric that describes the distance criteria of luminance and structure, as defined by
[TeX:] $$D\left(x_{i}, x_{j}\right)=C F\left(x_{i}, x_{j}\right)^{\alpha} S F\left(x_{i}, x_{j}\right)^{\beta},$$
where [TeX:] $$\alpha \text { and } \beta$$ are two parameters with preset values. On this basis, the fuzzy metric D measures the similarity between patches by simultaneously considering the similarity of the luminance component and of the structural neighborhood of the patches. As previously stated, [TeX:] $$D\left(x_{i}, x_{j}\right)$$ provides the similarity between [TeX:] $$x_{i} \text { and } x_{j},$$ which will be high only if both similarities are high.
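Putting Eqs. (5)–(9) together, a hedged R sketch of the visual feature similarity D between two grayscale patches given as vectors of pixel values in [0, 255]; α = β = 1 as in the experiments, and the helper names are ours.
# Per-pixel fuzzy metric H of Eq. (5), computed against the patch mean, t = 255.
H_patch <- function(x, t = 255) (pmin(x, mean(x)) + t) / (pmax(x, mean(x)) + t)
# Luminance contrast difference of Eq. (6).
L_contrast <- function(Hx) (max(Hx) - min(Hx)) / max(Hx)
# Visual feature similarity of Eqs. (7)-(9).
visual_similarity <- function(xi, xj, alpha = 1, beta = 1) {
  Hi <- H_patch(xi); Hj <- H_patch(xj)
  CF <- 1 - abs(L_contrast(Hi) - L_contrast(Hj))  # luminance term, Eq. (7)
  SF <- mean(1 - abs(Hi - Hj))                    # structure term, Eq. (8)
  CF^alpha * SF^beta                              # Eq. (9)
}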
3.3 NLM Using Visual Feature Similarity Based on Fuzzy Metric
A similarity weight based on the fuzzy metric, combined with a smooth kernel function, is well suited to our purposes and enhances the similarity coefficients. To obtain an accurate similarity of image patches, the improved NLM image denoising estimate can be reformulated as
[TeX:] $$F N L[y(i)]=\sum_{\psi \in P} w(y(i), \psi) y(\psi),$$
where [TeX:] $$w(y(i), \psi)$$ denotes the weight between square patches centered at pixels [TeX:] $$i \text { and } \psi$$ within a squared neighborhood p and satisfies [TeX:] $$0 \leq w(y(i), \psi) \leq 1.$$
The experiments in [4] showed that the Gaussian kernel function contributes little to improving NLM performance, and the results can be poor for a large search area. Meanwhile, controlling the bandwidth parameter h of the Gaussian kernel is difficult, and calculating the square root of a matrix is excessively expensive. Instead of the Gaussian kernel function, a smoothing kernel function is therefore used to avoid the instability caused by the hard-to-control parameter h and to reduce the amount of computation.
[TeX:] $$w(y(i), \psi)=K_{F l a t}\|1-D(y(i), \psi)\|_{1}.$$
[TeX:] $$K_{F l a t}$$ is a smoothing kernel function. The average value of the neighborhood similarity based on the fuzzy metric is used as the threshold of [TeX:] $$K_{F l a t}$$ so that patches with small similarity values receive no weight.
[TeX:] $$K_{F l a t}=\left\{\begin{array}{ll}{D(y(i), \psi)} & {D(y(i), \psi) \geq \overline{D(y(i), \psi)}}, \\ {0} & {\text { otherwise, }}\end{array}\right.$$
where [TeX:] $$\overline{D(y(i), \psi)}$$ is the average value of the fuzzy metric over the neighborhood. Image patches with small similarity weights are discarded.
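A minimal reading of the weighting step of Eqs. (10)–(12) in R: compute the fuzzy similarities of the candidate patches around a pixel, zero out those below the neighborhood average, and form the weighted estimate. The explicit normalization by the weight sum is our assumption.
# D_vals: fuzzy similarities D(y(i), psi) of candidate patches; y_vals: their center pixels.
fm_nlm_pixel <- function(D_vals, y_vals) {
  w <- ifelse(D_vals >= mean(D_vals), D_vals, 0)  # flat kernel of Eq. (12)
  sum(w * y_vals) / sum(w)                        # weighted average, Eq. (10)
}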
3.4 Runtime Analysis
In this section, we conduct a complexity analysis to evaluate efficiency. The runtime of the proposed algorithm has two parts: the fuzzy metric of the pixels and the visual feature similarity weights. Suppose the number of image pixels is N. Computing the fuzzy metric of the pixels has linear complexity O(N).
The calculation of the H step requires 2N divisions, 2N comparisons, and N summations for the entire image. The divisions are time consuming, so in practice we use a lookup table to improve performance. For grayscale images, the numerator and denominator of Eq. (5) are limited to [TeX:] $$[t, t+255].$$ The query matrix is constructed by enumerating all possible gray levels in [0,255]; thus, each division operation is replaced by checking the lookup table.
The integral image technique in [3] is applied for the summation operations. The visual feature similarity weight between two image patches requires only two max–min operations, three summations, three subtractions, one comparison, and two multiplications, regardless of the patch size. Therefore, the algorithm efficiency is dramatically improved.
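The lookup-table trick can be sketched as follows: precompute the ratio of Eq. (5) for every pair of integer gray levels once, then replace each division by an index; rounding the patch mean to the nearest gray level is our assumption.
t0 <- 255
# 256 x 256 table of (min(a,b)+t)/(max(a,b)+t) for gray levels a, b in 0..255.
lut <- outer(0:255, 0:255, function(a, b) (pmin(a, b) + t0) / (pmax(a, b) + t0))
H_lookup <- function(xi, xbar) lut[xi + 1, round(xbar) + 1]  # 1-based indexing
H_lookup(100, 117.4)  # matches (min + 255) / (max + 255) up to rounding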
4. Experimental Results
4.1 Experimental Setting and Parameters
A magnetic resonance imaging (MRI) image and three public test images (Barbara, Pepper, and Lena), corrupted with additive white Gaussian noise of zero mean and a given variance, are selected for the experiments to demonstrate the effectiveness of the improved method. Two important performance metrics, the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR), are used to compare the proposed method with the original NLM and several improved methods.
The FF-NLM method is compared with the original NLM, LG-NLM [12] (a feature-based NLM method for image denoising), BM3D [13,14] (a prominent denoising method), and LR-NLM [5] (an NLM denoising method with improved performances and computational times).
Optimum values of the parameters must be selected to enhance the effectiveness of the FF-NLM method. After simulation, we establish that the best choice of the parameters [TeX:] $$(\alpha, \beta)$$ is (1,1). The search area R and the patch size B are crucial parameters influencing the performance of the denoising method. The search area R is set to [TeX:] $$21 * 21$$ on the basis of a previous study [13] to obtain high performance in terms of efficiency and denoising effect; hence, in our experiments, the search range R is 21. The larger the patch is, the more image edge and texture information it contains; however, if the patch is too large, fewer similar image patches exist, which lowers the weight accuracy. The test images corrupted with standard deviations [TeX:] $$\sigma=\{5,15,25,45\}$$ are evaluated using SSIM to find the optimal patch size. When B = 3, 5, 7, and 9, the denoising performance increases with B; however, when B = 11, 13, and 15, the PSNR values decrease slightly as the patch size increases. Therefore, the optimal patch size B is 9.
4.2 Experimental Comparison and Performance Analysis
Our method and LG-NLM were implemented in MATLAB 2015b, and the source programs of NLM, LR-NLM, and BM3D were obtained from the authors' websites. Fig. 1 shows the results of the different denoising methods on the noisy MRI image with the same parameters.
Experiment results of MRI images. (a) Noisy MRI slice. Denoising results of (b) NLM, (c) LG-NLM, (d) BM3D, (e) LR-NLM, and (f) our proposed method.
Fig. 1(b)–1(f) show that our method obtains the best PSNR (in dB) compared with the other methods. Fig. 2(b) presents a detail of the image in Fig. 2(a) after zooming. Fig. 2(c)–2(f) exhibit the denoising results of the different methods, including LG-NLM, BM3D, and LR-NLM. Our method achieves noise suppression, texture preservation, and clearer details more effectively than the other methods. For example, the texture details of the lady's headscarf are hardly recovered in Fig. 2(d)–2(f), and many artifacts are visible near the headscarf. The recovered results of the LG-NLM and LR-NLM methods are inclined to contain more blocking artifacts than those of the other methods. Our method demonstrates clearer detail preservation and edge sharpness than the traditional NLM methods. The reason is that the proposed method with the fuzzy metric yields the best quality and can lower the artifacts by capturing subtle changes in the image structure.
Detail image of Barbara [TeX:] $$(\sigma=25)$$ with different denoising methods: (a) original image of Barbara, (b) Cropped image, (c) FF-NLM, (d) BM3D, (e) LR-NLM, (f) LG-NLM, and (g) NLM.
Two vital performance evaluation indexes (PSNR and SSIM) [15] are computed on the Barbara, Pepper, Lena, and MRI slice images with different standard deviations [TeX:] $$(\sigma=10,30, \text { and } 50)$$ to demonstrate the superiority of our proposed method. Tables 1 and 2 show the numerical results; the highest PSNR among the denoising methods is shown in bold. In terms of PSNR, the best results are obtained using the proposed method.
The SSIM values in Table 2 show the difference between the traditional methods and FF-NLM. The overall PSNR and SSIM of LR-NLM and LG-NLM are good, but their SSIM values drop significantly under strong noise. LG-NLM and LR-NLM, which are based on the Euclidean distance, have less interference resistance in the case of strong noise.
Average PSNR (dB) for different standard deviations with various denoising methods
Average SSIM for different standard deviations with various denoising methods
As the image texture of Barbara and Lena is rich, the proposed algorithm and BM3D have high denoising capability. However, under a high noise level, the performance of BM3D significantly decreases, and that of our proposed algorithm is maintained well. The proposed algorithm can efficiently obtain the image structure information through the visual features of a fuzzy metric and reduce the influence of noise on similarity weighting.
Residual image of Pepper with noise for various denoising methods: (a) original image [TeX:] $$(\sigma=25),$$ (b) FF-NLM, (c) BM3D, (d) LR-NLM, (e) NLM, and (f) LG-NLM.
Residual image of Lena with noise for various denoising methods: (a) original image [TeX:] $$(\sigma=25),$$ (b) proposed method, (c) BM3D, (d) LR-NLM, (e) NLM, and (f) LG-NLM.
Residual noise (RN) refers to the difference between the denoised and noisy images. Useful image information removed by a denoising method ends up in the RN; hence, the RN also provides a subjective quality evaluation of how much image information a method loses. Figs. 3 and 4 display the RN images of the noisy Pepper and Lena images, respectively, obtained by the different test methods. The less structural information is retained in the RN image, the better the performance. The RN images obtained by the BM3D and LG-NLM methods include more structural information than those of the other test methods. Several image structures of the noisy Pepper image are seriously lost with the LG-NLM and BM3D methods. The RN images obtained by NLM, BM3D, LR-NLM, and LG-NLM retain distinct structural information in the head region of the Lena image, whereas this structural information disappears in the RN image of Fig. 4(b). The RN image of our proposed method retains the least structural information in Figs. 3 and 4. The reason is that the similarity measure is developed using fuzzy metrics and smooth kernel functions, which can efficiently preserve image structures.
We conducted simulations on a notebook computer with an Intel Core i5 M460 CPU @ 2.53 GHz and 4 GB of memory, using MATLAB 2016a, to validate the proposed FF-NLM method. Table 3 compares the average running times of the proposed method with those of BM3D, LG-NLM, LR-NLM, and NLM on four images of different sizes. Table 3 shows that, for an image size of 128 × 128, FF-NLM requires slightly more time than LG-NLM. Although LG-NLM takes the least time, its feature extraction causes serious loss of image details.
As the image size increases, the fuzzy metric and similarity weight effectively eliminate low-weight patches, and the computational cost of the proposed method is significantly reduced relative to the other methods.
Comparison of average CPU times (s) for different image sizes with various denoising methods
5. Conclusion
A novel fuzzy NLM algorithm for Gaussian denoising was proposed. The method is built on a novel patch similarity measure that uses a fuzzy metric to measure the similarity of structure and luminance between image patches. A kernel function was used to calculate the weights, and image patches with low weights were filtered out by a threshold to reduce the computational time. A comparison on three public test images and an MRI image slice showed that the proposed method achieves better visual quality, PSNR, and SSIM than the existing methods. Therefore, our proposed method is competitive.
This study was funded by Innovation Foundation (Believe in Engineering) of Sichuan Province of China (No. 2019088).
Junrui Lv
She received B.S. degree in School of Computer Science and Engineering from Central South University, China, in 2005 and M.S. degree in Software College from University of Electronic Science and Technology, China, in 2009. She is currently a lecturer in the School of Computer Science and Engineering at Panzhihua University in China. Her current research interests include logistics optimization and image processing.
Xuegang Luo
He received B.S. degree in School of Computer Science and Engineering from Huazhong Agricultural University in 2005, and received the M.S. degree from University of Electronic Science and Technology, China, in 2008, and Ph.D. degree from Chengdu University of Technology, China, in 2015. He is currently an assistant professor in the School of Computer Science and Engineering at Panzhihua University in China. His current research interests include logistics optimization and machine learning.
1 P. Milanfar, "A tour of modern image filtering: new insights and methods, both practical and theoretical," IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 106-128, 2013.doi:[[[10.1109/MSP.2011.2179329]]]
2 A. Buades, B. Coll, J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490-530, 2005.doi:[[[10.1137/040616024]]]
3 X. G. Luo, J. R. Lu, H. J. Wang, Q. Yang, "Fast nonlocal means image denoising algorithm using selective calculation," Journal of University of Electronic Science and Technology of China, vol. 44, no. 1, pp. 84-90, 2015.doi:[[[10.3969/j.issn.1001-0548.2015.01.014]]]
4 H. Li, C. Y. Suen, "A novel non-local means image denoising method based on grey theory," Pattern Recognition, vol. 49, pp. 237-248, 2016.doi:[[[10.1016/j.patcog.2015.05.028]]]
5 V. May, Y. Keller, N. Sharon, Y. Shkolnisky, "An algorithm for improving non-local means operators via low-rank approximation," IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1340-1353, 2016.doi:[[[10.1109/TIP.2016.2518805]]]
6 J. V. Manjon, J. Carbonell-Caballero, J. J. Lull, G. Garcia-Marti, L. Marti-Bonmati, M. Robles, "MRI denoising using non-local means," Medical Image Analysis, vol. 12, no. 4, pp. 514-523, 2008.doi:[[[10.1016/j.media.2008.02.004]]]
7 P. Q. Yin, D. M. Lu, Y. Yuan, "An improved non-local means image de-noising algorithm using Mahalanobis distance," Journal of Computer-Aided Design & Computer Graphics, vol. 28, no. 3, pp. 404-410, 2016.custom:[[[-]]]
8 S. Morillas, V. Gregori, G. Peris-Fajarnes, P. Latorre, "A fast impulsive noise color image filter using fuzzy metrics," Real-Time Imaging, vol. 11, no. 5-6, pp. 417-428, 2005.doi:[[[10.1016/j.rti.2005.06.007]]]
9 V. Gregori, S. Morillas, A. Sapena, "Examples of fuzzy metrics and applications," Fuzzy Sets and Systems, vol. 170, no. 1, pp. 95-111, 2011.doi:[[[10.1016/j.fss.2010.10.019]]]
10 S. Grecova, S. Morillas, "Perceptual similarity between color images using fuzzy metrics," Journal of Visual Communication and Image Representation, vol. 34, pp. 230-235, 2016.doi:[[[10.1016/j.jvcir.2015.04.003]]]
11 S. Morillas, V. Gregori, A. Sapena, "Fuzzy bilateral filtering for color images," in Image Analysis and Recognition. Heidelberg: Springer, pp. 138-145, 2006.doi:[[[10.1007/11867586_13]]]
12 K. Zhang, X. Gao, D. Tao, X. Li, "Single image super-resolution with non-local means and steering kernel regression," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4544-4556, 2012.doi:[[[10.1109/TIP.2012.2208977]]]
13 X. Chen, S. Bing Kang, J. Yang, J. Yu, "Fast patch-based denoising using approximated patch geodesic paths," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, 2013;pp. 1211-1218. custom:[[[-]]]
14 K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080-2095, 2007.doi:[[[10.1109/TIP.2007.901238]]]
15 Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.doi:[[[10.1109/TIP.2003.819861]]]
Table 1. Average PSNR (dB) at [TeX:] $$\sigma=10$$ (one value per compared denoising method, per noisy image):
Lena 34.59 34.10 34.53 34.48 34.60
Barbara 34.73 34.21 34.53 34.49 34.64
Pepper 34.62 34.12 34.56 34.50 34.69
MRI slice 33.58 32.45 33.10 33.28 33.41
Table 2. Average SSIM at [TeX:] $$\sigma=10$$ (one value per compared denoising method, per noisy image):
Lena 0.923 0.899 0.909 0.909 0.912
Barbara 0.916 0.901 0.910 0.904 0.911
Pepper 0.915 0.897 0.903 0.911 0.915
MRI slice 0.901 0.881 0.897 0.899 0.906
Table 3. Average CPU times (s) for different image sizes:
Method | 128 × 128 | 256 × 256 | 512 × 512 | 1024 × 768
Proposed method | 1.64 | 4.49 | 8.02 | 10.27
NLM | 6.89 | 21.36 | 58.25 | 102.89
LG-NLM | 1.43 | 7.43 | 20.90 | 39.25
BM3D | 3.64 | 7.13 | 16.91 | 23.65
LR-NLM | 1.69 | 12.47 | 21.57 | 42.71
Electric-field-driven non-volatile multi-state switching of individual skyrmions in a multiferroic heterostructure
Yadong Wang1,2,
Lei Wang3,
Jing Xia4,
Zhengxun Lai5,
Guo Tian1,2,
Xichao Zhang4,
Zhipeng Hou ORCID: orcid.org/0000-0003-4935-21491,2,
Xingsen Gao ORCID: orcid.org/0000-0002-2725-07851,2,
Wenbo Mi ORCID: orcid.org/0000-0002-9108-99305,
Chun Feng3,
Min Zeng1,2,
Guofu Zhou1,2,
Guanghua Yu3,
Guangheng Wu6,
Yan Zhou ORCID: orcid.org/0000-0001-5641-91914,
Wenhong Wang6,
Xi-xiang Zhang ORCID: orcid.org/0000-0002-3478-64147 &
Junming Liu ORCID: orcid.org/0000-0001-8988-84298
Nature Communications volume 11, Article number: 3577 (2020) Cite this article
Ferroelectrics and multiferroics
Magnetic properties and materials
Spintronics
Electrical manipulation of skyrmions attracts considerable attention for its rich physics and promising applications. To date, such a manipulation is realized mainly via spin-polarized current based on spin-transfer torque or spin–orbital torque effect. However, this scheme is energy consuming and may produce massive Joule heating. To reduce energy dissipation and risk of heightened temperatures of skyrmion-based devices, an effective solution is to use electric field instead of current as stimulus. Here, we realize an electric-field manipulation of skyrmions in a nanostructured ferromagnetic/ferroelectrical heterostructure at room temperature via an inverse magneto-mechanical effect. Intriguingly, such a manipulation is non-volatile and exhibits a multistate feature. Numerical simulations indicate that the electric-field manipulation of skyrmions originates from strain-mediated modification of effective magnetic anisotropy and Dzyaloshinskii–Moriya interaction. Our results open a direction for constructing low-energy-dissipation, non-volatile, and multistate skyrmion-based spintronic devices.
Magnetic skyrmions, which are topologically nontrivial swirling spin configurations, have received increasing interest from the research community in view of their magneto-electric properties1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25, such as topological Hall effect24, skyrmion Hall effect25, and ultralow threshold for current-driven motion15,16,17,18,19. These magneto-electronic properties, in combination with the nanoscale size and stable particle-like features, make magnetic skyrmions promising candidates for carrying information in future magnetic memories or logic circuits1,2,3,4.
As the information bits, magnetic skyrmions are required to be controllably manipulated through a purely electrical manner for easy integration into modern electronic technology. To date, electrical manipulation of skyrmions has been generally realized via the use of the spin-polarized current on basis of the spin-transfer torque or spin–orbital torque effect6,8,9,10,11,14,19. However, in those studies, the required current density is immense, which leads to a high energy dissipation. Moreover, the Joule heating produced by the injected current is detrimental to the stability of skyrmion bits. In contrast, the electric-field (EF) method provides a potentially effective route to achieve the low-energy-dissipation and low-Joule-heating target, as the operations generate almost no current21,22,26,27,28,29,30,31,32,33. Moreover, the use of EF scheme can avoid the unexpected displacement of skyrmions during the writing process26. These features are of great significance to practical applications and have thus promoted the usage of electric field instead of current to manipulate skyrmions21,22,26,27,28,29,30,31,32,33.
Despite recognition of the potential for the EF manipulation of skyrmions, experimental realizations are limited21,22,26,27,28,29, especially at the room temperature27,28,29. Room-temperature EF-induced switching of skyrmions was first experimentally realized in the ferromagnet/oxide heterostructures, where the external electric field was directly applied at the ferromagnet/oxide interface to induce an interfacial charge redistribution. Consequently, a reliable binary conversation between skyrmions and ferromagnetic states was obtained27,28. Recently, Ma et al. have reported that such a room-temperature manipulation of skyrmions could also be realized by utilizing the magnetic-anisotropy gradient29. However, these methods have been demonstrated to be volatile, which may limit their further applications in spintronic devices.
In addition to the methods based on the pure ferromagnetic material systems, some recent theoretical reports have proposed the EF manipulation of skyrmions through the use of a strain-mediated ferromagnetic/ferroelectric (FM/FE) multiferroic heterostructure30,31. A large in-plane strain generated from the reversal of FE polarization in the FE substrate is expected to significantly modify the interfacial magnetism of the FM layer. Thus, a reliable and controllable transition between skyrmions and other magnetic states may be envisioned. Furthermore, the strain associated with the FE polarization is non-volatile, which makes the manipulation of skyrmions non-volatile too. This feature is especially useful and essential to the design of skyrmion-based spintronic devices. Therefore, researchers are stimulated to explore approaches to realize the EF manipulation of skyrmions on basis of the FM/FE heterostructure30,31. Yet the experimental realization has not been implemented successfully.
In this work, we have experimentally constructed the strain-mediated FM/FE multiferroic heterostructure by combining the skyrmion-hosting multilayered stacks [Pt/Co/Ta]n (n is the repetition number) with the FE substrate (001)-oriented single-crystalline 0.7PbMg1/3Nb2/3O3-0.3PbTiO3 (PMN-PT) to explore an EF manipulation of skyrmions at the room temperature. Since the design of skyrmion-based spintronic devices usually requires controllable manipulation of individual skyrmions at a custom-confined position26,30,31,32,33, the FM layer [Pt/Co/Ta]n was fabricated into geometrically confined nano-dots with the expectation that each nano-dot could host a single skyrmion. We show that a reliable EF-induced stripe–skyrmion–vortex multistate switching can be realized, which is directly detected by the in-situ magnetic force microscope (MFM) technique. More intriguingly, such a switching is non-volatile and does not need an external magnetic field except a low one of less than 10 mT from the MFM tip.
Fabrication of nanostructured FM/FE heterostructure
For the multiferroic heterostructure, we opt for the (001)-oriented single-crystalline PMN-PT as the FE substrate in view of its large piezoelectric coefficients and non-volatile strain. Meanwhile, the size of the ferroelectric domains in the (001)-oriented single-crystalline PMN-PT is ~10 μm34, which allows us to manipulate the FM domain in a relatively large area. The multilayered stack [Pt/Co/Ta]n is selected as the target FM layer because it can stabilize sub-100 nm Néel-type skyrmions at room temperature as a result of the balance between a large DMI, magnetic effective anisotropy, and magnetic exchange coupling8,35,36.
Figure 1a presents the detailed structure of a typical [Pt/Co/Ta]12/PMN-PT multiferroic heterostructure. First, magnetron sputtering is employed to deposit the multilayered stack [Pt(2.5 nm)/Co(2.2 nm)/Ta(1.9 nm)]12/Ta(5 nm) on the (001)-oriented single-crystalline PMN-PT. Subsequently, a two-step nano-patterning method is utilized to fabricate the [Pt/Co/Ta]12 stack into nano-dots with diameters (d) ranging from 1 μm to 150 nm. More details about the fabrication processes are presented in Supplementary Fig. 1. Figure 1b provides a top-view of the heterostructure imaged with a scanning electron microscope (SEM) and demonstrates an ordered arrangement of nano-dots. By further using scanning transmission electron microscopy (STEM) to visualize its cross section (see Fig. 1c), we find that the heterostructure possess reasonably sharp interfaces between PMN-PT, Ta, and [Pt/Co/Ta]12, which confirms the as-required structure.
Fig. 1: Nanostructured FM/FE multiferroic heterostructure.
a Scheme of the nanostructured FM/FE multiferroic heterostructure. b SEM image of ordered [Pt/Co/Ta]12 multilayer nano-dot arrays. c STEM image of the cross-sections. d The magnetic domain evolution process in the [Pt/Co/Ta]12 nano-dot as a function of both the external magnetic field μ0H and d. The inset shows the spin textures of the magnetic domain enclosed by the color boxes. The magnetic domain enclosed by black, red, and green boxes represents the stripe, skyrmion, and single domain, respectively. The red-filled rhombuses represent the critical field where the stripe domains completely transform into skyrmions. The hollow rhombuses represent the magnetic field where the out-of-plane single domain appears. The MFM contrast represents the MFM tip resonant frequency shift (Δf). The scale bar in c represents 500 nm.
To establish the d range for hosting a single skyrmion, we first use MFM under different magnetic field (μ0H) to image the domain structure of the [Pt/Co/Ta]12 nano-dots with different values of d (see Fig. 1d and Supplementary Fig. 2). Notably, μ0H represents the external magnetic field that is nominally applied to the nano-dots; however, it does not reflect the effective magnetic field, as the MFM tip also has a certain magnetic field. To minimize the influence of the magnetic field from the MFM tip on the magnetic domain structure during scanning, we select a low-moment magnetic tip for the MFM measurements (the magnetic field of MFM tip applied on the nano-dots is less than 10 mT). As illustrated in Fig. 1d, the nucleation of the skyrmions depends drastically on d of the nano-dots. Both the critical magnetic field for the nucleation of skyrmions (μ0Hc) and the maximum number of skyrmions in the nano-dot (N) decrease correspondingly with the reduction of d. In particular, when d is equal to 350 nm, a single skyrmion forms at a relatively low μ0Hc of 25 mT. By further reducing d, no skyrmion is observed anymore while an out-of-plane single domain starts to appear in the nano-dot.
Electric-field-induced switching of individual skyrmions
Having established that the d ~ 350 nm nano-dot hosts a single skyrmion, we adopt it as the platform for exploring the EF manipulation of individual skyrmions. As indicated in Fig. 1a, when the external electric field (E) is applied to the heterostructure through the top Ta layer and bottom Au electrode, an in-plane strain is generated at the PMN-PT substrate. The in-plane strain is subsequently transferred to the FM nano-dots as a result of the strong mechanical coupling between nano-dots and substrate and we expect that the magnetic states in the nano-dots would be altered via an inverse magneto-mechanical effect. Since the transferred strain (ε) on the d ~ 350 nm nano-dot is difficult to be measured, the corresponding ε distribution is simulated by using the finite element analysis. Details about the simulations are presented in Supplementary Fig. 3, Supplementary Note 1 and Supplementary Table 1. The simulated results demonstrate that the ε distribution on the d ~ 350 nm nano-dot is rather inhomogeneous and relaxes along both its thickness and diameter directions. To describe the electrical field (E)-dependent inhomogeneous ε, average strain (εave), which represents an overall result of the inhomogeneous ε, is introduced. We have derived the E-εave curve on basis of the relationship between simulated ε distribution and E-dependent strain generated at the PMN-PT substrate (εsub). Details about the establishing processes are presented in Supplementary Fig. 4. Figure 2a summarizes εave and the corresponding images of the magnetic domain structure in a nano-dot captured by the in-situ MFM in a cycle of sweeping E ranging from +10 to −10 kV cm−1 (positive E represents the direction of applied E is opposite to that applied to polarize PMN-PT to saturation at initial state and negative E represents the direction of applied E is same as that applied to polarize PMN-PT to saturation). At the initial state (E = 0 kV cm−1, εave = 0%, and μ0H = 0 mT), only the stripe domain is observed, which suggests that the magnetic field from the MFM tip is not sufficiently large to induce the nucleation of skyrmion. As E increases from 0 to +3 kV cm−1, a tensile-type strain (εave > 0) induces the stripe domain to gradually convert into a single skyrmion. Notably, in such an EF-induced transition process, no external magnetic field (excluding the magnetic field from MFM tip) is applied. Meanwhile, without strain, a magnetic field of 25 mT (excluding the magnetic field from MFM tip) has to be applied to induce the nucleation of skyrmion. This observation clearly evidences that the tensile strain has a similar function to the external magnetic field that can significantly lower the energy barrier between the stripe domain and the skyrmion and shift the skyrmion to the local energy minimum state. As we further increase E, the transferred strain gradually changes to a compressive type (εave < 0). In contrast to the assisting effect of the tensile strain on the nucleation of skyrmion, the compressive strain promotes the skyrmion to gradually switch back to the stripe domain, indicating that the stripe domain has lower energy compared with the skyrmion under the compressive strain. When E is decreased from +10 to 0 kV cm−1, the stripe domain is restored. By sweeping E toward the negative range, the variation of magnetic state exhibits a nearly symmetric behavior though the strength of both tensile and compressive strain is lower than that at the positive E range. 
Notably, the skyrmion formed at E = −3 kV cm−1 appears to be elongated, which can be ascribed to the inadequacy of the tensile strain to form a perfect skyrmion. More details about the EF-induced magnetic domain evolution process are presented in Supplementary Fig. 5. Notably, the morphology of the skyrmions is little affected by varying the tip-sample distance or reversing the magnetization of the tip (see Supplementary Fig. 6). This feature clearly suggests that the stabilization of skyrmions is little affected by the tip stray field. We also realize such a binary switching between the skyrmion and stripe domain in the [Pt/Co/Ta]12 nano-dots with different diameters (d ~600, 850 nm) as well as in the continuous thin film, though the corresponding number of skyrmions formed in the nano-dots and the assisting magnetic fields required for the EF-induced formation of skyrmions are different (see Supplementary Fig. 7). These results strongly demonstrate that the observed EF-induced switching of skyrmions in the multiferroic heterostructure is reliable and a general phenomenon.
Fig. 2: Electric-field-induced switching of individual skyrmion.
The transferred average strain εave and corresponding magnetic domain evolution in the d ~ 350 nm a [Pt/Co/Ta]12 and b [Pt/Co/Ta]8 nano-dots during a cycle of E ranging from +10 to −10 kV cm−1. Positive εave (red dots) represents tensile strain, while negative εave (blue dots) represents compressive strain. μ0H represents the external magnetic field excluding that from the MFM tip and is equal to 0 mT here. The inset of b illustrates the spin texture of the magnetic domain enclosed by the red box. The stripe domain enclosed by the black box marks the initial state of the magnetic domain evolution path. The gray dots indicate the electric field corresponding to each MFM image. The MFM contrast represents the MFM tip resonant frequency shift (Δf). The scale bar represents 250 nm.
To investigate the EF-induced magnetic domain evolution under larger strains, the repetition number of the FM [Pt/Co/Ta]n multilayer is decreased from 12 to 8. As expected, εave increases drastically (see Fig. 2b). Subsequently, we examined the effects of the transferred strain on the formation of skyrmions in the d ~ 350 nm nano-dot. With increasing E from 0 to +3 kV cm−1, the tensile strain again induces a stripe–skyrmion conversion, which resembles that observed in the [Pt/Co/Ta]12 nano-dots. However, when the tensile strain turns compressive upon further increase of E, the domain evolution changes. We find that the compressive strain no longer drives the skyrmion back to the stripe domain but instead forces its spins to gradually align along the in-plane direction, forming an in-plane domain at E = +10 kV cm−1 (the MFM contrast gradually becomes weak). Based on previous reports37,38 and further analysis of the spin configuration of the in-plane domain, we identify it as a vortex state (see Supplementary Fig. 8). After decreasing E from +10 to 0 kV cm−1, the vortex state is retained. When E is swept into the negative range, the strength of both the tensile and compressive strain decreases significantly. Although the tensile strain causes the skyrmion to appear again, the compressive strain now transforms the skyrmion into the stripe domain rather than into the vortex. This feature suggests that the energy difference between the skyrmion and the vortex exceeds that between the skyrmion and the stripe domain; consequently, the compressive strain generated in the negative E range is not large enough to overcome this energy difference and induce the vortex state. More information about the EF-induced magnetic domain evolution and the repetition of the multistate conversion in different samples is presented in Supplementary Figs. 9 and 10.
Non-volatile switching of different magnetic structures
We observe that the skyrmion, stripe domain, and vortex can be reliably switched among one another by applying 1 ms pulses of E = ±3, +10, and −10 kV cm−1, respectively (see Fig. 3a and Supplementary Fig. 11). This finding reflects the non-volatility of the three magnetic states. We propose that such non-volatile switching is closely related to both the large remnant strain and the geometric confinement effect.
Fig. 3: Switching of individual skyrmions induced by pulse electric field.
a Switching of topological number Q of various magnetic domains (Q = 1.0, 0.5, and 0 corresponds to skyrmion, vortex, and stripe, respectively) by applying a pulse electric field with a pulse width of 1 ms. The insets contain the corresponding MFM images for the switching. The values of E for the generation of the skyrmion, vortex, and stripe are ±3, +10, and −10 kV cm−1, respectively. The MFM contrast represents the MFM tip resonant frequency shift (Δf). The scale bar represents 250 nm. b Schematic of the envisioned cross-bar random access memory device based on the FM/FE multiferroic heterostructure nano-dots with the stripe, skyrmion and vortex as storage bits.
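The topological number Q quoted in Fig. 3a distinguishes the three textures (Q ≈ 1 for a skyrmion, ≈ 0.5 for a vortex, and 0 for a stripe). As a minimal illustration of how such a number can be evaluated from a discretized magnetization map, the Python sketch below computes the standard continuum expression Q = (1/4π)∫ m · (∂x m × ∂y m) dx dy by finite differences; the skyrmion test profile and grid parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

def topological_charge(m):
    """Q = (1/4*pi) * sum of m . (dm/dx x dm/dy) over the grid,
    for a unit-vector field m of shape (Nx, Ny, 3), using centered differences."""
    dm_dx = np.gradient(m, axis=0)
    dm_dy = np.gradient(m, axis=1)
    density = np.einsum('ijk,ijk->ij', m, np.cross(dm_dx, dm_dy))
    return density.sum() / (4.0 * np.pi)

# Hypothetical Neel-type skyrmion (core down, background up) on a 128 x 128 grid.
N, R = 128, 25.0                                   # grid size and skyrmion radius (arbitrary units)
x = np.arange(N) - N / 2
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.hypot(X, Y)
theta = np.pi * np.clip(1.0 - r / R, 0.0, None)    # polar angle: pi at the core, 0 outside
phi = np.arctan2(Y, X)                             # radial (Neel-type) in-plane orientation
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

print(abs(topological_charge(m)))                  # expected to be close to 1 for this texture
```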
For the non-volatility of the skyrmions, the remnant strain is not the main reason, as it is nearly 0% when E is reduced from ±3 to 0 kV cm−1 (see Supplementary Fig. 12). We propose instead that the strong pinning effect arising from the geometrical edge of the nano-dots is essential for stabilizing the skyrmions after an E = ±3 kV cm−1 pulse. To prove this point, we first induce the stripe domains to transform completely into skyrmions by increasing the magnetic field, and subsequently decrease the magnetic field to zero. As shown in Supplementary Fig. 12, most of the skyrmions are retained at zero magnetic field, although their shape appears elongated. This result demonstrates that a magnetic hysteretic effect underlies the non-volatility of the skyrmions. We have also carried out the same measurements on a continuous thin film (see Supplementary Fig. 12); however, few skyrmions are retained at zero magnetic field. This further suggests that the magnetic hysteretic effect in our experiments mainly originates from the geometrical pinning effect of the nano-dots, as reported previously9,39,40. For the non-volatility of the vortex, the case is different. As illustrated in Fig. 2b, when E decreases from +10 to 0 kV cm−1, a large remnant compressive strain is obtained as a result of the hysteretic characteristic of the PMN-PT substrate. Such a remnant strain is proposed to be the main reason for stabilizing the vortex state at E = 0 kV cm−1 after an E = +10 kV cm−1 pulse. When we sweep E across 0 kV cm−1 toward the negative range, the strength of the compressive strain decreases correspondingly and the vortex gradually transforms into the skyrmion, which suggests that a sufficiently large compressive strain is essential for stabilizing the vortex. On the other hand, we find that the vortex state at E = 0 kV cm−1 can re-form in the nano-dot even after it is destroyed by an external out-of-plane magnetic field (see Supplementary Fig. 13). If magnetic hysteresis were the key factor stabilizing the vortex at E = 0 kV cm−1, the out-of-plane single domain would instead transform into the stripe domain or skyrmions after the external magnetic field is decreased to zero (see Supplementary Fig. 13). This further confirms that the dominant factor stabilizing the vortex at E = 0 kV cm−1 is not the magnetic hysteretic effect but the large remanent strain from the ferroelectric substrate. As for the stripe domain, it is naturally retained after an E pulse of −10 kV cm−1, as it is the ground state in the switching process.
In terms of practical applications, the EF-induced non-volatile, multistate switching of skyrmions is highly suitable for constructing low-energy-dissipation skyrmion-based spintronic devices, such as an EF-controlled skyrmion random access memory. Figure 3b presents a schematic of an envisioned cross-bar random access memory device that employs the nanostructured FM/FE multiferroic heterostructure with the stripe, skyrmion, and vortex as storage bits. In such a device, we expect that an EF pulse would be applied to the nano-dots via a magnetic writing head (providing a low magnetic field of less than 10 mT) to implement information writing. To read the information bits, we propose combining the nanostructured heterostructure with magnetic tunnel junctions, whose magnetoresistance (MR) can vary significantly upon switching among the stripe, skyrmion, and vortex, according to previous reports41.
We have demonstrated that reliable EF-induced switching of individual skyrmions can be realized at room temperature on the basis of the nanostructured FM/FE heterostructure. Next, we discuss the physical origin underlying the experimental observations. In our experiments, the EF manipulation of skyrmions originates from the strain-mediated modification of the interfacial magnetism of the FM layer, such as the effective magnetic anisotropy and the DMI. Hence, it is essential to explore the impact of the strain on the effective magnetic anisotropy constant (Keff) and the DMI constant (D). Although the magnetic exchange interaction is also crucial for magnetic domain evolution, we propose that it is only slightly influenced by the external electric field in the heterostructure, as the strain transferred to the nano-dots (maximum strain of 0.42%) is too small to induce a significant variation of the magnetic exchange constant (A)42,43. Further discussion of this point is given in Supplementary Fig. 14 and Supplementary Note 2.
Based on the experimentally established relationship between E and εave (see Fig. 2), we can obtain the εave-dependent Keff by measuring the E-dependent magnetization curves of d ~ 350 nm [Pt/Co/Ta]12 nano-dots (see Fig. 4a and Supplementary Fig. 15). As is well known, strain is closely coupled with Keff44,45; thus, the inhomogeneous strain distribution should lead to an inhomogeneous Keff distribution. In this work, we use the average (Kave) of the inhomogeneous Keff to describe the experimentally measured Keff of the d ~ 350 nm nano-dots. At εave = 0 (E = 0 kV cm−1), Kave has a negative value of (−0.90 ± 0.02) × 105 J m−3, which indicates an in-plane magnetic anisotropy (IMA); that is, the in-plane direction of the [Pt/Co/Ta]12 layer is more easily magnetized than the out-of-plane direction. With increasing tensile strain, the absolute value of Kave decreases correspondingly, which suggests that the tensile strain weakens the IMA. In contrast, the IMA is enhanced with increasing compressive strain. This dependence of Kave on strain is similar to observations in many other Co-based magnetic systems44,45, which supports our experimental results.
Fig. 4: Simulated influence of the variation of Dave and Kave on magnetic domain evolution.
Dependence of the experimentally established values of a Kave (red circles) and b Dave (red squares) on εave for the d ~ 350 nm [Pt/Co/Ta]12 nano-dot. A positive value of εave represents tensile strain, while a negative value signifies compressive strain. The dashed line marks the boundary between tensile and compressive strain. The black lines in a and b are linear fits. The error margin of Kave at each E is obtained by measuring two different samples, and the error margin of Dave at each E is obtained by fitting separate εave–Dave curves for the continuous thin film and the d ~ 850 nm nano-dot. c Simulations of the influence of Dave on the magnetic domain evolution. d Simulations of the influence of Kave on the magnetic domain evolution. Notably, when one magnetic parameter is varied in the simulations, the other magnetic parameters are fixed. An external magnetic field of 100 mT is applied in the simulations, and the magnetic domains enclosed by the black dashed boxes in c, d represent the initial states of the domain evolution process. The magnetization along the z-axis (Mz) is represented by regions in red (+Mz) and blue (−Mz). The scale bar is 350 nm.
The corresponding absolute value of Dave could be calculated from8,27,46
$$\sigma _{{\mathrm{DW}}} = 4\sqrt {AK} - \pi \left| {D_{{\mathrm{ave}}}} \right|,$$
where σDW is the domain wall surface energy density, A is the exchange constant, and K is the magnetic anisotropy constant that accounts for the energy difference between a spin inside the domain and one in the middle of the domain wall; here K is taken to be approximately equal to Kave8,47,48. As the εave-dependent Kave has been obtained above, the relationship between εave and σDW must be established experimentally to calculate the εave-dependent Dave. The value of σDW can be quantified by measuring the low-magnetic-field domain period (w) on the basis of a domain spacing model8,27,49
$$\frac{{\sigma _{{\mathrm{DW}}}}}{{\mu _0M_{\mathrm{S}}^2t}} = \frac{{w^2}}{{t^2}}\mathop {\sum }\limits_{{\mathrm{odd}}\,n = 1}^\infty \left( {\frac{1}{{(\pi n)^3}}} \right)\left[ {1 - \left( {1 - 2\pi nt/w} \right)\exp \left( { - 2\pi nt/w} \right)} \right],$$
where t is the thickness of the film, Ms is the saturation magnetization, and w is the low-field domain period (w = w↑ + w↓, where w↑ and w↓ are the up and down domain widths, respectively) obtained from the MFM data. The continuous thin film and nano-dots with relatively large diameters (d ≥ 850 nm) host periodic magnetic stripe domains; Supplementary Fig. 16 elaborates on the establishment of the εave-dependent w and σDW. However, when d is smaller than 850 nm, the strong geometrical confinement effect turns the periodic stripe domain into a non-periodic one (see Fig. 1c). Thus, we can no longer directly calculate Dave of the d ~ 350 nm nano-dot from Kave and σDW. Instead, we derive it on the basis of the Dave–εave relationship established for the continuous thin film and the d ~ 850 nm nano-dot, because D is an intrinsic parameter that originates from the spin–orbit coupling at the film interface and is hence little affected by the geometrical confinement. Details about establishing Dave of the d ~ 350 nm nano-dot are presented in Supplementary Figs. 17 and 18, Supplementary Notes 3 and 4, and Supplementary Table 2. Figure 4b summarizes the derived εave-dependent Dave of the d ~ 350 nm [Pt/Co/Ta]12 nano-dots. We find that the absolute value of Dave decreases with increasing tensile strain and increases with increasing compressive strain. This dependence of Dave on εave agrees with both previous theoretical and experimental results43,50,51,52, and can be attributed to the strain-mediated modification of the electronic structure of the [Pt/Co/Ta]12 stack50.
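To illustrate how the two equations above combine, the short Python sketch below evaluates σDW from a measured domain period w and then extracts |Dave| from σDW = 4√(AK) − π|Dave|. The exchange constant, anisotropy, and magnetization values are the ones quoted in the Methods, but the geometry inputs (w and the effective thickness t) are placeholders chosen only for illustration and are not the measured values reported in the paper or its Supplementary Information.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m A^-1)

def sigma_dw_from_period(w, t, Ms, n_max=2001):
    """Domain-wall surface energy density sigma_DW (J m^-2) from the stripe-domain
    spacing model: sigma_DW / (mu0 * Ms^2 * t) = (w/t)^2 * sum over odd n of
    (1/(pi*n)^3) * [1 - (1 - 2*pi*n*t/w) * exp(-2*pi*n*t/w)]."""
    n = np.arange(1, n_max, 2)                 # odd n = 1, 3, 5, ...
    x = 2.0 * np.pi * n * t / w
    series = np.sum((1.0 / (np.pi * n) ** 3) * (1.0 - (1.0 - x) * np.exp(-x)))
    return MU0 * Ms ** 2 * t * (w / t) ** 2 * series

def dmi_from_sigma(sigma_dw, A, K):
    """|D_ave| (J m^-2) from sigma_DW = 4*sqrt(A*K) - pi*|D_ave|,
    where K is the magnitude of the relevant anisotropy constant."""
    return (4.0 * np.sqrt(A * K) - sigma_dw) / np.pi

# Constants quoted in the Methods section, combined with placeholder geometry.
A = 16.9e-12   # exchange constant (J m^-1)
K = 0.90e5     # |K_ave| at zero strain (J m^-3)
Ms = 697e3     # saturation magnetization (A m^-1)
w = 200e-9     # hypothetical low-field domain period (m)
t = 12e-9      # hypothetical effective magnetic thickness (m)

sigma = sigma_dw_from_period(w, t, Ms)
# Note: with placeholder geometry the output is only illustrative; mutually
# inconsistent inputs can even give a negative |D|, so the measured w and t
# from Supplementary Fig. 16 must be used in practice.
print(sigma, dmi_from_sigma(sigma, A, K))
```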
After experimentally establishing the relationship between the relevant magnetic parameters and the strain, we performed micromagnetic simulations to clarify their respective roles in the EF-induced domain evolution. To match the experiments, both the Keff and D distributions are set to be inhomogeneous based on the simulated ε distribution and the experimentally established relationships among Keff, D, and ε. Details of the simulations can be found in Supplementary Note 5. We first simulated the magnetization dynamics of the d ~ 350 nm nano-dot on the basis of the experimental magnetic parameters of the [Pt/Co/Ta]12 heterostructure. The simulated evolution of the magnetic domain agrees well with the experimental observations (see Supplementary Fig. 19) and thus validates our theoretical model. Subsequently, we simulated the switching of the magnetic domain with the variation of Dave and Kave (see Fig. 4c, d). The simulated stripe domain at μ0H = 100 mT (see Supplementary Fig. 19) is used as the initial state to represent the experimental domain structure at E = 0 kV cm−1 and εave = 0%. As shown in Fig. 4c, d, the absolute values of both Dave and Kave are first decreased and then increased, corresponding to the experimentally established relationships among Kave, Dave, εave, and E. Our experimental results demonstrate that the transferred strain is initially tensile and increases with E from 0 kV cm−1; the growing tensile strain decreases the absolute values of both Dave and Kave and induces the stripe domain to convert into a skyrmion. The simulation results, however, demonstrate that the nucleation of the skyrmion is insensitive to the variation of Kave but sensitive to that of Dave. As shown in Fig. 4c, a slight decrease of Dave from 1.0 mJ m−2 to 0.95 mJ m−2 can induce the stripe–skyrmion transition. This is consistent with the experimental observations in the [Pt/Co/Ta]12 heterostructure and suggests that the EF-induced formation of the skyrmion originates from the tensile-strain-mediated decrease of Dave. Meanwhile, the simulations demonstrate that the skyrmion can recover to the stripe domain when Dave is increased, which is also consistent with our experimental observation that the enhanced compressive strain increases the absolute value of Dave and converts the skyrmion back into the stripe domain. On the basis of these results, we propose that the strain-mediated variation of Dave has the dominant influence on the observed EF-induced stripe–skyrmion conversion in our experiments. However, we find that the variation of Dave cannot induce the formation of the vortex observed in the [Pt/Co/Ta]8 heterostructure, even when Dave is increased to 2.50 mJ m−2 or decreased to only 0.10 mJ m−2. Nevertheless, upon reducing Kave from −0.90 × 105 J m−3 to −2.00 × 105 J m−3, that is, increasing the in-plane effective anisotropy, the vortex can form in the nano-dot. This tendency is consistent with our experimental observation that the enhanced compressive strain increases the in-plane effective anisotropy and induces the formation of the vortex. Therefore, we propose that the primary factor in the EF-induced formation of the vortex is the strain-mediated increase of the IMA.
In summary, we have accomplished both binary and multistate EF-induced switching of individual skyrmions on the basis of a nanostructured FM/FE heterostructure at room temperature. Furthermore, such switching is non-volatile and does not require the assistance of an external magnetic field, although the weak magnetic field of the MFM tip may have a minor effect. These features reveal an approach to constructing low-energy-dissipation, non-volatile, multistate skyrmion-based spintronic devices, such as a low-energy-dissipation skyrmion random access memory. In addition, the numerical simulations show that the EF-induced manipulation of skyrmions originates mainly from the strain-mediated modification of the effective magnetic anisotropy and the DMI. These findings offer valuable insights into the fundamental mechanisms underlying the strain-mediated EF manipulation of skyrmions.
Magnetic force microscope measurements
The MFM observations are performed with a scanning probe microscope (MFP-3D, Asylum Research). For the measurements, a low-moment magnetic tip (PPP-LM-MFMR, Nanosensors) is selected, and the tip–sample distance is maintained at a constant 30 nm. The VFM3 component (Asylum Research) is integrated into the MFP-3D to vary the perpendicular magnetic field.
Establishing magnetic parameters of [Pt/Co/Ta]12 nano-dot
The E-dependent Ms can be obtained by measuring the E-dependent magnetization curves of the continuous thin film. The value of Ms is proposed to be E-independent and is established to be 697 ± 7 kA m−1, with the error bar obtained by summarizing the values of Ms under different E. Based on the values of Ms and w, the values of σDW can be calculated, with the error margin of σDW derived from the error bars of Ms and w. The value of Kave is significantly affected by the variation of E, and the error margin of Kave at each E is established by measuring two different samples. The value of A is obtained by fitting the temperature-dependent saturation magnetization Ms(T) with the Bloch law. We find that the variation of E affects A only slightly; thus, A is proposed to be E-independent and is established to be 16.9 ± 0.2 pJ m−1, with the error bar obtained by fitting the Ms(T) curve within different temperature ranges. Based on the values of σDW, A, and Kave, the absolute value of Dave for the continuous thin film and the d ~ 850 nm nano-dot can be calculated directly; the error margin of Dave follows from the error margins of σDW, A, Ms, and Kave. Dave of the d ~ 350 nm nano-dot is derived on the basis of the Dave–εave relationship established for the continuous thin film and the d ~ 850 nm nano-dot, because D is an intrinsic parameter that originates from the spin–orbit coupling at the film interface and is hence little affected by the geometrical confinement.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
All relevant data that support the plots within this paper are available from the corresponding author upon reasonable request.
Fert, A., Cros, V. & Sampaio, J. Skyrmions on the track. Nat. Nanotech. 8, 152–156 (2013).
Rosch, A. Skyrmions: Moving with the current. Nat. Nanotech. 8, 160–161 (2013).
Wiesendanger, R. Nanoscale magnetic skyrmions in metallic films and multilayers: a new twist for spintronics. Nat. Rev. Mater. 1, 16044 (2016).
Zhou, Y. Magnetic skyrmions: intriguing physics and new spintronic device concepts. Natl Sci. Rev. 6, 210–212 (2019).
Milde, P. et al. Unwinding of a skyrmion lattice by magnetic monopoles. Science 340, 1076–1080 (2013).
Jiang, W. et al. Blowing magnetic skyrmion bubbles. Science 349, 283–286 (2015).
Liang, D., DeGrave, J. P., Stolt, M. J., Tokura, Y. & Jin, S. Current-driven dynamics of skyrmions stabilized in MnSi nanowires revealed by topological Hall effect. Nat. Commun. 6, 8217 (2015).
Woo, S. et al. Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets. Nat. Mater. 15, 501–506 (2016).
Caretta, L. et al. Fast current-driven domain walls and small skyrmions in a compensated ferrimagnet. Nat. Nanotech. 13, 1154–1160 (2018).
Büttner, F. et al. Field-free deterministic ultrafast creation of magnetic skyrmions by spin–orbit torques. Nat. Nanotech. 12, 1040–1044 (2017).
Woo, S. et al. Current-driven dynamics and inhibition of the skyrmion Hall effect of ferrimagnetic skyrmions in GdFeCo films. Nat. Commun. 9, 959 (2018).
Maccariello, D. et al. Electrical detection of single magnetic skyrmions in metallic multilayers at room temperature. Nat. Nanotech. 13, 233–237 (2018).
Hrabec, A. et al. Current-induced skyrmion generation and dynamics in symmetric bilayers. Nat. Commun. 8, 15765 (2017).
Yu, G. et al. Room-temperature creation and spin–orbit torque manipulation of skyrmions in thin films with engineered asymmetry. Nano Lett. 16, 1981–1988 (2016).
Iwasaki, J., Mochizuki, M. & Nagaosa, N. Universal current-velocity relation of skyrmion motion in chiral magnets. Nat. Commun. 4, 1463 (2013).
Iwasaki, J., Mochizuki, M. & Nagaosa, N. Current-induced skyrmion dynamics in constricted geometries. Nat. Nanotech. 8, 742–747 (2013).
Jonietz, F. et al. Spin Transfer torques in MnSi at ultralow current densities. Science 330, 1648–1651 (2010).
Zang, J., Mostovoy, M., Han, J. H. & Nagaosa, N. Dynamics of skyrmion crystals in metallic thin films. Phys. Rev. Lett. 107, 136804 (2011).
Yu, X. Z. et al. Current-induced nucleation and annihilation of magnetic skyrmions at room temperature in a chiral magnet. Adv. Mater. 29, 1606178 (2017).
Seki, S., Yu, X. Z., Ishiwata, S. & Tokura, Y. Observation of skyrmions in a multiferroic material. Science 336, 198–201 (2012).
White, J. S. et al. Electric-field-induced skyrmion distortion and giant lattice rotation in the magnetoelectric insulator Cu2OSeO3. Phys. Rev. Lett. 113, 107203 (2014).
Huang, P. et al. In situ electric field skyrmion creation in magnetoelectric Cu2OSeO3. Nano Lett. 18, 5167–5171 (2018).
Romming, N. et al. Writing and deleting single magnetic skyrmions. Science 341, 636–639 (2013).
Wang, W. et al. A centrosymmetric hexagonal magnet with superstable biskyrmion magnetic nanodomains in a wide temperature range of 100–340 K. Adv. Mater. 28, 6887–6893 (2016).
Jiang, W. et al. Direct observation of the skyrmion Hall effect. Nat. Phys. 13, 162–169 (2017).
Hsu, P. J. et al. Electric-field-driven switching of individual magnetic skyrmions. Nat. Nanotech. 12, 123–126 (2017).
Schott, M. et al. The skyrmion switch: turning magnetic skyrmion bubbles on and off with an electric field. Nano Lett. 17, 3006–3012 (2017).
Srivastava, T. et al. Large-voltage tuning of Dzyaloshinskii–Moriya interactions: a route toward dynamic control of skyrmion chirality. Nano Lett. 18, 4871–4877 (2018).
Ma, C. et al. Electric field-induced creation and directional motion of domain walls and skyrmion bubbles. Nano Lett. 19, 353–361 (2019).
Liu, Y. et al. Chopping skyrmions from magnetic chiral domains with uniaxial stress in magnetic nanowire. Appl. Phys. Lett. 111, 022406 (2017).
Hu, J. M., Yang, T. & Chen, L. Q. Strain-mediated voltage-controlled switching of magnetic skyrmions in nanostructures. npj Comput. Mater. 4, 62 (2018).
Nakatani, Y., Hayashi, M., Kanai, S., Fukami, S. & Ohno, H. Electric field control of skyrmions in magnetic nanodisks. Appl. Phys. Lett. 108, 152403 (2016).
Upadhyaya, P., Yu, G. Q., Amiri, P. K. & Wang, K. L. Electric-field guiding of magnetic skyrmions. Phys. Rev. B 92, 134411 (2015).
Ba, Y. et al. Spatially resolved electric-field manipulation of magnetism for CoFeB mesoscopic discs on ferroelectrics. Adv. Funct. Mater. 28, 1706448 (2018).
Wang, L. et al. Construction of a room-temperature Pt/Co/Ta multilayer film with ultrahigh-density skyrmions for memory application. ACS Appl. Mater. Interfaces 11, 12098–12104 (2019).
Zhang, S. et al. Direct writing of room temperature and zero field skyrmion lattices by a scanning local magnetic field. Appl. Phys. Lett. 112, 132405 (2018).
Shinjo, T., Okuno, T., Hassdorf, R., Shigeto, K. & Ono, T. Magnetic vortex core observation in circular dots of permalloy. Science 289, 930–932 (2000).
Natali, M. et al. Correlated magnetic vortex chains in mesoscopic cobalt dot arrays. Phys. Rev. Lett. 88, 157203 (2002).
Hou, Z. et al. Manipulating the topology of nanoscale skyrmion bubbles by spatially geometric confinement. ACS Nano 13, 922–929 (2019).
Ho, P. et al. Geometrically tailored skyrmions at zero magnetic field in multilayered nanostructures. Phys. Rev. Appl. 11, 024064 (2019).
Zhang, X. et al. Skyrmions in magnetic tunnel junctions. ACS Appl. Mater. Interfaces 10, 16887–16892 (2018).
Liu, X. et al. Exchange stiffness, magnetization, and spin waves in cubic and hexagonal phases of cobalt. Phys. Rev. B 53, 12166–12172 (1996).
Shibata, K. et al. Large anisotropic deformation of skyrmions in strained crystal. Nat. Nanotech. 10, 589–592 (2015).
Yang, Q. et al. Voltage control of perpendicular magnetic anisotropy in multiferroic (Co/Pt)3/PbMg1/3Nb2/3O3− PbTiO3 heterostructures. Phys. Rev. Appl 8, 044006 (2017).
Sun, Y. et al. Electric-field modulation of interface magnetic anisotropy and spin reorientation transition in (Co/Pt)3/PMN–PT heterostructure. ACS Appl. Mater. Interfaces 9, 10855–10864 (2017).
Heide, M., Bihlmayer, G. & Blügel, S. Dzyaloshinskii-Moriya interaction accounting for the orientation of magnetic domains in ultrathin films: Fe/W (110). Phys. Rev. B 78, 140403 (2008).
Pellegren, J. P., Lau, D. & Sokalski, V. Dispersive Stiffness of Dzyaloshinskii Domain Walls. Phys. Rev. Lett. 119, 027203 (2017).
Srivastava, T. et al. Mapping different skyrmion phases in double wedges of Ta/FeCoB/TaOx trilayers. Phys. Rev. B 100, 220401 (2019).
Johansen, T. H., Pan, A. V. & Galperin, Y. M. Exact asymptotic behavior of magnetic stripe domain arrays. Phys. Rev. B 87, 060402 (2013).
Koretsune, T., Nagaosa, N. & Arita, R. Control of Dzyaloshinskii-Moriya interaction in Mn1−xFexGe: a first-principles study. Sci. Rep. 5, 13302 (2015).
Zhang, W. et al. Enhancement of interfacial Dzyaloshinskii-Moriya interaction: a comprehensive investigation of magnetic dynamics. Phys. Rev. Appl. 12, 064031 (2019).
Gusev, N. et al. Manipulation of the Dzyaloshinskii-Moriya Interaction in Co/Pt multilayers with strain. Phys. Rev. Lett. 124, 157202 (2020).
The authors acknowledge financial support from the National Key Research and Development Program of China (Nos. 2016YFA0201002 and 2016YFA0300101), the National Natural Science Foundation of China (Nos. 11674108, 51272078, 11574091, 51671023, 51871017, 11974298, and 61961136006), the Science and Technology Planning Project of Guangdong Province (No. 2015B090927006), the Natural Science Foundation of Guangdong Province (No. 2016A030308019), the Open Research Fund of the Key Laboratory of Polar Materials and Devices, Ministry of Education, the National Natural Science Foundation of China Youth Fund (Grant No. 51901081), the Science and Technology Program of Guangzhou (No. 2019050001), the President's Fund of CUHKSZ, the Longgang Key Laboratory of Applied Spintronics and Shenzhen Peacock Group Plan (Grant No. KQTD20180413181702403), and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2019A1515110713).
Guangdong Provincial Key Laboratory of Optical Information Materials and Technology & Institute for Advanced Materials, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, 510006, China
Yadong Wang, Guo Tian, Zhipeng Hou, Xingsen Gao, Min Zeng & Guofu Zhou
National Center for International Research on Green Optoelectronics, South China Normal University, Guangzhou, 510006, China
School of Materials Science and Engineering, University of Science and Technology Beijing, Beijing, 100083, China
Lei Wang, Chun Feng & Guanghua Yu
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, China
Jing Xia, Xichao Zhang & Yan Zhou
College of Science, Tianjin University, Tianjin, 300392, China
Zhengxun Lai & Wenbo Mi
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing, 100190, China
Guangheng Wu & Wenhong Wang
Physical Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal, 23955-6900, Saudi Arabia
Xi-xiang Zhang
Laboratory of Solid State Microstructures and Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 211102, China
Junming Liu
Z.H. and X.G. conceived and designed the experiments. C.F., L.W., and Y.W. synthesized the heterostructures. X.C.Z., J.X., and Y.Z. performed the micromagnetic simulations. The manuscript was drafted by Z.H. and X-.X.Z. with contributions from G.T., Z.L., W.M., M.Z., G.Z., G.Y., X.G., G.W., W.W., and J.L. All authors discussed the results and contributed to the manuscript.
Correspondence to Zhipeng Hou, Xingsen Gao or Chun Feng.
MAGIC observations and multiwavelength properties of the quasar 3C279 in 2007 and 2009 (1101.2522)
The MAGIC Collaboration: J.Aleksić, P.Antoranz, J.Becerra González, E.Bernardini, G.Bonnoli, A.Canellas, J.L.Contreras, F.Dazzi, C.Delgado Mendez, D.Dominis Prester, D.Ferenc, R.J.Garcia López, D.Hadasch, J.Hose, T.Krähenbühl, E.Leonardo, E.Lorenz, K.Mannheim, D.Mazin, J.Moldón, R.Orito, S.Partini, L.Peruzzo, P.G.Prada Moroni, W.Rhode, A.Saggion, V.Scalzotto, M.Shayduk, F.Spanier, N.Strah, T.Terzić, D.F.Torres, R.M.Wagner IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain INAF National Institute for Astrophysics, I-00136 Rome, Italy Università di Siena, , INFN Pisa, I-53100 Siena, Italy Technische Universität Dortmund, D-44221 Dortmund, Germany Universidad Complutense, E-28040 Madrid, Spain Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain Depto. de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Spain University of Łódź, PL-90236 Lodz, Poland Tuorla Observatory, University of Turku, FI-21500 Piikkiö, Finland Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Switzerland Max-Planck-Institut für Physik, D-80805 München, Germany Universitat de Barcelona Università di Udine, INFN Trieste, I-33100 Udine, Italy Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Institute R. Boskovic, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria INAF/Osservatorio Astronomico, INFN, I-34143 Trieste, Italy Università dell'Insubria, Como, I-22100 Como, Italy ICREA, E-08010 Barcelona, Spain now at Ecole polytechnique fédérale de Lausanne now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas University of California, Los Angeles, USA, Turku, Finland)
March 2, 2011 astro-ph.CO
Context. 3C 279, the first quasar discovered to emit VHE gamma-rays by the MAGIC telescope in 2006, was reobserved by MAGIC in January 2007 during a major optical flare and from December 2008 to April 2009 following an alert from the Fermi space telescope on an exceptionally high gamma-ray state. Aims. The January 2007 observations resulted in a detection on January 16 with a significance of 5.2 sigma, corresponding to a flux F(> 150 GeV) = (3.8 \pm 0.8) \cdot 10^-11 ph cm^-2 s^-1, while the overall data sample does not show a significant signal. The December 2008 - April 2009 observations did not detect the source. We study the multiwavelength behavior of the source at the epochs of the MAGIC observations, collecting quasi-simultaneous data at optical and X-ray frequencies and, for 2009, also gamma-ray data from Fermi. Methods. We study the light curves and spectral energy distribution of the source. The spectral energy distributions of three observing epochs (including February 2006, previously published in Albert et al. 2008a) are modeled with one-zone inverse Compton models, and the emission on January 16, 2007 also with a two-zone model and with a lepto-hadronic model. Results. We find that the VHE gamma-ray emission detected in 2006 and 2007 challenges the standard one-zone model, based on relativistic electrons in a jet scattering broad line region photons, while the other studied models fit the observed spectral energy distribution more satisfactorily.
MAGIC observation of the GRB080430 afterglow (1004.3665)
MAGIC Collaboration: J. Aleksić, L. A. Antonelli, S. Balestra, J. K. Becker, E. Bernardini, D. Borla Tridon, T. Bretz, P. Colin, M. T. Costado, E. de Cea del Pozo, M. De Maria, A. Domínguez, D. Elsaesser, R. Firpo, R. J. García López, D. Höhne-Mönch, T. Jogler, A. La Barbera, F. Longo, G. Maneva, M. Mariotti, R. Mirzoyan, A. Moralejo, I. Oya, F. Pauss, L. Peruzzo, I. Puljak, M. Rissi, M. Salvati, V. Scapin, A. Sierpowska-Bartosik, D. Sobczynska, B. Steinke, F. Tavecchio, D. F. Torres, V. Zabalza A. de Ugarte-Postigo ETH Zurich, Switzerland INAF National Institute for Astrophysics, Rome, Italy Technische Universität Dortmund, Dortmund, Germany Universitat Autònoma de Barcelona, Bellaterra, Spain Inst. de Astrofísica de Canarias, La Laguna, Tenerife, Spain University of Łódź, Lodz, Poland Tuorla Observatory, University of Turku, Piikkiö, Finland, Zeuthen, Germany Università di Siena, INFN Pisa, Siena, Italy Universitat de Barcelona Universität Würzburg, Würzburg, Germany Depto. de Astrofisica, Universidad, La Laguna, Tenerife, Spain Università di Udine, INFN Trieste, Udine, Italy, E-08193 Bellaterra, Spain Croatian MAGIC Consortium, Institute R. Boskovic, University of Rijeka, University of Split, Zagreb, Croatia University of California, Davis, CA, USA INAF/Osservatorio Astronomico, INFN, Trieste, Italy ICREA, Barcelona, Spain supported by INFN Padova now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas now at: Max-Planck-Institut für Kernphysik, Heidelberg, Germany
April 23, 2010 astro-ph.HE
Context: Gamma-ray bursts are cosmological sources emitting radiation from the gamma-rays to the radio band. Substantial observational efforts have been devoted to the study of gamma-ray bursts during the prompt phase, i.e. the initial burst of high-energy radiation, and during the long-lasting afterglows. In spite of many successes in interpreting these phenomena, there are still several open key questions about the fundamental emission processes, their energetics and the environment. Aims: Independently of specific gamma-ray burst theoretical recipes, spectra in the GeV/TeV range are predicted to be remarkably simple, being satisfactorily modeled with power-laws, and therefore offer a very valuable tool to probe the extragalactic background light distribution. Furthermore, the simple detection of a component at very-high energies, i.e. at $\sim 100$\,GeV, would resolve the ambiguity about the importance of various possible emission processes, which provide barely distinguishable scenarios at lower energies. Methods: We used the results of the MAGIC telescope observation of the moderate-redshift ($z\sim0.76$) \object{GRB\,080430} at energies above about 80\,GeV to evaluate the prospects for late-afterglow observations with ground-based GeV/TeV telescopes. Results: We obtained an upper limit of $F_{\rm 95\%\,CL} = 5.5 \times 10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$ for the very-high-energy emission of \object{GRB\,080430}, which cannot set further constraints on the theoretical scenarios proposed for this object, also owing to the difficulties in modeling the low-energy afterglow. Nonetheless, our observations show that Cherenkov telescopes have already reached the sensitivity required to detect the GeV/TeV emission of GRBs at moderate redshift ($z \lesssim 0.8$), provided the observations are carried out at early times, close to the onset of their afterglow phase.
Association of hypertriglyceridemic waist phenotype with renal function impairment: a cross-sectional study in a population of Chinese adults
Yun Qiu1 na1,
Qi Zhao1 na1,
Na Wang1,
Yuting Yu1,
Ruiping Wang2,
Yue Zhang1,
Shuheng Cui1,
Meiying Zhu2,
Xing Liu1,
Yonggen Jiang2 &
Genming Zhao1
Hypertriglyceridemic waist (HTGW) phenotype has been suggested as a risk factor for chronic kidney disease (CKD). However, there is limited evidence on the relationship of triglyceride waist phenotypes with estimated glomerular filtration rate (eGFR) status and severity. Our aim was to explore the associations of triglyceride waist phenotypes with reduced eGFR and various decreased eGFR stages among Chinese adults.
A population-based, cross-sectional study was conducted among Chinese participants aged 20–74 years from June 2016 to December 2017 in Shanghai, China. An eGFR value below 60 mL/min/1.73 m2 was defined as decreased eGFR. HTGW phenotype was defined as triglyceride (TG) ≥1.7 mmol/L and a waist circumference (WC) of ≥90 cm for men and ≥ 80 cm for women. We examined the association of triglyceride waist phenotypes with decreased eGFR risk using the weighted logistic regression models.
A total of 31,296 adults were included in this study. Compared with normal TG level/normal WC (NTNW) phenotype, normal TG level/enlarged WC (NTGW) and elevated TG level/enlarged WC (HTGW) phenotypes were associated with the increased risk of decreased eGFR. Multivariable-adjusted ORs (95% CI) associated with NTGW, elevated TG level/normal WC (HTNW), and HTGW phenotypes were 1.75 (1.41–2.18), 1.29 (0.99–1.68), and 1.99 (1.54–2.58), respectively. These associations between triglyceride waist phenotypes and decreased eGFR risk remained across almost all the subgroups, including sex, age, BMI, T2DM, and hypertension. HTGW phenotype was consistently positively associated with the risk of mildly and moderately decreased eGFR, but not with severely decreased eGFR risk.
HTGW was consistently associated with the increased risk of decreased eGFR and various decreased eGFR stages, except for severely decreased eGFR. Further prospective studies are warranted to confirm our findings and to investigate the underlying biological mechanisms.
Chronic kidney disease (CKD) remains a main cause of morbidity and mortality worldwide, with increasing prevalence and incidence [1]. One of the characteristics of CKD is a decline in kidney function, which is assessed by the estimated glomerular filtration rate (eGFR), the most commonly used marker [2]. The global prevalence of CKD and of decreased eGFR (eGFR < 60 mL/min/1.73 m2) in general populations was 14.3 and 9.8%, respectively, in 2016 [3]. The overall prevalence of CKD was 10.8% and that of decreased eGFR was 1.7% in a general population of Chinese adults in 2012 [4]. Recent rapid increases in diabetes, hypertension, and obesity cases will contribute to the increase in CKD prevalence, eventually leading to a higher burden of CKD and a greater threat to public health in less developed regions [5]. Patients with declining kidney function are more likely to have an increased risk of cardiovascular events [6] and of death from cardiovascular diseases (CVD) [7].
Hypertriglyceridemic waist (HTGW) phenotype is defined by the simultaneous presence of elevated serum triglycerides (TG) level and increased waist circumference (WC). It was first proposed by Lemieux et al. [8], as an indicator of atherosclerosis and an effective tool to identify men who were at high risk of coronary artery disease (CAD). Because assessment of HTGW is relatively inexpensive and easy to acquire, a growing number of studies have shown that HTGW phenotype was associated with the increased risk of CVD [9], CAD [10], hypertension [11], prediabetes [12], and type 2 diabetes mellitus (T2DM) [13], as well as hyperuricemia [14].
Early identification of pertinent risk factors is needed for the prevention and control of renal function decline and the development of CKD. In comparison with elevated TG or enlarged WC used alone, the HTGW phenotype is superior in evaluating excess visceral adiposity and is also a useful clinical tool for identifying individuals at higher risk of abnormal metabolism [15]. Several studies have reported that the HTGW phenotype is associated with an increased risk of CKD in adults aged ≥40 years [16], in the elderly (aged ≥60 years) [17], and in relatively lean people (BMI < 24 kg/m2) [18]. However, existing evidence regarding the association of HTGW with CKD remains controversial. In a cross-sectional study, Ramezankhani et al. [19] found a positive association of HTGW with CKD only in women, whereas no significant association between HTGW and CKD was observed in their prospective analysis. A recent cross-sectional study demonstrated that HTGW was related to CKD risk in women but not in men among adults aged 18 to 75 years [20], while another cross-sectional study in elderly participants reached the opposite conclusion [17]. Thus far, only limited evidence suggests an increased risk of renal function decline with HTGW [21]. Previous studies were also limited by the lack of stratified analyses. Furthermore, no study has reported the relationship of HTGW with the various stages of decreased eGFR, which could represent the progression of CKD.
Using data from physical examinations and electronic medical records, we aimed to examine the association of HTGW and the three other triglyceride waist phenotypes with the risk of decreased eGFR in the overall population and in subgroups of Chinese adults. Additionally, we explored whether these phenotypes were associated with different stages of decreased eGFR, including mildly, moderately, and severely decreased eGFR.
This population-based, cross-sectional study was conducted in Community Health Centers of Songjiang District, Shanghai, China from June 2016 to December 2017. Details of the sampling methods have been described elsewhere [22]. Briefly, we used a multistage, stratified, clustered sampling method to collect health-related data from 31 neighborhood committees and 16 administrative villages in four study community sites: Zhongshan, Xinqiao, Sheshan, and Maogang. Exclusion criteria were as follows: unable or unwilling to provide written informed consent; pregnancy; previously diagnosed critical illness, including cancer, stroke, CAD, cirrhosis, chronic hepatitis, cardiorespiratory failure, and hyper- or hypothyroidism; or having received an organ transplant or being on dialysis therapy. A total of 37,670 adults aged 20 to 74 years who were natives of Shanghai municipality or had lived in Shanghai for at least 5 years were enrolled in the present study. Among these, we excluded participants who violated the inclusion criteria (n = 4271), who had no serum creatinine (Scr) measurement (n = 264), or who had missing data on the physical examination, questionnaire survey, or laboratory measurements (n = 1839). The final analysis included 31,296 participants (Fig. 1). The study protocol was approved by the Ethics Committee of Fudan University, School of Public Health (IRB#2016-04-0586) and complied with the principles of the Declaration of Helsinki. Written informed consent was obtained from all participants prior to data collection.
Flowchart of the study population
The general information of all study participants, including sociodemographic characteristics (age, sex, marital status, educational level, and working status), self-reported history of chronic diseases (such as T2DM, hypertension, and cancer), and lifestyle factors (smoking status, alcohol consumption, and physical activity), was collected by trained interviewers through face-to-face interviews using a structured questionnaire. History of diagnosed T2DM, hypertension, cancer, stroke, CAD, cirrhosis, chronic hepatitis, cardiorespiratory failure, and hyper- or hypothyroidism, as well as receipt of an organ transplant or dialysis, was obtained from electronic medical records. We used the International Physical Activity Questionnaire to assess physical activity. Smoking was defined as smoking more than 1 cigarette per day for more than 6 months, and alcohol consumption was defined as alcohol intake at least 3 times per week for at least half a year. Smoking status and alcohol consumption were classified as never, former, or current.
Anthropometric data were obtained from all participants, including height, weight, and waist circumference (WC), and were measured in duplicate when participants were wearing light clothing without shoes. The mean values of these measurements were calculated for further analysis. Body mass index (BMI) was defined as body weight in kilograms divided by height in meters squared (kg/m2). Blood pressure (BP) was consecutively measured three times using an electronic sphygmomanometer, and the mean values were used for analysis.
Laboratory measurement
Prior to the investigation, all participants were asked to fast overnight for at least 8 h, and fasting venous blood specimens were collected to perform laboratory measurements in DiAn medical laboratory center. Serum total cholesterol, TG, high-density lipoprotein (HDL) cholesterol, and low-density lipoprotein (LDL) cholesterol levels were measured using an automatic biochemical analyzer (Roche Cobas C501). Fasting plasma glucose (FPG) level was measured by glycokinase method using Roche P800 biochemical analyzer. Scr level was measured using enzymatic methods by Roche C702 automatic biochemical analyzer. HbA1c level was determined using high pressure liquid chromatography (TOSOH G8 automatic biochemical analyzer).
Kidney function assessment
The estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation for the Chinese population [23]:
$$ \mathrm{eGFR}\left(\mathrm{mL}/\min /1.73{\mathrm{m}}^2\right)=141\times \min {\left(\mathrm{Scr}/\upkappa, 1\right)}^{\upalpha}\times \max {\left(\mathrm{Scr}/\upkappa, 1\right)}^{-1.209}\times {0.993}^{\mathrm{Age}}\times 1.018\left[\mathrm{if}\ \mathrm{female}\right] $$
where Scr is the serum creatinine (mg/dl), κ is 0.7 for females and 0.9 for males, α is −0.329 for females and −0.411 for males, min indicates the minimum of Scr/κ or 1, and max indicates the maximum of Scr/κ or 1.
Decreased eGFR was defined as an eGFR value below 60 mL/min/1.73 m2. The CKD classification was in accordance with the National Kidney Foundation [2], and we classified GFR stages into 4 categories as follows: normal eGFR, ≥90 mL/min/1.73 m2; mildly decreased eGFR, 60–89 mL/min/1.73 m2; moderately decreased eGFR, 30–59 mL/min/1.73 m2; and severely decreased eGFR, 15–29 mL/min/1.73 m2. In addition, eGFR was evaluated using the Modification of Diet in Renal Disease (MDRD) Study Equation for Chinese population [24] in a sensitivity analysis:
$$ {\mathrm{eGFR}}_{\mathrm{MDRD}}\left(\mathrm{ml}/\min /1.73{\mathrm{m}}^2\right)=175\times \mathrm{serum}\ \mathrm{creatinine}{\left(\mathrm{mg}/\mathrm{dl}\right)}^{-1.154}\times {\mathrm{age}}^{-0.203}\times 0.742\left[\mathrm{if}\ \mathrm{female}\right] $$
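As a quick illustration of how these two equations and the eGFR staging are applied, the following Python sketch implements them as written above; the helper names and the example creatinine value are our own and are not taken from the study data.

```python
def egfr_ckd_epi(scr_mg_dl: float, age: float, female: bool) -> float:
    """CKD-EPI eGFR (mL/min/1.73 m^2) as given in the text (no race coefficient)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    return egfr * 1.018 if female else egfr

def egfr_mdrd_chinese(scr_mg_dl: float, age: float, female: bool) -> float:
    """Chinese-modified MDRD eGFR used in the sensitivity analysis."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    return egfr * 0.742 if female else egfr

def gfr_category(egfr: float) -> str:
    """Map eGFR to the categories used in this study; 'decreased eGFR' is < 60."""
    if egfr >= 90:
        return "normal"
    if egfr >= 60:
        return "mildly decreased"
    if egfr >= 30:
        return "moderately decreased"
    if egfr >= 15:
        return "severely decreased"
    return "below 15 (not staged in this study)"

# Hypothetical example: a 62-year-old woman with Scr = 1.1 mg/dl
e = egfr_ckd_epi(1.1, 62, female=True)
print(round(e, 1), gfr_category(e), round(egfr_mdrd_chinese(1.1, 62, female=True), 1))
```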
Definitions of triglyceride waist phenotype, T2DM, and hypertension
Participants were classified into four groups according to the following cut-off points [13]: (1) NTNW, normal serum TG level (< 1.7 mmol/L) and normal WC (< 90 cm for men and < 80 cm for women); (2) NTGW, normal serum TG level and enlarged WC (≥90 cm for men and ≥ 80 cm for women); (3) HTNW, elevated serum TG level (≥1.7 mmol/L) and normal WC; (4) HTGW, elevated serum TG level and enlarged WC. In the sensitivity analyses, HTGW phenotype was also defined as elevated serum TG level (≥2.0 mmol/L) along with enlarged WC (≥90 cm for men and ≥ 85 cm for women) [15], or a TG level ≥ 2.0 mmol/L and WC ≥90 cm for men or a TG level ≥ 1.5 mmol/L and WC ≥85 cm for women [25].
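A compact way to express these cut-offs is a small classification helper; the sketch below encodes only the primary thresholds quoted above (the function and argument names are ours, not the study's).

```python
def triglyceride_waist_phenotype(tg_mmol_l: float, wc_cm: float, male: bool) -> str:
    """Classify a participant into NTNW / NTGW / HTNW / HTGW using the primary
    cut-offs: TG >= 1.7 mmol/L counts as elevated; WC >= 90 cm (men) or
    >= 80 cm (women) counts as enlarged."""
    elevated_tg = tg_mmol_l >= 1.7
    enlarged_wc = wc_cm >= (90.0 if male else 80.0)
    if elevated_tg and enlarged_wc:
        return "HTGW"
    if elevated_tg:
        return "HTNW"
    if enlarged_wc:
        return "NTGW"
    return "NTNW"

# Example: a woman with TG = 1.9 mmol/L and WC = 83 cm is classified as HTGW
print(triglyceride_waist_phenotype(1.9, 83, male=False))
```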
The definition of T2DM was in accordance with the American Diabetes Association criteria [26]: FPG level ≥ 7.0 mmol/L, HbA1c concentration ≥ 6.5%, or previously diagnosed type 2 diabetes mellitus (T2DM). The diagnosis of hypertension was according to the Seventh Joint National Committee Report on Detection, Evaluation, and Treatment of High Blood Pressure guidelines (JNC-7) [27]: systolic BP ≥140 mmHg, diastolic BP ≥90 mmHg, or previously diagnosed hypertension.
We accounted for the complex sample survey design, and the results were weighted in the present study. Continuous variables are presented as means ± standard deviation (SD) or medians with interquartile ranges, and categorical variables as numbers and percentages. We compared the differences between the decreased eGFR and non-decreased eGFR groups using Student's t test or the Mann-Whitney U test for continuous data and the χ2 test for categorical data. We used weighted logistic regression models to assess the association of triglyceride waist phenotypes with decreased eGFR, and odds ratios (ORs) and 95% confidence intervals (CIs) were calculated with NTNW as the reference group. Multivariable models were adjusted for age, sex (men vs. women), marital status (married vs. unmarried/divorced/widowed), educational level (0–6, 7–12, and > 12 years), working status (retired vs. not retired), smoking status (never, former, and current), alcohol consumption (never, former, and current), physical activity, BMI, hypertension, LDL cholesterol, HDL cholesterol, total cholesterol, and T2DM.
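The authors' analyses were run in SAS with survey weighting (see the software note below). As a rough, non-equivalent sketch of the modelling step, the following Python/statsmodels snippet fits a weight-adjusted logistic regression and converts the coefficients to ORs with 95% CIs. The data frame and column names are hypothetical, and frequency weights only approximate a full design-based survey analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per participant with (hypothetical) columns:
# decreased_egfr (0/1), phenotype (NTNW/NTGW/HTNW/HTGW), survey_weight, and covariates.
df = pd.read_csv("participants.csv")  # placeholder file name

model = smf.glm(
    "decreased_egfr ~ C(phenotype, Treatment('NTNW')) + age + C(sex) + C(marital)"
    " + C(education) + C(working) + C(smoking) + C(alcohol) + physical_activity"
    " + bmi + C(hypertension) + ldl + hdl + total_chol + C(t2dm)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["survey_weight"]),
)
result = model.fit()

# Odds ratios and 95% confidence intervals for each term
or_table = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```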
For sensitivity analyses, we repeated the analyses using the MDRD equation-based eGFR or after redefining the triglyceride waist phenotypes on the basis of previously recommended criteria. We performed stratified analyses and assessed potential effect modification by sex, age (< 60 years, ≥60 years), BMI (< 24 kg/m2, ≥24 kg/m2), and the presence or absence of T2DM or hypertension. In addition, we investigated the associations of the four phenotype groups with the severity of decreased eGFR (mildly, moderately, and severely decreased eGFR) using weighted multinomial logistic regression models, treating the normal eGFR group as the control group and adjusting for the same confounding factors as above.
All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). P values of less than 0.05 (two-sided) were considered statistically significant.
Baseline characteristics of subjects with or without decreased eGFR
Baseline characteristics of the study participants according to decreased eGFR status are shown in Table 1. Among the 31,296 participants, the mean age was 55.64 ± 11.35 years; 12,702 (40.59%) were men, 4247 (13.57%) had T2DM, and 15,881 (50.74%) had hypertension. As expected, participants with decreased eGFR were more likely to have the HTGW phenotype than those without decreased eGFR. Subjects with decreased eGFR had higher Scr, BMI, WC, BP, total cholesterol, TG, LDL cholesterol, and FPG levels, and a lower eGFR level, compared with non-decreased eGFR subjects. In addition, a lower educational level and higher proportions of former smokers and former drinkers were observed in the decreased eGFR group than in the non-decreased eGFR group. No significant differences in HDL cholesterol, marital status, or physical activity were observed between the two groups.
Table 1 Characteristics of study participants by decreased eGFR
Association of triglyceride waist phenotypes with decreased eGFR
Table 2 shows the ORs and 95% CIs for the association of decreased eGFR with the triglyceride waist phenotypes. Compared with the NTNW phenotype, the NTGW and HTGW phenotypes were associated with a higher risk of decreased eGFR after adjusting for age, sex, marital status, educational level, working status, smoking status, alcohol consumption, physical activity, BMI, hypertension, LDL cholesterol, HDL cholesterol, total cholesterol, and T2DM, whereas the HTNW phenotype was not significantly associated with decreased eGFR risk. The multivariable-adjusted ORs for decreased eGFR associated with the NTGW, HTNW, and HTGW phenotypes were 1.75 (95% CI, 1.41–2.18, P < 0.001), 1.29 (95% CI, 0.99–1.68, P = 0.060), and 1.99 (95% CI, 1.54–2.58, P < 0.001), respectively. We also performed three sensitivity analyses, including using the MDRD equation-based eGFR and redefining the triglyceride waist phenotypes, to assess the robustness of our results, and found similar results in all sensitivity analyses (Tables 2 and 3).
Table 2 Odds ratios for decreased eGFR according to triglyceride level and waist circumference
Table 3 Odds ratios for decreased eGFR at different levels of triglyceride level and waist circumference
We further examined the association of triglyceride waist phenotypes with decreased eGFR in subgroups defined by sex, age, BMI, T2DM, and hypertension (Table 4). The associations between the NTGW, HTNW, and HTGW phenotypes and the risk of decreased eGFR remained consistent across almost all subgroups. The strongest positive association of the HTGW phenotype with decreased eGFR was found in the subgroup with T2DM (OR 2.60, 95% CI 1.38–4.89). No significant interaction was observed between the triglyceride waist phenotypes and any of the subgroup variables with respect to decreased eGFR risk.
Table 4 Odds ratios for decreased eGFR according to triglyceride level and waist circumference by various subpopulations
Association of different triglyceride waist phenotypes with mildly, moderately, and severely decreased eGFR
The multivariable-adjusted ORs for mildly, moderately, and severely decreased eGFR according to triglyceride waist phenotypes are presented in Table 5. The numbers of participants with normal, mildly decreased, moderately decreased, and severely decreased eGFR were 19,538, 10,929, 785, and 30, respectively. After adjusting for potential confounders, the HTGW phenotype was positively associated with the risk of mildly decreased eGFR, whereas no significant association of the NTGW and HTNW phenotypes with mildly decreased eGFR was found. The risk of moderately decreased eGFR was higher for subjects with the NTGW and HTGW phenotypes, but not for subjects with the HTNW phenotype, compared with subjects with the NTNW phenotype; this was consistent with the results of the primary analyses among all subjects. No significant association was found between the NTGW, HTNW, or HTGW phenotypes and the risk of severely decreased eGFR.
Table 5 Odds ratios for mildly, moderately, and severely decreased eGFR according to triglyceride level and waist circumference
In this large population-based, cross-sectional study, we explored the association of triglyceride waist phenotypes with decreased eGFR (< 60 mL/min/1.73 m2) in the overall population and across a variety of subgroups. We found that the HTGW phenotype was associated with an increased risk of decreased eGFR; in addition, the HTGW phenotype was significantly associated with the progression of renal function decline, with the exception of severely decreased eGFR. To the best of our knowledge, this is the first study to examine the association of the HTGW phenotype with different stages of renal function in Chinese adults.
The HTGW phenotype has been found to have predictive power similar to that of the metabolic syndrome (MetS), and it may be easier to apply than MetS [28]. A recent cross-sectional study showed that the HTGW phenotype, but not common anthropometric indices (including WC, waist-to-hip ratio, and waist-to-height ratio), was associated with a higher risk of CKD [18]. The relationship between triglyceride waist phenotypes and CKD has been reported in many cross-sectional studies; however, most were conducted with limited sample sizes [16,17,18,19,20] and the findings were inconsistent. In a middle-aged and older population, the NTGW/HTNW and HTGW phenotypes were associated with a higher risk of CKD than the NTNW phenotype [16]. Results regarding the association of triglyceride waist phenotypes with CKD also differed between cross-sectional and prospective analyses [19]. The conflicting results are likely due to differences in population characteristics, the number of participants, the confounding factors included, and the method of eGFR calculation. Ramezankhani et al. [19] and our study estimated GFR using the CKD-EPI equation, which is more accurate in routine clinical practice than the MDRD equation, whereas many previous studies [16,17,18, 20] used the MDRD equation, the most common method for estimating GFR.
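For reference, the 2009 CKD-EPI creatinine equation mentioned above (Levey et al., Ann Intern Med 2009) has the following general form; this is quoted from the published equation rather than from the present paper, so the original reference should be consulted for the exact implementation used here:

\mathrm{eGFR} = 141 \times \min(\mathrm{Scr}/\kappa,\,1)^{\alpha} \times \max(\mathrm{Scr}/\kappa,\,1)^{-1.209} \times 0.993^{\mathrm{Age}} \times 1.018\ [\text{if female}] \times 1.159\ [\text{if Black}]

where Scr is serum creatinine in mg/dL, κ is 0.7 for women and 0.9 for men, α is −0.329 for women and −0.411 for men, and eGFR is expressed in mL/min/1.73 m2.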
A decline in eGFR is related to an increased risk of mortality and end-stage renal disease (ESRD) [29]. Only a few studies have examined the association of triglyceride waist phenotypes with the risk of decreased eGFR [21]; they showed that HTGW was associated with abnormal renal function among both Chinese and Australian subjects. In the present large Chinese population, we observed similar findings in the overall and stratified populations, and the NTGW phenotype was also associated with an increased risk of reduced eGFR. In the stratified analyses, the independent positive associations of the NTGW and HTGW phenotypes with reduced eGFR persisted across almost all subgroups; the strongest association between the HTGW phenotype and decreased eGFR was found in the subgroup with T2DM, suggesting that the predictive power of HTGW for decreased eGFR may be greater in T2DM patients. Our findings suggest that the individual triglyceride waist phenotypes may play different roles in renal function decline; the differing associations of the NTGW and HTNW phenotypes with decreased eGFR are in line with a previous study reporting associations with CKD in men [17]. TG is considered a useful marker of visceral obesity for a given WC [30]. Additionally, compared with the NTGW or HTNW phenotype, the HTGW phenotype was a more stable and stronger risk factor for renal function decline. Thus, our data provide evidence that the HTGW phenotype may play an important role in the development of renal function impairment.
Studies on the role of triglyceride waist phenotypes in the development of kidney dysfunction remain scarce. Most have examined the association of the HTGW phenotype with CKD risk but have not addressed CKD progression. In the present study, we found that the NTGW and HTGW phenotypes were independently associated with mildly or moderately decreased eGFR, with the exception of the association between NTGW and mildly decreased eGFR; no significant association between any of the phenotypes and severely decreased eGFR was found. The underlying mechanisms of these associations are still unknown, and the null association of the triglyceride waist phenotypes with severely decreased eGFR may be due to the small number of subjects in that category. Our findings suggest that the HTGW phenotype may be an independent risk factor for the development of renal function impairment, and that prevention and control of HTGW may be an effective measure to attenuate the risk of progressive renal function decline. HTGW is closely related to visceral obesity, which may result in fat accumulation in the kidney [30, 31]. Excess fat accumulation in the kidney, which is related to oxidative stress and the inflammatory response, may damage the kidney and contribute to an unfavorable renal hemodynamic profile [32]. CKD patients have lower activities of lipoprotein lipase and hepatic TG lipase, which likely lead to the development of hypertriglyceridemia and, subsequently, further renal impairment [33, 34]. Further elucidation of the mechanisms underlying the role of HTGW in renal function and CKD is warranted.
The strengths of the present study include its large, population-based design, the enrollment of participants living in both urban and rural areas, and the extensive adjustment for potential confounders, including demographic, lifestyle, anthropometric, and clinical factors. Several potential limitations should also be considered. First, causal relationships could not be determined because of the cross-sectional study design. Second, all study participants were from a single district of Shanghai, China, so the generalizability of our results may be limited; further studies should be conducted in more diverse regions. Third, although many covariates were adjusted for in our analyses, the present study lacked data on medication use, diet, genetic factors, and other potential confounders. Lastly, despite the overall relatively large sample size, some subpopulations, especially the severely decreased eGFR group, included only a few participants, which resulted in wide CIs and imprecise effect estimates.
In conclusion, the HTGW phenotype was significantly associated with an increased risk of decreased eGFR in the overall study population, and this association remained consistent across all subgroups, suggesting that the HTGW phenotype may be a stable risk factor for the decline of kidney function. We also found that the HTGW phenotype was significantly associated with higher risks of mildly and moderately decreased eGFR, but not with severely decreased eGFR. These findings underscore the importance of preventing and controlling the HTGW phenotype, which may reduce the risk of kidney function decline and even the progression of CKD.
The dataset used and analyzed during the current study is available from the corresponding author on reasonable request.
BP: Blood pressure
CAD: Coronary artery disease
CKD: Chronic kidney disease
CKD-EPI: Chronic Kidney Disease Epidemiology Collaboration
CVD: Cardiovascular disease
eGFR: Estimated glomerular filtration rate
ESRD: End-stage renal disease
FPG: Fasting plasma glucose
HDL: High-density lipoprotein
HTGW: Hypertriglyceridemic waist
HTNW: Elevated TG level/normal WC
LDL: Low-density lipoprotein
MDRD: Modification of Diet in Renal Disease
MetS: Metabolic syndrome
NTGW: Normal TG level/enlarged WC
NTNW: Normal TG level/normal WC
ORs: Odds ratios
Scr: Serum creatinine
Bikbov B, Purcell CA, Levey AS, Smith M, Abdoli A, Abebe M, et al. Global, regional, and national burden of chronic kidney disease, 1990–2017: a systematic analysis for the global burden of disease study 2017. Lancet. 2020;395(10225):709–33.
Levey AS, Coresh J, Balk E, Kausz AT, Levin A, Steffes MW, et al. National Kidney Foundation practice guidelines for chronic kidney disease: evaluation, classification, and stratification. Ann Intern Med. 2003;139(2):137–47.
Ene-Iordache B, Perico N, Bikbov B, Carminati S, Remuzzi A, Perna A, et al. Chronic kidney disease and cardiovascular risk in six regions of the world (ISN-KDDC): a cross-sectional study. Lancet Glob Health. 2016;4(5):e307–19.
Zhang L, Wang F, Wang L, Wang W, Liu B, Liu J, et al. Prevalence of chronic kidney disease in China: a cross-sectional survey. Lancet. 2012;379(9818):815–22.
Nugent RA, Fathima SF, Feigl AB, Chyung D. The burden of chronic kidney disease on developing nations: a 21st century challenge in global health. Nephron Clin Pract. 2011;118(3):c269–77.
Hermans MM, Henry R, Dekker JM, Kooman JP, Kostense PJ, Nijpels G, et al. Estimated glomerular filtration rate and urinary albumin excretion are independently associated with greater arterial stiffness: the Hoorn study. J Am Soc Nephrol. 2007;18(6):1942–52.
Navaneethan SD, Schold JD, Arrigain S, Jolly SE, Nally JJ. Cause-specific deaths in non-Dialysis-dependent CKD. J Am Soc Nephrol. 2015;26(10):2512–20.
Lemieux I, Pascot A, Couillard C, Lamarche B, Tchernof A, Almeras N, et al. Hypertriglyceridemic waist: a marker of the atherogenic metabolic triad (hyperinsulinemia; hyperapolipoprotein B; small, dense LDL) in men? Circulation. 2000;102(2):179–84.
Wang A, Li Z, Zhou Y, Wang C, Luo Y, Liu X, et al. Hypertriglyceridemic waist phenotype and risk of cardiovascular diseases in China: results from the Kailuan study. Int J Cardiol. 2014;174(1):106–9.
Blackburn P, Lemieux I, Lamarche B, Bergeron J, Perron P, Tremblay G, et al. Hypertriglyceridemic waist: a simple clinical phenotype associated with coronary artery disease in women. Metabolism. 2012;61(1):56–64.
Li Q, Zhang D, Guo C, Zhou Q, Tian G, Liu D, et al. Association of hypertriglyceridemic waist-to-height ratio and its dynamic status with incident hypertension: the rural Chinese cohort study. J Hypertens. 2019;37(12):2354–60.
Zhao K, Yang SS, Wang HB, Chen K, Lu ZH, Mu YM. Association between the Hypertriglyceridemic waist phenotype and Prediabetes in Chinese adults aged 40 years and older. J Diabetes Res. 2018;2018:1031939.
Ren Y, Zhang M, Zhao J, Wang C, Luo X, Zhang J, et al. Association of the hypertriglyceridemic waist phenotype and type 2 diabetes mellitus among adults in China. J Diabetes Investig. 2016;7(5):689–94.
Chen S, Guo X, Dong S, Yu S, Chen Y, Zhang N, et al. Association between the hypertriglyceridemic waist phenotype and hyperuricemia: a cross-sectional study. Clin Rheumatol. 2017;36(5):1111–9.
Sam S, Haffner S, Davidson MH, D'Agostino RS, Feinstein S, Kondos G, et al. Hypertriglyceridemic waist phenotype predicts increased visceral fat in subjects with type 2 diabetes. Diabetes Care. 2009;32(10):1916–20.
Li Y, Zhou C, Shao X, Liu X, Guo J, Zhang Y, et al. Hypertriglyceridemic waist phenotype and chronic kidney disease in a Chinese population aged 40 years and older. PLoS One. 2014;9(3):e92322.
Zeng J, Liu M, Wu L, Wang J, Yang S, Wang Y, et al. The Association of Hypertriglyceridemic Waist Phenotype with chronic kidney disease and its sex difference: a cross-sectional study in an urban Chinese elderly population. Int J Environ Res Public Health. 2016;13(12):1233.
Zhou C, Li Y, Shao X, Zou H. Identification of chronic kidney disease risk in relatively lean southern Chinese: the hypertriglyceridemic waist phenotype vs. anthropometric indexes. Eat Weight Disord. 2018;23(6):885–92.
Ramezankhani A, Azizi F, Ghanbarian A, Parizadeh D, Hadaegh F. The hypertriglyceridemic waist and waist-to-height ratio phenotypes and chronic kidney disease: cross-sectional and prospective investigations. Obes Res Clin Pract. 2017;11:585–96.
Huang J, Zhou C, Li Y, Zhu S, Liu A, Shao X, et al. Visceral adiposity index, hypertriglyceridemic waist phenotype and chronic kidney disease in a southern Chinese population: a cross-sectional study. Int Urol Nephrol. 2015;47(8):1387–96.
Yu D, Yang W, Chen T, Cai Y, Zhao Z, Simmons D. Hypertriglyceridemic-waist is more predictive of abnormal liver and renal function in an Australian population than a Chinese population. Obes Res Clin Pract. 2018;12(5):438–44.
Qiu Y, Zhao Q, Gu Y, Wang N, Yu Y, Wang R, et al. Association of Metabolic Syndrome and its Components with decreased estimated glomerular filtration rate in adults. Ann Nutr Metab. 2019;75:168–78.
Levey AS, Stevens LA, Schmid CH, Zhang YL, Castro AR, Feldman HI, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604–12.
Levey AS, Coresh J, Greene T, Stevens LA, Zhang YL, Hendriksen S, et al. Using standardized serum creatinine values in the modification of diet in renal disease study equation for estimating glomerular filtration rate. Ann Intern Med. 2006;145(4):247.
Arsenault BJ, Lemieux I, Despres JP, Wareham NJ, Kastelein JJ, Khaw KT, et al. The hypertriglyceridemic-waist phenotype and the risk of coronary artery disease: results from the EPIC-Norfolk prospective population study. CMAJ. 2010;182(13):1427–32.
American Diabetes Association. Executive summary: Standards of medical care in diabetes--2014. Diabetes Care. 2014;37(Suppl 1):S5–13.
Chobanian AV, Bakris GL, Black HR, Cushman WC, Green LA, Izzo JJ, et al. Seventh report of the joint National Committee on prevention, detection, evaluation, and treatment of high blood pressure. Hypertension. 2003;42(6):1206–52.
He S, Zheng Y, Shu Y, He J, Wang Y, Chen X. Hypertriglyceridemic waist might be an alternative to metabolic syndrome for predicting future diabetes mellitus. PLoS One. 2013;8(9):e73292.
Coresh J, Turin TC, Matsushita K, Sang Y, Ballew SH, Appel LJ, et al. Decline in estimated glomerular filtration rate and subsequent risk of end-stage renal disease and mortality. JAMA. 2014;311(24):2518–31.
Despres JP, Lemieux I, Bergeron J, Pibarot P, Mathieu P, Larose E, et al. Abdominal obesity and the metabolic syndrome: contribution to global cardiometabolic risk. Arterioscler Thromb Vasc Biol. 2008;28(6):1039–49.
Weinberg JM. Lipotoxicity. Kidney Int. 2006;70(9):1560–6.
Kwakernaak AJ, Zelle DM, Bakker SJ, Navis G. Central body fat distribution associates with unfavorable renal hemodynamics independent of body mass index. J Am Soc Nephrol. 2013;24(6):987–94.
Thomas R, Kanso A, Sedor JR. Chronic kidney disease and its complications. Prim Care. 2008;35(2):329–44.
Chan CM. Hyperlipidaemia in chronic kidney disease. Ann Acad Med Singap. 2005;34(1):31–5.
The authors thank all the study participants, the leader of the Disease Control and Prevention Center in Songjiang District, Shanghai, for assistance with organization and substantial support, and all the staff of the Community Health Centers in Zhongshan, Xinqiao, Sheshan, and Maogang in Songjiang District for their technical support and contributions to data collection.
This work was supported by the National Key Research and Development Program of China, Precision Medicine Project (2017YFC0907000) and the Shanghai Municipal Education Commission-Gaofeng Discipline Development Project for Public Health and Preventive Medicine (No. 17).
Yun Qiu and Qi Zhao contributed equally to this work.
Department of Epidemiology, School of Public Health, Key Laboratory of Public Health Safety of Ministry of Education, Fudan University, Shanghai, 200032, China
Yun Qiu, Qi Zhao, Na Wang, Yuting Yu, Yue Zhang, Shuheng Cui, Xing Liu & Genming Zhao
Songjiang District Center for Disease Control and Prevention, Shanghai, 201600, China
Ruiping Wang, Meiying Zhu & Yonggen Jiang
Yun Qiu
Qi Zhao
Na Wang
Yuting Yu
Ruiping Wang
Yue Zhang
Shuheng Cui
Meiying Zhu
Xing Liu
Yonggen Jiang
Genming Zhao
YQ, QZ, and NW conceived and designed the study. YY, RW, YZ, and SC contributed to the data analysis and interpretation. MZ, XL, and YQ contributed to data acquisition and the manuscript draft. YJ and GZ supervised the study and revised the manuscript. All authors approved the final version of the manuscript to be submitted.
Correspondence to Yonggen Jiang or Genming Zhao.
The study protocol was approved by the Ethics Committee of the School of Public Health, Fudan University (IRB#2016-04-0586) and complied with the principles of the Declaration of Helsinki. Written informed consent was obtained from all participants before data collection.
The authors have no conflicts of interest to declare.
Qiu, Y., Zhao, Q., Wang, N. et al. Association of hypertriglyceridemic waist phenotype with renal function impairment: a cross-sectional study in a population of Chinese adults. Nutr Metab (Lond) 17, 63 (2020). https://doi.org/10.1186/s12986-020-00483-7
Hypertriglyceridemic waist phenotype
Renal function decline
|
CommonCrawl
|
Computer science is the study of computer technology, including hardware and software. Because computers dominate many aspects of modern life, computer science is a popular area of study for college students.
When you study computer science in college, you'll spend time designing, analyzing and implementing algorithms and computer code to solve problems. You'll take a wide range of classes to cover the field's broad array of topics, with a special focus on math skills.
Computer science is an exciting, evolving field with an excellent job outlook. In fact, computer science-related jobs are some of the most in-demand positions in the U.S. and around the world. Some popular career paths for computer science majors include:
Computer systems analyst
If you're a college student taking a challenging computer science course and you need extra help with assignments or grasping important concepts, essayhelpp.com has a team of computer science experts to help. Whether you're just starting to learn about computer science or taking advanced classes, we'll match you with the right tutor to help you succeed.
Online Computer Science Tutors
When you come to essayhelpp.com for computer science help, you'll receive expert assistance from our online tutors. You can schedule a tutoring session or get homework help on virtually any computer science topic.
Tutoring Sessions
Schedule a live, virtual tutoring session with one of our computer science experts to get all the help you need. We use state-of-the-art whiteboard technology with video, audio, desktop sharing and file upload capabilities. When you schedule a session in advance, you can upload materials like homework, notes and old quizzes for your tutor to review ahead of time.
If you're struggling with an algorithm or coding assignment, submit a request to get homework help from our knowledgeable tutors. They'll give you detailed explanations and examples of concepts related to your assignment that you can use to help create your solution.
If you need immediate assistance, search our Homework Library to find solved computer science problems related to your assignment.
Computer Science Topics
Our online computer science tutors can help you with any topic, from basic programming to advanced algorithms. We have tutors that specialize in different areas of computer science and programming languages. You can get help on a vast array of computer science topics across four main categories:
Theoretical computer science: This area of study uses logic and computation to solve software problems. Examples include coding theory, data structures and algorithms.
Computer systems: These classes typically comprise a study of computation structures including computer architecture and engineering.
Computer applications: Here, you cover cases where computers are used to solve real-world problems. Topics include artificial intelligence, scientific computing and computer visualization.
Software engineering: This is the study of creating software, including design and implementation using programming code. 24HourAnswers supports students learning numerous coding languages, including Java, C++, Python, HTML, PHP and many others.
Why Choose Essayhelpp.com
essayhelpp.com connects you with highly qualified computer science tutors. Unlike other online tutoring services that employ only college students, we have an elite team of experienced professionals. Many of our tutors have advanced degrees in their field, including doctorates or equivalent certifications. We meticulously prescreen all our applicants, carefully reviewing their qualifications to ensure we hire only the best computer science tutors.
We also provide 24/7 assistance to help you when you need it most. If you're in a time crunch and need answers as soon as possible, you can count on us to provide fast, reliable academic support.
We make it quick and easy for you to get help with our straightforward process. Simply enter your request or question, upload any relevant files, enter a due date and specify your budget to get started. You'll hear from a tutor promptly — sometimes within minutes — with a quote. Your quoted price is unique to your request, with no hidden costs or obligations. You're also free to discuss the quote with your tutor.
Request Help Today!
Creating an account takes less than 30 seconds. Submit your request for online tutoring or computer science homework help today!
Why Choose Essayhelpp.com?
Get the Help You Need, Whenever You Need It
To fulfill our tutoring mission of online education, our college homework help and online tutoring centers are standing by 24/7, ready to assist college students who need homework help with all aspects of computer science. Our computer science tutors can help with all your projects, large or small, and we challenge you to find better online computer science tutoring anywhere.
Does technology always follow science?
What is a lock screen?
Consider the insertion of items with the following keys (in the given order) into an initially empty AVL tree: 30, 40, 24, 58, 48, 26, 11, 13. Draw the final tree that results.
A hard disk drive provides ________ storage so data is retained after the power is turned off.
Why is Grace Hopper famous?
Why is Grace Hopper exemplary?
Is Siri considered artificial intelligence?
Answer the following multiple choice questions. 1. When you use a computer, the following materials are loaded into the RAM. A. Application software B. Operating system software C. User data D. Non…
What are the four main functions of a computer?
Assume that you are the Information Technology Director of a major chain restaurant, and you have been assigned to design a menu ordering application that can run on all devices. Examine whether us…
Imagine that you are the Information Technology Director of a major chain restaurant, and you have been assigned to design a menu ordering application that can run on all devices. Examine whether u…
You have been assigned to resolve several issues on the open-items list but you are having a hard time getting policy decisions from the user contact. How can you encourage the user to finalize the…
What does CSC stand for in computer science education?
What are developer options?
How did Clarence Ellis die?
Consider the following MIPS loop and answer the questions below.
How does income affect computer science education?
What is computer literacy? a. Knowledge and familiarity with computer applications b. The ability to type quickly and efficiently c. The ability to fix a computer d. All of the above
1. What events do the following components generate: JButton JTextField JComboBox 2. What methods does JTable implement which are required by the interfaces implemented by the JTable class beyond t…
What part of the system unit processes the input data?
What features do the following data types have which improve a program? Processing speed Memory Accuracy
Create a policy for 802.11 Wi-Fi security in a wireless network in a 500-employee company with a 47-access point WLAN. Make it a document for people in your firm to read.
Show, using truth tables, that the propositional logic expression P XOR Q XOR S is well-defined: that is, you can treat it as (P XOR Q) XOR S or P XOR (Q XOR S) and both statements are equivalent. (Note…
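As a quick illustration of the kind of check this question asks for (added here as a sketch, not part of the original question), a few lines of Python can enumerate every assignment of truth values and confirm that the two groupings agree:

from itertools import product

# Build the truth table for both parenthesizations of P XOR Q XOR S.
for p, q, s in product([False, True], repeat=3):
    left = (p ^ q) ^ s
    right = p ^ (q ^ s)
    print(p, q, s, left, right)
    assert left == right  # XOR is associative, so the two parses always agree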
Show a truth table for the following functions. a. F = YZ + X'Z' b. G = X'Y + (X+Z')(Y+Z) Find the complement of the following expressions. a. (a+c)(a+b')(a'+b+c') b. x'y'+x'y
OS Question: There are 4 processes, executing concurrently. Process P0 is in an infinite loop, incrementing the value of the variable x (x is initialized to 0). P0 is the only process that changes…
What is a virtual workstation?
What are the areas of specialization in computer science?
What does a screen protector do?
What is streaming architecture?
Did Alan Turing go to college?
Why is Mark Dean important?
What is a blade workstation?
What is a workstation assessment?
On your own computer, you will want to __________ regularly to preserve your documents, photos, music, and all the important information you have.
What is a mobile workstation?
Who coined the term cyberspace?
What the functions of RAM, CPU, and DVD drive?
What is an algorithm?
What happens when your computer overheats?
How might prescribing medications for depression be improved in the future to increase the likelihood that a drug would work and minimize side effects?
Write a computer program (in your preferred programming language) that calculates the approximate value of sin(x) using the Maclaurin series approximation sin x = x - x^3/3! + x^5/5! - x^7/7! + ... up to…
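A minimal sketch of this kind of approximation, written here in Python (the question leaves the language and the number of terms up to you), might look like:

import math

def maclaurin_sin(x, terms=10):
    # Sum the first `terms` terms of the Maclaurin series for sin(x).
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

print(maclaurin_sin(1.0), math.sin(1.0))  # the two values should agree closely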
Draw a binary tree T that simultaneously satisfies the following: -Each internal node of T stores single character. -A preorder traversal of T yields EXAMFUN. -An inorder traversal of T yields MAFX…
Question 1. Draw a single binary tree. T, such that each internal node of T stores a single character, a preorder traversal of T yields HOECLFNKJIANGDB, an inorder traversal of T yields EONFKLJCAIN…
Which of the following identities are true? a) (L / a) a = L (the left side represents the concatenation of the languages L/a and {a} b) a(a\L) = L (again, concatenation with {a}, this time on the…
What is Carence Skip Ellis famous for?
Colossus used valves. What was the role of these valves?
Why do digital computers use binary numbers for their operation?
Which of the following types of data are likely to be normally distributed? select all correct answers a) The outcomes of rolling a single fair die b) The time it takes for an airliner to fly from…
Why is Charles Babbage called the father of computer?
Consider the following three relations: TRAVEL_AGENT(name, age, salary) CUSTOMER(name, departure_city, destination, journey_class) TRANSACTION(number,cust_name, travel_agent_name, amount_paid) Writ…
Which of the following is not a hardware approach to mutual exclusion? (A) Interrupt disabling (B) Compare and Swap instruction (C) Spin waiting (D) Exchange instruction (E) Semaphores
With the 2-D parity method, a block of bits with n rows and k columns uses horizontal and vertical parity bits for error correction or detection. During the class, we show that 2-D parity method ca…
How did computers change during the 1990s?
Write a Java program that performs the following mathematical tasks in the following order, using double values for all input and calculations and Math.PI when the value of pi is required. Ask the…
How is computer numerical control (CNC) distinguished from conventional NC?
A company has to choose a new computer system. The most important objective is to buy a system with
Biological computers will be developed using {Blank}. a) Biosensor b) Gene c) Bio-Chips d) Enzymes
What happens when pin 2 is grounded in the 4-bit ripple counter?
Write a Java class that implements the concept of Coins, assuming the following attributes (variables): number of quarters, number of dimes, number of nickels, and number of pennies. Include two c…
One of the characteristics of computer based simulation is the use of random number generators to imitate probabilistic system behaviour. a. True b. False
Describe how you could obtain a statistical profile of the amount of time spent by a program executing different sections of its code. Discuss the importance of obtaining such a statistical profile.
Which is not a factor when categorizing a computer? A. Speed of the output device B. Amount of main memory the CPU can use C. Cost of the system D. Capacity of the hard disk E. Where it was purchased
What is cyberphobia?
1. Create a view named product_summary. This view should return summary information about each product. Each row should include product_id, order_count (the number of times the product has been ord…
Create a view named product_summary. This view should return summary information about each product. Each row should include product_id, order_count (the number of times the product has been ordere…
Write a console program in C# that reads two pairs of grid coordinates (X1, X2) and (Y1, Y2) from the user, and then calculates and outputs to the console window the Euclidean and Rectilinear (or M…
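Although the question asks for C#, the two distance formulas themselves are easy to illustrate; the following Python sketch (a stand-in only, using generic points (x1, y1) and (x2, y2)) shows the computation the program would perform:

import math

def euclidean(x1, y1, x2, y2):
    # Straight-line distance between the two grid points.
    return math.hypot(x2 - x1, y2 - y1)

def rectilinear(x1, y1, x2, y2):
    # Manhattan (rectilinear) distance: sum of the absolute coordinate differences.
    return abs(x2 - x1) + abs(y2 - y1)

print(euclidean(0, 0, 3, 4))    # 5.0
print(rectilinear(0, 0, 3, 4))  # 7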
Develop a computational model (with a C program) that computes the slope of a line between two points in the cartesian plane with coordinates (x1, y1 ) and (x2, y2). The program should include code…
When did Philip Emeagwali get married?
What are the two types of removable and portable data storage?
What are the specific requirements for the virtualization workstation?
What can a computer program be best described as?
1. Which of these formulas gives the maximum total number of nodes in a binary tree that has N levels? (remember that the root is level 0.) a. N^2 – 1 b. 2^N c. 2^{N+1}-1 d. 2^{N+1} 2. Which…
How can quantum computing be applied to solve challenging questions in biology or bioinformatics?
1. Given A = (1100 0001 1101 1101 1000 0000 0000 0000)2 and assuming A is a single precision IEEE-754 floating point number, what decimal value does A represent? Show A in normalized scientific not…
Professional sports is a huge money maker. How does technology support this? Some ideas to get you started include high-tech stadiums, RFID on players, social media, and big data. Discuss the probl…
What skill has Mary developed if she is able to type, text, web search, and use Internet databases and software? a. Critical thinking b. Computer literacy c. Cell phone literacy d. Technical advocacy
A hypothetical college freshman likes math, and ultimately wants to engage with living systems on the level of biomolecules. What should he study?
What is meant by encryption and decryption?
How can a personal computer act as a terminal?
(a) What is the difference between a compare validator and a range validator? (b) When would you choose to use one versus the other?
What is a client workstation?
How many anti-symmetric relations on the set A = {1,2,3,4,5,6} contain the ordered pairs (2,2), (3,4) and (5,6)?
How do smartphones maintain an inertial frame of reference?
If r and s are characters, prove that: (rs+r)*r = r(sr+r)*
The _______ aspect of the user interface includes speech recognition software and computer-generated speech. (a) physical (b) conceptual (c) perceptual (d) virtual.
Briefly describe the in-memory structures that may be used to implement a file system. Briefly describe the different levels of RAID with their key features.
Which of the following is (are) true of the EDP auditors? A. they will be replaced by traditional auditors in the near future. B. they should have computer expertise. C. two of the above. D. c…
What common objects benefit from an embedded computer?
What capabilities can you envision from an embedded computer?
Is computer science a natural or social science?
Create B tree and B+ tree of degree 3 for the following sequence of keys. Show the structure in both cases after every insertion. 21, 30, 56, 17, 19, 48, 29, 24
Write a function called bubble_sort() that accepts an array of pointers to strings and the number of strings as arguments, and returns nothing. The function sorts the strings according to the follo…
The auto repair shop of Quality Motor Company uses standards to control the labor time and labor cost in the shop. The standard labor cost for a motor tune-up is given below: |Job|Standard Hours| S…
Who invented the first smartwatch?
To continue learning in digital forensics, you should research new tools and methods often. Write a guide on how to load a VHD file converted from a ProDiscover .eve image file into VirtualBox. The…
Given the following grammar: S -> Sa | SSb | b | Write a leftmost derivation for the sentence bbaaba.
mov ax, 32770d cmp ax, 2d jge xyz ; A jump occurs: True or false?
Because A-B=A+ (-B), the subtraction of signed numbers can be accomplished by adding the complement. Subtract each of the following pairs of 5-bit binary numbers by adding the complement of the sub…
This type of projection is when projectors are parallel to each other, but are at an angle other than 90 degrees to the plane of projection: A. Oblique projection B. Perpendicular projection C. Ae…
What did Clarence Ellis invent?
Fill the final vales of the Register file and Data memory after the following assemble code has been executed. Values in ( ) are the initial values before the assemble code is executed. Show tempor…
What are the basic parts of a laptop computer?
What is formal grammar?
What is a regular grammar?
What is pipe-and-filter architecture?
What is difference between a desktop and a workstation?
What is vaporware?
What is a crash dump?
What is a memory dump error?
What is the difference between a computer being in sleep and hibernate?
What is the information processing cycle?
What is TOGAF architecture framework?
What is platform architecture?
What is structural grammar?
What is context-sensitive grammar?
What is context-free grammar?
What is a digital audio workstation?
Who established computer science?
What is computer science education?
What is a hybrid computer?
What science term did Grace Hopper invent?
What is a real-time system?
What is the numeric character reference for the apostrophe?
What is Flynn's taxonomy?
Where was Dr. Mark Dean born?
What is a system builder?
What is lexicon-based sentiment analysis?
What is ambiguous grammar?
What is scalable architecture?
Can computers have a conscious mind?
What year was the analytical engine invented?
What is the output of the following code if the user typed the letter 'C', then hit the 'Enter' key: print("Type Control C or -1 to exit") number = 1 while number != -1: try: number = int(input("E…
Give an E/R diagram for a database that records information about teams, players, and their fans, including: 1. For each (unique) team, its name, its players, its captain (one of the players), and…
What is a startup disk?
What does BIOS provide for the computer?
Where did Mark Dean grow up?
What is a driver update?
How is CAD used in architecture?
Who invented the first computer?
What is SOA architecture?
What is pipeline architecture?
Which of the following is a programming language that permits Web site designers to run applications on the user's computer? (a) Python (b) Ruby (c) Java (d) Smalltalk.
The hardware of a computer system includes the computer itself and other devices that help the computer perform its tasks. These other devices are commonly also called: A. Helper equipment B. IT…
What is an active workstation?
Create a program in IAR embedded workbench that simulates an alarm clock using the Mspexp430g2. Watchdog timer needs to be included.
What is the main circuit board of a system unit?
Consider the following solution to the mutual-exclusion problem involving two processes P0 and P1. Assume that the variable turn is initialized to 0. Process P0's code is presented below. /* Other…
Find all the solutions to x011 = 011x where x ∈ {0, 1}*.
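Purely as an exploratory aid (not a proof), a short brute-force Python sketch can check which short binary strings satisfy the equation; the strings it prints suggest that the solutions are exactly the powers of 011:

from itertools import product

# Try every binary string x of length 0..9 and keep those with x + "011" == "011" + x.
for n in range(10):
    for bits in product("01", repeat=n):
        x = "".join(bits)
        if x + "011" == "011" + x:
            print(repr(x))  # prints '', '011', '011011', '011011011'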
What decimal value does the 8-bit binary number 00010001 have if: a) it is interpreted as an unsigned number? b) it is on a computer using signed-magnitude representation? c) it is on a computer…
A. Discuss the different scheduling algorithms with respect to (a) waiting time, (b) starvation, (c) turnaround time, and (d) variance in turnaround time. B. Which scheduling algorithm was noted as
Create a logic for a program that accepts an annual salary as input. Generate a method that calculates the highest monthly housing payment the user can afford. Assuming that the year's total paymen…
Write the HTML code to create the following. a. A text box named username that will be used to accept the user name of web page visitors. The text box should allow a maximum of 30 characters to be…
Consider a data center heavily built on Hyper-V and the ability to clone virtual machines from template VMs or from other existing VMs. How does such a highly virtualized data center change the dep…
How computer technology enhanced psychological research? Explain in brief.
One company that operates a production line that fills 16-ounce boxes of pasta shells uses a quality assurance process that involves randomly selecting & weighing four boxes every hour. The sample…
Acceptance test plans can be defined as a set of tests that if passed will establish that the software can be used in production. True False
The Waterfall methodology is the most straightforward of all of the SDLC (Software Development Lifecycle) methodologies, because there are strict phases and each phase needs to be completed first b…
Understanding the user's needs is mission critical to a successful SDLC (Software Development Lifecycle). True False
The SDLC (Software Development Lifecycle) is a framework defining tasks performed at each step in the software development process. True False
All of the following accurately apply to project performance reporting EXCEPT: 1. Progress report meetings are a good way to capture lessons learned. 2. Progress can be reported at varying degrees…
Identify the term being described by the following statement: A diagram that depicts the sequencing of activities needed to complete a project.
Social media tools make it easier to manage employees at work. (a) True (b) False.
Identify the term being described by the following statement: Endpoints that represent completion of major activities.
(a) What is knowledge management? (b) What are its primary benefits?
What does the Nyquist theorem have to do with communications?
What is meant by sampling theorem?
True or false? The more ingrained IT becomes in a company, the greater is the risk that the firm will not be able to function properly if IT suffers a major failure.
The Gartner survey found that large projects are less likely to fail because they have management's attention and organizational support. a. True. b. False.
Construct a Gantt chart for the following set of activities. Indicate the total project completion time and the slack for each activity. Submit a plain text version of your Gantt chart by using das…
Write the electron configuration, Lewis structure, molecular geometry, and electronic geometry for PCl_5.
Describe the phases of the project life cycle.
What value does the SOW have in a project?
What is a namespace in Python?
"Willingness to pay" and "willingness to accept" estimates of the value of environmental and natural resources often can vary by a factor of up to 10. Briefly, why do you think that the two approac…
You have a large basket of assorted candy, including Starbursts, Reese's Cups, and Hershey's chocolate. There are three students to whom you want to evenly distribute the candy. You have three empt…
How has the devolution of regulatory enforcement responsibilities from the federal government to the states affected environmental outcomes in America?
Projects differ from business processes for all of the following reasons except ________. (a) projects are temporary (b) projects have a unique output (c) there are no pre-assigned jobs (d) project…
Find all solutions to the system: \begin{Bmatrix} \frac{(x + 2)^{2}}{25} + \frac{(y – 3)^{2}}{4} = 1 \\ \frac{(x + 2)^{2}}{4} + \frac{(y – 3)^{2}}{25} = 1 \end{Bmatrix}. Give exact answers, not de…
How can knowledge management systems be incorporated into business systems?
Why is Tim Berners-Lee important?
Why did Tim Berners-Lee create the World Wide Web?
Where did Tim Berners-Lee work?
Where did Tim Berners-Lee grow up?
Where did Tim Berners-Lee go to school?
When was Tim Berners-Lee knighted?
What awards has Tim Berners-Lee won?
How did Tim Berners-Lee create the World Wide Web?
What came after the technological revolution?
What was part of the late 20th-century information revolution?
Which invention initiated the Digital Revolution?
Where did Claude Shannon live?
True or false? Controlling through the use of rules and budgets can lead to a loss of creativity in an organization.
What is Charles Babbage's analytical engine?
A series of statements, or instructions, processed by a computer would be best described as: a. software. b. hardware. c. program. d. conversion. e. procedure.
All of the following are among the activities a project manager should undertake to develop a highly effective team except: a. assess sponsor capability b. assess project team capability c. establi…
A(n) _____ device provides data and instructions to the computer and receives results from it. a. expansion b. back-side c. internal d. input/output
What is a schedule baseline and what is it used for?
Security controls are the main mechanisms used to reduce risk consequence and risk likelihood. (a) True (b) False.
Information systems that support processes spanning two or more independent organizations are termed _________ information systems. (a) functional (b) personal (c) workgroup (d) inter-enterprise (e…
Which of the following networks monitors environmental changes in a building? A. Content delivery B. Mobile ad-hoc C. Wireless sensor D. Cloud computing
What are the fundamental problems of information silos, and how can they be remedied?
What is the definition of an information silo?
What are the main reasons why projects are delayed?
Determine if the following statement about capital investment project scoring is correct: Project scoring combines the payback, net present value, and internal rate of return values to create a sin…
How does software add value to automakers' products?
Alibaba IPO set record on September 18, 2014, the Chinese e-commerce company completed the largest initial public offering (IPO) in history, raising more than $21 billion by selling 320 million sha…
(a) Explain how the deep web contains hidden information. (b) What kinds of E-business function on the deep and dark web?
What types of information systems are used by organizations?
List two (2) types of system software and three (3) examples for each type of system software listed.
Why are accurate estimates critical to effective project management?
A mixture of 14.2 grams of He(g) and 36.7 grams of Ar(g) is placed in a 100 mL container at 290 K. What is the mole fraction of each gas?
What is one pitfall of online business?
"For all the speeches given, the commercials aired, the mud splattered in any election, a hefty percentage of people decide whom they will vote for only at the last minute." Suppose that a sample o…
What components of a decision support system allow decision-makers to easily access and manipulate the DSS and use common business terms and phrases? a) The knowledge base b) The model base c) T…
A web browser is best described as a? a) General purpose application program b) Application specific program c) System management programs d) System development programs
(a) What is knowledge management? (b) Explain the differences between the two fundamental approaches to knowledge management.
Which of the following do transactional e-commerce businesses provide? a) Sale of goods and services. b) Sale of goods only. c) Online sale of goods. d) Online sale of transactions.
(a) Explain the differences between functional, matrix, and project organizations. (b) Describe how each structure affects the management of a project.
The ____ manages the hardware components, including the CPU, memory storage, and peripheral devices. a) operating System b) device Drivers c) motherboard d) ports
Which type of network is contained over a large geographic area? a) Local area network b) Wide area network c) Start topology d) None of the above
Johnson Company produces plastic bottles to customer order. The quality inspector randomly selects four bottles from the bottle machine and measures the outside diameter of the bottle neck, a criti…
The computer and equipment that connects to it are called the? a) Hardware b) Motherboard c) Software d) Control Unit
A ____ is a programmable electronic device that can input, process, output, and store data. a) computer b) motherboard c) CPU d) operating system
SSL is a ____. a) Protocol b) A technology c) A kind of digital signature d) A virus
Who is involved in helping users determine what outputs they need and constructing the plans needed to produce these outputs? a) CIO b) Application programmer c) System programmer d) System anal…
What consists of all activities that, if delayed, would delay the entire project? a) Deadline activities b) Slack activities c) RAD tasks d) The critical path
List three steps to implement a retention schedule.
Find the gradient field F = ∇ψ for the potential function ψ = x^2 y − y^2 x.
A project team is studying the downtime cost of a soft drink bottling line. Data analysis in thousands of dollars for a three-month period is as follows. Back-Pressure Regulator = 30 Adjust Feed Wo…
Is artificial intelligence chaos theory?
What is visual perception in artificial intelligence?
Ghost Squadron Historical Aircraft, Inc. (GSHAI) is considering adding a rare World War II B-24 bomber to its collection of vintage aircraft. The plane was forced down in Burma in 1942, and it has…
What is the total amount budgeted for March in this time-phased budget? Activity January February March April Survey $8,000 Design $5,250 $4,500 Dirt $6,700 Foundation $12,550 Framing $37,250 Plumb…
What are some similarities and overlaps between ITIL and COBIT? What are their distinct differences? What are some challenges to implementing ITIL and COBIT in an organization?
What is an autonomous system in OSPF?
What is a domain of learning?
The greatest commitment of cost occurs in the acquisition phase of the life cycle. (a) True (b) False.
List and briefly describe three of the five common classifications of EC by the nature of the transaction.
How might a company's internet presence improve information problems?
Why should an organization be interested in knowing what level they are at in the project maturity model?
Compare and contrast simple lists such as a mailing list with a customer database.
Evaluate how the level of environmental concern varies among different groupings (social, economic, racial, national, etc.).
Show that the vector field F = (2 x^2 y^2 + y) i + (x^3 y – x) j is not conservative.
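A worked check along the standard lines (added as an illustration; it follows directly from the field as stated): for F = P i + Q j to be conservative one needs ∂P/∂y = ∂Q/∂x, but here

\frac{\partial}{\partial y}\bigl(2x^{2}y^{2}+y\bigr) = 4x^{2}y + 1, \qquad \frac{\partial}{\partial x}\bigl(x^{3}y - x\bigr) = 3x^{2}y - 1,

and since 4x^2 y + 1 and 3x^2 y − 1 are not equal as functions, F is not conservative.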
A firm is contemplating whether to invest in a new project. The project requires an investment of 1 unit and can be "good" or "bad." If the project is good, it pays off 1.5 units. If it is bad, it p…
Using the industry life cycle model, explain how the threats and opportunities for existing firms in an industry change over time.
A firm is considering a new project whose risk is greater than the risk of the firm's average project, based on all methods for assessing risk. In evaluating this project, it would be reasonable fo…
What is functional analysis in information management?
A point like projectile of mass m is fired upward from the origin O. At any time after launch, the position and velocity of the projectile are described by the variables x, y, x' and y' (where x' i…
Identify the most important differences between object-oriented programming languages and generations 1-4 of (often called top-down or structured) programming languages. How are they similar?
How do I buy an online company?
Suppose that two investments have the same three payoffs, but the probability associates with each payoff differs, as illustrated in the following table. If Ken has the following utility function:…
How may you use the study techniques recommended for your personality type and most vital intelligence to function best in a distance-learning environment?
Over three-quarters of the companies that responded to an information technology survey stated they monitor employees' computers to determine which Websites had been selected. a. True b. False
Explain the terms primary key, candidate key, and foreign key. Give an example for each.
Which forces are part of the macroenvironment?
Explain if the for-profit markets become more or less competitive because of online retailing.
How does online retailing impact for-profit markets?
Apple is considering expanding a store. Identify three methods management can use to evaluate whether to expand.
What role do internet subcultures play in shaping peoples' morals and norms, and how do online communities impact criminal behavior? Provide an example in your response.
Who should support or maintain the IT infrastructure? A) programmers and technicians who work for the company B) experts in the various specialties who contract with the company C) a government age…
Information constraints are one of the main factors that keep markets separate. Overseas Chinese commerce has been portrayed as "borderless". The personal connections of many overseas Chinese busin…
A hospital emergency room needs the following numbers of nurses: Day Mon. Tue. Wed. Thur. Fri. Sat. Sun. Min no. 4 3 2 4 8 6 4 Each nurse should have two consecutive days off. How many full-time…
What is an Autonomous System Number?
A small stone is kicked from the edge of a cliff. Its x and y coordinates as function are given by x=17.7 t and y=4.24 t – 4.90 t^2, where x and y are in meter and t is in seconds. Obtain the expre…
A key component of a DBMS is the data, which contains information about the structure of the database. For each data element stores in the database, such as the customer number, there is a correspo…
The expression "= (Price) \times (Quantity)" would most likely be found in: A. a bound control. B. a calculated control. C. a label control. D. an unbound control.
Show that F = 3 x^2 z x hat i + z y hat j + (x^3 + 2 y z) hat k, is a conservative field and find its potential.
Microsoft Word is non-rivalrous and excludable, so the market outcome – where all consumers and firms pay the same price – is efficient. Indicate whether the statement is true, false, or uncertain….
How can the Systems Development Life Cycle (SDLC) be used to analyze, plan, and document systems changes in an organization? Briefly explain.
How would companies like Ford, Chevrolet, etc. utilize information systems with respect to CIO, CSO and CKO?
What are the stages of design and development of an organization's information system?
Which of the 4 age groups shows the lowest average satisfaction rating for the shampoo? a) Youth b) College c) Adult d) Senior e) None – all show the same average satisfaction level per ANOVA…
The goal of this experiment is to test the hypothesis that the flux of light decreases as the square of the distance from the source. In this case, the absolute value of the voltage measured by the…
Explain the following five cybersecurity risks in the banking industry: 1. Unencrypted data 2. Malware 3. Third-party services that are not secure 4. Data that has been manipulated 5. Spoofing
True or False. Explain. A project supports a fixed amount of debt over the project's economic life.
U.S. environmental policies: a. must be approved by the United Nations. b. impose all of their costs on businesses, not consumers. c. limit the size of public utilities. d. none of the above.
When collecting samples from the environment, why is it necessary to include a sterile water control tube?
How does a crossing system differ from an electronic exchange?
What are the advantages and disadvantages between the traditional and online learning model?
Enter the following data into a variable p with c () 2 3 5 7 11 13 17 19 Use length() to check its length.
The U.S., Europe, and the U.N. may all be guilty of significant naivety regarding information and communication technology. What is the "Information Society?" What are some of the negative aspects…
For this assignment, write a Java application that prompts the user for pairs of inputs of a product number (1-5), and then an integer quantity of units sold (these are two separate prompts for inp…
Use Java. Write a program that prompts the user for a file name, assuming that the file contains a Java program. Your program should read the file and print its contents properly indented. When you…
Write a Java program that prompts the user for a file name, assuming that the file contains a Java program. Your program should read the file and print its contents properly indented. When you see…
Write a program that prompts the user for a file name, assuming that the file contains a Java program. Your program should read the file and print its contents properly indented. When you see a lef…
Write a loop to require the user to enter two integers; the second integer must be equal to, or larger than, the first integer. Both integers must be at least 1 and not larger than 20. If they do…
Write a program that asks the user for the name of a file. The program should display the contents of the file on the screen. If the file's contents won't fit on a single screen, the program should…
What is the purpose of an interrogator? a) To read information from the tags b) To prevent unauthorized access to the tags c) To increase the read distance d) To store a charge that powers pass…
The use of flexible manufacturing systems, properly sequencing jobs, and properly placing tools will minimize ________. (a) setup time (b) processing time (c) moving time (d) inspection time.
Identify 3 different decisions that can be supported, due to the value of information systems.
Is there somewhere online that I can learn how amortization of intangible assets is refigured for the purposes of calculating earnings and profits?
Over the years, hardware has become cheaper. This is especially true with respect to hard drive capacity. Keeping that in mind, are disk quotas worth implementing? Why or why not? If you were asked…
What is the difference between infrastructure and architecture?
What is a structure in C programming language?
Does dimensionality have to be upheld to do an inner product?
A production operation is a single-channel, single-phase, unlimited-queue-length queueing system. Products arrive at the operation at an average rate of 90 per hour, and the automated operation pro…
An organization has a goal to prevent the ordering of inventory quantities in excess of its needs. One individual in the organization wants to design a control that requires a review of all purchas…
_ the airlines, trucking, and long-distance phone calling has led to lower prices for consumers. a. Regulation of b. Trusts in c. Deregulation of d. Conglomerates in
Why are databases challenging for doing data analysis?
Many companies use the computer to generate purchase orders. Who is responsible for authorizing a purchase when the computer generates the purchase order? How does the responsible individual ensure…
What are five layers in the Internet Protocol stack? What are the principal responsibilities of each of these layers?
The key reasons why EC criminals cannot be stopped include each of the following EXCEPT: A. strong EC security makes online shopping inconvenient and demanding on customers. B. sophisticated hack…
Given the immediate predecessors and a, m, b for each activity in the table below, compute: 1) t_e and V for each activity 2) ES, EF, LS and LF for each activity 3) T_e and V_p for the proj…
What does WLAN mean?
What do you see if you run the program and then press the '+' key 22 times? a. A 10 sided polygon, b. A square, c. Nothing, d. A 9 sided polygon.
As applied to virtual teams, the "equalization phenomenon" is that _____. a. virtual teams are comprised of members of equal status b. computer-mediated technology may reduce status effects c. empl…
Consider the vector field \langle 0, 3, -5 \rangle \times \mathbf{r}, where \mathbf{r} = \langle x, y, z \rangle. a. Compute the curl of the field and verify that it has the s…
Management teams are formed to take on "one-time" tasks that are generally complex. a. True b. False
a. How can technology be used to prevent internal data breaches from SQL queries? b. What technology can be used to do the same?
What is the distinction between the system and the user space?
In about 100 words, outline and briefly describe the main function of shopping cart software
What is the difference between an IP address and an IP packet?
What is the difference between a router and a modem?
When you type text into a shape, it is _______. (a) right aligned (b) centered both horizontally and vertically (c) left aligned and centered vertically (d) aligned with top of the shape.
Which of the following is not an effect that E-commerce has had on organizations? (i) E-commerce enables smaller businesses to operate in areas dominated by larger companies. (ii) E-commerce incre…
Find the component form of the vector PQ, where P = (1, 3) and Q = (2, -1).
The practice of assigning project team members to multiple projects is called: A. concurrent engineering. B. parallel activities. C. fast-tracking. D. multitasking.
Write a program which creates a binary search tree of different shapes from a file. – The comparison is based on the shape's area – There are 3 shapes o Rectangle o Circle o Right Triangle – The fi…
Explain how to compute the derivative of r(t) = (f(t), g(t), h(t)). Choose the correct answer below. A) The derivative of r(t) is the scalar-valued function r'(t) whose value is the dot product of r(t)…
A majority element in an array, A, of size N is an element that appears more than N/2 times (thus, there is at most one). For example, the array 3,3,4,2,4,4,2,4,4 has a majority element (4), where…
Which answer is correct? If there is aggregate uncertainty, A. the individual will trade state-contingent contracts to smooth all their risk. B. markets will be inefficient C. full insurance is…
Do product marketing directors work with SSPs?
Let C be the curve determined by r(t) = (t^2 + 1) i + t cos(t) j + sin(t) k. a. Find a tangent vector in the direction of increasing t at the point where t = 3. b. Find the ge
Let C be the curve determined by r(t) = (t^2 + 1) i + t \cos(\pi t) j + \sin (\pi t) k . Find a tangent vector in the direction of increasing t at the point where t = 3
Find the outward normal for the curve r(t) = 5 cos t j + 5 sin t k. a. What is the type of representation of the curve? Give the equations for the unit tangent and the unit outward normal in this case….
Which is best software to manage activities and to generate invoices for exporters?
Objectives 1. To give students practice at typing in, compiling and running simple programs. 2. To learn how to read in input from the user. 3. To learn how to use assignment statements and arithme…
What are the components of a hydroponic project?
Describe both online and traditional learning models.
Implement a function in assembly that would do the following: loop through a sequence of characters and swap them such that the end result is the original string in reverse (Hint: collect the str…
For the function f (x, y, z) = 2 x^2 + 3 y – z. (a) Find f (1, -2, 1). (b) Find f (-2, 1, 0).
Sketch the vector field in the plane along with its horizontal and vertical components at a representative assortment of points on the circle x^2 + y^2 = 4. F = x \mathbf i – y \mathbf j
Why was Sephadex used as the resolving column material?
What challenges have you faced in leading a team through an analysis project?
What is the difference between C++ and Python in terms of capabilities?
What is the world's most extensive public communication system?
Which of the following is false? A. The keyboard shortcut to delete rows or columns is CTRL + minus sign. B. The keyboard shortcut to add more sheets to a workbook is CTRL + N. C. The keyboard s…
Define the term hijacking as it relates to cybersecurity.
Consider a smart array that automatically expands on demand. It starts with a given capacity of 100, and whenever it expands it adds 50 to the current capacity. So, how many total units of work…
________ items are best suited for selling on the World Wide Web. a. high-volume b. low-margin c. commodity d. all of the above.
How do you create a Perl Module that executes system commands to retrieve system metrics like CPU utilization?
Given the vector x = [1 8 3 9 0 1], create a short set of commands that will: a. Add up the values of the elements (check your result using the function sum). b. Computes the running sum (for ele…
What impact, if any, does the use of AI by stockbrokers (or whatever entity buys and sells stocks for others) have on the market?
Can an AI stock investing system outperform even Warren Buffett and the best hedge fund managers in the world, or is this unlikely?
Which of the following is likely the most serious issue with database table design? a. insertion anomaly b. deletion anomaly c. update anomaly d. all of the above are equally serious regarding tabl…
Write Java code to define three classes with the following functions and relationship among them: The class Thought contains a method message which prints out the message "I'm a teach…
In Excel, assume that cell A1 contains an arbitrary number between 0 and 1. a. Write a formula (in a single cell) that produces a result of 1 if the number in A1 is less than or equal to 1/2 and 2…
Which of these is most likely to be the first task in a systems study? a) Systems analysis b) Systems design c) Preliminary investigation d) Any of these are possible – it depends upon the system u…
Which of these should come first when performing a systems study? a) Systems analysis b) Systems design c) Preliminary investigation d) Systems audit
The totality of computer systems in a firm is called its: a) Applications portfolio b) Systems burden c) BPR collection d) IT department
It is critical that project managers set priorities for the different aspects of their management of the project. Explain how the project manager can set such priorities.
As the manager of a Small Town Water project in a community in the Savannah Region of Ghana, explain how you will do the following: a) Project planning; b) Project scheduling and implementation;…
Locating the faulty area requires the troubleshooter to have the ability to identify components as well as being able to ________ the system into functional zones that can be checked for proper ope…
Is biological evolution algorithmic?
How many sheets can web offset printing produce?
If a resource or capability is valuable and rare but not costly to imitate, exploiting this resource will generate a sustainable competitive advantage for a firm. Indicate whether the statement is…
In virtual teams, members frequently depend on electronically mediated communications. Indicate whether the statement is true or false.
Indicate whether the statement is true or false: Virtual teams are often at an advantage in having good teamwork because the team members have considerable face-to-face interaction with each other.
Virtual teams consist of geographically dispersed coworkers who interact using a combination of telecommunications and information technologies to accomplish organizational tasks. Indicate whether…
Indicate whether the statement is true or false. Virtual teams should be managed differently than face-to-face teams in an office, partially because virtual team members may not interact along trad…
Two soccer players are practicing for an upcoming game. One of them runs 10 m from point A to point B. She then turns left at 90 degrees and runs 10 m until she reaches point C. Then she kicks t…
The United States is a net exporter of services to China. What does this imply about the magnitude of the deficit in the U.S. balance on goods and services with China compared with the size of the…
What three groups must be taken into account in the consideration of the ethics of a research project? a. society. b. clinical practitioners. c. the scientific community. d. research participants.
When studying history is it better to study in chronological order or is it better/okay to go between topics?
File RepeatLetters.java (shown below) contains an incomplete program, whose goal is to take as input text from the user, and then print out that text so that each letter of the text is repeated mul…
Consider the curves r_1(t) = (t, t^2, t^3) and r_2(t) = (sin t, sin 2t, t). A) Do the curves intersect? If so, where? B) Denote T_1 and T_2 to be the unit tangent vector of r_1 and r_2, respectivel…
Consider the curves r_1(t) = (t, t^2, t^3) and r_2(t) = (sin t, sin 2t, t). a) Do the curves intersect? If so, where? b) Denote T₁ and T₂ to be the unit tangent vector of r₁ and…
How should a company prioritize all of its capital project opportunities? Explain.
Video résumés eliminate the possibility of applicants claiming discrimination in a firm's hiring practices. Indicate whether this statement is true or false.
1. C++ program to do the following: Define a function called "difference" that can accept two integer values as arguments and returns an integer (which is obtained by subtracting the first the…
What guidance do project management principles give for coordinating functional departments?
Java Programming: Generate an 8- to 10-row "double" Pascal triangle as per the instructions shown in the attached slides. Use a recursive method to generate the Pascal triangle.
2) Which business driver of information systems influenced and benefited the business?
What is UNIX architecture?
What is OSS and BSS architecture?
What is hardware architecture?
If you choose to be an expert for E-commerce business models and concepts explain why you would choose it, and why?
As new algorithm software continues to develop to optimize the buying and selling process, where do you see the role of the salesperson evolving? Is there a role? Discuss and explain your answer.
What is binding constraint in linear programming?
What is the difference between systems development and the systems development life cycle (SDLC)?
Need verification if this simple derivation of the Schrödinger equation is valid
By 1924 it was well observed that matter (as well as light) has wave-particle duality (later named quantum), and the wavelength-momentum-energy relation of quanta $$\lambda=\frac{h}{p}\;\;\longleftrightarrow\;\;p=\hbar k\;\;\longleftrightarrow\;\;E=\hbar\omega$$ had been hypothesized (and experimentally validated) by de Broglie. Let us model a quantum as a periodic function $\Psi$ of spatial coordinates and time, which can be expressed in the form of a Fourier series $$\Psi=\sum_{n\in\mathbb{Z}}A_n\psi_n$$ where $$\psi_n=e^{i\left(\mathbf{k}_n\cdot\mathbf{x}-\omega_n t\right)}.$$ We want to extract information about the momentum and energy of such a quantum, for now only for its basis functions $\psi_n$. Consider the following: $$\begin{align*} \nabla^2\psi_n&=-k_n^2\psi_n& &\longleftrightarrow& -\hbar^2\nabla^2\psi_n&=p_n^2\psi_n\\ \frac{\partial}{\partial t}\psi_n&=-i\omega_n\psi_n& &\longleftrightarrow& i\hbar\frac{\partial}{\partial t}\psi_n&=E_n\psi_n, \end{align*}$$ which can be interpreted as eigenvalue problems with the operators $$\hat{p^2_n}:=-\hbar^2\nabla^2\quad\text{and}\quad\hat{E_n}:=i\hbar\frac{\partial}{\partial t}.$$ The total energy of the quantum is given by $$\sum_{n\in\mathbb{Z}}E_n=\frac{1}{2m}\sum_{n\in\mathbb{Z}}p_n^2+V$$ and by the superposition principle we can write down the following equation $$\hat{E_n}\psi_n=\frac{1}{2m}\hat{p_n^2}\psi_n+\hat{V}\psi_n$$ or equivalently $$i\hbar\frac{\partial}{\partial t}\psi_n=-\frac{\hbar^2}{2m}\nabla^2\psi_n+\hat{V}\psi_n.$$
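A quick symbolic check of the eigenvalue relations above can be done for a single plane-wave basis function; the sketch below restricts to one spatial dimension, and the symbol names (k, omega) are only illustrative assumptions:

```python
import sympy as sp

# One-dimensional plane wave psi_n = exp(i(kx - wt)); k, omega, hbar, m taken positive.
x, t = sp.symbols('x t', real=True)
k, w, hbar, m = sp.symbols('k omega hbar m', positive=True)

psi = sp.exp(sp.I * (k * x - w * t))

p2_psi = -hbar**2 * sp.diff(psi, x, 2)   # action of the operator -hbar^2 d^2/dx^2
E_psi = sp.I * hbar * sp.diff(psi, t)    # action of the operator i*hbar d/dt

print(sp.simplify(p2_psi / psi))         # hbar**2*k**2   (eigenvalue p_n^2)
print(sp.simplify(E_psi / psi))          # hbar*omega     (eigenvalue E_n)

# The free Schrodinger equation holds exactly when hbar*omega = (hbar*k)**2 / (2*m):
residual = E_psi + hbar**2 / (2 * m) * sp.diff(psi, x, 2)
print(sp.simplify(residual / psi))       # hbar*omega - hbar**2*k**2/(2*m)
```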
I have never learned or seen this derivation. Other derivations were most of the time too advanced for me to follow (the math), or the equation itself was taken for granted in the first place. My question is whether this derivation makes sense and whether I am doing it right.
quantum-mechanics energy operators schroedinger-equation hamiltonian
$\begingroup$ Where are you getting $E=\hbar \omega$ from? What is $\omega$ the frequency of? I think this is where you've loaded all the assumptions (which of course, you have to make, since the Schrodinger equation can't be derived). $\endgroup$ – jacob1729 May 21 at 17:55
$\begingroup$ Possible duplicates: physics.stackexchange.com/q/17477/2451 , physics.stackexchange.com/q/16812/2451 , physics.stackexchange.com/q/220697/2451 and links therein. $\endgroup$ – Qmechanic♦ May 21 at 17:58
$\begingroup$ Apart from relations like $p=\hbar k$ and $E=\hbar \omega$ there is one key assumption which enters in this "derivation", that is that particles behave like waves. This fact can only be found by experiments. $\endgroup$ – Frederic Thomas May 21 at 18:01
$\begingroup$ Why in the world did someone downvote this? If you think the derivation is wrong, then that's an answer, not a reason to downvote. Please don't treat new users with this kind of hostility. $\endgroup$ – Ben Crowell May 21 at 19:08
$\begingroup$ It's perfectly fine, and probably very similar to how Schrodinger first came up with it! Of course, the physical content of the things you've assumed, explicitly and implicitly, is basically equivalent to what the Schrodinger equation says. $\endgroup$ – knzhou May 22 at 20:22
Sure, this is valid, subject to some hidden assumptions. The issue would be that those assumptions haven't been explicitly stated, and it isn't trivial to figure out the complete list of hidden assumptions. Some hidden assumptions:
That $E=\hbar \omega$. This is plausible on dimensional grounds and because of the relativistic analogy energy:momentum::time:space, but it gets cloudy when you remember that this is a derivation of the nonrelativistic Schrodinger equation. Einstein had already published this relation as an empirical one based on stuff like the photoelectric effect, but it wasn't obvious at the time how that fit into the structure of physics.
That wavefunctions live in the field of complex numbers, not, e.g., in the reals or the quaternions. This is basically needed if you want to get conservation of probability, but that would not be obvious if you hadn't already tried, e.g., making a real Schrodinger equation and seeing conservation of probability fail.
That the relevant degree of freedom is position, as opposed to something like spin.
That position is an observable. This is false in quantum field theory, and even in nonrelativistic field theory there is the issue that we don't have eigenstates of position unless we allow things like Dirac deltas.
That it's OK for phase and normalization to be unobservable.
$\begingroup$ I think this answer would be even better if you added something to the effect of knzhou's comment that these assumptions are essentially equivalent to the usual Schrodinger equation. As such, the derivation is perhaps more of a 'motivation'. +1 all the same. $\endgroup$ – jacob1729 May 22 at 21:57
$\begingroup$ @jacob1729 Your argument that "assumptions are essentially equivalent to equations" can be applied to every scientific equation, because an equation is basically underlying assumptions (or hypotheses, or postulates, tomato tomahto) written in the language of mathematics. Arriving at an equation from known assumptions, one that governs a broader or new object than the assumptions themselves do, is enough to be called a derivation. In this case the de Broglie relations would be the assumption and the wave function is what the derived equation governs. All this, I think, is only a matter of wording after all. $\endgroup$ – user575201 May 22 at 23:09
Mathematics > Algebraic Geometry
Title: Construction of varieties of low codimension with applications to moduli spaces of varieties of general type
Authors: Purnaprajna Bangere, Francisco Javier Gallego, Jayan Mukherjee, Debaditya Raychaudhury
(Submitted on 3 Dec 2020 (v1), last revised 19 Dec 2022 (this version, v2))
Abstract: In this article we develop a new way of systematically constructing infinitely many families of smooth subvarieties $X$ of any given dimension $m$, $m \geq 3$, and any given codimension in $\mathbb P^N$, embedded by complete subcanonical linear series, and, in particular, in the range of Hartshorne's conjecture. We accomplish this by showing the existence of everywhere non--reduced schemes called ropes, embedded in $\mathbb P^N$, and by smoothing them. In the range $3 \leq m < N/2$, we construct smooth subvarieties, embedded by complete subcanonical linear series, that are not complete intersections. We also go beyond a question of Enriques on constructing simple canonical surfaces in projective spaces, and construct simple canonical varieties in all dimensions. The canonical map of infinitely many of these simple canonical varieties is finite birational but not an embedding. Finally, we show the existence of components of moduli spaces of varieties of general type (in all dimensions $m$, $m \geq 3$) that are analogues of the moduli space of curves of genus $g > 2$ with respect to the behavior of the canonical map and its deformations. In many cases, the general elements of these components are canonically embedded and their codimension is in the range of Hartshorne's conjecture.
Comments: 37 pages. New version considerably enlarged and reorganized; improved exposition; new, stronger results. Among other things, the title is changed, some sections and results have been split (e.g. Section 2 split into Sections 2, 4 and 5; (old) Section 4 split into Sections 6, 7 and 8; Thm. 3.5 split into Thms. 3.5, 3.7), new material and results have been added (e.g. Thm. 7.6; Sections 10 and 11)
Subjects: Algebraic Geometry (math.AG)
MSC classes: 14B10, 14D06, 14D15, 14D20, 14M10
Cite as: arXiv:2012.01682 [math.AG]
(or arXiv:2012.01682v2 [math.AG] for this version)
From: Jayan Mukherjee [view email]
[v1] Thu, 3 Dec 2020 03:39:39 GMT (31kb)
[v2] Mon, 19 Dec 2022 13:06:35 GMT (42kb)
January 2018, 17(1): 39-52. doi: 10.3934/cpaa.2018003
A nonlinear eigenvalue problem with $ p(x) $-growth and generalized Robin boundary value condition
Vicenţiu D. Rădulescu 1 and Somayeh Saiedinezhad 2,*
Institute of Mathematics "Simion Stoilow" of the Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania, and Department of Mathematics, University of Craiova, Street A.I. Cuza 13, 200585 Craiova, Romania
School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
* Corresponding author: Somayeh Saiedinezhad
Received August 2016 Revised July 2017 Published September 2017
We are concerned with the study of the following nonlinear eigenvalue problem with Robin boundary condition
$\begin{cases} -{\rm div}\,(a(x,\nabla u))=\lambda\, b(x,u) & \mbox{in } \Omega \\ \dfrac{\partial A}{\partial n}+\beta(x)\, c(x,u)=0 & \mbox{on } \partial\Omega. \end{cases}$
The abstract setting involves Sobolev spaces with variable exponent. The main result of the present paper establishes a sufficient condition for the existence of an unbounded sequence of eigenvalues. Our arguments strongly rely on the Lusternik-Schnirelmann principle. Finally, we focus on the following particular case, which is a $p(x)$-Laplacian problem with several variable exponents:
$\begin{cases} -{\rm div}\,(a_0(x)\,|\nabla u|^{p(x)-2}\nabla u)=\lambda\, b_0(x)|u|^{q(x)-2}u & \mbox{in } \Omega \\ |\nabla u|^{p(x)-2}\dfrac{\partial u}{\partial n}+\beta(x)|u|^{r(x)-2} u=0 & \mbox{on } \partial\Omega. \end{cases}$
Combining variational arguments, we establish several properties of the eigenvalues family of this nonhomogeneous Robin problem.
Keywords: Nonlinear eigenvalue problem, Lusternik-Schnirelmann principle, variable exponent Sobolev space, p(x)-Laplacian operator.
Mathematics Subject Classification: Primary: 34B09, 35P30; Secondary: 35J20.
Citation: VicenŢiu D. RǍdulescu, Somayeh Saiedinezhad. A nonlinear eigenvalue problem with $ p(x) $-growth and generalized Robin boundary value condition. Communications on Pure & Applied Analysis, 2018, 17 (1) : 39-52. doi: 10.3934/cpaa.2018003
Original research | Open | Published: 16 May 2018
Image-based SPECT calibration based on the evaluation of the Fraction of Activity in the Field of View
Adrien Halty ORCID: orcid.org/0000-0002-5176-3126 1,2,
Jean-Noël Badel 2,
Olga Kochebina 1 &
David Sarrut 1,2
EJNMMI Physics, volume 5, Article number: 11 (2018)
SPECT quantification is important for dosimetry in targeted radionuclide therapy (TRT), and the calibration of SPECT images is a crucial stage for image quantification. The current standardized calibration protocol (MIRD 23) uses phantom acquisitions to derive a global calibration factor in specific conditions. It thus requires specific acquisitions for every clinical protocol. We propose an alternative and complementary image-based calibration method that allows determining a calibration factor adapted to each patient, radionuclide, and acquisition protocol, and that may also be used as an additional independent calibration.
The proposed method relies on a SPECT/CT acquisition of a given region of interest and an initial whole-body (WB) planar image. First, the conjugate view of the WB planar images is computed after scatter and attenuation correction. 3D SPECT images are reconstructed with scatter, attenuation, and collimator-detector response (CDR) corrections and corrected from apparent dead-time. The field of view (FOV) of the SPECT image is then projected onto the corrected WB planar image. The fraction of activity located in the area corresponding to the SPECT FOV is then calculated from the counts in the corrected WB planar image. This Fraction of Activity in the Field Of View (FAF) is used to compute the calibration factor as the total number of counts in the SPECT image divided by this activity. Quantification accuracy was compared with the standard calibration method both with phantom experiments and on patient data.
Both standard and image-based calibrations give good accuracy on large regions of interest in phantom experiments (less than 7% relative difference compared to ground truth). Apparent dead-time correction reduces the uncertainty associated with the standard calibration from 2.5 to 1%. The differences found between both methods were lower than the uncertainty range of the standard calibration (<3%). In patient data, although no ground truth was available, both methods give similar calibration factors (average difference 3.64%).
A calibration factor may be computed directly from the acquired SPECT image, provided that a WB planar image is also available and that both acquisitions are performed before biological elimination. This method does not require performing a phantom acquisition for every set of acquisition conditions and may serve to double-check the calibration with an independent factor.
In targeted radionuclide therapy (TRT), the determination of the spatial and temporal radioactivity distributions within the body is required to estimate the absorbed dose distribution. In practice, in vivo activity distributions can be visualized from 3D SPECT images. Currently, SPECT has become an imaging modality as quantitative as PET [1], although SPECT images are generally noisier. Indeed, images are degraded by several phenomena: photon attenuation and scattering, instrumentation constraints such as partial volume effects (PVE) or dead-time (DT), and motion artifacts [2]. A lot of effort has been directed toward reliable quantification [2–6]. Nowadays, scatter and attenuation corrections are embedded into most reconstruction algorithms, PVE correction is partly tackled with recovery coefficients, and DT correction is feasible. For example, a global 5% accuracy of 99mTc quantification has recently been reported, which is similar to the one obtained with 18F in PET [7]. On in vivo data, a standard error of 8.4% has been reported in bladder activity quantification [6]. Similar results (standard error of 7%) were found on corrected ventilation-subtracted perfusion images [5].
However, even if significant progress has been made during the last 10 years [8], reliable quantification remains difficult. In particular, a crucial step is the determination of a global calibration factor of the system sensitivity that converts a number of counts (cts) into an activity concentration in Bq (per voxel). The MIRD committee recommends [4] performing the acquisition of a known activity with scattering conditions close to those presented by a patient. The exact same acquisition parameters (energy windows) and reconstruction must be used for patient data. This means that a calibration factor should be determined each time one of the parameters, such as the width or the number of energy windows for multi-gamma emitters or the acquisition duration, is changed. The acquisition parameters depend on the specific clinical needs and constraints of a study and thus may require a dedicated calibration factor. This is sometimes inconvenient in clinical routine and could be costly for expensive radionuclides such as 177Lu or 111In.
In this work, we propose an image-based method, similar to the one sometimes used in radioembolization dosimetry [9–11], where the total injected activity is present in the FOV of the SPECT. This method is often preferred to the MIRD method because it is patient-specific. The proposed approach is a generalization to the cases where the injected activity is not entirely present in the SPECT FOV. It consists in estimating the calibration factor from the image itself, using the apparent fraction of activity in the field of view and the known injected activity. In the following, we first describe the proposed method and then provide experimental results to evaluate its performance.
SPECT voxel values are typically expressed in number of counts (cts), and the quantification goal is to determine the calibration factor S (or system sensitivity) to convert cts to Bq. It is important to keep in mind that the calibration factor is global: it does not take PVE into account, and recovery coefficients are still required to perform the quantification on small volumes.
The current MIRD guidelines are briefly reviewed before describing the proposed image-based method.
Phantom-based calibration according to MIRD guidelines
According to the MIRD guidelines for quantification [4, 12], the calibration factor Sstd (std for "standard") is computed using a SPECT acquisition of a large source of known activity in a well-defined phantom. Typically, a large uniform tank (Jaszczak phantom) of water with a low activity concentration (similar to what is expected in the clinic) is imaged with the exact same parameters as in the clinical study where the calibration factor will be used. SPECT images should be reconstructed with a method that includes attenuation correction (AC) based on CT [13], scatter correction (SC) based on double- or triple-energy window (DEW or TEW) methods [14], and collimator-detector response (CDR) compensation [15]. Although less frequently mentioned in publications, images should also be corrected from apparent DT (aDT) [16] and not only the system DT (sDT). Indeed, the system DT, noted τ, is often relatively low in modern devices (τ ≈ 1 or 2 μs [3, 5]). Willowson et al. [5] reported that a significant impact of sDT may be observed for count rates of 40 kcps or higher, corresponding to a theoretical loss of 5%. However, aDT can be significantly higher than sDT; indeed, all detected events cause DT regardless of their energy, not only counts recorded in the primary or scatter windows [12]. The aDT, denoted τa, is then given by Eq. 1, where the window fraction ωf corresponds to the percentage of detected events in the energy windows of interest (photopeaks, scatter).
$$ \tau_{\mathrm{a}}=\frac{\tau}{\omega_{\mathrm{f}}} $$
We denoted Amean the mean activity over the acquisition duration ΔTacq given by Eq. 2.
$$ {A}_{\text{mean}} =\frac{A_{0}\int_{0}^{\Delta T_{\text{acq}}} e^{-\lambda t} \, \mathrm{d}t }{\Delta T_{\text{acq}}} $$
Here, A0 is the injected activity corrected from the potential residual activity in syringes and physical decay.
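Carrying out the integral in Eq. 2 explicitly (pure algebra, no additional assumption), the decay-averaged activity reduces to the closed form

$$ A_{\text{mean}} = \frac{A_{0}\left(1-e^{-\lambda \Delta T_{\text{acq}}}\right)}{\lambda\, \Delta T_{\text{acq}}}, $$

which tends to $A_0$ when $\lambda \Delta T_{\text{acq}} \ll 1$, i.e., when the acquisition is short compared with the radionuclide half-life.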
Therefore, the calibration factor Sstd is given by Eq. 3, with NSPECT the total number of counts in the SPECT image corrected by SC, AC, CDR, and DT.
$$ S_{\text{std}}=\frac{N_{\text{SPECT}}}{A_{\text{mean}} \times \Delta T_{\text{acq}}} $$
The repeatability of the standard calibration factor, Sstd, was estimated from several measurements with the coefficient of variation, COV, given by Eq. 4, $\overline {S_{\text {std}}}$ being the average Sstd over the experiments.
$$ \text{COV}=\frac{\sigma_{S_{\text{std}}}}{\overline{S_{\text{std}}}} $$
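As an illustration of Eqs. 3 and 4, the standard calibration factor and its repeatability could be computed from a handful of repeated phantom acquisitions as sketched below; all numeric values are placeholders chosen only to give a result of the same order of magnitude as the one reported later, not measured data:

```python
import numpy as np

# Placeholder inputs: corrected total counts of four repeated acquisitions,
# decay-averaged activity (MBq) and total acquisition duration (s).
n_spect = np.array([2.65e7, 2.66e7, 2.64e7, 2.65e7])   # counts (SC, AC, CDR, DT corrected)
a_mean_mbq = 25.0
dt_acq_s = 60 * 25.0                                   # 60 projections of 25 s each

s_std = n_spect / (a_mean_mbq * dt_acq_s)              # Eq. 3, in cps/MBq
cov = s_std.std(ddof=0) / s_std.mean()                 # Eq. 4
print(f"S_std = {s_std.mean():.0f} cps/MBq, COV = {100 * cov:.2f} %")
```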
The MIRD methodology hence requires rigorous experiments and specific logistics (cost and storage of phantoms with long half-life radionuclides). Consequently, it is not always easy to implement in clinical routine despite the good results obtained in terms of accuracy.
Calibration factor with all activity inside the field of view
A specific situation may be considered when the total injected activity is inside the SPECT field of view (FOV). In this case, the standard phantom acquisition is not needed anymore. The image-based calibration factor Simb is directly derived from the patient image with Eq. 5, under the assumption that the total injected activity is inside the patient. This approach is often used in radioembolization dosimetry [9–11]. For example, Paciolia et al. have recently reported that relative calibration partially compensates for suboptimal scatter corrections.
$$ S_{\text{imb}}=\frac{N_{\text{SPECT}}}{A_{\text{mean}} \times \Delta T_{\text{acq}}} $$
Calibration factor with activity inside and outside the field of view
Most of the time, the injected activity is spread in the blood flow and therefore is not entirely present in the SPECT FOV, which is generally centered on the region of interest. This is, for example, the case for pre-therapeutic dosimetry of 177Lu treatment of neuroendocrine tumors [17], administered by intravenous injection. In this case, the previously described assumption is not fulfilled, and Eq. 5 cannot be applied. Moreover, performing SPECT acquisitions with several table steps to cover the whole body would generally be too long for clinical routine. Instead, we propose to estimate the activity in the SPECT FOV from planar whole-body scintigraphy (WBS). WBS acquisitions are generally significantly faster than tomographic SPECT acquisitions (about 10 cm/min for WBS versus the equivalent of 2.5 cm/min for SPECT). WBS are generally acquired just before SPECT to adjust the table position. The proposed method consists of the following steps:
Acquisition of conjugate planar WBS with scatter and attenuation correction.
Projection of the SPECT FOV boundaries on WBS.
Computation of the Fraction of Activity in FOV (FAF) and of the calibration factor S.
Step 1 Conjugate planar WBS images are first corrected for scatter with the DEW method [14]. Anterior and posterior images are then combined with the geometric mean method [18]. The CT image of the SPECT/CT acquisition is converted into a 3D attenuation coefficient map, which is then averaged along the antero-posterior (AP) axis to obtain a 2D mean attenuation map with the same spatial orientation as the scatter-corrected planar images. Finally, the 2D planar image is corrected for attenuation. If the CT image is smaller than the conjugate planar WBS, the attenuation correction factor is extrapolated from the border of the planar attenuation coefficient map.
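A minimal numpy sketch of Step 1 is given below; it assumes the anterior/posterior photopeak and scatter images are already loaded as 2D arrays, that the CT-derived 3D attenuation map has been resampled to the planar geometry, and it uses the textbook conjugate-view correction factor exp(½∫μ dz), which may differ in detail from the manufacturer implementation (all array names, axis conventions, and the scatter multiplier are assumptions):

```python
import numpy as np

def corrected_conjugate_view(ant_peak, post_peak, ant_scatter, post_scatter,
                             mu_map_3d, voxel_dz_cm, scatter_multiplier=1.1):
    """Scatter- and attenuation-corrected geometric-mean whole-body image (sketch)."""
    # DEW scatter correction: subtract the scaled scatter-window counts, clip negatives.
    ant = np.clip(ant_peak - scatter_multiplier * ant_scatter, 0.0, None)
    post = np.clip(post_peak - scatter_multiplier * post_scatter, 0.0, None)

    # The posterior view is assumed to be mirrored left/right w.r.t. the anterior view.
    post = np.fliplr(post)

    # Conjugate view: pixel-wise geometric mean of the two corrected projections.
    gm = np.sqrt(ant * post)

    # Attenuation correction factor: exp(0.5 * line integral of mu along the AP axis);
    # axis 0 of mu_map_3d is assumed to be the antero-posterior direction (mu in cm^-1).
    mu_path = mu_map_3d.sum(axis=0) * voxel_dz_cm
    acf = np.exp(0.5 * mu_path)

    return gm * acf
```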
Step 2 The boundaries of the SPECT FOV are projected along the AP axis onto the 2D planar image, as shown in Fig. 1. This step may require aligning the 3D SPECT and the 2D planar images if they are in different coordinate systems. This registration could be performed using the table coordinates in the DICOM files or by automated rigid 2D registration between the planar image and the AP-projected SPECT image. On some devices, the SPECT voxel matrix size may be larger than the real detector length, leading to two bands of voxels with zero counts in the top and bottom parts of the image. Those bands must be removed before projecting the boundaries onto the planar image, see Fig. 1.
SPECT image is projected along the antero-posterior axis
Step 3 The ratio between the number of counts NA within the 2D ROI (region named "A" in Fig. 1) obtained after Step 2 and the total number of counts NB in the whole WBS (region named "B" in the figure) is defined as the Fraction of Activity in FOV (FAF) factor, $\text{FAF} = \frac{N_{\mathrm{A}}}{N_{\mathrm{B}}}$. The final calibration factor is then obtained with Eq. 6, where NSPECT is the number of counts in the 3D SPECT image and Amean is the averaged injected activity over the acquisition (Eq. 2).
$$ S_{\text{FAF}}=\frac{N_{\text{SPECT}}}{A_{\text{mean}} \times \Delta T_{\text{acq}} \times \text{FAF} } $$
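Steps 2 and 3 can then be prototyped as below, assuming the corrected whole-body image from the previous sketch and assuming the axial extent of the SPECT FOV has already been converted into row indices of the planar image (the DICOM table-position bookkeeping is omitted and the variable names are assumptions):

```python
import numpy as np

def faf_calibration(wb_corrected, fov_rows, spect_counts, a_mean_mbq, dt_acq_s):
    """Image-based calibration factor S_FAF of Eq. 6, in cps/MBq (sketch)."""
    row_min, row_max = fov_rows                       # planar rows covered by the SPECT FOV
    n_a = wb_corrected[row_min:row_max, :].sum()      # counts in region A (projected FOV)
    n_b = wb_corrected.sum()                          # counts in region B (whole body)
    faf = n_a / n_b                                   # Fraction of Activity in FOV

    s_faf = spect_counts / (a_mean_mbq * dt_acq_s * faf)
    return faf, s_faf
```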
This method requires the totality of the injected activity to be in the patient's body. Therefore, the images (planar WB and SPECT/CT) must be acquired before any biological elimination (e.g., before urination). This approach is based on the assumption that the fraction of activity present in the 3D SPECT FOV can be estimated from the 2D planar images.
Several experiments were performed to evaluate the proposed methods. All experiments used 99mTc, but the method could be applied to other radionuclides. An example of the use of this method for a TRT with 111In is given in the last part.
Imaging acquisition and reconstruction for the 99mTc experiments
The image acquisitions were performed on a Tandem Discovery NM/CT 670 from GE Medical Systems with two heads. We used LEHR/PARA collimators (low-energy high-resolution/parallel) with hexagonal holes. The head radius was set to a constant distance of 24 cm. Two energy windows, primary and scatter, were recorded with the standard clinical settings. The primary window corresponded to the photopeak of 99mTc and was set to 126.45–154.55 keV, and the scatter window was set to 114–126 keV. SPECT acquisitions consisted of 60 step-and-shoot projections of 25 s each over 360°. The spatial sampling was 4.418 × 4.418 mm, and the 2D pixel matrix was 128 × 128. CT was acquired right after SPECT, with a tube voltage of 120 kV. Slice thickness was 1.25 mm, and pixel spacing was 0.9765 × 0.9765 mm. SPECT reconstruction was performed with the manufacturer's iterative ordered-subset expectation maximization (OSEM) algorithm, which includes attenuation, DEW scatter, and CDR corrections. All images were reconstructed with the same software version (Xeleris 3.0) and parameter sets, with 10 subsets and 20 iterations.
Standard phantom-based MIRD calibration
First, the standard calibration factor, Sstd, was computed with the phantom-based MIRD calibration procedure from repeated SPECT/CT acquisitions of a Jaszczak phantom containing three spheres of high 99mTc concentration (10 MBq in total), of 16, 8, and 4 mL respectively. The spheres were placed in a uniform background with several increasing activity concentrations. The background activity concentrations were 10, 20, 30, and 50% of the sphere activity concentration. Values are reported in Table 1 after correction for residual activity in syringes and physical decay. The different levels of activity allowed evaluation of the DT correction. Four acquisitions were performed at each level of background activity to estimate the reproducibility.
Table 1 Activities for standard calibration phantom acquisitions
The value of Sstd was estimated according to the MIRD guidelines. Large ROIs covering about 2 cm around the phantom were manually drawn to compensate for spill-out effects.
Dead-time correction
A paralyzable model was considered; a new event resets the time frame of no detection. This model is described by Eq. 7 with τ the sDT, Ro the observed count rate, and Rt the true count rate without dead-time [16].
$$ R_{\mathrm{o}}=R_{\mathrm{t}}e^{-R_{\mathrm{t}} \tau} $$
The camera dead-time was experimentally measured with the two-source method [16]. Three acquisitions were performed with an energy window set to 0–511 keV in order to record the whole energy spectrum. The first acquisition was performed without collimator, with a 40-MBq source. In the second acquisition, another 40-MBq source was added next to the first one. The third acquisition was done keeping only the second source. The count rates for the three acquisitions are denoted R1, R12, and R2, respectively. The dead-time, τ, is calculated from them as shown in Eq. 8 [16].
$$ \tau\approx\frac{2R_{12}}{(R_{1}+R_{2})^{2}}\text{ln}\left(\frac{R_{1}+R_{2}}{R_{12}}\right) $$
The phantom used for the standard phantom-based calibration was placed in the same position as in the calibration experiments and with an additional window set to 0–511 keV. The window fraction, ωf, was computed as the ratio between the primary and scatter counts and the counts recorded in the total spectrum window. The value of aDT was computed with Eq. 1. Equation 7 was solved with τ for sDT correction, or τa for aDT correction. Reconstructed images were then scaled by $\frac {R_{\mathrm {t}}}{R_{\mathrm {o}}}=e^{R_{\mathrm {t}} \tau } \left (\text {or} = e^{R_{\mathrm {t}} \tau _{\mathrm {a}}}\right)$.
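The dead-time chain of Eqs. 7, 8, and 1 can be prototyped as follows; inverting the paralyzable model uses the principal branch of the Lambert W function, and the numbers in the example call are placeholders (the 46% window fraction is the value reported later in the Results, used here only for illustration):

```python
import numpy as np
from scipy.special import lambertw

def two_source_dead_time(r1, r2, r12):
    """System dead-time tau (s) from the two-source method, Eq. 8."""
    return 2.0 * r12 / (r1 + r2) ** 2 * np.log((r1 + r2) / r12)

def true_count_rate(r_obs, tau):
    """Invert the paralyzable model R_o = R_t * exp(-R_t * tau) for R_t (Eq. 7).

    The principal Lambert-W branch gives the physical (non-paralyzed) root.
    """
    return float(np.real(-lambertw(-r_obs * tau) / tau))

# Placeholder count rates in counts per second:
tau = two_source_dead_time(r1=3.0e4, r2=3.1e4, r12=5.9e4)
tau_apparent = tau / 0.46                     # Eq. 1, assumed window fraction of 46 %
r_obs = 4.0e4
correction = true_count_rate(r_obs, tau_apparent) / r_obs
print(f"tau = {tau * 1e6:.2f} us, aDT correction factor = {correction:.3f}")
```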
Test case 1: Accuracy of image-based method
Test case 1 was designed to compare Sstd obtained from conventional phantom-based calibration with Simb obtained from our image-based method, when all activity is in the FOV. Four bags of 500 mL of saline solution containing 99mTc were placed in a cylindrical phantom half filled with water without activity, see Fig. 2. This allows evaluation of different attenuation and scatter conditions, as some bags were in the water while others were in the air. In acquisitions 1 and 2, two bags were placed at the air/water interface. Hence, about half of each of these bags was in the air and the other half in the water. The activities in the saline bags are given in Table 2. Residual activities in the syringes after injection were taken into account.
Photo (a) and schematic view (b) of the phantom used to test calibration with different conditions of attenuation and scattering. Here, two saline bags are in attenuating condition (water) and two in non-attenuating condition (air) corresponding to the third acquisition
Table 2 Activities for image-based quantification test
The acquisition protocol and reconstruction parameters were identical to those used in the standard phantom-based MIRD calibration method. A calibration factor, named Simb, was determined from the image, knowing that all injected activity was visible in the SPECT FOV.
The bag ROIs were manually selected on the images as spheres of 3 cm diameter inside the bag contour, away from the boundaries of the bag in order to avoid PVE. The activities in the four bags were determined as the mean activity in the bag ROIs based on the two calibration factors (CFs) and compared to the known ground truth values. The relative error on quantification was calculated as the difference between the activity found with the considered calibration factor and the ground truth, divided by the ground truth value.
Test case 2: FAF evaluation
Test case 2 was designed to evaluate the hypothesis used for the FAF method. Image acquisitions were performed on a phantom composed of three parts. The first one consists of three spheres of 4, 8, and 16 mL, respectively, with a high activity concentration of 99mTc, placed inside a Jaszczak phantom with a medium activity concentration. The two other parts are cylindrical phantoms filled with water of low activity concentration, placed next to the Jaszczak phantom. The spheres in the Jaszczak phantom mimic thoracic lesions, while the cylindrical phantoms mimic the lower limbs. SPECT acquisitions were performed with a FOV that entirely covers the Jaszczak phantom and partly covers one cylindrical phantom, as shown in Fig. 3. This experiment was performed with three activity concentrations, denoted 2a, 2b, and 2c, as summarized in Table 3. The acquisition parameters and reconstruction protocols were the same as detailed for the standard calibration. For the WBS acquisition, the spatial sampling was 2.209 × 2.209 mm, and the 2D pixel matrix was 256 × 1024. The total acquisition time was 4 min.
Projection of 3D SPECT into the 2D WBS on phantom. The area A in red is obtained from the antero-posterior projection of the SPECT FOV into the whole-body planar image. The area B corresponds to the entire WB planar image
Table 3 Activities used for test case 2
The 2D planar image scatter correction was performed with the DEW method [14] as recommended by the manufacturer; a scatter multiplier of 1.1 was used to scale the number of counts in the scatter window according to the number of scattered photons in the photopeak window. The attenuation correction was also performed according to the manufacturer's formula, reported and validated in [13], see Eq. 9, with $\mu_m$, $\mu_w$, $\mu_a$, and $\mu_b$ the attenuation coefficients of the material, water, air, and bone, respectively, E the energy of the gamma photons in keV, and $E_{\text{eff}}$ the mean energy of the CT beam. We assume $E_{\text{eff}}=\frac{E_{\text{peak}}}{3}$, with $E_{\text{peak}}$ the maximum energy of the CT beam.
$$\mu_{m, E}=\begin{cases}\mu_{w, E} + \dfrac{\left(\mu_{w, E}-\mu_{a, E}\right)\times \text{CT}}{1000} & \text{if CT} < 0\\[2ex] \mu_{w, E} + \dfrac{\mu_{w, E_{\text{eff}}}\times\left(\mu_{b, E}-\mu_{w, E}\right)\times \text{CT}}{1000\times\left(\mu_{b, E_{\text{eff}}}-\mu_{w, E_{\text{eff}}}\right)} & \text{if CT} > 0\end{cases}$$
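Equation 9 translates directly into a small helper; in the sketch below the attenuation-coefficient values must be supplied by the caller, and those used in the example call are rough placeholders rather than tabulated constants:

```python
import numpy as np

def hu_to_mu(ct_hu, mu_w_E, mu_a_E, mu_b_E, mu_w_eff, mu_b_eff):
    """Convert a CT image in HU into linear attenuation coefficients at energy E (Eq. 9)."""
    ct_hu = np.asarray(ct_hu, dtype=float)
    mu = np.empty_like(ct_hu)

    neg = ct_hu < 0                                              # air/lung/soft-tissue branch
    mu[neg] = mu_w_E + (mu_w_E - mu_a_E) * ct_hu[neg] / 1000.0

    pos = ~neg                                                   # water-to-bone branch
    mu[pos] = mu_w_E + (mu_w_eff * (mu_b_E - mu_w_E) * ct_hu[pos]
                        / (1000.0 * (mu_b_eff - mu_w_eff)))
    return mu

# Example call with placeholder coefficients in cm^-1 (use tabulated values in practice):
print(hu_to_mu([[-1000.0, 0.0, 1000.0]],
               mu_w_E=0.154, mu_a_E=0.0002, mu_b_E=0.25,
               mu_w_eff=0.27, mu_b_eff=0.60))
```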
Test case 3: Patient study
The proposed image-based calibration method was applied to clinical patient images. We compared SFAF to Sstd obtained from a uniform Jaszczak phantom acquisition with an activity of 13.64 MBq of 111In at the time of acquisition, according to the MIRD protocol.
We selected images of six patients from a phase I clinical trial named Synfrizz which was previously approved by local authorities (ANSM; ClinicalTrials.gov Identifier: NCT01469975). It involved a 90Y radiolabeled monoclonal antibody (mAb), OTSA101, developed by OncoTherapy Science (OTS) targeting a tumor antigen over-expressed in synovial sarcoma [19]. Before the therapy, patients were injected with 111In-labeled mAb to evaluate uptakes and biodistributions. Sequences of planar WBS, immediately followed by SPECT/CT images, were acquired at 1, 5, 24, 48, 72, and 144 h following the intravenous injection.
The imaging protocol was similar to the previous test cases, except that MEGP/PARA (medium-energy general-purpose/parallel) collimators with hexagonal holes were used. 111In has two main gamma-ray emissions at 171 and 245 keV. As recommended by the manufacturer, the primary energy windows were 153.9–188.1 keV and 220.5–269.5 keV, and the scatter window used for DEW scatter correction was 198.3–219.6 keV. SPECT acquisitions consisted of 60 step-and-shoot projections of 30 s each over 360°, followed by CT acquisition. SPECT images were reconstructed with the manufacturer's OSEM algorithm provided by the software (Xeleris 3.0). In addition to scatter correction with DEW, attenuation correction based on the CT image and the "resolution recovery" package were used. SPECT voxel spacing was 4.18 × 4.18 × 4.18 mm3. For the dead-time correction, a dead-time similar to that of the phantom study was assumed. Indeed, the whole spectrum was not recorded at the time of data acquisition, which would have been required for a proper aDT correction. The SPECT acquisition was performed with two table steps, with a small overlap, covering in total 92 cm from the patient's neck to below the pelvic region, see Fig. 4. Here, SFAF was applied to a two-step image, and Amean is the mean activity during one step. Because of physical decay, the Amean of the second step is slightly lower. Therefore, the average Amean between the two steps was used to compute SFAF.
On the left side, SPECT/CT fusion image of a Synfrizz patient. On the right, whole-body planar image after attenuation and scatter correction. The SPECT FOV is represented on both images by the red rectangle
The WBS planar image dimension was 1024 × 256 with pixel spacing of 2.40 × 2.40 mm2. Table velocity was 10 cm/min. SFAF was estimated with the method described above on the first acquired SPECT image, 1 h after injection. No biological elimination occurred between injection and the first image acquisition. Time between WBS and SPECT/CT was always less than 10 min. The total activity Amean in patients was equal to the injected activity corrected from decay and residual activity of the syringe as given by Eq. 2. In the clinical study, the heart, the kidneys, the liver, the spleen, the bone marrow, and the main lesions were analyzed [19]. However, since the activity ground truth in each ROI is unknown, only the difference in the global calibration factor was considered.
Standard calibration factor with 99mTc
The measured sDT of the GE camera was 1.66 μs, and the window fraction in this configuration was 46%, resulting in an aDT of 3.6 μs. With sDT, correction factors ranged from 1.015 for the last 10% background acquisition (with the lowest count rate) to 1.048 for the first 50% background acquisition (with the highest count rate). With aDT, the range of correction factors was larger than with sDT: from 1.034 to 1.112 under the same conditions. In other words, in the highest count rate configuration, 11.2% of events are estimated to be lost with aDT, compared to only 4.8% with sDT.
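As an illustration (an assumption about how the quantities quoted above relate, not the authors' code), the apparent dead time can be recovered from the system dead time and the window fraction, and a paralyzable-model correction factor computed for a given observed count rate:

import math

sDT = 1.66e-6                 # system dead time [s], value quoted above
wf = 0.46                     # window fraction, value quoted above
aDT = sDT / wf                # ~3.6e-6 s, consistent with the aDT quoted above

def paralyzable_correction(observed_rate, tau, n_iter=100):
    """Multiplicative correction n/m for a paralyzable detector with dead time tau,
    solving m = n * exp(-n * tau) for the true rate n by fixed-point iteration."""
    n = observed_rate
    for _ in range(n_iter):
        n = observed_rate * math.exp(n * tau)
    return n / observed_rate

# Illustrative observed count rate (placeholder, not a value from the study):
print(round(aDT * 1e6, 2), round(paralyzable_correction(28.0e3, aDT), 3))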
Regardless of the background level, the Sstd values remained stable, with a coefficient of variation in the range 0.11–0.23%, showing good repeatability. The calibration factor was 708 cps/MBq with a COV of 0.96% with aDT correction, and 682 cps/MBq with a COV of 2.39% with sDT correction. Figure 5 displays the Sstd values for the different configurations and the associated uncertainty (3σ of repeated measurements). We observed a slight decrease of 2.54% in the calibration factor as the background level, and hence the count rate, increases. In the following, only the aDT calibration is considered and compared to the image-based method.
Calibration factor and uncertainties with correction of aDT (blue) and sDT (red). Error bars correspond to 3σ on repeated measurements
Test case 1: Evaluation of image-based method on phantoms experiments
As expected, when all the activity is inside the FOV, results with the image-based method are similar to those with the standard calibration. Figure 6 gives the quantification error with both methods, on the whole image and for the different bags. We grouped the bags according to the attenuation condition: attenuating, non-attenuating, and intermediate. On the whole image, the image-based method leads to no quantification error by construction, since all the activity in the FOV is used to calculate Simb. With the standard method, the relative error is also very small (0.65%). In the subregions of interest, both methods give relative errors of less than 6%. The difference in relative errors between the two methods was small (0.54–0.68%) compared to the uncertainty associated with the standard method (± 3%).
Quantification error in the different configurations and regions with the image-based method (red) and the standard calibration (green)
Test case 2: Evaluation of the FAF method
Figure 7 illustrates the attenuation correction of the WBS. Table 4 shows the estimated and true FAF for the three experiments 2a, 2b, and 2c. Absolute errors were less than 2% in all configurations. Figure 8 depicts the differences between the standard and image-based FAF quantification for the three experiments and for three ROIs: the whole image, the Jaszczak phantom part, and the cylinder. The relative errors compared to the ground truth range from −6.18 to 5.08% with the standard calibration method and from −6.87 to 3.16% with the image-based FAF method. Again, the differences between the two methods are within the uncertainty of the standard calibration (± 3%).
Illustration of the AC in the WBS. From left to right: a coronal view of the phantoms used, the geometrical mean of the planar scintigraphy corrected for scatter, the attenuation correction factor map obtained from the CT, and the WBS corrected for scatter and attenuation
Quantification error for the different configurations and ROI with the image-based FAF method (filled) and the standard calibration (dot) compared to the ground truth
Table 4 Evaluation of fraction of activity in the FOV test
The calibration factor obtained with the standard method was 1352 cps/MBq. The image-based calibration factors as well as the relative difference with the standard method are given in Table 5. The average difference was 3.64% with a standard deviation of 4.46%. Differences up to 9% were observed.
Table 5 Comparison of the calibration factors obtained with standard method and image-based FAF method, computed on patient data
No correlation was found between the personalized calibration factor and patient weight, i.e., scattering volume (R2 = 0.04). In some cases, the patients' arms were not in the same position during the WBS (arms along the body) and the SPECT/CT (arms above the head) acquisitions because of patient comfort issues. Therefore, the 2D attenuation map from the CT does not perfectly match the WBS image, leading to a slight underestimation of the activity in the arms. Agreement between SFAF and Sstd seems better when the patients' arms remain in the same position, despite the small amount of activity present there.
Repeated acquisitions with the standard calibration protocol would have been necessary to evaluate the Sstd uncertainty and assess the significance of the relative difference between the two methods.
The method assumes that all injected activity is present in the planar images; image acquisitions must therefore be performed before any biological elimination (in particular urination). Moreover, the method assumes that the activity in the SPECT FOV can be estimated from the planar images. Since WBS and SPECT/CT acquisitions are performed successively within 10 min, we assume that the redistribution of activity within the body between the two images is negligible. Also, WBS images are not corrected for radioactive decay during the acquisitions, since the acquisitions are much shorter than the radionuclide half-lives.
Note that the calibration factors reported here are higher than the typical values found in SPECT camera specifications. We observed that enabling the resolution recovery option of the reconstruction algorithm in Xeleris 3.0 leads to larger values. A calibration factor may be applied internally, but it is not described in the manufacturer's documentation. For patient data, SFAF was computed with the average Amean over the two-step acquisition rather than with one value per step. Since Amean decreases very slightly between steps (less than 0.3%), the error propagated to SFAF is negligible. Scatter correction relied here on a DEW method, but other approaches could also be used [14].
Dead-time correction with the aDT method was important, reducing the coefficient of variation from 2.4% with sDT to less than 1% with aDT. This may be particularly important, for example, for 177Lu therapies where a large activity (2.5–7.5 GBq) is injected. Here, ωf and aDT were determined from 99mTc and applied to 111In. Given the injected activities and the low count rate observed in the patient study, the impact of the dead-time correction factors here was negligible (around 1%). A slight dependence of the 99mTc calibration factor on the count rate was still observed (< 1%); it may be due to the use of a purely paralyzable dead-time model, as recommended in the literature [20], rather than a hybrid model.
We showed that a reliable estimation of the Fraction of Activity in the Field of View, with less than 2% error, may be obtained with the proposed method involving standard planar WBS acquisitions.
Overall relative quantification errors were below 7% in various acquisition conditions. This represents a level of accuracy typically found in the literature for SPECT and PET quantification [1, 6].
The image-based FAF calibration method does not require specific phantom acquisitions and is intrinsically adapted to the acquisition conditions at hand: reconstruction parameters, radionuclide, and attenuation configuration. It could limit systematic biases that may occur with a standard calibration protocol. In practice, we advocate the use of these two independent calibration methods, phantom-based and image-based, for quality assurance.
Bailey DL, Willowson KP. Quantitative SPECT/CT: SPECT joins PET as a quantitative imaging modality. Eur J Nucl Med Mol Imaging. 2013; 41(1):17–25.
Ritt P, Vija H, Hornegger J, Kuwert T. Absolute quantification in SPECT. Eur J Nucl Med Mol Imaging. 2011; 38(SUPPL. 1):69–77.
D'Arienzo M, Cazzato M, Cozzella ML, Cox M, D'Andrea M, Fazio A, Fenwick A, Iaccarino G, Johansson L, Strigari L, Ungania S, De Felice P. Gamma camera calibration and validation for quantitative SPECT imaging with 177Lu. Appl Radiat Isot. 2016; 112:156–64.
Dewaraja YK, Frey EC, Sgouros G, Brill AB, Roberson P, Zanzonico PB, Ljungberg M. MIRD pamphlet no. 23: quantitative SPECT for patient-specific 3-dimensional dosimetry in internal radionuclide therapy. J Nucl Med. 2012; 53(8):1310–25.
Willowson K, Bailey DL, Baldock C. Quantitative SPECT reconstruction using CT-derived corrections. Phys Med Biol. 2008; 53(12):3099–112.
Zeintl J, Vija AH, Yahil A, Hornegger J, Kuwert T. Quantitative accuracy of clinical 99mTc SPECT/CT using ordered-subset expectation maximization with 3-dimensional resolution recovery, attenuation, and scatter correction. J Nucl Med. 2010; 51(6):921–8.
Bailey D, Willowson KP. An evidence-based review of quantitative SPECT imaging and potential clinical application. J Nucl Med. 2013; 54(1):83–9.
Frey EC, Humm JL, Ljungberg M. Accuracy and precision of radioactivity quantification in nuclear medicine images. Semin Nucl Med. 2012; 42(3):208–18.
Cremonesi M, Ferrari M, Bartolomei M, Orsi F, Bonomo G, Aricò D, Mallia A, De Cicco C, Pedroli G, Paganelli G. Radioembolisation with 90Y-microspheres: dosimetric and radiobiological investigation for multi-cycle treatment. Eur J Nucl Med Mol Imaging. 2008; 35(11):2088–96.
Pacilio M, Ferrari M, Chiesa C, Lorenzon L, Mira M, Botta F, Torres LA, Perez MC, Gil AV, Basile C, Ljungberg M, Pani R, Cremonesi M. Impact of SPECT corrections on 3D-dosimetry for liver transarterial radioembolization using the patient relative calibration methodology Impact of SPECT corrections on 3D-dosimetry for liver transarterial radioembolization using the patient relative calib. Med Phys. 2016; 43(7):4053–64.
Strigari L, Sciuto R, Rea S, Carpanese L, Pizzi G, Soriani A, Iaccarino G, Benassi M, Ettorre GM, Maini CL. Efficacy and toxicity related to treatment of hepatocellular carcinoma with 90Y-SIR spheres: radiobiologic considerations. J Nucl Med. 2010; 51(9):1377–86.
Ljungberg M, Celler A, Konijnenberg MW, Eckerman KF, Dewaraja Y, Sjögreen-Gleisner K. MIRD Pamphlet No. 26: joint EANM/MIRD guidelines for quantitative 177 Lu SPECT applied for dosimetry of radiopharmaceutical therapy. J Nucl Med. 2016; 57(1):151–63.
Patton JA, Turkington TG. SPECT/CT: physical principles and attenuation correction. J Nucl Med Technol. 2008; 36(1):1–11.
Hutton BF, Buvat I, Beekman FJ. Review and current status of SPECT scatter correction. Phys Med Biol. 2011; 56(14):85–112.
Frey E, Tsui B. Collimator-detector response compensation in SPECT. In: Quantitative Analysis in Nuclear Medicine Imaging. Boston: Springer; 2006. Chapter 5, p. 141–66.
Cherry SR, Sorenson JA, Phelps ME. Physics in Nuclear Medicine. Fourth edition. Philadelphia: Elsevier Saunders; 2012.
Sandström M, Garske U, Granberg D, Sundin A, Lundqvist H. Individualized dosimetry in patients undergoing. Eur J Nucl Med Mol Imaging. 2010; 37(2):212–25.
King M, Farncombe T. An overview of attenuation and scatter correction of planar and SPECT data for dosimetry studies. Cancer Biother Radiopharm. 2003; 18(2):181–90.
Sarrut D, Badel JN, Halty A, Garin G, Perol D, Cassier P, Blay JY, Kryza D, Giraudet AL. 3D absorbed dose distribution estimated by monte carlo simulation in radionuclide therapy with a monoclonal antibody targeting synovial sarcoma. EJNMMI Phys. 2017; 4(1):6.
Adams R, Hine GJ, Zimmerman CD. Dead-time measurements in scintillation cameras under scatter conditions simulating quantitative nuclear cardiography. J Nucl Med. 1978; 19:538–45.
The publication of this article was supported by the funds of the European Association of Nuclear Medicine (EANM).
This study was partly funded by LYric INCa-DGOS-4664 and Labex PRIMES ANR-11-LABX-0063.
The data and materials are available on request.
Univ Lyon, INSA-Lyon, Université Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69008, France
Adrien Halty, Olga Kochebina & David Sarrut
Univ Lyon, Centre Léon Bérard, Lyon, 69008, France
Jean-Noël Badel
All authors contributed equally to this work. All authors discussed the results and implications and commented on the manuscript. All authors read and approved the final manuscript.
Correspondence to Adrien Halty.
This article does not report a patient study. However, the methodology was applied to patient data from the clinical trial named Synfrizz, which was previously approved by local authorities (ANSM; ClinicalTrials.gov Identifier: NCT01469975).
There is no publication of individual person's data in this report.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Targeted radionuclide therapy
Absorbed dose estimation
SPECT calibration
A survey of submesoscale currents
James C. McWilliams (ORCID: orcid.org/0000-0002-1237-5008)
Geoscience Letters volume 6, Article number: 3 (2019)
Submesoscale currents are pervasive throughout the ocean. They have intermediate space and time scales—neither mesoscale nor microscale—that have made them elusive for measurements and modeling until recently. In this brief article, a survey is presented of their primary characteristics and interpretive explanations, intended for a broad audience of physical and biogeochemical oceanographers. Besides their identifying scales, submesoscale currents are distinctive in their flow patterns, their essential dynamical processes, and their consequences for transport, mixing, and dissipation in the general circulation. There are two primary submesoscale populations, a frontal one in the near-surface layer with its typically reduced stratification, and another vortical one, generated in topographic wakes, that (sparsely) fills the oceanic interior.
The ocean is highly variable: just ask any sailor or sea-going experimentalist. Among its modes of variability is a class of phenomena that have come to be called submesoscale currents (abbreviated here as SMCs). The SMC name arose in relation to the widely familiar mesoscale eddies that contain the greatest fraction of kinetic energy in the ocean; i.e., SMCs are the next size class down from the eddies, with typical horizontal lengths from tens of meters to about ten kilometers, vertical heights of ten to hundreds of meters, and evolutionary time scales of hours to days (Footnote 1). The name is also apt in the sense that the primary source for SMC energy comes from mesoscale eddies by a downscale transfer. The principal SMC generation mechanisms are (1) extraction of available potential energy (due to horizontal buoyancy gradients) in the weakly stratified surface layer either through baroclinic instability or frontogenesis, and (2) topographic-drag vorticity generation in flows along a sloping bottom, followed by boundary current separation and wake instability. These phenomena partly manifest a loss of hydrostatic, geostrophic momentum balance, and they exhibit a turbulent energy cascade forward toward even smaller scales. Thus, SMC dynamics typically go beyond quasigeostrophy, which is the generally successful theoretical framework for mesoscale eddies, while still being strongly influenced by Earth's rotation and the generally stable density stratification in the ocean. That is, the Rossby and Froude numbers:
$$\begin{aligned} Ro = \frac{V}{f\ell } \quad \mathrm {and} \quad Fr = \frac{V}{Nh}, \end{aligned}$$
are typically order-one parameters, rather than their usually small values for mesoscale eddies (V is a characteristic horizontal velocity scale, \(\ell\) and h horizontal and vertical length scales, f the Coriolis frequency, and N the Brunt–Vaisala or stratification frequency). Thus, a Rossby number Ro measures the magnitude of momentum advection relative to the Coriolis force, and a Froude number Fr measures the ratio of an advecting velocity to the phase speed of an internal gravity wave. Small values for these numbers indicate the dynamical importance of rotation and stratification. Nevertheless, the evolution of SMCs is primarily through advection, which distinguishes it from the internal gravity waves that can occupy similar scale ranges in the ocean; their mutual influence is a topic of current research, but, in some first approximation, their interactions are weak.
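As a concrete illustration (the numerical values are typical orders of magnitude chosen for this sketch, not measurements), the following computes Ro and Fr for a submesoscale front:

V = 0.3          # horizontal velocity scale [m/s]
ell = 1.0e3      # horizontal length scale [m]
h = 50.0         # vertical length scale [m]
f = 1.0e-4       # mid-latitude Coriolis frequency [1/s]
N = 5.0e-3       # Brunt-Vaisala frequency [1/s]

Ro = V / (f * ell)   # = 3: advection comparable to (or exceeding) the Coriolis force
Fr = V / (N * h)     # = 1.2: advection comparable to the internal-wave phase speed
print(Ro, Fr)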
The science of SMCs has blossomed in recent years. The delay, compared to other more familiar types of oceanic currents, is mainly due to technical barriers. The SMC space and time scales are awkwardly in between the finer scale sampling from ships, sparse buoys, and floats, and the larger scale sampling from most satellite sensors; their simulation requires large computations that encompass both the mesoscale and submesoscale ranges; and the relevant theories involve difficult nonlinear dynamics. The more recent empowering technologies are high-resolution surface images, multiply-nested computational simulation methods, and, in a few instances, massive swarms of surface drifters (as in the CARTHE experiments in the Gulf of Mexico). As yet there is no widely deployable SMC measurement technique for the subsurface ocean, so simulations are leading the way. Autonomous gliders and ship-towed instruments can provide submesoscale spatial sampling along their tracks, but they are limited to two dimensions and thus often have difficulty in distinguishing SMCs and inertia-gravity waves.
Figure 1 is a diagram of the flow of information and energy in the global oceanic circulation (Footnote 2). Their originating source is forcing by surface winds and air–sea buoyancy fluxes at the energetic scales of the atmosphere, i.e., mostly on the planetary scale comparable to the size of oceanic basins. The direct oceanic response is oceanic currents on the basin and inter-basin scales, including their narrower transport closures as boundary and equatorial currents. On the other hand, the sink is energy dissipation and information loss that can only be completed at the microscale due to molecular viscosity and conductivity. Currents of different types must connect the source and sink across the intervening scales and dynamical regimes. The first step is geostrophic instabilities of the forced circulation, yielding mesoscale eddies. Through the force-balanced constraints of geostrophic and hydrostatic balance, they are inhibited from further transfers to smaller scales; i.e., they have an inverse cascade of energy (Charney 1971). Leakages out of the mesoscale eddies by partial violation of these force balances continue downscale. Three middle-scale "routes" are depicted in Fig. 1: (1) spontaneous emission of inertia-gravity waves from currents, either in the interior or as bottom lee waves, followed ultimately by energy transfer to smaller scales; (2) partly ageostrophic instabilities and forward energy cascade of non-wave (partly balanced) currents; and (3) turbulent bottom drag on currents that generates both bottom boundary-layer turbulence and topographic vortical wakes. Below this middle-scale range, three-dimensional turbulence completes the connection to the microscale. While no accurate global accounting of these three routes is yet available, the SMC role in the latter two routes is almost certainly a dominant one. Along the pathway toward the microscale, the character of the currents changes from being highly anisotropic with \(h/\ell \ll 1\) and relatively small vertical velocity to approaching isotropy with \(h \sim \ell\), as in Kolmogorov's paradigm for universal turbulent behavior at high Reynolds number. In addition, the local Ro and Fr systematically increase as h and \(\ell\) decrease.
Stages in the oceanic general circulation from planetary scale forcing to microscale dissipation and mixing: climate forcing by wind stress and air-sea buoyancy flux; balanced flow dynamics (e.g., mostly geostrophic and hydrostatic); submesoscale transitional dynamics; and microscale flows with only a weak influence from Earth's rotation, a mostly unbalanced momentum dynamics with large accelerations, and an approximate equivalence of vertical and horizontal lengths scales, \(h \sim \ell\). The Rossby and Froude numbers, Ro and Fr, start from very low values at the planetary scale, pass through \(\mathcal {O}(1)\) values within the submesoscale regime, and end up with large values at the microscale
The paper is organized by considering the two SMC surface-layer and topographic populations separately in "Lines on the surface" and "Topographic wakes" sections, and it ends with a summary in "Final remarks" section.
This article is adapted from a lecture given at the 2018 annual meeting of the Asia Oceania Geosciences Society (AOGS) in Honolulu, HI. It is intended for a broad audience, and it is more about my experiences and opinions, with some illustrations, than about the full evidence and literature behind them. A more extensive and scholarly review article is McWilliams (2016); this paper is intended to complement it for a more general audience.
Lines on the surface
SMCs can be difficult to discern in single vertical profiles or time series because of potential confusion with other types of flows. Among the most useful observations are surface images, especially those with high horizontal resolution (subsurface images are rare, but see Fig. 14). By the addition of an extra dimension and with dense sampling, patterns emerge, and with experience, they can be interpreted for their underlying phenomena. Images in a horizontal plane (i.e., at the oceanic surface) allow a recognition of sharp convergence lines and small, horizontally recirculating vortices. Several examples are presented in this section.
Besides the recognition of lines of large horizontal density gradients (i.e., fronts), many of these lines exhibit meanders, which is suggestive of frontal instability, likely due to the associated vertical or horizontal shear in the mixed layer. The former is a type of baroclinic instability but with preferred length scales that are in the submesoscale range due to the small surface-layer baroclinic deformation radius (Boccaletti et al. 2007; Fox-Kemper et al. 2008). The latter is a form of classical shear instability that is engendered by the sharpening horizontal gradients caused by active frontogenesis. Both types can be partly ageostrophic because of the large value of the Rossby number.
A snapshot of sea surface temperature (SST) within the California Current System shows both mesoscale eddies and associated submesoscale fronts, filaments, instabilities, and vortices (Fig. 2). A front is defined as a sharp horizontal gradient in density with an extensive central axis in the perpendicular direction (i.e., a line along the surface). A filament is similar, except that it is a narrow horizontal extremum in density. Both light and dense filaments are possible, but dense filaments have much stronger ageostrophic circulation and more rapid frontogenesis, and, hence, are more common in the ocean (McWilliams et al. 2009; Adams et al. 2017). These submesoscale density patterns have associated geostrophic flows along the axis and ageostrophic flows, especially in the cross-axis plane.
Sea surface temperature of California. Satellite image on June 3, 2006 (NOAA Coastwatch). Notice the mesoscale contrasts over \(\approx\) 200 km and the submesoscale sharp fronts and filaments on a scale of \(\approx\) 5–10 km. In addition, evident are frontal instabilities (wiggles) and a roll-up into a coherent vortex (in the northeast corner)
A sun-glint reflection pattern in the Mediterranean Sea (Fig. 3) reveals a dense family of lines on the surface. They are due to high concentrations of buoyant surfactants (in this case biogenic scum) that are gathered into narrow lines along the front or filament axes by surface ageostrophic convergence with downwelling jets underneath. The surrounding submesoscale flows give a pattern organization to the lines, thus serving as a means of flow visualization. In this example, at least two scales of SMCs are present. The larger scale of kilometers is a vortex street of a sort which one might plausibly associate with the late stage of an instability of a lateral shear layer. The smaller scale of tens to hundreds of meters is the lines themselves. Within the vortices, the lines have an inwardly spiraling pattern that exposes both cyclonic swirl and a central convergence in the vortex cores. This led Walter Munk to refer to these as "spirals on the sea" (Munk et al. 2000). During the manned space flight era, many examples were photographed by the astronauts. Outside the vortices, the lines indicate a generally larger scale (mesoscale) flow pattern.
Spirals on the sea. Sun-glint pattern in a photograph by astronaut Scully-Power (1986) over the Mediterranean not far off the coast of North Africa. The lines are created by surfactants concentrated in convergence lines that alter the scattered reflection by short surface gravity waves. Their patterns are organized by submesoscale currents. The vortex diameters are \(\approx\) 5 km, and the surfactant lines are \(\approx\) 100 m wide. The pattern suggests that a vortex-street roll-up has occurred from a lateral shear instability of some antecedent front, filament, or headland wake
A closer view of an SMC convergence line is in Fig. 4, a photograph taken from a ship in the Gulf of Mexico. Here, it is floating seaweed that is gathered along the axis. The cross-line scale is a few tens of meters, and the along-axis scale reaches to the horizon. This example is from a study whose main content was satellite images showing a dense network of such lines spanning a wide region of hundreds of kilometers in the central Gulf of Mexico (Gower et al. 2006).
Surface convergence line. A fantail view of a \(\approx\) 10 m-wide line of buoyant sargassum weed in the Gulf of Mexico concentrated by a submesoscale frontal secondary circulation (Gower et al. 2006). Notice the along-front extent away into the distance, with a suggestion of frontal meanders or instabilities
A more detailed view of a cyclonic spiral vortex comes from a satellite color image showing plankton patterns in the Baltic Sea (Fig. 5). The plankton has high concentrations where gathered into surface convergence lines that are mostly dense filaments. The pattern interpretation is similar to Fig. 3, except that, here, the SMC vortex appears to be somewhat isolated in space. In other such examples, the lines are more abundant and the identifiable vortices are rarer [e.g., in McWilliams (2016)]; this regime is called the "submesoscale soup", perhaps, with vermicelli in mind (e.g., the regions away from the Gulf Stream in Fig. 6).
Submesoscale surface vortex. A satellite image of plankton concentrated in surface convergence lines in the Baltic Sea. The lines indicate a submesoscale central cyclone with spiral arms that are dense filaments. A similar behavior is seen in a set of convergent surface drifter trajectories, first into an arm and then into the cyclone center, in the Gulf of Mexico (D'Asaro et al. 2018), but in a separate event from that depicted in Fig. 4
Simulated offshore Gulf Stream. Vertical vorticity (\(\zeta = \partial _x v - \partial _y u\)) normalized by f at the surface in the wintertime Gulf Stream after separation from the western boundary in a nested-subdomain simulation (Gula et al. 2015). Notice the meandering Gulf Stream in the center, the northern warm anticyclonic and southern cold cyclonic mesoscale Rings, and the nearly ubiquitous submesoscale features of many different types, including the typical open-sea "soup" away from strong mesoscale currents
Such images and photographs are highly informative, but their information is intrinsically subjective. It is quite difficult to get in situ measurements that cover the indicated patterns, though there have been some successes. More generally, however, the quantitative science of SMCs has been advanced by computational simulations. An example for the offshore Gulf Stream is Fig. 6. The experience has been that, in realistic simulations with active mesoscale eddies and associated horizontal density gradients, a sufficiently fine-grid resolution will lead to the spontaneous emergence of SMCs that first arise in the weakly stratified surface layer. The necessary resolution varies with conditions (e.g., region or season), but it is around \(dx \approx 1\) km; such simulations can be referred to as "submesoscale-permitting", because the full range of submesoscale variability extends down to \(\approx\) 10–100 m, and the latter would have to be reached to be fully "submesoscale-resolving". Nevertheless, simulations show approximately self-similar scaling behavior when dx is varied within this submesoscale range. The associated kinetic-energy horizontal-wavenumber spectra are relatively shallow, \(E(k) \sim k^{-\,\gamma }\), with \(\gamma \approx 2\), where k is the horizontal wavenumber. This differs from simulations and altimetric sea-level measurements that show generally steeper spectra (larger \(\gamma\) values) in the mesoscale range. The value of the simulation results is mainly in the phenomenological discoveries they have enabled. This is illustrated in Fig. 6 by the variety of different SMC patterns associated with different mesoscale environments. Once the phenomenology is known, then detailed diagnostic analyses and theoretical explanations can be adduced.
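A minimal sketch of how such a spectral slope \(\gamma\) can be estimated from a simulated (or, here, synthetic) kinetic-energy spectrum is given below; the wavenumber band and the synthetic spectrum are illustrative only, not drawn from the cited simulations.

import numpy as np

def spectral_slope(k, E_k, k_min, k_max):
    """Least-squares slope gamma of E(k) ~ k**(-gamma) in log-log space over [k_min, k_max]."""
    band = (k >= k_min) & (k <= k_max)
    slope, _ = np.polyfit(np.log(k[band]), np.log(E_k[band]), 1)
    return -slope

k = np.logspace(-5, -2, 200)          # horizontal wavenumber [1/m]
E_k = 1.0e-3 * k ** -2.0              # synthetic submesoscale-like spectrum
print(spectral_slope(k, E_k, 1.0e-4, 1.0e-2))   # ~2.0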
One dynamical frontier for submesoscale simulations is the onset of essentially non-hydrostatic behavior. Most SMC simulations to date are made with hydrostatic models. Some estimates for this lower size limit for SMCs are where frontogenesis is arrested by frontal instability and/or where the currents in the forward energy cascade reach scales where rotation and stratification influences cease to be significant (i.e., relevant Ro and Fr values are large; Sullivan and McWilliams 2018). Both estimates would yield a horizontal scale in the 10–100 m range. Whether, in fact, there are important non-hydrostatic effects on SMCs at larger scales remains to be further tested.
In the surface layer, the principal energy source for SMCs is the available potential energy associated with horizontal density gradients on larger scales. This energy source can be tapped either by baroclinic instability (often called mixed-layer instability when confined to the weakly stratified surface layer) or by frontogenesis. Frontogenesis is a familiar concept from meteorology because of its frequent manifestation on surface weather maps. The prevailing meteorological interpretation is that fronts are caused by the horizontal strain rate in what is called a deformation flow (e.g., confluence in a horizontal plane) on a larger scale. A sketch of the associated surface frontal structure is in Fig. 7 for both density-front and dense-filament configurations. The horizontal buoyancy (i.e., \(b = - g \rho /\rho _0\), where g is gravity, \(\rho\) is density, and \(\rho _0\) a mean value) gradient will initially sharpen at an exponential rate \(\sim \exp [\alpha t]\) as a function of time t, if the gradient is favorably aligned in relation to a barotropic deformation flow with a uniform strain rate, no horizontal divergence, and no vorticity; i.e., \(\mathbf{u}= (u_d,v_d,0)\) with \(u_d = - \, \alpha x / 2\) and \(v_d = \, \alpha y / 2\), where \(\alpha\) is the horizontal strain rate and (x, y, z) and \((u,\ v,\ w)\) are the coordinates and velocity components in the (east, north, and upward) directions (Fig. 7a). Because of this density structure, there is an associated circulation, both along the axis and mostly geostrophic in v, and in the cross-axis plane with ageostrophic u and w. If the front is uniform along the axis or if an average is taken in this direction (denoted by angle brackets), then the cross-axis velocity is 2D non-divergent, and it can be represented with a secondary-circulation streamfunction \(\Phi\) defined by the following:
$$\begin{aligned} (\langle u \rangle , \ \langle w \rangle ) = (-\, \partial _z, \ \partial _x ) \Phi \,. \end{aligned}$$
For a front, \(\Phi\) is a positive monopole indicating a closed circulation loop with upwelling on the light side and surface flow toward the dense side. For a dense filament, \(\Phi\) is a dipole, with strong central downwelling. Both patterns imply that \(wb > 0\), i.e., the conversion of available potential energy to the SMC kinetic energy. Once \(\nabla b\) is strong enough, and \(\Phi\) is large enough, then the frontogenetic rate increases due to surface horizontal convergence of the secondary circulation, \(- \, \langle u_x \rangle \ > \ 0\), and it further amplifies until some other process arrests the frontogenesis. The frontal arrest process (discussed at the end of this section) sets the lower size limit for SMCs.
Frontogenesis by strain. Sketch of surface-layer frontogenesis caused by a larger scale (mesoscale) deformation flow for a front (top) and dense filament (bottom) (McWilliams 2016). There is a geostrophic along-front flow and an ageostrophic secondary circulation in the cross-front plane. With finite Ro, as typical of SMCs, the downwelling and cyclonic vorticity zones are stronger than the upwelling and anticyclonic zones
Strain-induced frontogenesis can occur in the ocean, with the principal strain and buoyancy gradients associated with mesoscale eddies or strong currents. More commonly in simulations, however, the dynamical character of the submesoscale fronts is consistent with a combination of a surface density gradient and vertical momentum mixing by boundary-layer turbulence. This situation is called Turbulent Thermal Wind (TTW), which has a linear, steady, surface-layer, incompressible approximation in its horizontal momentum and continuity balances:
$$\begin{aligned} -\, \partial _z [\nu _v \partial _z u] - \, f v= & {} - \,\partial _x \int ^z b \, \mathrm{d}z\nonumber \\ -\, \partial _z [\nu _v \partial _z v] + \, f u= & {} - \, \partial _y \int ^z b \, \mathrm{d}z\nonumber \\ \partial _x u + \partial _y v + \partial _z w= & {} \, 0 \,, \end{aligned}$$
where \(\nu _v\) is the vertical eddy viscosity associated with boundary-layer turbulence. Without the buoyancy gradient, this would describe an Ekman layer. Without the turbulent mixing, it would describe a geostrophic current in thermal wind balance. Together, they describe the mixed geostrophic and ageostrophic currents associated with a given \(\nabla b\) and \(\nu _v\) (McWilliams 2017). The TTW b and \(\Phi\) fields for a surface front are shown in Fig. 8. Interestingly, the monopole \(\Phi\) pattern is qualitatively the same as for a front in a deformation flow (Fig. 7), and the same similarity occurs for dense filaments. Thus, the TTW circulations are also frontogenetic due to the surface convergence on the dense side or center. The last panel in Fig. 8 shows the Lagrangian tendency for the SMC horizontal shear variance, \(T^\mathbf{u}\ = \ D|\nabla \mathbf {u}|^2/Dt\): it is strongly positive on the upper dense side of the front. Thus, differential advection by the secondary circulation is the cause of frontogenesis in both the buoyancy gradient and velocity shear.
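A minimal numerical sketch of this TTW balance is given below. The configuration is my own simplification (constant \(\nu _v\), depth-independent \(\partial _x b\), \(\partial _y b = 0\), no wind stress at the surface, no slip at the base of the layer), not the setup of McWilliams (2017); writing \(W = u + iv\) reduces the TTW balance above to a single complex ODE, solved here with finite differences.

import numpy as np

f, nu, dbdx = 1.0e-4, 1.0e-2, 1.0e-7      # Coriolis [1/s], eddy viscosity [m^2/s], db/dx [1/s^2]
H, n = 60.0, 121                          # layer depth [m], number of grid points
z = np.linspace(-H, 0.0, n)
dz = z[1] - z[0]

# Right-hand side: -(d/dx) int_{-H}^{z} b dz' for a depth-independent db/dx
rhs = (-dbdx * (z + H)).astype(complex)

A = np.zeros((n, n), dtype=complex)
for k in range(1, n - 1):                 # interior rows: -nu W'' + i f W = rhs
    A[k, k - 1] = A[k, k + 1] = -nu / dz ** 2
    A[k, k] = 2.0 * nu / dz ** 2 + 1j * f
A[0, 0] = 1.0; rhs[0] = 0.0               # no slip at z = -H
A[-1, -1], A[-1, -2] = 1.0, -1.0; rhs[-1] = 0.0   # zero stress (dW/dz = 0) at z = 0

W = np.linalg.solve(A, rhs)
u, v = W.real, W.imag                     # cross-front (ageostrophic) and along-front profiles
print(float(u[-1]), float(v[-1]))         # surface values [m/s]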
Frontogenesis by turbulent thermal wind (TTW). The buoyancy field b (left), ageostrophic secondary-circulation streamfunction \(\Phi\) (center), and Lagrangian, velocity-gradient frontogenetic tendency, \(T^\mathbf{u}\ = \ D|\nabla \mathbf {u}|^2/Dt\) [10\(^{-13}\) s\(^{-3}\)] (right), for an idealized 2D surface front with vertical mixing (McWilliams 2017). The thick black line, pointed to by the arrow on the left panel, is the boundary-layer depth, and the curved line in the center panel indicates the direction of the secondary circulation. The horizontal convergence on the dense side near the surface (i.e., the upper left region in the buoyancy b(x, z) in left panel induces frontogenesis in both \(\nabla b\) and \(\nabla \mathbf {u}\)
The simulations discussed above are made with what is called an oceanic circulation model, designed to calculate currents on larger scales, starting from global and working downward as computational capacity allows. Most circulation models make the hydrostatic approximation, which seems generally safe to within the submesoscale-permitting regime, apart from whatever high-frequency internal gravity waves might arise. In such a model, small-scale turbulent mixing is parameterized, as with the vertical eddy viscosity \(\nu _v\) in (3). Somewhere approaching the dynamical microscale, however, the validity of these simplifications will fail. A more complete dynamical model is a Large-Eddy Simulation (LES), commonly used, e.g., for turbulent boundary layers. With respect to SMCs, a large LES calculation that includes both an SMC and its microscale turbulence is a valuable tool for a more fundamental view of their mutual interaction. The following figures are taken from such an LES simulation (Sullivan and McWilliams 2018). It is posed for an isolated dense filament in an otherwise turbulent boundary layer (e.g., due to a surface wind stress and/or convective buoyancy flux). This is a TTW frontogenetic situation. Figure 9 shows the LES initial conditions for the along-axis-averaged temperature (buoyancy) and velocity field in the presence of a fully developed, turbulent, convective boundary layer. Their structure is similar to the filament structure in Fig. 7. From this state, rapid frontogenesis ensues, as seen in the same figure at a time 6 h later. The filament width has narrowed dramatically at the surface while broadening deeper in the layer. The velocity patterns are similar to their initial shapes, but deformed to have very sharp near-surface gradients at the filament center. In particular, the central \(\langle w \rangle < 0\) jet has amplified and narrowed substantially. Another view is provided by the cross-axis streamfunction (Fig. 10), whose horizontal convergence in the center similarly narrows and amplifies in accompaniment to the frontogenesis. The time of 6 h coincides with the peak frontal strength as measured by both the vertical vorticity, \(\langle \zeta \rangle = \partial _x \langle v \rangle\), and the downward jet, \(\langle w \rangle\) (Fig. 11). The duration of the frontogenesis period depends on how strong and wide the filament buoyancy gradient is initially and on the strength of the turbulent momentum mixing that supports the secondary circulation in a TTW momentum balance. This figure also shows that the turbulent kinetic energy (TKE; associated with velocity deviations from the along-front average) also amplifies inside the filament, reaching levels much higher than would occur in a boundary layer without the SMC.
Simulated filament frontogenesis by TTW. A fully turbulent "initial condition" (left) and 6 h later at the time of peak frontal strength and frontal arrest (right), in a Large-Eddy Simulation (LES) of a submesoscale dense filament in a boundary layer with surface cooling (Sullivan and McWilliams 2018). The (x, z) fields are averaged in the along-front y direction: temperature \(\langle \theta \rangle - \theta _0\), along-front velocity \(\langle v \rangle\), cross-front velocity \(\langle u \rangle\), and vertical velocity \(\langle w \rangle\). Units are \(^\circ\)C and m s\(^{-1}\), respectively. The apparent noise in \(\langle w \rangle\) is sampling error due to the finite domain size in y and the presence of much larger w values in the turbulent eddies
Filament secondary circulation. The associated cross-front streamfunction \(\Phi\) at three different times in the filament life-cycle: (top-to-bottom) initial condition for frontogenesis (\(t = 0\)), during the peak time of frontal arrest (\(t = 6\) h), and during an extensive period of frontal decay (\(t = 2\) days). Notice the strong, narrow surface convergence that is the cause of the frontal sharpness. This is for the same simulation as in Fig. 9 (Sullivan and McWilliams 2018)
The frontogenesis is arrested by the development of a submesoscale instability of the filament. In this example, the instability is associated with the amplifying horizontal shear in \(\langle v \rangle\), and evidently, its growth rate exceeds the frontogenetic rate once the filament is narrow enough. As the shear instability amplifies, its eddies have an opposing horizontal Reynolds stress divergence, \(-\, \partial _x \langle u'v' \rangle\), which counters the advective effect of the frontogenetic secondary circulation. This frontal arrest occurs when the frontal width is \(\approx 100\) m for this case, comparable to the boundary-layer depth. In the ocean, fronts are seen with a range of widths from meters to kilometers, so this particular final width is not universal. The SMC circulation and buoyancy structure persist for much longer than the arrest time, with a slow decay in its strength over several ensuing days (Figs. 10, 11). In the filament, after the submesoscale instability has arisen, a turbulent forward energy cascade occurs, illustrating the local pathway from submesoscale to microscale to dissipation (Fig. 1). The associated TKE wavenumber spectrum E(k) (Fig. 12) shows a broad range of variability from the submesoscale peak with its characteristic spectrum slope exponent of \(\gamma \approx 2\) into a more fully 3D range with a smaller value of \(\gamma \approx 5/3\), as expected for boundary-layer turbulence (Thorpe 2005; McWilliams 2016).
Filament life-cycle. The associated time series for peak values of the along-front averaged fields in the middle of the filament: peak vorticity normalized by the initial value (top), peak downwelling velocity normalized by the surface cooling scale \(w_*\) (middle), and peak turbulent kinetic energy normalized by \(w_*^2\). This is for the same simulation as in Fig. 9 (Sullivan and McWilliams 2018)
Filament spectrum during frontal arrest. The associated along-front horizontal-wavenumber \(k_y\) spectrum of turbulent kinetic energy E(k) and its vertical-velocity w component in the center of the filament at the time of frontal arrest (\(t = 6\) h; Sullivan and McWilliams 2018). The spectrum peak is associated with a lateral shear instability whose eddy momentum flux, \(\langle u' v' \rangle\), arrests the frontogenesis by the mean secondary circulation. It is accompanied by a forward cascade of energy to microscale dissipation with a characteristic spectrum slope \(\propto \ k_y^{-\,5/3}\). This is for the same simulation as in Fig. 9 (Sullivan and McWilliams 2018)
Thus, there is an intimate relation between submesoscale currents and boundary-layer turbulence near the surface, with the turbulence providing important mixing effects (e.g., in a TTW evolution) and the former providing important additional TKE excitation and modifying the mixing behavior. This is a research frontier that is almost completely wide open.
Topographic wakes
Wakes are a familiar fluid dynamical phenomenon: flow past an obstacle generates vorticity in a boundary layer and then separates in the lee; if the Reynolds number (Footnote 3) is not small, then the velocity shears within both the boundary layer and the wake are unstable and generate turbulence. The question is how to translate this for the ocean, which involves stratification, rotation, and bottom slopes rather than side walls. In some instances, the currents are diverted horizontally to approximately follow bathymetric contours without much vorticity generation, and in others, the flow is diverted vertically and generates internal gravity lee waves propagating vertically into the interior. However, here, my focus is on instances of significant vorticity generation that lead to unstable wakes, locally enhanced diapycnal mixing and energy dissipation, and formation of submesoscale coherent vortices (SCVs) that are advected into and widely populate the interior ocean (McWilliams 1985).
Because of rotation and stratification, the dynamics of currents is especially sensitive to the vertical vorticity \(\zeta = \hat{\mathbf {z}} \cdot \nabla \times \mathbf {u}\), and the Ertel potential vorticity, \(q = (f\hat{\mathbf {z}} + \nabla \times \mathbf {u}) \cdot \nabla b\), where the effect of \(\zeta\) is emphasized by its multiplication by what is usually the largest component of the buoyancy gradient, \(\partial _z b\) (\(\hat{\mathbf {z}}\) is the unit vector in the vertical direction). For currents in a boundary layer over a flat bottom, all the vorticity of the turbulence-averaged flow is horizontal. In contrast, currents in a boundary layer along a slope do generate averaged \(\zeta\) and q by the geometric argument depicted in Fig. 13: because the bottom boundary layer decreases an interior mean flow to zero at the sloping bottom, there must be an associated horizontal shear (i.e., vertical vorticity, \(\zeta ^z\)) along a horizontal line extending out into the interior. This is a flow-structure argument, and it needs to be extended to encompass the actual rate of \(\zeta ^z\) generation by the along-slope gradients in bottom stress. Nevertheless, the sketch indicates why along-slope near-bottom flows are necessarily a source of vertical vorticity of the flow, and vertical vorticity is a common ingredient in lateral (barotropic) shear instability with small Ro and Fr values. Furthermore, there is little impetus for currents to separate from a flat-bottom boundary against the gravitational barrier of a stable vertical stratification, whereas it is much more common for currents along a slope to separate while on an intersecting isopycnal surface, whether aided by boundary curvature or even spontaneously.
Drag-induced vorticity generation on a slope. Sketch of vorticity generation in an along-slope current V(x, z) for a uniform interior flow \(V_0\) and a turbulent bottom boundary layer over a slope with \(s = dz_b/dx\). The turbulent drag causes the bottom velocity to go to zero, leading to the local vertical profile V(z) (red) with a boundary-layer depth h and horizontal profile V(x) (blue) with vertical vorticity \(\zeta = dV/dx\) due to the velocity shear on the horizontal scale \(\ell _b = h/s\) (Molemaker et al. 2015)
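As a rough, illustrative estimate (the numbers are chosen for convenience here and are not taken from the cited simulations): for an interior along-slope flow \(V_0 = 0.1\) m s\(^{-1}\), a bottom boundary-layer depth \(h = 100\) m, and a slope \(s = 0.05\),
$$\begin{aligned} \ell _b = \frac{h}{s} = 2\ \mathrm{km}, \qquad \zeta \sim \frac{V_0}{\ell _b} = 5\times 10^{-5}\ \mathrm{s^{-1}} \approx 0.5\, f \end{aligned}$$
at mid-latitudes, so the vorticity shed into the wake is already comparable to f, i.e., Ro is order one, as expected for SMCs.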
SCVs are vortices in the interior with limited vertical extent (\(\approx\) 100s of meters). They typically trap material concentrations within their cores and often have lifetimes much longer than most individual mesoscale eddies. A particularly famous type is a Meddy (Mediterranean Eddy) SCV, formed from the warm, dense water that flows out of the Mediterranean Sea through the Strait of Gibraltar. The outflow current flows downhill as an entraining density current to a level of neutral buoyancy (around 1000 m depth), turns poleward as a boundary current, separates, and becomes unstable in the interior. An acoustic image of its temperature structure is in Fig. 14. It is an axisymmetric blob of warm water (hence high salinity for its given buoyancy) surrounded by a horizontally recirculating current whose maximum is at its middle depth and is in gradient-wind momentum balance with a central pressure anomaly. A Meddy is a relatively large type of SCV with a radius of \(\approx 20\) km and a half-height of several hundred meters.
Meddy SCV. An acoustic tomographic image of a cross section of T in a Mediterranean eddy (Meddy) offshore in the Eastern Atlantic (Papenberg et al. 2010). It is formed by an instability of the separating Iberian slope current fed by the Mediterranean outflow. Lifetimes are up to years, and a few are known to traverse the width of the Atlantic ocean
A different SCV example is Fig. 15, which shows horizontal float trajectories that recirculate many times around the vortex core with no indication of a significant decay in strength over the two-month sampling period. Separate hydrographic casts made through its core show large chemical property anomalies indicating both material trapping and a lifetime long enough to travel 1000 km or more (i.e., years).
Float track inside an atlantic SCV. Tracks of two acoustically tracked floats at 700 m depth near Bermuda (Riser et al. 1986). They were deployed at day 0 about 20 km apart and stayed close together over a period of more than 70 days while being advected by mesoscale currents. One of these was inside a SCV, the other not. Intersecting hydrographic profiles showed a large, trapped water mass anomaly in T–S and \(\hbox {O}_2\), indicating a subpolar origin for the SCV
The hypothesis is that SCVs are generated primarily in topographic wakes where the drag-induced \(\zeta\) and q are large enough to make the ensuing vortices strong enough to resist early disruption by encounters with other, weaker interior currents. Alternatively, a localized vertical mixing event in a stratified region, followed by geostrophic adjustment, can generate SCVs as well (McWilliams 1985; Bosse et al. 2017). Because bottom currents, stable stratification, and topographic slopes are ubiquitous in the ocean at all depths, SCVs are common, albeit occupying only a small volumetric fraction of the oceanic interior (one estimate for Meddies is \(< 10\%\) of the middle depths of the Eastern Subtropical Atlantic).
Model simulations support this hypothesis. In an idealized problem of a uniform, steady inflow past an isolated seamount, SCVs are generated whenever the seamount height is large enough (i.e., the slope is steep enough), the stratification is not weak, and the value of Ro is small (Fig. 16). Another example is a realistic simulation of the Subtropical Eastern-Boundary Current System west of North America. There, the poleward California Undercurrent flows along the continental slope. It manifests the drag-induced vorticity generation scenario described above. Where it separates, a strong centrifugal instability (Footnote 4) arises in the wake (with \(fq < 0\)), and submesoscale vortices emerge and mutually interact to form a California Undercurrent SCV (a Cuddy; Fig. 17). Many different Cuddies have been detected off the U.S. West Coast by acoustically tracked recirculating, trapped subsurface float trajectories.
Seamount wake. Snapshots of normalized vertical vorticity (left) and w (right) in horizontal planes for a simulated flow past a seamount. The upstream flow is steady and uniform, with \(V = 0.05\text { m/s}\) to the north. The bottom is flat at \(z = -\, 4000\) m away from the seamount that has a half-width of 10 km and a height of 600 m. Vertical vorticity is generated by drag on the slope (Fig. 15), the flow separates into an unstable wake, and the vortex filaments organize into coherent SCVs. In this instance, there is only weak lee gravity wave generation, as shown by the small w above the top of the seamount (Srinivasan et al. 2019)
Cuddy SCV generation by boundary current separation. California undercurrent eddies (Cuddies) form by separation of the California Undercurrent at the headland south of Monterey Bay, CA. This is a simulation snapshot of normalized vertical vorticity, \(\zeta /f\), at 150 m depth. Anticyclonic (blue) vorticity is generated by the bottom drag along the slope and flows north until it separates near Pt. Sur, CA. Centrifugal instability occurs, wake vortices emerge, and then, these smaller vortices merge into a larger Cuddy, seen here in the middle of the Bay at an intermediate stage of self-organization. Subsequently, Cuddies disperse into the interior Pacific by mesoscale currents (Molemaker et al. 2015)
These unstable wakes exhibit strong submesoscale turbulence with a forward energy cascade to dissipation and mixing of material concentrations both along and across density surfaces. Thus, this class of topographic SMCs can have widespread influences on the oceanic interior. In my view, it is important to explore these phenomena further, both computationally and observationally.
This paper focuses on the physical manifestations of SMCs, but there are also biogeochemical and ecological consequences associated with their material fluxes. One illustration is in Fig. 18, showing that the submesoscale nitrogen flux at the base of the euphotic zone acts to limit primary productivity in the eutrophic California Current System. In other more oligotrophic situations, SMCs can enhance productivity by bringing up nutrients from the interior nutricline (Mahadavan 2016). Both instances are due to the relatively large SMC vertical velocity w in the surface layer. More generally, SMCs enhance material exchanges between the turbulent boundary layers and the interior.
Submesoscale nutrient flux. A time series of a scatterplot of the product of w and inorganic N for all spatial points at 50 m depth in the California current in the region 20–200 km offshore. A submesoscale-permitting simulation (\(dx = 1\) km; red) shows much greater variability than a mesoscale-resolving one (\(dx = 4\) km; blue). In this eastern-boundary upwelling system, both simulations have a mean \(\overline{w'N'} < 0\), indicating "eddy quenching", i.e., a reduction in primary productivity by burial of unconsumed nutrients by eddy fluxes along the descending isopycnal surfaces. The SMCs enhance \(\overline{w'N'}\) by about 40% in this comparison (Kessouri et al. 2019).
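A sketch of the eddy-flux diagnostic referenced in the caption above; the arrays here are synthetic stand-ins for model fields at 50 m depth, not output from the cited simulations:

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0e-4, 10_000)                     # vertical velocity [m/s]
N = 5.0 - 2.0e3 * w + rng.normal(0.0, 0.5, w.size)      # inorganic N [mmol/m^3], anticorrelated
wp, Np = w - w.mean(), N - N.mean()
print(np.mean(wp * Np))   # <w'N'> < 0 here, i.e., a downward ("eddy quenching") nutrient flux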
In summary, SMCs are active over much of the ocean with large seasonal and geographical variability. They have a distinctive dynamics by being advective, partly ageostrophic, and frontogenetic. There are at least two distinct populations: one associated with surface-layer frontogenesis and the other associated with topographic wakes. Both populations are tightly coupled with the local microscale turbulence, and thus, they are a significant cause of intermittency, heterogeneity, and non-stationary behavior in the surface and bottom boundary layers.
An important open question is how active SMCs are in the ocean interior. Idealized simulations of rotating, stratified turbulence indicate that they should be so, at least in some places where Ro and Fr are not too small (Molemaker et al. 2010; Kafiabad and Bartello 2016), e.g., within strong currents and eddies. As yet no realistic oceanic simulations or measurements unambiguously show this to be true, but neither has this issue yet been pushed very hard. Were it to be true, then one would expect a relatively shallow kinetic-energy spectrum, forward energy cascade, and elevated dissipation and diapycnal mixing rates. Of course, SCVs are abundant in the interior, but, by the hypothesis stated in "Topographic wakes" section, these are most likely to have been generated near the topography or in local mixing zones and then moved into the wider ocean.
However, Submesoscale Coherent Vortices (SCVs), once formed and freely moving within the interior ocean, can have survival lifetimes of years (McWilliams 1985).
The energy and information cycles for the tides and surface gravity waves are largely separate from the general circulation cycle.
\({Re} = VL/\nu\), where \(\nu\) is the molecular viscosity. It is a common parameter indicating how strong momentum advection is compared to momentum diffusion.
Centrifugal (or symmetric or inertial) instability occurs when the Ertel potential vorticity q changes sign within the local domain. Thus, it can only occur when Ro or Fr is large. In both the surface and topographic SMC populations, it often is triggered by potential vorticity fluxes through the top and bottom boundaries, respectively.
Adams K, Hosegood P, Taylor J, Sallee JB, Bachman S, Torres R, Tamper M (2017) Frontal circulation and submesoscale variability during the formation of a southern ocean mesoscale eddy. J Phys Ocean 47:1737–1753
Boccaletti G, Ferrari R, Fox-Kemper B (2007) Mixed layer instabilities and restratification. J Phys Ocean 37:2228–2250
Bosse A, Testor P, Mayot N, Prieur L, D'Ortenzio F, Mortier L, Goff HL, Gourcuff C, Coppola L, Lavigne H, Raimbault P (2017) A submesoscale coherent vortex in the Ligurian Sea: from dynamical barriers to biological implications. J Geophys Res Oceans 122:1–22
Charney JG (1971) Geostrophic turbulence. J Atmos Sci 28:1087–1095
D'Asaro E, Shcherbina A, Klymak JM, Molemaker J, Novelli G, Gigand C, Haza A, Haus B, Ryan E, Jacobs GA, Huntley HS, Laxagne HJM, Chen S, Judt F, McWilliams JC, Barkan R, Krwan AD, Poje AC, Ozgokmen TM (2018) Ocean convergence and dispersion of flotsam. PNAS 115:1162–1167
Fox-Kemper B, Ferrari R, Hallberg RW (2008) Parameterization of mixed layer eddies. Part I: theory and diagnosis. J Phys Ocean 38:1145–1165
Gower J, Hu C, Borstad G, King S (2006) Ocean color satellites show extensive lines of floating sargassum in the Gulf of Mexico. IEEE Trans Geosci 44:3619–3625
Gula J, Molemaker MJ, McWilliams JC (2015) Gulf Stream dynamics and frontal eddies along the southeastern U.S. Seaboard. J Phys Ocean 45:690–715
Kafiabad HA, Bartello P (2016) Balance dynamics in rotating stratified turbulence. J Fluid Mech 796:914–949
Kessouri F, McWilliams JC, Bianchi D, Renault L, Deutsch C, Frenzel H (2019) Effects of submesoscale circulation on the nitrogen cycle in the California Current System. Global Biogeochem Cycles (submitted)
Mahadevan A (2016) The impact of submesoscale physics on primary productivity of plankton. Ann Rev Mar Sci 8:161–184
McWilliams JC (1985) Submesoscale, coherent vortices in the ocean. Rev Geophys 23:165–182
McWilliams JC (2016) Submesoscale currents in the ocean. Proc R Soc A 472:20160117–132
McWilliams JC (2017) Submesoscale surface fronts and filaments: secondary circulation, buoyancy flux, and frontogenesis. J Fluid Mech 823:391–432
McWilliams JC, Colas F, Molemaker MJ (2009) Cold filamentary intensification and oceanic surface convergence lines. Geophys Res Lett 36:18602
Molemaker MJ, McWilliams JC, Capet X (2010) Balanced and unbalanced routes to dissipation in an equilibrated Eady flow. J Fluid Mech 654:35–63
Molemaker MJ, McWilliams JC, Dewar WK (2015) Submesoscale instability and generation of mesoscale anticyclones near a separation of the California Undercurrent. J Phys Ocean 45:613–629
Munk W, Armi L, Fischer K, Zachariasen F (2000) Spirals on the sea. Proc R Soc A 456:1217–1280. https://doi.org/10.1098/rspa.2000.0560
Papenberg C, Klaeschen D, Krahmann G, Hobbs RW (2010) Ocean temperature and salinity inverted from combined hydrographic and seismic data. Geophys Res Lett 37:04601
Riser SC, Owens WB, Rossby HT, Ebbesmeyer CC (1986) The structure, dynamics, and origin of a small scale lens of water in the Western North Atlantic thermocline. J Phys Ocean 16:572–590
Scully-Power P (1986) Navy oceanographer shuttle observations, STS 41-G Mission Report. NUSC. Technical Document 7611
Srinivasan K, McWilliams JC, Molemaker MJ, Barkan R (2019) Submesoscale vortical wakes in the lee of a seamount. J Phys Ocean (in press)
Sullivan PP, McWilliams JC (2018) Frontogenesis and frontal arrest for a dense filament in the oceanic surface boundary layer. J Fluid Mech 837:341–380
Thorpe SA (2005) The Turbulent Ocean. Cambridge University Press, Cambridge, p 439
JM wrote the manuscript. The author read and approved the final manuscript.
This paper was written during a visit to the Kavli Institute for Theoretical Physics, supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. I appreciate continuing research support from the National Science Foundation and the Office of Naval Research. I also appreciate the invitation from AOGS to present a Distinguished Lecture for the Ocean Sciences Section.
The author declares no competing interests.
Department of Atmospheric and Oceanic Sciences, UCLA, 405 Hilgard Ave., Los Angeles, CA, 90095-1565, USA
James C. McWilliams
Correspondence to James C. McWilliams.
McWilliams, J.C. A survey of submesoscale currents. Geosci. Lett. 6, 3 (2019) doi:10.1186/s40562-019-0133-3
Submesoscale currents
Density fronts and filaments
Coherent vortices
Topographic wakes
February 2019, 24(2): 671-694. doi: 10.3934/dcdsb.2018202
Invasion fronts on graphs: The Fisher-KPP equation on homogeneous trees and Erdős-Rényi graphs
Aaron Hoffman 1 and Matt Holzer 2
Franklin W. Olin College of Engineering, Needham, MA 02492, USA
Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA
Received June 2017 Revised January 2018 Published June 2018
We study the dynamics of the Fisher-KPP equation on the infinite homogeneous tree and Erdős-Rényi random graphs. We assume initial data that is zero everywhere except at a single node. For the case of the homogeneous tree, the solution will either form a traveling front or converge pointwise to zero. This dichotomy is determined by the linear spreading speed, and we compute critical values of the diffusion parameter for which the spreading speed is zero and maximal, and prove that the system is linearly determined. We also study the growth of the total population in the network and identify the exponential growth rate as a function of the diffusion coefficient, α. Finally, we make predictions for the Fisher-KPP equation on Erdős-Rényi random graphs based upon the results on the homogeneous tree. When α is small, we observe via numerical simulations that mean arrival times are linearly related to distance from the initial node and that the speed of invasion is well approximated by the linear spreading speed on the tree. Furthermore, we observe that exponential growth rates of the total population on the random network can be bounded by growth rates on the homogeneous tree, and we provide an explanation for the sub-linear exponential growth rates that occur for small diffusion.
Keywords: Invasion fronts, linear spreading speed, homogeneous tree, random graph, Fisher-KPP equation.
Mathematics Subject Classification: Primary: 37L60; Secondary: 35R02, 35C07, 05C80.
Citation: Aaron Hoffman, Matt Holzer. Invasion fronts on graphs: The Fisher-KPP equation on homogeneous trees and Erdős-Rényi graphs. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 671-694. doi: 10.3934/dcdsb.2018202
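To make the level-by-level dynamics on the homogeneous tree concrete, the following Python sketch integrates a symmetry-reduced version of the model in which all nodes at level \(n\) of the tree share the value \(u_n\) and each interior node has one parent and \(k\) children. The paper's equations (2) and (5) are not reproduced in this excerpt, so this discretization is an illustrative stand-in under those assumptions rather than the authors' exact system.

```python
# Hedged sketch: Fisher-KPP dynamics on a homogeneous tree, reduced by symmetry
# to one unknown u_n per level n.  With one parent and k children per interior
# node, the graph Laplacian acting on level values is
#   u_{n-1} + k*u_{n+1} - (k+1)*u_n,
# and the reaction term is the usual logistic nonlinearity u*(1-u).
import numpy as np

def simulate_tree_fkpp(k=3, alpha=0.2, levels=200, T=60.0, dt=1e-3):
    u = np.zeros(levels)
    u[0] = 1.0                              # initial data supported at the root only
    for _ in range(int(T / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = u[:-2] + k * u[2:] - (k + 1) * u[1:-1]
        lap[0] = k * (u[1] - u[0])          # root has k children and no parent
        lap[-1] = u[-2] - u[-1]             # crude truncation at the deepest level
        u = u + dt * (alpha * lap + u * (1.0 - u))
    return u

u = simulate_tree_fkpp()
print("front has reached roughly level", int(np.argmax(u < 0.5)))
```

With \(k=3\) and \(\alpha=0.2\), the profile spreads outward at a roughly constant speed, consistent with the traveling-front alternative described in the abstract.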
Figure 1. The linear spreading speed for (5), calculated numerically as a function of $\alpha$ for $k = 3$ (red), $k = 4$ (black) and $k = 5$ (blue). Note the critical values $\alpha_2(k)$ for which the spreading speed is zero and $\alpha_1(k)$ where the speed is maximal. Also note that as $\alpha\to 0$, these spreading speeds appear to approach a common curve
Figure 2. Critical rates of diffusion for periodic trees with period $m = 2$. On the left, we plot $\alpha_1$ as a function of $k_1$ with $k_2$ fixed to preserve the mean degree. On the right, we plot $\alpha_2$ as a function of $k_1$. Note that in both cases the periodic heterogeneity increases the critical diffusion rates
Figure 3. Numerical simulations of (2) with $k = 3$ and for $\alpha = 0.2$ (left), $\alpha = 0.8$ (middle) and $\alpha = 2.2$ (right). The blue curves are $u_n(t)$ while the red curves depict the normalized population at each level, i.e. $w_n(t)/\max_n(w_n(t))$. Note that $0.2 < \alpha_1(3) < 0.8 < \alpha_2(3) < 2.2$. For $\alpha = 0.2$, we observe that the maximal population is concentrated at the front interface. For $\alpha = 0.8$, the maximal population is concentrated ahead of the front interface. Finally, for $\alpha = 2.2$ the local population at any fixed node converges to zero, but the total population grows and eventually is concentrated at the final level of the tree
Figure 4. On the left, we compare predictions for the exponential growth rate of the maximum of $w_n(t)$ as a function of $\alpha$ (blue line) against the exponential growth rates of $M(t)$ observed in direct numerical simulations (asterisks) for $k = 5$. On the right, we compare numerically observed spreading speeds for $w_n(t)$ (asterisks) versus linear spreading speeds determined numerically from the pinched double root criterion applied to $\tilde{d}_s(\gamma,\lambda)$ (blue line). Here we have taken $k = 5$
Figure 5. Arrival times for an Erdős-Rényi graph with $N = 60,000$ and expected degree $k_{ER} = 2$. Various values of $\alpha$ are considered. In green is the best fit linear approximation for the mean arrival times for nodes with distance between $3$ and $12$ from the initial location
Figure 6. Speeds associated with the mean arrival times in numerical simulations on an Erdős-Rényi graph with $N = 60,000$ and expected degree $k_{ER} = 2$ are shown as asterisks. The blue curve is the spreading speed predicted by the analysis in Section 2 for the homogeneous tree with $k = 2.54$, found by numerically computing roots of (6). This value is chosen since it is one less than the mean degree of the network over those nodes with distance between $3$ and $12$ from the original location
Figure 7. Growth rate of the total population for Erdős-Rényi graph. On the left, $N = 500,000$ and $\alpha = 0.1,0.35,0.6,0.85$. Larger values of $\alpha$ correspond to faster growth rates. On the right is the case of $N = 60,000$ with the same values of $\alpha$
Figure 8. Numerically calculated exponential growth rate for the Erdős-Rényi graph. On the left, $N = 500,000$ and observed growth rates are plotted as circles. The asterisks are the corresponding growth rates in the homogeneous tree with depth $13$. The lower curve is degree $k = 3$ while the larger curve is degree $k = 4$. On the right are the same computations, but for the Erdős-Rényi graph with $N = 60,000$ and for homogeneous trees with $k = 2$ and $k = 3$
Definition:Vector Analysis
Vector analysis is the branch of linear algebra concerned with the differentiation and integration of vector fields, primarily in $3$-dimensional Euclidean space.
Results about vector analysis can be found here.
One of the earliest attempts to develop a calculus for working directly on vectors was made by Gottfried Wilhelm von Leibniz in $1679$, but this was unsuccessful.
Jean-Robert Argand's demonstration in $1806$ of a geometrical representation of the complex plane gave the misleading impression that vectors in the Cartesian plane required them to be represented as complex numbers, which held development back for some time.
August Ferdinand Möbius published his Der Barycentrische Calcul in $1827$, which was the forerunner of the more general analysis of geometric forms developed by Hermann Günter Grassmann.
Giusto Bellavitis published Calcolo delle Equipollenze in $1832$, which was one of the first works to deal systematically with addition and equality of vectors.
In $1843$, Hermann Günter Grassmann published Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (that is: "Linear Extension Theory, a new branch of mathematics").
In $1844$, William Rowan Hamilton started publication of a series of articles in Philosophical Magazine discussing quaternions.
Both of these works developed the theory of vector analysis, independently of each other, from different directions.
Further development was due to Peter Guthrie Tait, whose Elementary Treatise on Quaternions of $1867$ progressed the theory considerably.
However, the theory of quaternions was too complicated and theoretical to be much practical use in studying real-world problems.
As a result, several mathematical physicists worked on improving the system and developing more elementary techniques.
Important to this process were August Otto Föppl, Max Abraham, Alfred Heinrich Bucherer, Vladimir Sergeyevitch Ignatowski, Richard Martin Gans, Oliver Heaviside and Josiah Willard Gibbs.
The approach of Gibbs and Heaviside was not well received by Tait, who was displeased with the fact that they did not use his beloved quaternions.
Contrariwise, Gibbs and Heaviside did not appreciate the inflexibility of Tait's approach, being likened by them to (and ridiculed as) a religious ritual.
However, the approach of Gibbs and Heaviside prevailed, and by the time of Edwin Bidwell Wilson, their techniques had bedded in.
C.E. Weatherburn gleefully relates the controversy, standing four-square upon the side of Gibbs and Heaviside, as well he might; his presentation is thoroughly within their tradition.
Roberto Marcolongo and Cesare Burali-Forti continued the work in developing vector algebra.
It is worth noting that many of the techniques of vector analysis were developed in response to the need to analyse Maxwell's equations in the field of electromagnetism.
Its application to mechanics happened later.
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Entry: vector analysis
Intuitively Understanding Double Dual of a Vector Space
I am trying to see if someone can help me understand the isomorphism between $V$ and $V''$ a bit more intuitively.
I understand that the dual space of $V$ is the set of linear maps from $V$ to $\mathbb{F}$. i.e. $V' = \mathcal{L}(V, \mathbb{F})$.
Therefore, the double dual of $V$ is the set of linear maps from $V'$ to $\mathbb{F}$, or $V'' = \mathcal{L}(V', \mathbb{F})$. That is to say, $V''$ is the set of linear functionals on linear functionals on $V$.
The part that gets me tripped up is the natural isomorphism $\varphi: V \rightarrow V''$, where $\varphi(v)(f)=f(v)$ for $f \in V'$. I know how the proof that this is a isomorphism goes, but I am having trouble understanding it intuitively.
I think of an isomorphism as a bijective map that tells me how to "relabel" elements in the domain to elements in the codomain. For example, the subspace $\{(0,y) | y \in \mathbb{R} \} \subset \mathbb{R}^2$ is isomorphic with the subspace $\{(x,0) | x \in \mathbb{R} \} \subset \mathbb{R^2}$. One particular isomorphism is the map $T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by $(0,y) \mapsto (y,0)$. It's clear that the rule says: take the input, and flip the coordinates. In particular, it tells me how to go from one vector space to the other clearly.
However, when I try to figure out what the rule is for $\varphi: V \rightarrow V''$ in words, I'm a little stuck.
$\varphi$ takes any $v \in V$ and finds a unique map $g \in \mathcal{L}(V', \mathbb{F})$. How does it "find" this unique map $g$? The definition $\varphi(v)(f)=f(v)$ seems to only describe what you do with $g$, which is evaluate it with the input $f$ and $v$ - it doesn't tell me what this $g$ is, in a way that's equally satisfying like the example with $\mathbb{R}^2$ above.
Another way to pose my question is, how would you define $\varphi:V \rightarrow V''$ using the "maps to" symbol? $v \mapsto .....?$ I'm not sure what should be in the place of the .....
linear-algebra dual-spaces
Snowball
$\begingroup$ $g$ is the map $f\mapsto f(v)$, evaluation at $v$. So, $\varphi$ is the map $v\mapsto (f\mapsto f(v))$, the map that sends $v$ to the functional 'evaluation at $v$'. $\endgroup$ – conditionalMethod Dec 5 '19 at 9:02
$\begingroup$ Just to clarify on the saying "evaluation at $v$", which I've seen numerous places. If $g$ is just a map in $V''$ (not in the context of this isomorphism), is it automatically endowed with a $v \in V$? In other words, when I think of any $g \in V'' $, do I think of it having an $f \in V'$ and $v \in V$ as an input? Previously I've been thinking of $g$ only as having $f$ as an input, and somehow there's a way to associate each of these $f$ with $\mathbb{F}$, which may be why I'm slightly confused. $\endgroup$ – Snowball Dec 5 '19 at 9:34
$\begingroup$ One thing that may be adding to your confusion: $\varphi$ is always linear and injective, but it is only surjective (and therefore an isomorphism) when $V$ is finite dimensional (or when your definition of "dual space" is more than just "linear maps into the scalar field"). Therefore, it doesn't directly correspond to your $\Bbb R^2$ example, as there isn't a natural way to find the $v \in V$ that maps to a given $g \in V''$. $\endgroup$ – Paul Sinclair Dec 5 '19 at 17:59
$\begingroup$ @PaulSinclair So if I limited it to $V$ being finite dimensional, and "dual space" is just linear maps into the scalar field, then is this still true? "Therefore, it doesn't directly correspond to your $\mathbb{R}^2$ example, as there isn't a natural way to find the $v\in V$ that maps to a given $g \in V''$?" $\endgroup$ – Snowball Dec 5 '19 at 18:53
$\begingroup$ @Snowball Have you any experience with functional programming, in particular, are you familiar (or at least somewhat acquainted) with currying? If so: $\varphi$ is the curried version of the evaluation map $\eta \colon V \times V' \to \mathbb{F}$. $\endgroup$ – Daniel Fischer♦ Dec 5 '19 at 19:05
Maybe it helps if we first widen our view, in order to then narrow it again and see the double-dual as special case.
So let's start with functions (any functions, for now) $f:X\to Y$. As a concrete example, take $X=Y=\mathbb R$. That is, we are dealing with real-valued functions of a real argument. Examples would be the identity $\mathrm{id} = x\mapsto x$, the constant functions $\mathrm{const}_c = x\mapsto c$, or the trigonometric functions $\sin$ and $\cos$.
Now the normal way to look at functions is to think of them as encoding the operation, for example, it is a property of the function $\sin$ that it maps the number $\pi$ to the number $0$: $$\sin(\pi) = 0$$
But another view is that the result of applying the function $\sin$ to the number $\pi$ gives the number $0$, and it is that applying that has all the logic. So you have one function $\mathrm{apply}$ that takes two arguments, a real function and a real number, and assigns them another number: $$\mathrm{apply}(\sin,\pi)=0$$
Now looking at this form, we see that $\sin$ and $\pi$ are on equal footing. Both are merely arguments of the $\mathrm{apply}$ function. You recover the original sine function by "pre-inserting" $\sin$ as first argument of apply (this is known as currying): $$x\mapsto \mathrm{apply}(\sin,x)$$
But given that both arguments are on equal footing, you may just as well pre-apply the second argument instead: $$f\mapsto \mathrm{apply}(f,\pi)$$
We might consider this the application of $\pi$ to the function $f$. Thus $\mathrm{apply}(\sin,\pi)$ could equivalently be written as $$\pi(\sin) = 0$$
So now from each real number, we get a function that maps real functions to real numbers. Note that just like the function $\sin$ is not determined just by the value $\sin(\pi)$, but by the values it takes for all real numbers, similarly, the function $\pi$ is not determined just by the value it takes at $\sin$, but by the values it takes for all real functions. That is, we not only have $\pi(\sin)=0$, but also $\pi(\cos)=-1$, $\pi(\mathrm{id})=\pi$ and $\pi(\mathrm{const_c})=c$.
Note also that the real functions form an $\mathbb R$-vector space under pointwise addition and scalar multiplication. And it is easily determined that those "number functions" defined above are linear functions, that is, they live in the dual space of that function space. And quite obviously they only form a proper subset of that dual space, as they for example don't include the constant function $f\mapsto 0$ (as there is no real number that is mapped to $0$ by all real functions). Indeed, that example shows that we don't even have a subspace here.
However we have an injection into that dual, as we can identify each number by looking only at the function values. Easiest of course by applying it to the identity function (that returns the number itself), but even if we did not have that available (as will be the case below), we could e.g. look at the functions that are $1$ for exactly one number, and $0$ for all others; with those functions we can uniquely identify the number by just noting which of those functions give a value of $1$.
Now let's look instead at a vector space $V$ over a field $K$, and at linear functions $V\to K$, that is, members of the dual $V^*$. Again, we can play the same game as above: for each vector, we get a function mapping members of $V^*$ to $K$, and such a function is a member of the dual of $V^*$, which is the double dual of $V$.
However, now that we have only linear functions, we get more than above: The function that maps vectors to members of the double dual can easily be shown to be linear itself. And again, we can construct a set of functions in $V^*$ that uniquely identifies the vector: Choose a basis $\{b_i\}$ in $V$, and then take the set of linear functions $f_i$ that map $v = \sum_i\alpha_i b_i$ to $\alpha_i$. Since a vector is uniquely identified by its basis coefficients, this proves that the map $V\to V^{**}$ is injective: You can uniquely identify the vector by the values $v(f_i)=\alpha_i$.
How would you define $\varphi:V \rightarrow V''$ using the "maps to" symbol?
We can write $$\begin{aligned}\varphi:V&\longrightarrow V''\\ v&\longmapsto\left( {\begin{aligned} g_v:V'&\to\mathbb F\\ f&\mapsto f(v) \end{aligned}}\right) \end{aligned}$$ Therefore, $$\varphi(v)=g_v$$ and thus $$(\varphi(v))(f)=g_v(f)=f(v)$$
In short: $\varphi$ is the map $v\mapsto g_v$ where, for each fixed $v\in V$, $g_v$ is the map $f\mapsto f(v)$.
Edit (in response to the comments)
Example: Let $V$ be the vector space of polynomials. In this case, $\varphi$ is the map that takes a polynomial $p$ to the linear map $g_p$ defined by $$g_p(f)=f(p),\quad \forall \ f\in V'.$$ For example:
if $f:V\to\mathbb F$ is the linear functional that evaluates a polynomial $p$ at the value $1$ (that is, $f(p)=p(1)$), then $$g_p(f)=p(1).$$ In particular,
$g_{x^2-1}(f)=0$
$g_{x^2+1}(f)=2$
$g_{x-1}(f)=0$
if $h:V\to\mathbb F$ is the linear functional that evaluates a polynomial $p$ at the value $2$ (that is, $h(p)=p(2)$), then $$g_p(h)=p(2).$$ In particular,
$g_{x^2-1}(h)=3$
$g_{x^2+1}(h)=5$
$g_{x-1}(h)=1$
if $i:V\to\mathbb F$ is the linear functional that evaluates a polynomial $p$ at the value $\int_0^1 p(t)\;dt$ (that is, $i(p)=\int_0^1 p(t)\;dt$), then $$g_p(i)=\int_0^1 p(t)\;dt.$$ In particular,
$g_{x^2-1}(i)=-\frac{2}{3}$
$g_{x^2+1}(i)=\frac{4}{3}$
$g_{x-1}(i)=-\frac{1}{2}$
Remark: The image of $p\in V$ by $\varphi$ is the functional $g_p$ (not the value of $g_p$ in some particular functional). Therefore, the fact that $g_{x^2-1}(f)=0$ and $g_{x-1}(f)=0$ (for the particular $f$ in the example above) does not violate the injectivity of $\varphi$ because the images of $x^2-1$ and $x-1$ by $\varphi$ are not $0$. In order to violate injectivity, we should have the existence of $p,q\in V$ such that $$\varphi(p)=\varphi (q),$$ that is, $$g_p(f)=g_q(f),\quad \forall\ f\in V'$$ (for all $f$, not only for a particular $f$).
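For readers who want to compute, here is a short sympy check of the numbers in the example above; the names $f$, $h$, $i$ and $g_p$ follow the answer, and the snippet is only a sanity check added for illustration.

```python
# Sanity check of the worked example: phi sends a polynomial p to the functional
# g_p, which eats a linear functional and evaluates it at p, i.e. g_p(f) = f(p).
import sympy as sp

x = sp.symbols('x')

def phi(p):
    """Image of p under the canonical map V -> V''."""
    return lambda functional: functional(p)

f = lambda p: p.subs(x, 1)                   # evaluation at 1
h = lambda p: p.subs(x, 2)                   # evaluation at 2
i = lambda p: sp.integrate(p, (x, 0, 1))     # integration over [0, 1]

for p in (x**2 - 1, x**2 + 1, x - 1):
    g_p = phi(p)
    print(p, '->', g_p(f), g_p(h), g_p(i))
# expected rows: 0 3 -2/3, then 2 5 4/3, then 0 1 -1/2
```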
Pedro
$\begingroup$ Thanks for the quick answer. So let's say $V$ was the vector space of polynomials. $f: V \rightarrow \mathbb{F}$ is the linear functional that evaluates a polynomial at the value 1. What you are saying is that $\varphi$ is the map that takes a polynomial, say $x^2-1$ to a linear map, $g$, which evaluates $x^2-1$ at 1, which gives us 0? (I am tempted to say "which evaluates 1 at $x^2-1$, since $f$ represents evaluation at 1, and $v$ is the polynomial, which is the reverse of what I'm used to.) $\endgroup$ – Snowball Dec 5 '19 at 9:45
$\begingroup$ But then wouldn't another $v \in V$, say $x-1$, evaluated at $1$, give us $0$, and thus imply $\varphi$ isn't injective? $\endgroup$ – Snowball Dec 5 '19 at 9:50
$\begingroup$ @Snowball See my edit in the post $\endgroup$ – Pedro Dec 5 '19 at 11:35
$\begingroup$ Thanks for the edit. Very helpful. $\endgroup$ – Snowball Dec 5 '19 at 18:27
$\begingroup$ One more follow up - if I were to look at the maps $g$ in the vector space $V''$ outside of the context of our isomorphism, then any map $g$ in $V''$ at this point does not have a $v$ associated with it; it merely asks for an $f \in V'$ as input. Continuing with our polynomial example above, $g$ is asking for "what do I evaluate at"? Once you give $g$ an $f$, $g$ is the map from this $f$ to $\mathbb{F}$. What I find odd is that it is this $f$ that demands a polynomial as an input, not the $g$. So it seems that $g$ could live happily without ever being given a $v$... $\endgroup$ – Snowball Dec 5 '19 at 18:47
A shorthand way to write some partially evaluated functions is by leaving a $-$ sign (pronounced "blank") in the space of an argument. As an example, if $v \in \mathbb{R}^n$ and $\cdot$ is the dot product, we have a function $(v \cdot -) \in V^*$ given by taking the dot product with $v$, meaning $(v \cdot -) = (u \mapsto (v \cdot u))$. As an example, we could say that the hyperplane orthogonal to $v$ is the set of points where the function $(v \cdot -)$ evaluates to zero.
Now, if $V$ is any vector space and $V^*$ is its dual, then for $v \in V$ and $f \in V^*$ introduce the alternative notation $\langle v, f \rangle = f(v)$. (I like this notation because it reminds me that $(v, f) \mapsto f(v)$ is bilinear, and puts $V$ and $V^*$ on more equal footing). There are two canonical partial evaluations we can do:
The map $V^* \to V^*$ defined by $f \mapsto \langle -, f\rangle$ is the identity map.
The map $V \to V^{**}$ defined by $v \mapsto \langle v, - \rangle$ is the canonical injection into the double dual.
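To see the two partial evaluations side by side in code, here is a tiny sketch; it assumes $V = \mathbb{R}^3$ with every functional represented by a coefficient vector through the dot product, and the function names are ad hoc.

```python
# <v, f> = f(v), with functionals represented by coefficient vectors w_f.
from functools import partial
import numpy as np

def pairing(v, w_f):
    """The bilinear pairing <v, f> = f(v) = dot(w_f, v)."""
    return float(np.dot(w_f, v))

v = np.array([1.0, 2.0, 3.0])
w_f = np.array([0.0, 1.0, -1.0])          # the functional f(x, y, z) = y - z

eval_at_v = partial(pairing, v)           # <v, -> : V* -> R, an element of V**
f = lambda u: pairing(u, w_f)             # <-, f> : V -> R, which is just f itself

print(eval_at_v(w_f), f(v))               # both print -1.0
```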
Joppy
This natural isomorphism only arises in finite-dimensional vector spaces. Do note that there exist isomorphisms between $V$ and $V^*$ as well, but these need coordinates (or rather, an inner product) to be properly defined, so they're never a "natural" isomorphism. (Fun fact, it's apparently this very question of a bijection which needed extra properties to work well (ie, not "natural") which led Eilenberg and MacLane to develop Category Theory.)
My way of seeing this question intuitively is the following.
1) $V \simeq L(K, V)$
Why ? Your vectors in $V$ are column vectors, and are thus $n*1$ matrices, so correspond to maps from $K$ (dimension $1$) to $V$ (dimension $n$). (This is another way of understanding vectors, as functions from scalars into vectors.)
Fun fact: $K \simeq L(K, K)$, even as a $K$-algebra isomorphism, where multiplication of scalars is composition of functions.
2) $V^* := L(V, K)$
What are elements of $V^*$, covectors, as matrices ? Covectors are simply row-vectors, so $1*n$ matrices, which take an $n$-vector and return a scalar.
3) Going from $V$ to $V^*$, or $L(K, V)$ to $L(V, K)$
How do you go from one to the other ? Your (conjugate) transpose. But since the (finite-dimensional, conjugate) transpose is an involution, you get back what you started with, ie, elements of $V^{**}$ are column vectors just like elements of $V$.
This makes sense, if you consider bra-ket type handling of vector spaces and their dual. For the double-dual, you want a map that returns a scalar from a covector, in a linear way. What allows you to return a scalar from a covector $\langle \phi|$ ? Simply a vector $|u\rangle$. So it makes sense that you'd have precisely the same possibilities for evaluation maps $\epsilon_u$ as you do for vectors $u$, ie an isomorphism $V \simeq V^{**}$ such that $|\epsilon_u \rangle \langle \phi| = \langle \phi | u \rangle$
4) Infinite dimensions
In infinite dimensions, the dualization operator is injective. Thus, the double-dualization operator is a composition of injections, and an injection itself.
Tristan Duquesne
$\begingroup$ Are you saying that $\varphi:V\to V''$ (as defined by the OP) cannot be an isomorphism if $V$ is infinite dimensional? $\endgroup$ – Pedro Dec 5 '19 at 13:56
$\begingroup$ @Pedro See here. For infinite-dimensional spaces, the dimension of the [algebraic] dual is strictly larger, hence a fortiori the dimension of the double dual. $\endgroup$ – Daniel Fischer♦ Dec 5 '19 at 18:57
$\begingroup$ Proving the injectivity of $\varphi$ is easy. Take $0 \neq v \in V$. Extend to a basis, define $\lambda \in V'$ by $\lambda(v) = 1$ and $\lambda(w) = 0$ for all $w \neq v$ in the basis. Then $\varphi(v)(\lambda) = 1 \neq 0$, so $\varphi(v) \neq 0$. $\endgroup$ – Daniel Fischer♦ Dec 5 '19 at 19:00
$\begingroup$ In fact, I've given proving the naturality of this isomorphism as an exercise. (Though not in those terms, just asking them to prove if $T : V \to W$ is linear then $\Phi_W \circ T = T^{**} \circ \Phi_V$ without mentioning where the problem came from. Makes a nice exercise in unfolding the definitions.) $\endgroup$ – Daniel Schepler Dec 6 '19 at 2:31
$\begingroup$ @DanielFischer and Tristan, thanks for the clarifications. $\endgroup$ – Pedro Dec 7 '19 at 23:07
The intuitive difficulty you are having seems to be that you wish to write $\varphi(v) = g,$ or $v \mapsto g$, where $g$ is an expression that denotes a function in the same way in which $(y, 0)$ denotes an ordered pair, or in which (say) $\{x \in \mathbb{R} : x > 1\}$ denotes a set, so that it doesn't appear as if $g$ somehow magically already exists.
The only way I can think of to do so without either inventing a new notation (the edit history of this answer contains several unnecessary and embarrassingly verbose attempts in that direction) or relying too heavily on an arbitrary choice of a particular set-theoretic construction of a function (as a set of ordered pairs, or as a tuple with an element that is a set of ordered pairs), is to use the notation for a family. You could write: \begin{gather*} \varphi \colon V \to V'', \ v \mapsto (f(v))_{f \in V'}, \\ \text{or }\ \varphi(v) = (f(v))_{f \in V'} \in V'' \quad (v \in V), \end{gather*} or (to press the point - admittedly tastelessly): $$ \varphi = ((f(v))_{f \in V'})_{v \in V} \in \mathscr{L}(V; V''), $$ or any of several other variants (which I must refrain from labouring, as I did in earlier versions of this answer!).
Calum Gilhooley
Node values in Boltzmann machines (0/1 vs -1/1). Are they the same?
Boltzmann machines were introduced by Hinton and Sejnowski as taking values in $\{0,1\}$. The Wikipedia entry also uses this convention. However, Hopfield Networks, which are the deterministic version of Boltzmann machines, are usually introduced as taking values in $\{-1,1\}$. Ising models also follow this convention.
With the energy function defined by the same expression in both models, $$ E(x) = \sum_i b_ix_i + \sum_{i<j}w_{ij}x_ix_j,$$ it seems that the two conventions would behave differently. For example, how would the $\{0,1\}$ model learn a preference for checkered patterns?
More generally, in an undirected graphical model where the nodes take values in $\{a,b\}$, we can define the interaction (energy) between two neighbouring nodes $x$ and $y$ as $$E(x,y) = \begin{cases} w_{aa}, & \text{if}\ x=a,y=a \\ w_{ab}, & \text{if}\ x=a,y=b \\ w_{ba}, & \text{if}\ x=b,y=a \\ w_{bb}, & \text{if}\ x=b,y=b \end{cases}$$ or equivalently represented by the matrix $ \begin{bmatrix} w_{aa} & w_{ab} \\ w_{ba} & w_{bb} \end{bmatrix} $.
Boltzmann machines restrain the interactions between two neighbouring nodes to being described by a single scalar $w_{ij}$. In the case when our values $\{a,b\}$ are $\{0,1\}$, we get $$ E(x_i,x_j) = \begin{bmatrix} w_{ij} & 0 \\ 0 & 0 \end{bmatrix} $$ If we set $\{a,b\}$ to $\{1,-1\}$, we instead get $$ E(x_i,x_j) = \begin{bmatrix} w_{ij} & -w_{ij} \\ -w_{ij} & w_{ij} \end{bmatrix} $$
Are these two formalisms really equivalent? It seems unlikely...
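One way to probe this (added here as an illustration, not part of the original question) is to check the affine change of variables $s = 2x-1$ numerically. Under that substitution the $\{-1,1\}$ energy turns into a $\{0,1\}$ energy with rescaled weights $w'_{ij} = 4w_{ij}$, shifted biases $b'_i = 2b_i - 2\sum_{j\neq i} w_{ij}$, and a configuration-independent constant, so the two conventions parameterize the same family of Boltzmann distributions even though the per-edge interaction tables look different. The sketch below verifies this on random parameters; all names are my own.

```python
# Hedged check: the {-1,1} and {0,1} energies agree up to a constant after the
# reparameterization induced by s = 2x - 1, so they define the same Gibbs family.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = np.triu(rng.normal(size=(n, n)), 1)   # weights w_ij, kept only for i < j
b = rng.normal(size=n)

def energy(z, weights, biases):
    # sum_i b_i z_i + sum_{i<j} w_ij z_i z_j  (weights strictly upper triangular)
    return biases @ z + z @ weights @ z

W01 = 4.0 * W                              # rescaled pairwise weights
row_sums = (W + W.T).sum(axis=1)           # sum_{j != i} w_ij for each i
b01 = 2.0 * b - 2.0 * row_sums             # shifted biases
const = -b.sum() + W.sum()                 # configuration-independent offset

for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    s = 2.0 * x - 1.0
    assert np.isclose(energy(s, W, b), energy(x, W01, b01) + const)
print("energies agree up to a constant on all", 2 ** n, "configurations")
```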
mathematics graphical-model
Brian Spiering
samlaf
Mathematics > Number Theory
[Submitted on 17 Jul 2020 (v1), last revised 18 Apr 2021 (this version, v3)]
Title:On Class Numbers, Torsion Subgroups, and Quadratic Twists of Elliptic Curves
Authors:Talia Blum, Caroline Choi, Alexandra Hoey, Jonas Iskander, Kaya Lakein, Thomas C. Martinez
Abstract: The Mordell-Weil groups $E(\mathbb{Q})$ of elliptic curves influence the structures of their quadratic twists $E_{-D}(\mathbb{Q})$ and the ideal class groups $\mathrm{CL}(-D)$ of imaginary quadratic fields. For appropriate $(u,v) \in \mathbb{Z}^2$, we define a family of homomorphisms $\Phi_{u,v}: E(\mathbb{Q}) \rightarrow \mathrm{CL}(-D)$ for particular negative fundamental discriminants $-D:=-D_E(u,v)$, which we use to simultaneously address questions related to lower bounds for class numbers, the structures of class groups, and ranks of quadratic twists. Specifically, given an elliptic curve $E$ of rank $r$, let $\Psi_E$ be the set of suitable fundamental discriminants $-D<0$ satisfying the following three conditions: the quadratic twist $E_{-D}$ has rank at least 1; $E_{\text{tor}}(\mathbb{Q})$ is a subgroup of $\mathrm{CL}(-D)$; and $h(-D)$ satisfies an effective lower bound which grows asymptotically like $c(E) \log (D)^{\frac{r}{2}}$ as $D \to \infty$. Then for any $\varepsilon > 0$, we show that as $X \to \infty$, we have
$$\#\, \left\{-X < -D < 0: -D \in \Psi_E \right \} \, \gg_{\varepsilon} X^{\frac{1}{2}-\varepsilon}.$$ In particular, if $\ell \in \{3,5,7\}$ and $\ell \mid |E_{\mathrm{tor}}(\mathbb{Q})|$, then the number of such discriminants $-D$ for which $\ell \mid h(-D)$ is $\gg_{\varepsilon} X^{\frac{1}{2}-\varepsilon}.$ Moreover, assuming the Parity Conjecture, our results hold with the additional condition that the quadratic twist $E_{-D}$ has rank at least 2.
Comments: 17 pages, 1 table
Subjects: Number Theory (math.NT)
MSC classes: 11R29, 11G05
Cite as: arXiv:2007.08756 [math.NT]
(or arXiv:2007.08756v3 [math.NT] for this version)
From: Thomas Martinez
[v1] Fri, 17 Jul 2020 04:49:11 UTC (29 KB)
[v2] Sat, 6 Mar 2021 19:05:39 UTC (22 KB)
[v3] Sun, 18 Apr 2021 06:16:30 UTC (22 KB)
Boundary Value Problems
Uniqueness result for a fractional diffusion coefficient identification problem
Fadhel Jday1,2 &
Ridha Mdimagh1,3
Boundary Value Problems volume 2019, Article number: 170 (2019)
In this paper, we establish an identifiability result for the coefficient identification problem in a fractional diffusion equation in a bounded domain from the observation of the Cauchy data on particular subsets of the boundary.
Let Ω be a smooth bounded domain of \(\mathbb{R}^{d}\) (\(d \geq 3\)) with boundary \(\partial \varOmega \in \mathcal{C}^{\infty }\), and let \(T>0\) be a fixed real number. Our inverse problem consists in determining the potential \(q:=q(x)\) via the fractional diffusion equation from given Cauchy data on particular subsets of the boundary. This problem is presented by the following equations:
$$ \bigl(P_{q,f}^{\alpha }\bigr)\quad \textstyle\begin{cases} \partial _{t}^{\alpha }u-\Delta _{x} u+q(x) u =0&\mbox{in } \varOmega _{T}:= \varOmega \times (0, T), \\ u(x,0)=u_{0}(x)&\mbox{in } \varOmega , \\ u=f&\mbox{on } \varSigma _{T}:=\partial \varOmega \times (0, T), \end{cases} $$
where \(\partial _{t}^{\alpha }u\) represents the fractional Caputo time derivative of order \(0<\alpha < 1\) defined by equation (3.7). We assume that the coefficient \(q\in L^{ \infty }(\bar{\varOmega })\), \(f\in C^{1}([0; T];H^{\frac{3}{2}}(\partial \varOmega ))\), and \(u_{0}\in L^{2}(\varOmega )\).
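To fix ideas about what solving \((P_{q,f}^{\alpha })\) involves in practice, the following Python sketch treats a one-dimensional analogue with homogeneous Dirichlet data, combining the standard L1 discretization of the Caputo derivative in time with centered finite differences in space. The scheme, the potential \(q\), and the initial datum are illustrative choices and are not taken from this paper.

```python
# Hedged sketch: 1D analogue of (P_{q,f}^alpha) with u = 0 on the boundary,
#   d_t^alpha u - u_xx + q(x) u = 0 on (0,1),  u(x,0) = u0(x),
# discretized with the L1 formula in time and centered differences in space.
import numpy as np
from scipy.special import gamma

def solve_fractional_diffusion(q, u0, alpha=0.5, T=1.0, N=200, M=100):
    x = np.linspace(0.0, 1.0, M + 2)[1:-1]               # interior grid points
    h, tau = 1.0 / (M + 1), T / N
    c = tau ** (-alpha) / gamma(2 - alpha)                # L1 scaling factor
    j = np.arange(N)
    a = (j + 1) ** (1 - alpha) - j ** (1 - alpha)         # L1 weights, a[0] = 1

    D2 = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
          + np.diag(np.ones(M - 1), -1)) / h ** 2         # discrete Laplacian
    A = c * np.eye(M) - D2 + np.diag(q(x))                # implicit system matrix

    U = [u0(x)]                                           # history u^0, u^1, ...
    for n in range(1, N + 1):
        rhs = a[n - 1] * U[0]
        for k in range(1, n):
            rhs += (a[k - 1] - a[k]) * U[n - k]
        U.append(np.linalg.solve(A, c * rhs))
    return x, U[-1]

x, uT = solve_fractional_diffusion(q=lambda x: 1.0 + x,
                                   u0=lambda x: np.sin(np.pi * x))
print("max of u(., T) =", float(uT.max()))
```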
The classical diffusion–advection equation often fails to describe anomalous diffusion (e.g., for the problem of the soil scatter field data [1]), and the fractional diffusion equation is instead used as a model equation for this problem and for many physical phenomena such as the diffusion of material in heterogeneous media, the diffusion of fluid flows in inhomogeneous anisotropic porous media, turbulent plasma, the diffusion of carriers in amorphous photoconductors, the diffusion in a turbulent medium flow, a percolation model in porous media, various biological phenomena, and finance problems (see [9]).
The inverse coefficients problems associated with the system (\(P_{q,f} ^{\alpha }\)) were investigated by many authors for the parabolic case ([8, 12, 13], …) when \(\alpha = 1\) and for the hyperbolic case ([3,4,5,6, 23], …) when \(\alpha = 2\). Few works exist dealing with this type of problem in the fractional case (\(0 <\alpha <1 \)). In [11] the authors, in the one-dimensional case, by Dirichlet boundary measurements determined the fractional order α and a time-independent coefficient. Using pointwise measurements of the solution over the entire time span, the authors in [16] identified the fractional order α for the case \(d\geq 2\). Using a specifically designed Carleman estimate, the authors in [10, 25] derived a stability estimate of a zero-order time-independent coefficient, with respect to partial internal observation of the solution in the particular case where \(d=1\) and \(\alpha =\frac{1}{2}\). In [14] the authors proved a unique determination of a time-dependent parameter appearing in the source term or in the zero-order coefficient from pointwise measurements of the solution over the whole time interval. By performing a single measurement of the Cauchy data on the accessible boundary, the authors in [15] gave identifiability and local Lipschitz stability results to solve the inverse problem of identifying fractional sources. The authors in [19] proved a uniqueness result for the time-independent coefficients using the Dirichlet-to-Neumann operator obtained by probing the system with inhomogeneous Dirichlet boundary conditions of the form \(\lambda (t)g(x)\), where λ was a fixed real-analytic positive function of the time variable. In [17] the authors proved the uniqueness in the inverse problem of determining the smooth manifold (up to an isometry) and various time-independent smooth coefficients appearing in their equation from measurements of the solution on a subset of the boundary at fixed time.
In this work, we establish a uniqueness result of the coefficient identification problem using Carleman estimates and particular complex geometrical solutions of the fractional diffusion equation. These techniques are inspired by [7], where the authors proved the uniqueness of a coefficient identification problem for the Schrödinger equation.
In Sect. 2, we state the forward problem. In Sect. 3, we recall some definitions and properties of the fractional derivatives, and in Sect. 4, we give the proof of our main result.
Forward problem
In this work, our fundamental question is to prove the uniqueness of the potential q via the fractional diffusion equation from the knowledge of the Cauchy data. The forward problem is to find the solution u from the following system of equations:
$$ \textstyle\begin{cases} \partial _{t}^{\alpha }u-\Delta _{x} u+q(x) u =0&\mbox{in } \varOmega _{T}:= \varOmega \times (0, T), \\ u(x,0)=u_{0}(x)&\mbox{in } \varOmega , \\ u=f&\mbox{on } \varSigma _{T}:=\partial \varOmega \times (0, T), \end{cases} $$
where the coefficient \(q\in L^{\infty }(\bar{\varOmega })\), \(f\in C^{1}([0; T];H^{\frac{3}{2}}(\partial \varOmega ))\), and \(u_{0}\in L^{2}(\varOmega )\) are given. Referring to [17, 20], we can choose \(G \in C^{1}([0; T];H^{2}(\varOmega ))\) satisfying \(G= f\) on \(\varSigma _{T}\), and by setting \(w=u-G\), w is a solution of
$$ \textstyle\begin{cases} \partial _{t}^{\alpha }w-\Delta _{x} w+q(x) w =F&\mbox{in } \varOmega _{T}, \\ w(x,0)=a(x)&\mbox{in } \varOmega , \\ w=0&\mbox{on } \varSigma _{T}, \end{cases} $$
where \(F=-(\partial _{t}^{\alpha }G-\Delta _{x} G+q(x) G)\) and \(a(x)=u_{0}(x)-G(x,0)\). The study of the existence and uniqueness of the solution of (2.1) is reduced to the existence and uniqueness of problem (2.2).
Let \(a\in L^{2}(\varOmega )\) and \(F\in L^{\infty }(0,T,L^{2}(\varOmega ))\). Then problem (2.2) has a unique weak solution
$$ w\in C\bigl([0, T ]; L^{2}(\varOmega )\bigr) \cap C\bigl((0, T ]; H^{2}(\varOmega ) \cap H ^{1}_{0}(\varOmega )\bigr)\cap L^{2}\bigl([0, T ]; H^{2}(\varOmega )\cap H^{1}_{0}( \varOmega )\bigr). $$
We split problem (2.2) into the following two problems by taking \(w=w_{1}+w_{2}\), where \(w_{1}\) is the solution of
$$ \textstyle\begin{cases} \partial _{t}^{\alpha }u-\Delta _{x} u+q(x) u =0&\mbox{in } \varOmega _{T}, \\ u(x,0)=a&\mbox{in } \varOmega , \\ u=0&\mbox{on } \varSigma _{T}, \end{cases} $$
and \(w_{2}\) is the solution of
$$ \textstyle\begin{cases} \partial _{t}^{\alpha }u-\Delta _{x} u+q(x) u =F&\mbox{in } \varOmega _{T}, \\ u(x,0)=0&\mbox{in } \varOmega , \\ u=0&\mbox{on } \varSigma _{T}. \end{cases} $$
From Sakamoto et al. [22] we have
Problem (2.3) has a unique weak solution
$$ w_{1}\in C\bigl([0, T ]; L^{2}(\varOmega )\bigr) \cap C\bigl( (0, T]; H^{2}(\varOmega ) \cap H^{1}_{0}(\varOmega ) \bigr) $$
$$ \bigl\Vert u(\cdot ,t) \bigr\Vert _{H^{2}(\varOmega )}+ \bigl\Vert \partial ^{\alpha }_{t} u(\cdot ,t) \bigr\Vert _{L^{2}(\varOmega )}\leq C_{1} t^{-\alpha } \Vert a \Vert _{L^{2}(\varOmega )}. $$
Problem (2.4) has a unique weak solution \(w_{2}\in L^{2}([0, T ]; H^{2}(\varOmega )\cap H^{1}_{0}(\varOmega ))\) satisfying
$$ \Vert u \Vert _{L^{2}([0, T ]; H^{2}(\varOmega ))}+ \bigl\Vert \partial ^{\alpha }_{t} u \bigr\Vert _{L^{2}(\varOmega _{T})}\leq C_{2} \Vert F \Vert _{L^{2}(\varOmega _{T})}. $$
Preliminaries
We start this section by giving some definitions and fundamental properties of fractional integrals and fractional derivatives, which can be found in [18, 21].
Let \(\alpha > 0\), let n be the integer satisfying \(n-1\leq \alpha < n\), and let a, \(b\in \mathbb{R}\).
Let \(g: [a, b] \rightarrow \mathbb{R}\) be a function, and let Γ be the Euler gamma function.
The left and right Riemann–Liouville fractional integrals of order α are defined respectively by
$$ {}_{a}I_{t}^{\alpha }g(t):= \frac{1}{\varGamma (\alpha )} \int _{a}^{t}(t-s)^{\alpha -1}g(s) \,ds $$
$$ {}_{t}I_{b}^{\alpha }g(t):= \frac{1}{\varGamma (\alpha )} \int _{t}^{b}(s-t)^{\alpha -1}g(s) \,ds. $$
The left and right Riemann–Liouville fractional derivatives of order α are defined respectively by
$$ {}_{a}D_{t}^{\alpha }g(t):= \frac{d^{n}}{dt^{n}} {_{a}}I_{t}^{n-\alpha }g(t)= \frac{1}{ \varGamma (n-\alpha )}\frac{d^{n}}{dt^{n}} \int _{a}^{t}(t-s)^{n-\alpha -1}g(s) \,ds $$
$$ {}_{t}D_{b}^{\alpha }g(t):=(-1)^{n} \frac{d^{n}}{dt^{n}} {_{t}}I_{b}^{n- \alpha }g(t)= \frac{(-1)^{n}}{\varGamma (n-\alpha )}\frac{d^{n}}{dt^{n}} \int _{t}^{b}(s-t)^{n-\alpha -1}g(s) \,ds. $$
In particular, if \(\alpha =0\), then
$$ {}_{a}D_{t}^{0}g(t)= {{}_{t}}D_{b}^{0}g(t)=g(t), $$
and if \(\alpha =k\in \mathbb{N}\), then
$$ {{}_{a}}D_{t}^{k}g(t)={{}_{t}}D_{b}^{k}g(t)=g^{(k)}(t). $$
The left and right Caputo fractional derivatives of order α are defined by
$$ {}^{c}_{a}D_{t}^{\alpha }g(t):=_{a}I_{t}^{n-\alpha }g^{(n)}(t)= \frac{1}{ \varGamma (n-\alpha )} \int _{a}^{t}(t-s)^{n-\alpha -1}g^{(n)}(s) \,ds $$
$$ {}^{c}_{t}D_{b}^{\alpha }g(t):=(-1)^{n} _{t}I_{b}^{n-\alpha }g^{(n)}(t)= \frac{(-1)^{n}}{ \varGamma (n-\alpha )} \int _{t}^{b}(s-t)^{n-\alpha -1}g^{(n)}(s) \,ds. $$
In particular, if \(0<\alpha < 1\), then we denote
$$ \partial _{t} ^{\alpha }g(t):={}^{c}_{0}D_{t}^{\alpha }g(t)= \frac{1}{ \varGamma (1-\alpha )} \int _{0}^{t}(t-s)^{-\alpha }g'(s)\,ds. $$
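For intuition, the Caputo derivative just defined can be approximated with the standard L1 quadrature on a uniform grid; the short sketch below (not taken from this paper) checks it against the closed-form derivative of \(g(t)=t^{2}\), namely \(\partial _{t}^{\alpha }t^{2}=2t^{2-\alpha }/\varGamma (3-\alpha )\).

```python
# Hedged sketch: L1 approximation of the Caputo derivative of order 0 < alpha < 1.
import numpy as np
from scipy.special import gamma

def caputo_l1(g_vals, tau, alpha):
    """L1 approximation of the Caputo derivative at the last grid point."""
    n = len(g_vals) - 1
    k = np.arange(n)
    a = (k + 1) ** (1 - alpha) - k ** (1 - alpha)          # L1 weights
    increments = g_vals[1:][::-1] - g_vals[:-1][::-1]      # g(t_{n-k}) - g(t_{n-k-1})
    return (tau ** (-alpha) / gamma(2 - alpha)) * np.dot(a, increments)

alpha, T, N = 0.5, 1.0, 2000
t = np.linspace(0.0, T, N + 1)
approx = caputo_l1(t ** 2, T / N, alpha)
exact = 2.0 * T ** (2 - alpha) / gamma(3 - alpha)
print(approx, exact)   # the two values agree to several digits
```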
([18])
If \(g\in AC^{n}[a,b]\), then the Riemann–Liouville fraction derivative and the Caputo fractional derivative are connected with each other by the following relations:
$$ {}_{a}D_{t}^{\alpha }g(t)={{}^{c}_{0}D}_{t}^{\alpha }g(t)+ \sum_{k=0}^{n-1} \frac{g^{(k)}(a)}{\varGamma (1+k-\alpha )} (t-a)^{k-\alpha } $$
$$ {}_{t}D_{b}^{\alpha }g(t)={{}^{c}_{0}D}_{b}^{\alpha }g(t)+ \sum_{k=0}^{n-1} \frac{(-1)^{k} g^{(k)}(b)}{\varGamma (1+k-\alpha )} (b-t)^{k-\alpha }. $$
$$\begin{aligned}& AC^{n}[a,b]= \biggl\{ g:[a, b]\rightarrow \mathbb{R} \textit{ such that } \frac{d^{n-1}}{dx^{n-1}}(g)\in AC[a,b] \biggr\} , \\& g\in AC[a,b]\quad \Leftrightarrow \quad \textit{ there exists } \varphi \in L(a,b) \textit{ such that } g(x)=c+ \int _{a}^{x} \varphi (t) \,dt, \quad c\in \mathbb{R}, \end{aligned}$$
and \(L(a,b)\) is the set of Lebesgue-measurable complex-valued functions on \([a,b]\).
If \(0<\alpha < 1\), then
$$ {}_{0}D_{t}^{\alpha }u(x,t)= {{}^{c}_{0}D}_{t}^{\alpha }u(x,t)+ \frac{u(x,0)}{\varGamma (1-\alpha )} t^{-\alpha }. $$
In addition, if \(u(x,0)=0\), then \({}_{0}D_{t}^{\alpha }u(x,t)={{}^{c}_{0}D} _{t}^{\alpha }u(x,t)\).
The Mittag-Leffler function is
$$ E_{\alpha ,\beta }(z)= \sum_{k=0}^{\infty } \frac{z^{k}}{\varGamma (\alpha k+\beta )},\quad z\in \mathbb{C}, $$
where \(\alpha > 0\) and β are arbitrary constants.
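A direct way to get a feel for \(E_{\alpha ,\beta }\) is to sum the truncated series; the sketch below is purely illustrative (the naive summation is reliable only for moderate \(|z|\)) and recovers the familiar special cases \(E_{1,1}(z)=e^{z}\) and \(E_{2,1}(-z^{2})=\cos z\).

```python
# Hedged sketch: naive truncation of the Mittag-Leffler series.
from math import gamma, exp, cos

def mittag_leffler(z, alpha, beta=1.0, terms=50):
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))

print(mittag_leffler(1.3, 1.0), exp(1.3))               # E_{1,1}(z) = e^z
print(mittag_leffler(-(0.7 ** 2), 2.0), cos(0.7))       # E_{2,1}(-z^2) = cos z
```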
The α-exponential function is defined by
$$ e_{\alpha }^{\lambda z}=z^{\alpha -1}E_{\alpha ,\alpha }( \lambda z), \quad z\in \mathbb{C}\setminus \{0\}, $$
where \(\alpha > 0\) and \(\lambda \in \mathbb{C}\).
For \(0<\alpha <1\) and \(\lambda >0\), we have
$$ \partial _{t}^{\alpha }E_{\alpha ,1}\bigl(-\lambda t^{\alpha }\bigr)=-\lambda E _{\alpha ,1}\bigl(-\lambda t^{\alpha } \bigr),\quad t>0. $$
For \(0<\alpha <1\) and \(\lambda \in \mathbb{C}\), we have
$$ _{0}D_{t}^{\alpha }e_{\alpha }^{\lambda t}= \lambda e_{\alpha }^{ \lambda t}, \quad t>0. $$
For \(s>|\lambda |^{\frac{1}{\alpha }}\), we have
$$ \int _{0}^{\infty }e^{-st}t^{\alpha k+\beta -1} E_{\alpha ,\beta }^{(k)}\bigl(- \lambda t^{\alpha }\bigr) \,dt= \frac{k! s^{ \alpha -\beta }}{(s^{\alpha }+\lambda )^{k+1}}. $$
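As a quick consistency check of the sign (this verification is added here and is not part of the original text), take \(k=0\) and \(\alpha =\beta =1\), so that \(E_{1,1}(-\lambda t)=e^{-\lambda t}\) and

$$ \int _{0}^{\infty }e^{-st}E_{1,1}(-\lambda t) \,dt= \int _{0}^{\infty }e^{-(s+\lambda )t} \,dt= \frac{1}{s+\lambda }= \frac{s^{\alpha -\beta }}{(s^{\alpha }+\lambda )^{k+1}}. $$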
The main result
In this work, we focus on the uniqueness of the solution q of the inverse problem.
$$ H_{\Delta }(\varOmega _{T})=\bigl\{ u\in \mathcal{D}'( \varOmega _{T})/ u\in L^{2}( \varOmega _{T}), \Delta u \in L^{2}(\varOmega _{T})\bigr\} . $$
For \(x=(x_{1},\ldots ,x_{d})\) and \(y=(y_{1},\ldots ,y_{d})\in \mathbb{C}^{d}\), \(\langle x,y\rangle := \sum_{i=1}^{d} x_{i} y_{i}\).
Fix \(\xi \in S^{d-1}\), where \(S^{d-1}=\{x\in \mathbb{R}^{d}, |x|=1\}\), and for \(\varepsilon >0\), denote
$$\begin{aligned}& \partial \varOmega _{+}(\xi )=\bigl\{ x\in \partial \varOmega , \bigl\langle \nu (x), \xi \bigr\rangle >0\bigr\} , \\& \partial \varOmega _{-}(\xi )=\bigl\{ x\in \partial \varOmega , \bigl\langle \nu (x), \xi \bigr\rangle < 0\bigr\} , \\& \partial \varOmega _{+,\varepsilon }(\xi )=\bigl\{ x\in \partial \varOmega , \bigl\langle \nu (x),\xi \bigr\rangle >\varepsilon \bigr\} , \\& \partial \varOmega _{-,\varepsilon }(\xi )=\bigl\{ x\in \partial \varOmega , \bigl\langle \nu (x),\xi \bigr\rangle < \varepsilon \bigr\} , \end{aligned}$$
where ν represents the unit outer normal to the boundary ∂Ω. The partial Cauchy data set is defined by
$$ C_{q,\varepsilon }=\bigl\{ \bigl(u(\cdot ,0),u_{|\varSigma _{T}},\partial _{\nu }u _{|\partial \varOmega _{-,\varepsilon }(\xi )\times (0,T)}\bigr)/u\in H_{ \Delta }(\varOmega _{T}),\bigl(\partial _{t}^{\alpha }-\Delta +q\bigr) u=0 \mbox{ in } \varOmega _{T}\bigr\} . $$
In the following theorem, we give a global uniqueness result for the fractional diffusion equation in which the Cauchy data are given only on part of the boundary.
Let \(q_{i}\in L^{\infty }(\varOmega )\), \(i=1,2\). Given \(\xi \in S^{d-1}\) and \(\varepsilon >0\), assume that \(C_{q_{1},\varepsilon }=C_{q_{2}, \varepsilon }\). Then \(q_{1}=q_{2}\).
To prove this result, we need the following results.
([7])
Let \(\rho \in \mathbb{C}^{d}\) be such that \(\langle \rho ,\rho \rangle =0\), where \(\rho =\tau (\xi +i\eta )\) with \(\xi , \eta \in S^{d-1}\). Suppose that \(f(\cdot ,\rho /|\rho |)\in W^{2,\infty }(\varOmega )\) satisfies \(\partial _{\xi }f=\partial _{\eta }f=0\), where \(\partial _{ \xi }\) denotes the directional derivative in the direction ξ. Then there is a solution to \(\Delta u-q u=0\) in Ω of the form
$$ u(x,\rho )=e^{\langle x,\rho \rangle }\bigl(f\bigl(x,\rho / \vert \rho \vert \bigr)+ \psi (x, \rho )\bigr), $$
$$ \psi _{|\partial \varOmega _{-}(\xi )}=0, $$
$$ \bigl\Vert \psi (\cdot ,\rho ) \bigr\Vert _{L^{2}(\varOmega )}\leq \frac{M}{\tau } $$
for some \(M>0\) and \(\tau \geq \tau _{0}>0\).
Corollary 4.3
Let \(\rho \in \mathbb{C}^{d}\) be such that \(\langle \rho ,\rho \rangle =0\), where \(\rho =\tau (\xi +i\eta )\) with \(\xi , \eta \in S^{d-1}\), and \(\lambda >0\).
There is a solution w to \(-\Delta w+(q-\lambda ) w=0\) in Ω of the form
$$ u_{q,\lambda }(x,\rho ):=e^{\langle x,\rho \rangle }\bigl(1+\psi _{q}(x, \rho )\bigr), $$
$$ \psi _{q|\partial \varOmega _{-}(\xi )}=0 $$
$$ \bigl\Vert \psi _{q}(\cdot ,\rho ) \bigr\Vert _{L^{2}(\varOmega )}\leq \frac{M}{\tau },\quad \tau \geq \tau _{0}, $$
for some \(M>0\) and \(\tau _{0}>0\).
The function \((x,t)\mapsto E_{\alpha ,1}(-\lambda t^{\alpha })u_{q, \lambda }(x,\rho )\) is a solution of \((\partial _{t}^{\alpha }-\Delta +q) w=0\) in \(\varOmega _{T}\).
The function \((x,t)\mapsto e_{\alpha }^{-\lambda t}u_{q,\lambda }(x, \rho )\) is a solution of \((_{0}D_{t}^{\alpha }-\Delta +q) w=0\) in \(\varOmega _{T}\).
1. Applying Lemma 4.2 for \(f=1\) and \(q:=q-\lambda \), we obtain the result.
2. Using Proposition 3.5 and item 1, it is easy to prove items 2 and 3. □
(Carleman estimates [7])
For \(q\in L^{\infty }(\varOmega )\), there exist \(\tau _{0}>0\) and \(C>0\) such that for all \(u\in \mathcal{C}^{2}(\varOmega )\), \(u_{|\partial \varOmega }=0\), and \(\tau \geq \tau _{0}\), we have the estimate
$$\begin{aligned}& C\tau ^{2} \int _{\varOmega } \bigl\vert e^{-\tau \langle x,\xi \rangle }u \bigr\vert ^{2} \,dx+\tau \int _{\partial \varOmega _{+}}\langle \nu ,\xi \rangle \bigl\vert e^{-\tau \langle x,\xi \rangle }\partial _{\nu }u \bigr\vert ^{2} \,dS \\& \quad \leq \int _{\varOmega } \bigl\vert e^{-\tau \langle x,\xi \rangle }(\Delta -q) u \bigr\vert ^{2} \,dx-\tau \int _{\partial \varOmega _{-}}\langle \nu ,\xi \rangle \bigl\vert e^{-\tau \langle x,\xi \rangle }\partial _{\nu }u \bigr\vert ^{2} \,dS. \end{aligned}$$
If \(u\in H_{\Delta }(\varOmega _{T})\) and \(u_{|\varSigma _{T}}=0\), then \(u\in L^{2}(0,T;H^{2}(\varOmega ))\).
Suppose first that \(u\in \mathcal{C}^{2}(\bar{\varOmega _{T}})\). From the first Green formula we obtain
$$ 2\int _{\varOmega _{T}} \vert \nabla u \vert ^{2} \,dx=-2 \int _{\varOmega _{T}}u \Delta u \,dx\leq \int _{\varOmega _{T}}\bigl( \vert u \vert ^{2}+ \vert \Delta u \vert ^{2}\bigr) \,dx= \Vert u \Vert _{H_{\Delta }( \varOmega _{T})}^{2}< \infty . $$
From the formula of Kadlec [24],
$$ \sum_{i,j} \int _{\varOmega _{T}} \vert \partial _{i}\partial _{j}u \vert ^{2} \,dx\leq \int _{\varOmega _{T}} \vert \Delta u \vert ^{2} \,dx+C_{0} \int _{\varSigma _{T}} \vert \partial _{\nu }u \vert ^{2} \,dx, $$
since \(\int _{\varSigma _{T}}|\partial _{\nu }u|^{2} \,dx\leq C_{1} \int _{\varOmega _{T}}|\Delta u|^{2} \,dx\), we get
$$ \Vert u \Vert _{L^{2}(0,T;H^{2}(\varOmega ))}\leq C_{2} \Vert u \Vert _{H_{\Delta }(\varOmega _{T})}. $$
Using standard density arguments, we obtain
$$ \Vert u \Vert _{L^{2}(0,T;H^{2}(\varOmega ))}\leq C_{2} \Vert u \Vert _{H_{\Delta }(\varOmega _{T})},\quad \forall u\in H_{\Delta }(\varOmega _{T}), \mbox{ and } u _{|\varSigma _{T}}=0. $$
Thus \(u\in L^{2}(0,T;H^{2}(\varOmega ))\). □
Let \(u_{i}:=u_{q_{i},f_{i}}^{\alpha }\), \(i=1,2\), be solutions of the fractional diffusion equations (\(P_{q_{i},f_{i}}^{\alpha }\)), \(q_{i}\in L^{\infty }(\varOmega )\) and \(f_{i}\in L^{2}(\varSigma _{T})\). Then we have
$$ \partial _{t}^{\alpha }u_{1} \star u_{2} = u_{1} \star _{0}D_{t}^{ \alpha }u_{2}, $$
where ⋆ is the convolution product.
From [2] we have
$$ \int _{a}^{b} g(s){{}^{c}_{a}}D_{s}^{\alpha }f(s) \,ds= \int _{a}^{b}f(s) {{}_{s}}D_{b}^{\alpha }g(s) \,ds+ \sum_{j=0}^{n-1} \bigl[ {{}_{s}}D_{b}^{\alpha +j-n}g(s)\cdot {{}_{s}}D _{b}^{n-1-j}f(s) \bigr]_{a}^{b} $$
For \(0<\alpha <1\), \(a=0\), and \(b=t\), this becomes
$$ \int _{0}^{t} g(s)\partial _{s}^{\alpha }f(s) \,ds= \int _{0}^{t} f(s) {{}_{s}}D_{t}^{\alpha }g(s) \,ds+ \bigl[ {{}_{s}}D_{t}^{\alpha -1}g(s) \cdot f(s) \bigr]_{0}^{t}. $$
$$ \begin{aligned}[b] \partial _{t}^{\alpha }u_{1} \star u_{2}&= \int _{0}^{t}\partial _{s}^{\alpha }u_{1}(x,s) u_{2}(x,t-s) \,ds \\ &= \int _{0}^{t} u_{1}(x,s) {}_{s}D_{t}^{\alpha }u_{2}(x,t-s) \,ds+\bigl[{{}_{s}D} _{t}^{\alpha -1}u_{2}(x,t-s)u_{1}(x,s)\bigr]_{0}^{t}. \end{aligned} $$
Since \(\alpha -1<0\), we have \({{}_{s}D}_{t}^{\alpha -1}u_{2}(x,t-s)= {{}_{s}I}_{t}^{1-\alpha }u_{2}(x,t-s)\) (see [2]) and
$$ {{}_{s}I}_{t}^{1-\alpha }u_{2}(x,t-s)= \frac{1}{\varGamma (1-\alpha )} \int _{s}^{t}(\tau -s)^{-\alpha }u_{2}(x,t- \tau ) \,d\tau . $$
Using integration by parts, we get
$$ {{}_{s}I}_{t}^{1-\alpha }u_{2}(x,t-s)= \frac{1}{\varGamma (2-\alpha )} \biggl[(t-s)^{1-\alpha }u_{2}(x,t-s)+ \int _{s}^{t}(\tau -s)^{1-\alpha } \frac{\partial u_{2}}{\partial \tau }(x,t-\tau ) \,d\tau \biggr]. $$
We deduce that \(\lim_{s\to t}{{}_{s}D}_{t}^{\alpha -1}u_{2}(x,t-s)=0\) and
$$ \int _{0}^{t}\partial _{s}^{\alpha }u_{1}(x,s) u_{2}(x,t-s) \,ds = \int _{0}^{t} u_{1}(x,s) {}_{s}D_{t}^{\alpha }u_{2}(x,t-s) \,ds. $$
By definition
$$ {}_{s}D_{t}^{\alpha }u_{2}(x,t-s):= \frac{1}{\varGamma (1-\alpha )} \frac{d}{dt} \int _{s}^{t}(t-\tau )^{-\alpha }u_{2}(x, \tau -s) \,d\tau . $$
By change of variable \(\xi =\tau -s\) we obtain
$$ \begin{aligned}[b] {}_{s}D_{t}^{\alpha }u_{2}(x,t-s)&=\frac{1}{\varGamma (1-\alpha )} \frac{d}{dt} \int _{0}^{t-s}(t-s-\xi )^{-\alpha } u_{2}(x,\xi ) \,d\xi \\ &={}_{0}D_{t-s}^{\alpha }u_{2}(x,t-s), \end{aligned} $$
which implies
$$ \int _{0}^{t}\partial _{s}^{\alpha }u_{1}(x,s) u_{2}(x,t-s) \,ds = \int _{0}^{t} u_{1}(x,s) _{0}D_{t-s}^{\alpha }u_{2}(x,t-s) \,ds $$
$$ \partial _{t}^{\alpha }u_{1} \star u_{2} = u_{1} \star _{0}D_{t}^{ \alpha }u_{2}. $$
If \(u_{2}(x,0)=0\), then from Remark 3.3 we obtain
$$ \partial _{t}^{\alpha }u_{1} \star u_{2} = u_{1} \star \partial _{t} ^{\alpha }u_{2}. $$
Let \(\xi \in S^{d-1}\). Fix \(k\in \mathbb{R}^{d}\) such that \(\langle k, \xi \rangle =0\). By Corollary 4.3 there exists \(u_{2}\in H _{\Delta }(\varOmega _{T})\) satisfying \((\partial _{t}^{\alpha }-\Delta +q _{2}) u_{2}=0\) in \(\varOmega _{T}\) of the form
$$ u_{2}(x,t)=E_{\alpha }\bigl(-\lambda t^{\alpha } \bigr)u_{{q_{2}},\lambda }(x, \rho _{2}), $$
where \(u_{{q_{2}},\lambda }\) is defined by (4.2), \(\rho _{2}= \tau \xi -i \frac{k+\ell }{2}\), \(\langle k,\ell \rangle =\langle \xi ,\ell \rangle =0\), and \(|k+\ell |^{2}=4\tau ^{2}\) (under these conditions, \(\langle \rho _{2},\rho _{2}\rangle =0\)). In dimension \(d \geq 3\), we can always choose such a vector ℓ.
Since \(C_{q_{1},\varepsilon }=C_{q_{2},\varepsilon }\), there exists \(u_{1}\in H_{\Delta }(\varOmega _{T})\) such that
$$ \textstyle\begin{cases} (\partial _{t}^{\alpha }-\Delta +q_{1}) u_{1}=0&\mbox{in } \varOmega _{T}, \\ u_{1}(x,0)=u_{2}(x,0)&\mbox{in } \varOmega , \\ u_{1}=u_{2}&\mbox{on } \varSigma _{T}, \\ \partial _{\nu }u_{1}=\partial _{\nu }u_{2}&\mbox{on } \partial \varOmega _{-,\varepsilon }(\xi )\times (0,T). \end{cases} $$
Setting \(u=u_{1}-u_{2}\) and \(q=q_{1}-q_{2}\), we have
$$ \textstyle\begin{cases} (\partial _{t}^{\alpha }-\Delta +q_{1}) u=-qu_{2}&\mbox{in } \varOmega _{T}, \\ u(x,0)=0&\mbox{in } \varOmega , \\ u=0&\mbox{on } \varSigma _{T}, \\ \partial _{\nu }u=0&\mbox{on } \partial \varOmega _{-,\varepsilon }(\xi ) \times (0,T). \end{cases} $$
By Lemma 4.5, since \(u\in H_{\Delta }(\varOmega _{T})\) and \(u_{|\varSigma _{T}}=0\), it follows that \(u\in L^{2}(0,T;H^{2}(\varOmega ))\).
From Green's formula and Lemma 4.6, for \(v\in H_{\Delta }(\varOmega _{T})\), we obtain
$$\begin{aligned} \int _{\varOmega }\bigl(\partial _{t}^{\alpha }-\Delta +q_{1}\bigr)u\star \bar{v} \,dx&=- \int _{\varOmega }qu_{2}\star \bar{v} \,dx \\ &= \int _{\varOmega }u\star \bigl(_{0}D_{t}^{\alpha }- \Delta +q_{1}\bigr)\bar{v} \,dx+ \int _{\partial \varOmega }\partial _{\nu }u\star \bar{v} \,ds. \end{aligned}$$
We choose
$$ \bar{v}(x,t)= e_{\alpha }^{-\lambda t}u_{{q_{1}},\lambda }(x,\rho _{1}), $$
which is a solution of \((_{0}D_{t}^{\alpha }-\Delta +q_{1})\bar{v}=0\), where \(u_{{q_{1}},\lambda }(x,\rho _{1})=e^{\langle x,\rho _{1}\rangle }(1+\psi _{q_{1}}(x,\rho _{1}))\),
$$ \psi _{{q_{1}}|\partial \varOmega _{-}(\xi )}=0, $$
$$ \bigl\Vert \psi _{q_{1}}(\cdot ,\rho _{1}) \bigr\Vert _{L^{2}(\varOmega )}\leq \frac{ \tilde{M}}{\tau },\quad \tau \geq \tau _{0}, $$
where \(\rho _{1}=-\tau \xi -i\frac{k-\ell }{2}\). We have \(\rho _{1}+\rho _{2}=-i k\) and \(|\rho _{1}|^{2}=2\tau ^{2}\).
Then equation (4.9) becomes
$$ - \int _{\varOmega }qu_{2}\star \bar{v} \,dx= \int _{\partial \varOmega }\partial _{\nu }u\star \bar{v} \,ds. $$
Let \(\lambda >0\) and \(s>\lambda ^{\frac{1}{\alpha }}\), and extend u by 0 on \((T, \infty )\). Applying the Laplace transform to (4.10), we obtain
$$ - \int _{\varOmega }q \tilde{u_{2}} \tilde{\bar{v}} \,dx= \int _{\partial \varOmega }\partial _{\nu }\tilde{u} \tilde{\bar{v}} \,dS, $$
where \(\tilde{u}:=\tilde{u}(\cdot ,s)\) is a solution of the following problem:
$$ \textstyle\begin{cases} \Delta \tilde{u}-(q_{1}+s^{\alpha }) \tilde{u}=q \tilde{u_{2}}& \mbox{in } \varOmega , \\ \tilde{u}=0&\mbox{on } \partial \varOmega , \\ \partial _{\nu }\tilde{u}=0&\mbox{on } \partial \varOmega _{-,\varepsilon }(\xi ), \end{cases} $$
and the Laplace transform g̃ of a function g is defined by
$$ \tilde{g}(s)= \int _{0}^{\infty }e^{-st}g(t) \,dt. $$
Using Lemma 3.6, we obtain
$$\begin{aligned}& \tilde{u_{2}}(x,s)=\frac{s^{\alpha -1}}{s^{\alpha }-\lambda } u_{ {q_{2}},\lambda }(x,\rho _{2}), \\& \tilde{\bar{v}}(x,s)=\frac{1}{s^{\alpha }-\lambda } u_{{q_{1}},\lambda }(x,\rho _{1}). \end{aligned}$$
Equation (4.11) becomes
$$ \int _{\varOmega }q u_{{q_{1}},\lambda }(x,\rho _{1})u_{{q_{2}},\lambda }(x, \rho _{2}) \,dx =\frac{(s^{\alpha }-\lambda )}{s^{\alpha -1}} \int _{\partial \varOmega }\partial _{\nu }\tilde{u} u_{{q_{1}},\lambda }(x,\rho _{1}) \,dS. $$
In the following step, we show that \(\lim_{\tau \to \infty } \int _{\partial \varOmega }\partial _{\nu }\tilde{u} u_{{q_{1}},\lambda }(x,\rho _{1}) \,dS=0\). Indeed, since \(\partial _{\nu }\tilde{u}=0\) on \(\partial \varOmega _{-,\varepsilon }(\xi )\), we have
$$\begin{aligned}& \biggl\vert \int _{\partial \varOmega }\partial _{\nu }\tilde{u} u_{{q_{1}},\lambda }(x,\rho _{1}) \,dS \biggr\vert \\& \quad = \biggl\vert \int _{\partial \varOmega _{+,\varepsilon }(\xi )}\partial _{\nu }\tilde{u} e^{\langle x,\rho _{1}\rangle }\bigl(1+\psi _{q_{1}}(x,\rho _{1})\bigr) \,dS \biggr\vert \\& \quad \leq \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert \bigl\vert 1+\psi _{q_{1}}(x,\rho _{1}) \bigr\vert \,dS \\& \quad \leq \biggl( \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,dS \biggr)^{ \frac{1}{2}} \biggl( \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert 1+\psi _{q_{1}}(x,\rho _{1}) \bigr\vert ^{2} \,dS \biggr)^{\frac{1}{2}} \\& \quad \leq \biggl( \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,dS \biggr)^{ \frac{1}{2}} \biggl(2 \int _{\partial \varOmega _{+,\varepsilon }(\xi )}\bigl(1+ \bigl\vert \psi _{q_{1}}(x,\rho _{1}) \bigr\vert ^{2}\bigr) \,dS \biggr)^{\frac{1}{2}} \\& \quad \leq \sqrt{2} \biggl( \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,dS \biggr)^{ \frac{1}{2}}\bigl(\operatorname{vol}(\partial \varOmega )+ \Vert \psi _{q_{1}} \Vert _{L^{2}(\partial \varOmega )}^{2}\bigr)^{\frac{1}{2}}. \end{aligned}$$
From [7, p. 666] we have
$$ \Vert \psi _{q_{1}} \Vert _{\mathcal{C}(\partial \varOmega )}\leq C \tau ^{ \frac{1}{4}}, $$
and since
$$ \Vert \psi _{q_{1}} \Vert _{L^{2}(\partial \varOmega )}\leq \operatorname{vol}(\partial \varOmega ) \Vert \psi _{q_{1}} \Vert _{\mathcal{C}(\partial \varOmega )}, $$
$$ \biggl\vert \int _{\partial \varOmega }\partial _{\nu }\tilde{u} u_{{q_{1}},\lambda }(x,\rho _{1}) \,dS \biggr\vert \leq C \sqrt{2}\bigl(\operatorname{vol}(\partial \varOmega )\bigr)^{ \frac{1}{2}}\bigl(1+\tau ^{\frac{1}{4}}\bigr) \biggl( \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,dS \biggr)^{ \frac{1}{2}}. $$
From estimates (4.3) and (4.1) we obtain
$$\begin{aligned} \tau \varepsilon \int _{\partial \varOmega _{+,\varepsilon }(\xi )} \bigl\vert \partial _{\nu } \tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,ds \leq& \tau \int _{\partial \varOmega _{+,\varepsilon }(\xi )}\langle \nu ,\xi \rangle \bigl\vert \partial _{\nu }\tilde{u} e^{-\tau \langle x,\xi \rangle } \bigr\vert ^{2} \,dS \\ \leq& \int _{\varOmega } \bigl\vert e^{-\tau \langle x,\xi \rangle }\bigl(\Delta - \bigl(q_{1}+s ^{\alpha }\bigr)\bigr)\tilde{u} \bigr\vert ^{2} \,dx= \int _{\varOmega } \bigl\vert e^{-\tau \langle x, \xi \rangle }q \tilde{u_{2}} \bigr\vert ^{2} \,dx \\ \leq& 2\bigl( \Vert q_{1} \Vert _{\infty }+ \Vert q_{2} \Vert _{\infty }\bigr)^{2} \int _{\varOmega }\bigl(1+ \bigl\vert \psi _{q_{2}}(x,\rho _{2}) \bigr\vert ^{2}\bigr) \,dx \\ \leq& 2\bigl( \Vert q_{1} \Vert _{\infty }+ \Vert q_{2} \Vert _{\infty }\bigr)^{2}\bigl(\operatorname{vol}(\varOmega )+ \bigl\Vert \psi _{q_{2}}(x,\rho _{2}) \bigr\Vert ^{2}_{L^{2}(\varOmega )}\bigr) \\ \leq& 2\bigl( \Vert q_{1} \Vert _{\infty }+ \Vert q_{2} \Vert _{\infty }\bigr)^{2}\biggl(\operatorname{vol}(\varOmega )+ \frac{M}{ \tau ^{2}}\biggr). \end{aligned}$$
Hence we have proved that
$$ \biggl\vert \int _{\partial \varOmega }\partial _{\nu }\tilde{u} \bar{v} \,ds \biggr\vert \leq \tilde{C} \frac{(1+\tau ^{\frac{1}{4}})}{\sqrt{\tau }}\biggl(\sqrt{\operatorname{vol}( \varOmega )}+ \frac{M}{\tau }\biggr), $$
$$ \tilde{C}=\frac{2C ( \Vert q_{1} \Vert _{\infty }+ \Vert q_{2} \Vert _{\infty }) \sqrt{\operatorname{vol}( \partial \varOmega )}}{\sqrt{\varepsilon }}, $$
$$ \lim_{\tau \to \infty } \int _{\partial \varOmega }\partial _{\nu }\tilde{u} \tilde{\bar{v}} \,dS=0. $$
Using equalities (4.1) and (4.11) and letting τ tend to ∞, we obtain
$$ \int _{\varOmega }q(x)e^{-i\langle x,k\rangle } \,dx=0. $$
Varying \(\xi \in S^{d-1}\) in a small conic neighborhood and using the fact that \(\hat{q}(k)\) is analytic, we conclude that \(q=0\), and hence \(q_{1}=q_{2}\). □
In this paper, we established a uniqueness result for the coefficient identification problem for a fractional diffusion equation in a bounded domain, from observation of the Cauchy data on particular subsets of the boundary, using Carleman estimates and particular complex geometrical solutions of the fractional diffusion equation.
Adams, E.E., Gelhar, L.W.: Field study of dispersion in a heterogeneous aquifer 2. Spatial moments analysis. Water Resour. Res. 28, 3293–3307 (1992)
Agrawal, O.P.: Fractional variational calculus in terms of Riesz fractional derivatives. J. Phys. A, Math. Theor. 40, 6287–6303 (2007)
Bellassoued, M., Choulli, M., Yamamoto, M.: Stability estimate for an inverse wave equation and a multidimensional Borg–Levinson theorem. J. Differ. Equ. 247(2), 465–494 (2009)
Bellassoued, M., Dos Santos Ferreira, D.: Stability estimates for the anisotropic wave equation from the Dirichlet-to-Neumann map. Inverse Probl. Imaging 5(4), 745–773 (2011)
Bellassoued, M., Jellali, D., Yamamoto, M.: Lipschitz stability for a hyperbolic inverse problem by finite local boundary data. Appl. Anal. 85, 1219–1243 (2006)
Ben Aicha, I.: Stability estimate for hyperbolic inverse problem with time dependent coefficient. Inverse Probl. 31, 125010 (2015)
Bukhgeim, A.L., Uhlmann, G.: Recovering a potential from partial Cauchy data. Commun. Partial Differ. Equ. 27(3–4), 653–668 (2002)
Canuto, B., Kavian, O.: Determining coefficients in a class of heat equations via boundary measurements. SIAM J. Math. Anal. 32(5), 963–986 (2001)
Carcione, J., Sanchez-Sesma, F., Luzon, F., Perez Gavilan, J.: Theory and simulation of time-fractional fluid diffusion in porous media. J. Phys. A, Math. Theor. 46, 345501 (2013)
Cheng, J., Xiang, X., Yamamoto, M.: Carleman estimate for a fractional diffusion equation with half order and application. Appl. Anal. 90(9), 1355–1371 (2011)
Cheng, M., Nakagawa, J., Yamamoto, M., Yamazaki, T.: Uniqueness in an inverse problem for a one-dimensional fractional diffusion equation. Inverse Probl. 25, 115002 (2009)
Choulli, M.: Une Introduction aux Problèmes Inverses Elliptiques et Paraboliques. Springer, Berlin (2009)
Choulli, M., Kian, Y.: Stability of the determination of a time-dependent coefficient in parabolic equations. Math. Control Relat. Fields 3(2), 143–160 (2013)
Fujishiro, K., Kian, Y.: Determination of time dependent factors of coefficients in fractional diffusion equations. Preprint. arXiv:1501.01945
Ghanmi, A., Mdimagh, R., Saad, I.B.: Identification of points sources via time fractional diffusion equation. Filomat 32, 6189–6201 (2018)
Hatano, Y., Nakagawa, J., Wang, S., Yamamoto, M.: Determination of order in fractional diffusion equation. J. Math-for-Ind. 5A, 51–57 (2013)
Kian, Y., Oksanen, L., Soccorsi, E., Yamamoto, M.: Global uniqueness in an inverse problem for time fractional diffusion equations. J. Differ. Equ. 264(2), 1146–1170 (2018)
Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
Li, Z., Imanuvilov, O.Y., Yamamoto, M.: Uniqueness in inverse boundary value problems for fractional diffusion equations. Preprint. arXiv:1404.7024
Lions, J.L., Magenes, E.: Problèmes aux limites non homogènes et applications, vol. 1. Dunod, Paris (1968)
Podlubny, I.: Fractional Differential Equations. Mathematics in Science and Engineering, vol. 198. Academic Press, San Diego (1999)
Sakamoto, S., Yamamoto, M.: Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems. J. Math. Anal. Appl. 382, 426–447 (2011)
Stefanov, P., Uhlmann, G.: Recovery of a source term or a speed with one measurement and applications. Trans. Am. Math. Soc. 365(11), 5737–5758 (2013)
Taylor, M.: Partial Differential Equations I. Basic Theory, 2nd edn. Springer, New York (1996)
Yamamoto, M., Zhang, Y.: Conditional stability in determining a zeroth-order coefficient in a half-order fractional diffusion equation by a Carleman estimate. Inverse Probl. 28(10), 105010 (2012)
Inverse Problems, LAMSIN, Ecole Nationale d'Ingénieurs de Tunis, University of Tunis El Manar, Tunis Belvédère, Tunisia
Fadhel Jday
& Ridha Mdimagh
Mathematics Department, Jamoum University College, Umm Al-Qura University, Mecca, Saudi Arabia
Mathematics Department, Faculty of Science and Arts, Khulais, University of Jeddah, Jeddah, Saudi Arabia
Ridha Mdimagh
All authors contributed equally to this paper. All authors read and approved the final manuscript.
Correspondence to Ridha Mdimagh.
The authors declare that there is no conflict of interest regarding the publication of this paper.
Jday, F., Mdimagh, R. Uniqueness result for a fractional diffusion coefficient identification problem. Bound Value Probl 2019, 170 (2019) doi:10.1186/s13661-019-1278-x
Received: 21 February 2019
58J99
Fractional diffusion equation
Partial data
Uniqueness result
Recent questions tagged isi2014-dcg
Let $(1+x)^n = C_0+C_1x+C_2x^2+ \dots + C_nx^n$, $n$ being a positive integer. The value of $\left( 1+\dfrac{C_0}{C_1} \right) \left( 1+\dfrac{C_1}{C_2} \right) \cdots \left( 1+\dfrac{C_{n-1}}{C_n} \right)$ is $\left( \frac{n+1}{n+2} \right) ^n$ $ \frac{n^n}{n!} $ $\left( \frac{n}{n+1} \right) ^n$ $ \frac{(n+1)^n}{n!} $
Let $a_n=\bigg( 1 - \frac{1}{\sqrt{2}} \bigg) \cdots \bigg( 1 - \frac{1}{\sqrt{n+1}} \bigg), \: n \geq 1$. Then $\underset{n \to \infty}{\lim} a_n$ equals $1$ does not exist equals $\frac{1}{\sqrt{\pi}}$ equals $0$
$\underset{x \to \infty}{\lim} \left( \frac{3x-1}{3x+1} \right) ^{4x}$ equals $1$ $0$ $e^{-8/3}$ $e^{4/9}$
$\underset{n \to \infty}{\lim} \dfrac{1}{n} \bigg( \dfrac{n}{n+1} + \dfrac{n}{n+2} + \cdots + \dfrac{n}{2n} \bigg)$ is equal to $\infty$ $0$ $\log_e 2$ $1$
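A quick numerical sanity check (added here; it is not part of the original question): the expression is a Riemann sum for $\int_0^1 \frac{dx}{1+x}$, so its value should approach $\log_e 2$.

```python
import math

def a(n):
    # (1/n) * (n/(n+1) + n/(n+2) + ... + n/(2n))
    return sum(n / (n + k) for k in range(1, n + 1)) / n

for n in (10, 1_000, 100_000):
    print(n, a(n))
print("log_e 2 =", math.log(2))   # the values above approach ~0.693147
```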
Consider the sets defined by the real solutions of the inequalities $A = \{(x,y):x^2+y^4 \leq 1\} \:\:\:\:\:\:\: B=\{(x,y):x^4+y^6 \leq 1\}$ Then $B \subseteq A$ $A \subseteq B$ Each of the sets $A - B, \: B - A$ and $A \cap B$ is non-empty none of the above
If $f(x)$ is a real valued function such that $2f(x)+3f(-x)=15-4x$, for every $x \in \mathbb{R}$, then $f(2)$ is $-15$ $22$ $11$ $0$
If $f(x) = \dfrac{\sqrt{3} \sin x}{2+\cos x}$, then the range of $f(x)$ is the interval $[-1 , \sqrt{3}/2]$ the interval $[-\sqrt{3}/2, 1]$ the interval $[-1, 1]$ none of these
If $M$ is a $3 \times 3$ matrix such that $\begin{bmatrix} 0 & 1 & 2 \end{bmatrix}M=\begin{bmatrix}1 & 0 & 0 \end{bmatrix}$ and $\begin{bmatrix}3 & 4 & 5 \end{bmatrix} M = \begin{bmatrix}0 & 1 & 0 \end{bmatrix}$ ... $\begin{bmatrix} -1 & 2 & 0 \end{bmatrix}$ $\begin{bmatrix} 9 & 10 & 8 \end{bmatrix}$
The number of divisors of $6000$, where $1$ and $6000$ are also considered as divisors of $6000$ is $40$ $50$ $60$ $30$
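A quick brute-force check (added here; not part of the original question): since $6000 = 2^4 \cdot 3 \cdot 5^3$, the divisor count should be $(4+1)(1+1)(3+1)=40$.

```python
def num_divisors(n):
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

# 6000 = 2^4 * 3 * 5^3, so the count should be (4+1)*(1+1)*(3+1) = 40
print(num_divisors(6000))
```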
Let $x_1$ and $x_2$ be the roots of the quadratic equation $x^2-3x+a=0$, and $x_3$ and $x_4$ be the roots of the quadratic equation $x^2-12x+b=0$. If $x_1, x_2, x_3$ and $x_4 \: (0 < x_1 < x_2 < x_3 < x_4)$ are in $G.P.,$ then $ab$ equals $64$ $5184$ $-64$ $-5184$
The integral $\int _0^{\frac{\pi}{2}} \frac{\sin^{50} x}{\sin^{50}x +\cos^{50}x} dx$ equals $\frac{3 \pi}{4}$ $\frac{\pi}{3}$ $\frac{\pi}{4}$ none of these
Let the function $f(x)$ be defined as $f(x)=\mid x-1 \mid + \mid x-2 \:\mid$. Then which of the following statements is true? $f(x)$ is differentiable at $x=1$ $f(x)$ is differentiable at $x=2$ $f(x)$ is differentiable at $x=1$ but not at $x=2$ none of the above
$x^4-3x^2+2x^2y^2-3y^2+y^4+2=0$ represents A pair of circles having the same radius A circle and an ellipse A pair of circles having different radii none of the above
Let $\mathbb{N}=\{1,2,3, \dots\}$ be the set of natural numbers. For each $n \in \mathbb{N}$, define $A_n=\{(n+1)k, \: k \in \mathbb{N} \}$. Then $A_1 \cap A_2$ equals $A_3$ $A_4$ $A_5$ $A_6$
The sum of the series $\dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3}+ \cdots + \dfrac{1}{n(n+1)} + \cdots $ is $1$ $1/2$ $0$ non-existent
$\underset{x \to 2}{\lim} \dfrac{1}{1+e^{\frac{1}{x-2}}}$ is $0$ $1/2$ $1$ non-existent
$^nC_0+2^nC_1+3^nC_2+\cdots+(n+1)^nC_n$ equals $2^n+n2^{n-1}$ $2^n-n2^{n-1}$ $2^n$ none of these
It is given that $e^a+e^b=10$ where $a$ and $b$ are real. Then the maximum value of $(e^a+e^b+e^{a+b}+1)$ is $36$ $\infty$ $25$ $21$
If $A(t)$ is the area of the region bounded by the curve $y=e^{-\mid x \mid}$ and the portion of the $x$-axis between $-t$ and $t$, then $\underset{t \to \infty}{\lim} A(t)$ equals $0$ $1$ $2$ $4$
Suppose that the function $h(x)$ is defined as $h(x)=g(f(x))$ where $g(x)$ is monotone increasing, $f(x)$ is concave, and $g''(x)$ and $f''(x)$ exist for all $x$. Then $h(x)$ is always concave always convex not necessarily concave None of these
The conditions on $a$, $b$ and $c$ under which the roots of the quadratic equation $ax^2+bx+c=0 \: ,a \neq 0, \: b \neq 0 $ and $c \neq 0$, are of unequal magnitude but of opposite signs, are the following: $a$ and $c$ have the same sign while $b$ has the ... $c$ has the opposite sign. $a$ and $c$ have the same sign. $a$, $b$ and $c$ have the same sign.
The sum of the series $\:3+11+\dots +(8n-5)\:$ is $4n^2-n$ $8n^2+3n$ $4n^2+4n-5$ $4n^2+2$
Let $f(x) = \dfrac{2x}{x-1}, \: x \neq 1$. State which of the following statements is true. For all real $y$, there exists $x$ such that $f(x)=y$ For all real $y \neq 1$, there exists $x$ such that $f(x)=y$ For all real $y \neq 2$, there exists $x$ such that $f(x)=y$ None of the above is true
The determinant $\begin{vmatrix} b+c & c+a & a+b \\ q+r & r+p & p+q \\ y+z & z+x & x+y \end{vmatrix}$ equals $\begin{vmatrix} a & b & c \\ p & q & r \\ x & y & z \end{vmatrix}$ ... $3\begin{vmatrix} a & b & c \\ p & q & r \\ x & y & z \end{vmatrix}$ None of these
Let $x_1 > x_2>0$. Then which of the following is true? $\log \big(\frac{x_1+x_2}{2}\big) > \frac{\log x_1+ \log x_2}{2}$ $\log \big(\frac{x_1+x_2}{2}\big) < \frac{\log x_1+ \log x_2}{2}$ There exist $x_1$ and $x_2$ such that $x_1 > x_2 >0$ and $\log \big(\frac{x_1+x_2}{2}\big) = \frac{\log x_1+ \log x_2}{2}$ None of these
Let $y^2-4ax+4a=0$ and $x^2+y^2-2(1+a)x+1+2a-3a^2=0$ be two curves. State which one of the following statements is true. These two curves intersect at two points These two curves are tangent to each other These two curves intersect orthogonally at one point These two curves do not intersect
The area enclosed by the curve $\mid\: x \mid + \mid y \mid =1$ is $1$ $2$ $\sqrt{2}$ $4$
If $f(x) = \sin \bigg( \dfrac{1}{x^2+1} \bigg),$ then $f(x)$ is continuous at $x=0$, but not differentiable at $x=0$ $f(x)$ is differentiable at $x=0$, and $f'(0) \neq 0$ $f(x)$ is differentiable at $x=0$, and $f'(0) = 0$ None of the above
For real $\alpha$, the value of $\int_{\alpha}^{\alpha+1} [x]dx$, where $[x]$ denotes the largest integer less than or equal to $x$, is $\alpha$ $[\alpha]$ $1$ $\dfrac{[\alpha] + [\alpha +1]}{2}$
Encyclopedia Magnetica
Barkhausen noise
Magnetisation process
Single Barkhausen jumps
Measurement of BN
Analysis of BN activity
RMS of BN signal
Total sum of amplitudes (TSA)
Total number of peaks (TNP)
Power spectrum
Other types of analysis
Magneto-acoustic emissions
Stan Zurek, Barkhausen noise, Encyclopedia Magnetica
https://E-Magnetica.pl/barkhausen_noise
Barkhausen noise (BN) - the phenomenon of rapid changes of positions of domain walls during the process of magnetisation of a ferromagnetic material.1)2)3)
Barkhausen noise is caused by rapid changes of flux density B due to domain wall movements - this causes high-frequency noise-like changes in induced voltage V
S. Zurek, E-Magnetica.pl, CC-BY-4.0
These sudden jumps (also referred to as Barkhausen jumps) can be made audible by suppressing the large voltage induced in a search coil (with a high-pass filter) and amplifying the frequencies in the audible range (see the recording of audible noise with the animation).1)4)
Barkhausen noise was discovered by Heinrich Barkhausen in 1919.5)
A phenomenon similar to magnetic Barkhausen noise is also present in ferroelectric materials, which have ferroelectric domains and hence ferroelectric domain walls.6)7)
Recording of a real Barkhausen noise in grain-oriented electrical steel (with sound): barkhausen_noise_magnetica.mp4 (link to video file)
Barkhausen noise (BN) activity is greater at intervals during which the changes of flux density B are the most rapid
Magnetisation process involves changes in the configuration of magnetic domains4)
Ferromagnetism is synonymous with spontaneous magnetisation of a material. Each part of the volume is magnetised to saturation and each such partial volume is known as a magnetic domain. The magnetic alignments of domains can point in different or opposing directions so that globally their contributions cancel partially or fully. Thus, the net volume magnetisation can be significantly smaller than saturation, and even zero for a demagnetised body (even though the individual domains remain saturated).2)
Magnetic domain structure (lancet combs switching during magnetisation) in high-permeability grain-oriented electrical steel. Bar domains are visible in the upper part. Copyright © Oles Hostanar
The magnetisation process, for example by applying external magnetic field, involves changes in the configuration of magnetic domains, which is accomplished by movement of the domain walls which separate domains. These movements can be impeded in several ways: crystal defects, grain boundaries, non-magnetic inclusions and precipitates, surface defects, etc.8) The phenomenon of "sticking" to local energy minima is called domain wall pinning.
Simplified animation of a domain wall crossing a non-magnetic void8)
The magnetisation process generates an effective pressure on the given domain wall to move. However, such a domain wall can be pinned to one of the above-mentioned defects, and therefore additional pressure is required to overcome the pinning force. Once the pressure to move exceeds the pinning force, the domain wall is suddenly unpinned and moves rapidly, until the forces are equalised or, for example, until the wall encounters the next defect.
As a result, the process of magnetisation is not smooth, but comprises jittery jumps of domain walls. A sudden movement of a domain wall is synonymous with a rapid change of local magnetisation M and hence also of the local flux density B. According to Faraday's law, changes in B generate changes in the electric field and thus induce a voltage in a coil magnetically coupled to such a material.
Typical Co-Fe amorphous microwires
There are magnetic materials which are magnetically very soft (very low coercivity) and because of their geometry can have just a single domain present (at least in some part of the volume).
Co-Fe amorphous core (red arrow) in glass coating (blue arrows and translucent tip), 0.1 mm diameter
This is the case, for example, in microwires made from amorphous cobalt. A glass coating is added during the manufacturing process to help with the wire production, to obtain the amorphous phase, and to apply internal stress. The magnetic domain structure is such that there is an inner core with a single-domain system, and an outer core with cylindrical9) or multiple closure domains.10)
Closure domains can remain at the ends of the inner core. Once an external magnetic field is applied, the main domain wall can rapidly change its position (travelling even faster than 1000 m/s) from one end of the wire to the other. Such a rapid transition constitutes a single Barkhausen jump.11)
When plotted as a B-H loop the rapid reversal of magnetisation makes the loop appear rectangular, because once the coercive field HC threshold is exceeded the corresponding B changes its polarity.12)
Structure of magnetic domains in a microwire10)
Single Barkhausen jump in a microwire creates a rectangular B-H loop12) by T. Charubin, M. Nowicki, R. Szewczyk, CC-BY-4.0
Domain wall velocity in Co-based amorphous microwire11)
Simple search coil for detecting Barkhausen noise13) by F. Bohn, G. Durin, M.A. Correa, N.R. Machado, R.D. Della Pace, C. Chesman, R.L. Sommer, CC-BY-4.0
Barkhausen noise is generated by changes of magnetisation M and hence also by flux density B and it can be detected by a search coil (pick-up coil) whose operation is based on the Faraday's law of induction.
Any changes in B induce corresponding voltage V at the terminals of the search coil.
Voltage in a Barkhausen coil sensor
$$ V = N · A · \frac{dB}{dt}$$ (V)
$V$ - voltage induced in the coil (V), $N$ - number of turns of the coil (unitless), $A$ - active cross-sectional area of the coil (m2), $dB/dt$ - derivative of flux density $B$ (T) with respect to time $t$ (s)
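As a small illustration of the search-coil equation above (this sketch is an addition to the article, with purely illustrative values of N, A and the B waveform):

```python
import numpy as np

N_turns = 100        # number of turns (illustrative)
A = 25e-6            # active cross-sectional area in m^2 (illustrative)
fs = 1_000_000       # sampling frequency in Hz
t = np.arange(0.0, 0.02, 1.0 / fs)          # one 50 Hz magnetising cycle
B = 1.5 * np.sin(2 * np.pi * 50.0 * t)      # smooth large-scale flux density (T)

# V = N * A * dB/dt, with the time derivative estimated numerically
V = N_turns * A * np.gradient(B, 1.0 / fs)
print(V.max())       # peak induced voltage for this smooth waveform
```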
However, typical changes of B (apart from the single-domain materials) comprise slower, large-amplitude changes which induce a low-frequency, high-amplitude component of the associated voltage. The very fast Barkhausen events are of much smaller amplitude and thus create a much smaller signal, which is superimposed on the slower, larger signal.
Therefore, before the BN can be analysed, it is necessary either to filter out the slower, large-amplitude signal or to compensate it out. This can be achieved either by a high-pass or band-pass filter in the signal processing electronics, or by arranging the pick-up coils in such a way that the slow, large signal is eliminated or not induced at all. Typical band-pass filtering can be from 300 Hz to 300 kHz.4)13)
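A crude digital band-pass step matching the quoted 300 Hz to 300 kHz band could look like the following sketch (an illustrative addition; real instruments combine analogue and digital filtering, and the function name is arbitrary):

```python
import numpy as np

def bandpass_fft(x, fs, f_lo=300.0, f_hi=300e3):
    """Crude band-pass: zero out spectral components outside [f_lo, f_hi]."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

# usage: v_bn = bandpass_fft(v_coil, fs=2e6) removes the slow, large
# magnetising component and keeps the noise-like Barkhausen band
```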
For example, it is possible to use two search coils connected in series opposition. Barkhausen noise is quite random locally, so noise detected at two different locations will simply add. But the slow, large components of voltage will be similar in both coils and thus will cancel each other out, leaving only the BN signature in the output voltage of the coil pair.
Another approach is to use a pick-up coil on a ferromagnetic core positioned perpendicularly to the surface of the sample under test. The activity in the main sample will magnetically couple to the small magnetic core and thus it can be detected without the large voltage being induced in it. The additional magnetic core should be made of a material which has much lower Barkhausen noise activity than the main sample.4)
Typical ways of detecting Barkhausen noise:4) with two opposing B-coils and with a single B-coil perpendicular to the surface (with a small cylindrical magnetic core)
Barkhausen noise is stochastic (random) in nature and its analysis is not straightforward, because the induced noise depends on many factors, including the frequency, amplitude and waveshape of the magnetic excitation (e.g. magnetising current).
Many methods were devised by researchers internationally4), with some examples given below. However, there is no standardised method for performing such measurements so the numerical values from different publications cannot be compared in the absolute sense.
RMS value of Barkhausen noise measured with two opposing B-coils (separated by 5 mm or 40 mm) at 50 Hz for a sample of grain-oriented electrical steel4)
RMS of the noise signal can be calculated after some high-pass or band-pass filtering. The RMS calculation follows the same method as measurement of RMS (root mean square) of any other signal, but it is applied to the Barkhausen noise waveform, typically digitised, with the calculations performed by a computer, for example over one cycle of magnetisation.
If the gain of the signal processing is calibrated, then the RMS of BN can be expressed in absolute units, which are typically quite small, e.g. less than 1 mV (as illustrated).
RMS of Barkhausen noise, expressed as an integral:
$$ V_{BN,RMS} = \sqrt{\frac{1}{T} · \int_0^T {(V_{BN}(t))^2} dt } $$ (V)
or as a sum of samples:
$$ V_{BN,RMS} = \sqrt{\frac{1}{N_{BN}} · \sum_{i=0}^{N_{BN}-1} {(V_{BN,i})^2} } $$ (V)
where: $V_{BN}(t)$ - voltage signal after filtering (V), $V_{BN,i}$ - sampled (digitised) single value of voltage after filtering (V), $T$ - time interval (s), $N_{BN}$ - total number of samples (unitless), $i$ - index (unitless), $t$ - time (s)
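A direct implementation of the sum-of-samples form (an illustrative addition to the article):

```python
import numpy as np

def bn_rms(v_bn):
    """RMS of the (already filtered) Barkhausen voltage: sqrt(mean(V_i^2))."""
    v = np.asarray(v_bn, dtype=float)
    return float(np.sqrt(np.mean(v ** 2)))

# applied, for example, over one magnetisation cycle of the filtered signal;
# the result is typically well below 1 mV for the steel example quoted above
```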
Total sum of amplitudes (TSA) of Barkhausen noise for grain-oriented electrical steel, measured at 50 Hz excitation4)
The total sum of amplitudes (TSA) is a method in which all the instances of digitised signal are added up to produce a single value. Absolute values are used in order to include the negative numbers.
The TSA values are not comparable between different measurement systems, because they depend on the sampling frequency (more data points produce higher values, even if the amplitude of the noise is similar).
Total sum of amplitudes TSA
$$ V_{BN,TSA} = \sum_{i=0}^{N_{BN}-1} { | V_{BN,i} | } $$ (V)
where: $V_{BN,i}$ - sampled (digitised) single value of voltage after filtering (V), $N_{BN}$ - total number of samples (unitless), $i$ - index (unitless)
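The TSA is equally simple to compute from the digitised record (illustrative addition):

```python
import numpy as np

def bn_tsa(v_bn):
    """Total sum of amplitudes: sum of |V_BN,i| over all samples.
    The value grows with the number of samples, as noted in the text."""
    return float(np.sum(np.abs(np.asarray(v_bn, dtype=float))))
```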
Total number of peaks (TNP) of Barkhausen noise for non-oriented electrical steel, measured at 50 Hz excitation4)
Total number of peaks (TNP), as the name implies, is calculated simply as the number of detectable peaks in the filtered voltage. The result depends on the type of sensing, filtering, and criteria used for peak detection. Larger Barkhausen events which cause avalanches can cause fewer peaks.
The TNP value is unitless, because it reports the integer number of items.
Total number of peaks TNP
$$ TNP = \sum_{i=0}^{N_{BN}-1} { Peak_{V_{BN},i} } $$ (unitless)
where: $Peak_{V_{BN},i}$ - an instant of peak in the filtered voltage (unitless), $N_{BN}$ - total number of samples (unitless), $i$ - index (unitless)
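A minimal peak-counting sketch is shown below (an illustrative addition); the simple local-maximum-above-threshold criterion is an assumption, since, as noted above, the result depends on the peak-detection criteria used.

```python
import numpy as np

def bn_tnp(v_bn, threshold=0.0):
    """Count local maxima of |V_BN| above a threshold (assumed criterion)."""
    a = np.abs(np.asarray(v_bn, dtype=float))
    peaks = (a[1:-1] > a[:-2]) & (a[1:-1] >= a[2:]) & (a[1:-1] > threshold)
    return int(np.count_nonzero(peaks))
```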
Power spectrum of Barkhausen noise at lower frequencies for grain-oriented electrical steel, measured at 50 Hz excitation4)
In the power spectrum method, the Barkhausen noise (after filtering) is processed by a Fourier transform, which detects the frequency spectrum of the noise.
By definition, the spectrum will be limited by the filtering used in the analogue and digital processing. In the illustration showing an example of such a spectrum, the values reduce to zero below 50 Hz, which is caused by the high-pass filter characteristics of the signal processing.
With digital processing, the maximum frequency that can be detected is limited by the Shannon-Nyquist limit of the data acquisition device.
Also, the possibility of aliasing has to be considered, because the Barkhausen noise can extend up to MHz frequencies. Therefore, some analogue anti-aliasing filtering has to be employed. As a consequence, the signal is processed with band-pass characteristics, because a high-pass filter is required to suppress the high-amplitude, low-frequency induced voltage, and a low-pass filter is required for anti-aliasing. This is one of the main reasons why digital methods have a limited upper frequency of BN processing.
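A basic one-sided power-spectrum estimate of the filtered signal can be obtained with an FFT (illustrative addition; windowing and averaging, which a practical system would use, are omitted):

```python
import numpy as np

def bn_power_spectrum(v_bn, fs):
    """One-sided power spectrum of the (filtered) Barkhausen signal."""
    v = np.asarray(v_bn, dtype=float)
    X = np.fft.rfft(v)
    f = np.fft.rfftfreq(len(v), d=1.0 / fs)
    return f, np.abs(X) ** 2 / len(v)

# the usable band is bounded below by the high-pass filter and above by the
# Nyquist frequency fs/2 (together with the analogue anti-aliasing filter)
```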
Kurtosis of Barkhausen noise for grain-oriented electrical steel, measured at 50 Hz excitation4)
Kurtosis is a method of statistical analysis of a given population of samples. It can quantify the "peakedness" or "flatness" of a statistical distribution. Using this method it is possible to compare the kurtosis value, for example, to that of an ideal Gaussian distribution curve, thus studying the "randomness" of Barkhausen events.
The value of kurtosis has the units of V4 (volt to the power of 4).
Kurtosis K
$$ K = \frac{1}{N_{BN}} · \sum_{i=0}^{N_{BN}-1} { ( V_{BN,i} - V_{BN,mean} )^4 } $$ (V4)
where: $V_{BN,i}$ - subsequent voltage values (V), $V_{BN,mean}$ - mean value of voltage (V), $N_{BN}$ - total number of samples (unitless), $i$ - index (unitless)
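A direct implementation of the formula above (note that, as defined here, this is the fourth central moment in V4 rather than the dimensionless normalised kurtosis; illustrative addition):

```python
import numpy as np

def bn_fourth_moment(v_bn):
    """Fourth central moment of the filtered signal, in V^4, exactly as in
    the formula above (i.e. not normalised by the variance squared)."""
    v = np.asarray(v_bn, dtype=float)
    return float(np.mean((v - v.mean()) ** 4))
```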
The duration of Barkhausen events is correlated with their amplitude. One local Barkhausen jump can initiate others, and such a whole sequence is sometimes referred to as a Barkhausen avalanche.13)
The duration of avalanches can vary, and they can be analysed from the viewpoint of duration or frequency components. A whole range of analyses can be used even within the same study of the Barkhausen noise phenomenon.13)
Statistical analysis of Barkhausen avalanches in polycrystalline NiFe films of different thicknesses, from 20 to 1000 nm: a) distributions of avalanche sizes measured at 50 mHz, b) similar plot for the distributions of avalanche durations, c) average size as a function of the avalanche duration, d) power spectra.13) by F. Bohn, G. Durin, M.A. Correa, N.R. Machado, R.D. Della Pace, C. Chesman, R.L. Sommer, CC-BY-4.0
If the moving domain walls separate domains which are not in the opposing directions (0-180°), but for example at 90° to each other, then the changes in domain wall position can cause changes of dimensions of the material due to magnetostriction.
Such low-amplitude local vibrations of the material are known as magnetoacoustic emissions (MAE). The frequency spectrum for studying such phenomenon is similar to the Barkhausen noise, and also the type of analysis is similar, for example by plotting the power spectrum. However, the detection is carried out with a very sensitive microphone or acceleration sensor, rather than an inductive coil.4)
Simplified block diagram of signal processing for magneto-acoustic emissions4)
Barkhausen noise activity is affected by the crystallographic structure and defects in the given material. Materials exposed to mechanical stress can deform, thus increasing the number of internal defects. Other processes, such as neutron irradiation in nuclear plants, can also degrade the crystallographic arrangement of steel exposed to such radiation.
The image below shows an example of Barkhausen noise activity in two samples exposed to different mechanical stress, so that the plastic deformation was ε=2.5% and 15%, respectively. In the sample with the larger deformation the BN activity is visibly reduced, and this can be correlated with the amount of damage sustained by the given steel.14)
Reduced Barkhausen noise activity in material exposed to larger deformation14) by M. Pitoňák, M. Neslušan, P. Minárik, J. Čapek, K. Zgútová, M. Jurkovič, T. Kalina, CC-BY-4.0
System for measuring residual mechanical stresses in rails by means of Barkhausen noise15) by Y.-I. Hwang, Y.-I. Kim, D.-C. Seo, M.-K. Seo, W.-S. Lee, S. Kwon, K.-B. Kim, CC-BY-4.0
Detection of mechanical properties through measurement of Barkhausen noise is beneficial, because it can be carried out on the surface of the material, without the need for cutting out a sample - hence it belongs to the class of non-destructive testing. The applicability of the method is limited, because the Barkhausen noise cannot be measured or correlated to the material damage in an absolute way.
Nonetheless, there are commercial devices capable of performing non-destructive measurements in an automated way. The excitation is typically applied by a small U-shaped magnetising yoke, and the sensing is carried out by pick-up coils, with processing and filtering similar to as described above. Parameters such as degradation in strength, increase in hardness or embrittlement can be automatically quantified to some extent.
XYZ scanner with transducer: a) block diagram, b) photo, c) 3D view of the transducer16) by M. Maciusowicz, G. Psuj, CC-BY-4.0
However, the correlation between Barkhausen noise and the mechanical properties of a given magnetic sample is not strict, and cannot be quantified independently of the material. It is therefore not possible to calibrate such a system for a generic measurement.
Instead, a comparative measurement has to be carried out, when a known "good" sample is available for calibration. For example, degradation of surface of gears made of magnetic steel can be detected.17) In such applications the quality and thermal pre-processing is well known for the "good" steel and degradation with the Barkhausen noise system can give reliable results.
The BN method can be used for assessment of a large surface area for example by employing the scanning methods.16) A small-size detection head can be automatically moved around a large surface to perform the "scanning" action, and a computerised system can collate, analyse and display all the data accordingly.
Magnetic domain
Magnetic domain wall
1) Sławomir Tumański, Handbook of magnetic measurements, CRC Press / Taylor & Francis, Boca Raton, FL, 2011, ISBN 9780367864958
2) David C. Jiles, Introduction to Magnetism and Magnetic Materials, Second Edition, Chapman & Hall, CRC, 1998, ISBN 9780412798603
3) Richard M. Bozorth, Ferromagnetism, Wiley-IEEE Press, 1993, ISBN 0780310322
4) S. Zurek, Characterisation of Soft Magnetic Materials Under Rotational Magnetisation, CRC Press, 2019, ISBN 9780367891572
5) Heinrich Barkhausen (1919), Zwei mit Hilfe der neuen Verstärker entdeckte Erscheinungen, Phys. Z., 20, pp. 401–403
6) Yangyang Xu, Dezhen Xue, Yumei Zhou, Tong Su, Xiangdong Ding, Jun Sun, and E. K. H. Salje, "Avalanche dynamics of ferroelectric phase transitions in BaTiO3 and 0.7Pb(Mg2∕3Nb1∕3)O3-0.3PbTiO3 single crystals", Appl. Phys. Lett. 115, 022901 (2019) https://doi.org/10.1063/1.5099212
7) Keisuke Yazawa, Benjamin Ducharne, Hiroshi Uchida, Hiroshi Funakubo, and John E. Blendell, "Barkhausen noise analysis of thin film ferroelectrics", Appl. Phys. Lett. 117, 012902 (2020) https://doi.org/10.1063/5.0012635
8) B.D. Cullity, C.D. Graham, Introduction to Magnetic Materials, 2nd edition, Wiley, IEEE Press, 2009, ISBN 9780471477419
9) Alekhina, I.; Kolesnikova, V.; Rodionov, V.; Andreev, N.; Panina, L.; Rodionova, V.; Perov, N. An Indirect Method of Micromagnetic Structure Estimation in Microwires. Nanomaterials 2021, 11, 274, https://doi.org/10.3390/nano11020274
10) J. Olivera et al., "Temperature Dependence of the Magnetization Reversal Process and Domain Structure in Fe(77.5-x)Ni(x)Si(7.5)B(15) Magnetic Microwires," IEEE Transactions on Magnetics, vol. 44, no. 11, pp. 3946-3949, Nov. 2008, doi: 10.1109/TMAG.2008.2002194
11) H. Chiriac, T. Ovari and M. Tibu, "Domain Wall Propagation in Nearly Zero Magnetostrictive Amorphous Microwires," in IEEE Transactions on Magnetics, vol. 44, no. 11, pp. 3931-3933, Nov. 2008, doi: 10.1109/TMAG.2008.2001326
12) Charubin, T.; Nowicki, M.; Szewczyk, R. Influence of Torsion on Matteucci Effect Signal Parameters in Co-Based Bistable Amorphous Wire. Materials 2019, 12, 532. https://doi.org/10.3390/ma12030532
13) Bohn, F., Durin, G., Correa, M.A. et al. Playing with universality classes of Barkhausen avalanches. Sci Rep 8, 11294 (2018). https://doi.org/10.1038/s41598-018-29576-3
14) Pitoňák, M.; Neslušan, M.; Minárik, P.; Čapek, J.; Zgútová, K.; Jurkovič, M.; Kalina, T. Investigation of Magnetic Anisotropy and Barkhausen Noise Asymmetry Resulting from Uniaxial Plastic Deformation of Steel S235. Appl. Sci. 2021, 11, 3600. https://doi.org/10.3390/app11083600
15) Hwang, Y.-I.; Kim, Y.-I.; Seo, D.-C.; Seo, M.-K.; Lee, W.-S.; Kwon, S.; Kim, K.-B. Experimental Consideration of Conditions for Measuring Residual Stresses of Rails Using Magnetic Barkhausen Noise Method. Materials 2021, 14, 5374. https://doi.org/10.3390/ma14185374
16) Maciusowicz, M.; Psuj, G. Use of Time-Frequency Representation of Magnetic Barkhausen Noise for Evaluation of Easy Magnetization Axis of Grain-Oriented Steel. Materials 2020, 13, 3390. https://doi.org/10.3390/ma13153390
17) Stresstech, Barkhausen Noise Equipment, Non-destructive (NDT) measurement solutions for grinding burn and heat treatment defect testing, {accessed 2021-11-08}
Barkhausen noise, Magnetic domain walls, Magnetic domains, Ferromagnetism, Counter
Purpose of page
The purpose of this page is to explore harder instances of the kind of problems considered on this D-Wave comparison page. There is no direct comparison here to any D-Wave results, because the examples considered do not correspond to existing D-Wave experiments, or in the case of N=16 (2048 spins) to an existing D-Wave device. This is mainly meant to be a standalone exploration of the hardest instances of D-Wave-type problems on Chimera graphs of order 16. It also serves the purpose of giving some of the higher order Prog-QUBO strategies a proper workout since their advantage did not show with the previous easier set of examples.
Before making a more specific statement, let's start with a recap.
D-Wave is a hardware device that finds low energy states of a spin glass Ising model on (subgraphs of) a Chimera graph, $C_N$. Here spin glass means roughly that the edge weights are random and can equally well be of either sign. This leads to frustration, which means that finding the ground state is a kind of logic puzzle. This is in contrast, for example, to the ferromagnetic situation where all the weights are negative, in which case finding the ground state is trivial since you just have to align all the spins.
If you assume that the D-Wave device is an oracle that, given a set of edge weights, quickly gives you the ground state of the system, then the question arises, what is the most useful computation you can get out of it? An upper bound for this is given by the difficulty of the hardest Ising-Chimera instances, which are discussed below. First here is a brief mention of the lower bound on what D-Wave can do.
Discussion of lower bound
A lower bound is going to be more tricky to state, since it will depend on P$\neq$NP, and also on certain properties of D-Wave hardware that have not to my knowledge been published. One way to approach a lower bound is to note that the complete graph, $K_{4N+1}$, can be minor-embedded into $C_N$. $K_{4N}$ can be minor-embedded by letting the $r^{\rm th}$ vertex of $K_{4N}$ correspond to the union of the $r^{\rm th}$ row and $r^{\rm th}$ column of $C_N$. This can be augmented to $K_{4N+1}$ by separating one of these row/column pairs into two separate minor-embedded vertices, and $4N+1$ is the best you can do since the treewidth of $C_N$ is $4N$.
This means that a $C_N$-based Ising problem is at least as hard as a $K_{4N+1}$-based Ising problem. You could regard Ising model problems on complete graphs as a standardised scale of difficulty, so bounding $C_N$ below by $K_{4N+1}$ is some kind of an answer to the question of the difficulty of $C_N$. To make this precise, we'd need to add something bounding the size and precision of the weights on the complete graph, for example by restricting them to $\pm1$, and say something about the external field, e.g., disallow it. (Of course, in the absence of a proof of P$\neq$NP we don't know for sure how hard even these complete graph problems are, but we might guess they are exponential in the number of vertices.)
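To make the row/column construction concrete, here is a short sketch, added for illustration, that lists the chains of the $K_{4N}$ embedding; the (cell row, cell column, u, k) qubit labelling is just an illustrative convention, not D-Wave's native indexing.

```python
def chimera_clique_chains(N):
    """Chains realising the minor-embedding of K_{4N} into C_N described above:
    vertex r = 4*c + k maps to the union of 'row r' (horizontal qubits with
    in-cell index k across grid row c) and 'column r' (vertical qubits with
    in-cell index k down grid column c).
    Qubits are labelled (cell_row, cell_col, u, k) with u = 0 for the
    'vertical' half of a K_{4,4} cell and u = 1 for the 'horizontal' half."""
    chains = {}
    for c in range(N):
        for k in range(4):
            r = 4 * c + k
            row_part = [(c, x, 1, k) for x in range(N)]   # horizontal chain along grid row c
            col_part = [(y, c, 0, k) for y in range(N)]   # vertical chain down grid column c
            chains[r] = row_part + col_part               # the two parts are coupled in cell (c, c)
    return chains

# e.g. chimera_clique_chains(8) gives 32 chains of 16 qubits each; any two
# chains meet (are coupled) inside exactly one K_{4,4} cell.
```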
It should be added that the problem on $C_N$ that corresponds to a $\pm1$-weight problem on $K_{4N+1}$ will have weights of the order of $N$ (*), and this might present a practical obstacle to a hardware-based $C_N$-solving device, though it's hard to imagine that the burden of coping with a linear (in $N$) increase in the size of weights would undo the presumed exponential (in $N$) advantage of being able to solve a problem on $K_{4N+1}$. (*) This follows from Theorem 4.2 of Vicky Choi's paper on minor-embedding. (As it happens, the bounds given there can be strengthened in the case of a small external field. In the case of no external field, we can impose $F_i^e<-\frac12 C_i$ which is an improvement in our case where $l_i=4$ for all but two $i$.)
Discussion of hardest cases
Turning now to the hardest Chimera instances, we seek to find as difficult a class of instances as possible for N=8 (512 spins) and N=16 (2048 spins). It should be pointed out that there is no claim to proof that the classes found are the hardest there are. They were found by experimentation and then by optimising an ansatz. Hardness is defined here as the TTS, the average time to find an optimal solution / ground state, required by a particular strategy of the program Prog-QUBO. (Of course this program is not optimal so it's possible that instances that it finds hard would not be hard to another program, but that doesn't matter since we're only after an upper bound on hardness. As it happens, in the previous set of examples there was considerable agreement between the instances that Prog-QUBO found harder and those found harder by CPLEX, which uses an entirely different method.)
The original N=8 QUBO class, for which D-Wave results were published, was defined by letting $J_{ij} (i<j)$ and $h_i$ be IID in $\{-1,1\}$. This class is now made harder in the following four ways, each elaborated on below.
Increasing $N$ (and not disabling any vertices).
Setting the external field to zero.
Continuising (not a word, but hey) the weights $J_{ij}$.
Rebalancing the intra-$K_{44}$ and inter-$K_{44}$ weights.
Naturally, increasing the number of vertices will make the instances harder and here we consider the cases of $C_8$ with 512 vertices and $C_{16}$ with 2048 vertices.
Setting the external field, $h_i$, to zero was discussed in the D-Wave comparison as something that makes things harder with a Chimera graph. This is in contrast to the case of a planar graph, where a non-zero external field would make things (a lot) harder. The reason for this is that an external field effectively turns a planar graph into a new graph with one node connected to all the others, and this is highly non-planar in general. But in our case $C_N$ is already highly-non-planar, since every one of the $N^2$ $K_{44}$s introduces an element of non-planarity, so methods that are successful for planar (or low genus) graphs are of limited benefit here. Instead, the effect of an external field in the case of $C_N$ is to introduce a bias which gives a "clue" as to which way the spins should point. This (presumably) aids heuristic-type searches by decreasing the correlation length.
After some experimentation it became apparent that another factor making the previous examples easier was the discreteness of the weights, in particular restricting to using only $\pm1$, which makes the ground state highly degenerate. That is, there are many local minima that "happen to be" global minima. This is cured by choosing $J_{ij}$ from a continuous distribution, or in practice a sufficiently finely divided discrete distribution. The intra-$K_{44}$ weights were chosen uniformly from the set $\{-100,-99,\ldots,99,100\}$. Using more steps than this did not appear to make it much harder. No attempt was made to discover if a nonuniform distribution would make it harder still. I don't know if D-Wave hardware can cope with this kind of precision, but this doesn't matter for the purposes of trying to get an upper bound to its capabilities.
Turning to the last item, "rebalancing": neglecting edge effects, there are essentially two types of edge in a Chimera graph, the intra-$K_{44}$ edges and the inter-$K_{44}$ edges, so it's reasonable to suppose that we correspondingly use two different distributions for the weights, $J_{\rm intra}$ and $J_{\rm inter}$ say. It's clearly wrong to use either extreme $J_{\rm intra}\gg J_{\rm inter}$ (the $K_{44}$s split off into $N^2$ independent subgraphs) or $J_{\rm intra}\ll J_{\rm inter}$ (the graph splits into two trivial planes of horizontally connected and vertically connected vertices). But it is also presumably wrong to make these distributions the same, for then the two semi-$K_{44}$s within each $K_{44}$ would be more "tightly bound" than two semi-$K_{44}$s are between adjacent $K_{44}$s, since there are 16 edges from one semi-$K_{44}$ to another within a $K_{44}$, but only 4 to another semi-$K_{44}$ from an adjacent $K_{44}$. As the weights have mean 0 for maximum "glassiness", a reasonable hunch is that balance is achieved by making $J_{\rm inter}$ about twice as big as $J_{\rm intra}$, reasoning from $(16/4)^{1/2}=2$. Something like this ratio does indeed turn out to generate the hardest examples of the form tried, as is seen below.
So then, after various experiments, an ansatz of the form $J_{\rm intra}=U\{-100,-99,\ldots,99,100\}$ and $J_{\rm inter}=U\{-L,-L+1,\ldots,L-1,L\}$ ($U$ meaning uniform distribution) was settled on, with the parameter $L$ to be determined by experiment. The best search strategy for N=8 was found to be S13, and the best for N=16 was S14. S13 and S14 are variants of the previously-described S3 and S4 in which instead of the state being completely randomised every so often, only a certain proportion of the nodes are randomised. It is interesting that the treewidth 2 strategy, S14, is superior to the treewidth 1 strategy S13 for N=16. Previously with N=8 there was no example where using treewidth 2 was an improvement on treewidth 1, but now it appears that was because the problem instances were too easy for the treewidth 2 strategies to overcome their initial overheads.
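For concreteness, here is a rough sketch of how an instance of this form could be generated. The qubit indexing, the choice of which half of each $K_{44}$ couples horizontally versus vertically, and the function names below are illustrative conventions of this sketch only; they are not the actual generator or file format used for the runs above.

import random

def chimera_instance(N=8, L=200, seed=123):
    # Ising instance on a Chimera graph C_N (an N x N grid of K_{4,4} cells).
    # Intra-cell couplings are drawn uniformly from {-100, ..., 100}, inter-cell
    # couplings from {-L, ..., L}, and the external field h is zero.
    # Returns a dict mapping edges (i, j) with i < j to integer weights.
    rng = random.Random(seed)

    def node(x, y, half, k):  # cell (x, y), half in {0, 1}, qubit k in 0..3
        return ((x * N + y) * 2 + half) * 4 + k

    J = {}
    for x in range(N):
        for y in range(N):
            # 16 intra-cell edges: complete bipartite K_{4,4}
            for a in range(4):
                for b in range(4):
                    i, j = node(x, y, 0, a), node(x, y, 1, b)
                    J[(min(i, j), max(i, j))] = rng.randint(-100, 100)
            # 4 inter-cell edges to the neighbouring cell in each direction
            if x + 1 < N:
                for a in range(4):
                    i, j = node(x, y, 0, a), node(x + 1, y, 0, a)
                    J[(min(i, j), max(i, j))] = rng.randint(-L, L)
            if y + 1 < N:
                for b in range(4):
                    i, j = node(x, y, 1, b), node(x, y + 1, 1, b)
                    J[(min(i, j), max(i, j))] = rng.randint(-L, L)
    return J

def energy(J, spins):
    # Ising energy sum_{i<j} J_ij s_i s_j (zero external field).
    return sum(w * spins[i] * spins[j] for (i, j), w in J.items())

J = chimera_instance(N=8, L=200)
print(len(J))  # 1472 edges for C_8: 64*16 intra plus 2*7*8*4 inter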
The heuristically generated answers here have not all been verified for optimality. In both the N=8 and N=16 cases, the first runs that determine $L$ have not been checked, and the final N=16 run at L=220 has not been checked either. In other words, the only answers that have been checked are those from the final run with N=8 at L=200. Doing a proving search at N=16 is beyond the capability of the current exhaustive searcher. I hope to upgrade the proving search to cope with these more difficult cases, but in the meantime the N=16 results rely for optimality on the assumptions alluded to previously. I think there is good reason to believe that the 500 N=16 answers below at L=220, which were searched to 50 independent solutions each, are in fact optimal.
A plausibility argument is as follows. The heuristic search is generating a stream of local minimum states. Every now and again it resets itself so that it does not use any previously discovered information (that required non-negligible processing). Saying that two states are independently generated just means that there was such a reset separating the two finds. (It is possible they are the same state.) If you heuristically search until you have generated 50 independent presumed solutions, i.e., 50 states of energy $E_0$ where $E_0$ is the minimum energy seen on the whole run, then along the way you will find other local minimum states with energies $E>E_0$. For a particular run, let $n_E$ denote the number of times that a state of energy $E$ independently crops up before a lower energy is discovered. (There is a technicality explained before, whereby the first occurrence of energy $E$ doesn't count.) Now taking the collection of all $\{n_E|E>E_0\}$ over all the 500 N=16 final runs (below), we get an empirical process distribution, though cut off at 50.
[Table: occurrence count $m_i$ for each value $n_E=i$, flagged where the count reached the cut-off of 50.]
The number of occurrences sums to 456, which is less than 500 because on average there is less than 1 positive $n_E$ per run. If a value of $i$ doesn't appear in the table, it means that $m_i=0$. The probability of an error, that is a heuristically declared optimum that isn't actually the optimum, is the probability that a random instance of the un-cut-off version of the above process extends to 50 or beyond. Assuming some kind of regularity, we may crudely estimate this by modelling each $n_E$ event as a sample from a geometric distribution with an unknown parameter, $p$. That is, to generate $\{n_E|E>E_0\}$ for a particular run, you determine how many events will occur using a Poisson process, then for each event look up $p$ in a suitable prior, then take a sample from a geometric distribution whose parameter is that value of $p$. The concentration in the above chart is evidence for a single prior being a useful explanation, so further crudely estimating the prior to be concentrated on the MLEs of the geometric distributions corresponding to the above samples, that is $P(p=i/(i+1))=m_i/456$, we arrive at the following formula estimating the error probability (actually the expected number of errors, but it's practically the same thing for small values):
$$\sum_i \frac{m_i}{500}\left(\frac{i}{i+1}\right)^{50}=4.7\times10^{-4}.$$
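Since the table of counts is not reproduced above, here is a small sketch of how that estimate is computed from a set of occurrence counts $m_i$. The counts below are placeholders chosen only to sum to 456; they are not the actual data, which the text says give about $4.7\times10^{-4}$.

def error_estimate(m, runs=500, threshold=50):
    # Estimate sum_i (m_i / runs) * (i / (i + 1))**threshold, i.e. the expected
    # number of heuristically declared optima that are not true optima.
    return sum(mi / runs * (i / (i + 1)) ** threshold for i, mi in m.items())

# Placeholder counts m_i (not the real table; chosen only to sum to 456):
m = {1: 300, 2: 100, 3: 40, 4: 10, 5: 5, 26: 1}
print(error_estimate(m))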
This estimate relied on various assumptions, and the outlier value 26 may indicate that all is not well. On the other hand, the N=8 cases that have been verified (including 3000 examples from the previous analysis where the heuristic value turned out to be correct) have all shown a reasonably well behaved energy landscape with no decoy local minima that look like global minima.
Results for N=8
Run used to determine the worst-case value of L:

L | Instances | Optima found per instance | TTS mean (seconds) | Standard error (seconds) | Percentage optimal
100 | 1000 | 10 | 0.0467 | 0.0013 | 100%?

Final run with L=200 and increased number of optima per instance:

L | Instances | Optima found per instance | TTS mean (seconds) | Standard error (seconds) | Percentage optimal
200 | 1000 | 50 | 0.0592 | 0.0020 | 100%
The above table was constructed from this summary file. Problem instances are stored here. Timings are on a single core, single-threaded; full test conditions explained here. Results can be reproduced with this command (replacing 123 with the desired instance number, and replacing -m1 with -m0 to find a single minimum as fast as possible):
./qubo -N8 -S13 -p50 -m1 testproblems/weightmode9/myformat/123
Results for N=16
L | Instances | Optima found per instance | TTS mean (seconds) | Standard error (seconds) | Percentage optimal
100 | 200 | 10 | 88 | 7.8 | 98-100%?
160 | 200 | 10 | 133 | 11.8 | 98-100%?
220 | 500 | 50 | 162 | 10.7 | 99.8-100%?
./qubo -N16 -S14 -p50 -m1 testproblems/weightmode10/myformat/123
(In this N=16 case, the problem instance numbers are not contiguous. This is due to having separate batches running on three different computers and the fact that the problem number directly maps to the seed used to generate the instance.)
A tentative conclusion is that software may still be competitive with (future) hardware even up to 2048 qubits. The bottom line software figure from N=16 is 162 seconds per minimum. Of course there is no corresponding D-Wave device to compare with at present, but the current devices operating on 512 qubits take of the order of 10s of milliseconds to produce an answer. It may seem like 162s is slow by comparison, but there are several factors working in favour of software: the hardware may not be able to cope with the size/precision of the weights used here (hundreds to 1); the hardware may not work perfectly and may require error correction, cutting into its advantage; the hardware may not actually be returning absolute minima (ground states) in hard cases and if this is the case then the equivalent software could potentially be enormously faster; the software can easily be spread over multiple cores; lastly I think there is room to significantly improve the software presented here.
On the other hand there are some uncertainties in the above analysis, principally (i) are these instances really the hardest, and (ii) are the N=16 heuristic minima really true absolute minima?
Finally, I'd like to mention again that this isn't intended as an exercise in negativity about D-Wave. If D-Wave really is the first quantum computing device operating with a large number of qubits then that is of course a stunning achievement and will surely lead to many wonderful things. Still, I think it is interesting to push the question of exactly how powerful it is. (And you never know, that line of inquiry may even lead to new software methods for spin glasses.)
Soft particles reinforce robotic grippers: robotic grippers based on granular jamming of soft particles
Holger Götz1,
Angel Santarossa1,
Achim Sack1,
Thorsten Pöschel1 &
Patric Müller1
Granular Matter volume 24, Article number: 31 (2022) Cite this article
Granular jamming has been identified as a fundamental mechanism for the operation of robotic grippers. In this work, we show that soft particles such as expanded polystyrene beads lead to significantly larger gripping forces than rigid particles. Contrary to naive expectation, the combination of jamming and elasticity gives rise to properties of the jammed phase that differ markedly from those of hard-particle systems. This may also be of interest beyond the application in robotic grippers.
Idea and state of the art of granular grippers
An important and challenging problem in the context of robotics is to grip objects of various and sometimes unknown shape and surface properties with a single manipulator. Frequently, this problem is addressed by grippers that mimic hands with multiple fingers (e.g. [27]). However, complex mechanics also require complicated software that manages the interplay of sensory and actuating elements. In 2010, revisiting some earlier ideas [7, 28, 32], Brown et al. [6] suggested a different approach based on the jamming transition of granular materials. Here, granulate is loosely enwrapped in an elastic membrane. The filled membrane is then pressed onto an object such that the granulate flows around the object in a fluid-like manner. To grip the object, the air within the membrane is evacuated and eventually the granulate gets compressed and enters a solid-like state due to jamming. This solidification causes forces that can effectively grip and hold the object. More information on the underlying jamming transition in granular materials can be found, e.g., in [3, 23, 24]. As described by Brown, this type of gripper mimics the limit of a robotic hand with infinitely many degrees of freedom [6]. In contrast to ordinary robotic grippers, these degrees of freedom need not be controlled explicitly. Instead, they automatically adapt to the object and are collectively activated by evacuating the membrane. It has been shown in many subsequent publications [1, 6, 15, 18, 21] that these grippers can operate for a wide range of shapes, fragile objects and even multiple objects at a time, without any reconfiguration or learning phases.
Brown et al. [6] have shown that, depending on the properties of the object to be gripped, three different mechanisms are essential:
geometric interlocking between the object and the solidified granulate,
static friction between the object and the membrane, and
suction effects, where the membrane seals the surface of the object air-tight.
It was further shown that the holding forces due to geometric interlocking and suction are significantly larger than those resulting from static friction [6, 22]. Therefore, most attempts to optimize the gripping performance have focused on interlocking and suction [18, 21]. For these two mechanisms the strength of the particle material in its jammed state is crucial [6]. Consequently, almost all works use particles of rather rigid materials like steel beads, glass beads, rice, salt or sugar. In the current paper, we consider particles made of soft materials instead of rigid ones and show that this leads to a squeezing effect between the gripper and the object, i.e., the object is firmly pressed by the gripper due to the volume reduction of particles and membrane when vacuum is applied. This effect significantly increases the static friction between the object and the membrane and makes gripping by static friction competitive with the other mechanisms. We study this gripping force enhancement by means of experiments, particle simulations, and continuum mechanical theory. In doing so, we add one further mode of operation for granular gripping that makes it possible to tightly hold objects whose shape and material properties allow for neither suction effects nor geometric interlocking.
Holding force enhancement due to soft particles
Experimental setup for automatic holding force measurements
In this section, we experimentally compare the performance of soft and rigid particles in a granular gripper by measuring the maximum holding force achieved in a gripping process. To this end, we reconsider the simple form of a granular gripper which has already been discussed by Brown et al. [6], where a single elastic and air-tight membrane of spherical shape (diameter \((73.0 \pm 0.5) \text {mm}\)) is filled with granulate. The granular materials we consider are glass beads of diameter \((4.0 \pm 0.3) \text {mm}\) and expanded polystyrene (EPS) beads of diameter \((4.2 \pm 0.5) \text {mm}\). The object to be gripped is a steel sphere of diameter \((19.99 \pm 0.01) \text {mm}\) whose surface was roughened to avoid suction effects between the membrane and the object. To measure the holding force, we use the apparatus shown in Fig. 1. One measurement cycle consists of the following steps: first the membrane is pressurized to fluidize the enwrapped granulate, then the z-stage is used to push the object into the gripper. Once the desired indentation depth is reached, the z-stage is stopped and, after a relaxation phase, the membrane is evacuated such that the pressure difference between outside and inside is \(p_\text {vac}\approx 90 \text {kPa}\), which is the maximum pressure difference our vacuum pump is able to achieve. After evacuating the membrane, the z-stage is moved downwards until the contact between the object and the gripper ceases. During the interval in which the gripper is in contact with the object, the force acting between the object and the z-stage is measured.
Force in z-direction as measured by the load cell shown in Fig. 1. The blue curve shows the force signal for EPS beads, the orange curve the force signal for glass beads. The blue rectangles in the background and the blue text at the bottom indicate different phases of the gripping process (see text). The blue dot marks the maximum holding force (\(\approx 14\,\text {N}\)) for the EPS beads (colour figure online)
Figure 2 shows the result for both EPS beads and glass beads. Additionally, the characteristic phases of the gripping process are indicated for the measurement with both types of beads. First, the object is pushed into the granulate (immersion phase), which corresponds to an increasing force in z-direction. At the beginning of the relaxation phase, the force decreases suddenly due to the abrupt stopping of the z-stage. In the relaxation phase, the force in z-direction continuously saturates to a constant value due to the reorganization and deformation of the particles. For the EPS beads, this process takes much longer than for the glass beads. The same holds true for the subsequent evacuation phase: in both cases, the force in z-direction decreases smoothly until a constant value is reached. As before, this process is much quicker for the glass particles due to their higher stiffness. Note that for the EPS beads the z-force attains negative values in the evacuation phase, which corresponds to a lifting force onto the object even during the evacuation phase, where the z-stage is not moved at all. This is due to the softness of the EPS, which causes the whole gripper to shrink significantly while the evacuation is performed. After the membrane is evacuated to the desired pressure, the z-stage is lowered and the force, as measured by the load cell (see Fig. 1), decreases continuously in both cases, glass and EPS. Only now does the force for the glass beads decrease to negative values corresponding to a lifting force onto the object. If the z-stage is lowered further, the gripper is no longer able to hold the object, the contact between the object and the gripper suddenly ceases and the measured force quickly vanishes. The minimum of the force in z-direction corresponds to the maximum possible holding force of the system. Note that for the glass beads we observe a significantly lower maximum holding force compared to the EPS beads.
Maximal holding forces for glass beads (orange triangles) and EPS beads (blue circles) for six independent measurements. The solid lines indicate the mean value of the holding force, which is noted to the left of each line (colour figure online)
Figure 3 shows the maximum holding forces for six independent experiments with both glass beads and EPS beads. The data indicate that, on average, the holding force is increased by a factor of approximately 19 for the EPS beads.
Operation principle for soft particle granular grippers
Sketch of the simplified model for the granular gripper
In this section, we first explain the holding force enhancement by means of a simplified continuum mechanical model. In the second part, we use particle simulations to show that the results are valid also for the granular system.
Continuum mechanical description
We consider the simplified model of the granular gripping system illustrated in Fig. 4. The membrane of the gripper encloses the volume of a hemispherical shell, i.e. the volume between two concentric hemispheres of radius \(r_\text {in}\) and \(r_\text {out}\). The object to be gripped is a sphere, such that only its upper hemisphere is in contact with the gripper and interlocking between the object and the gripper is not possible. We further assume that there are no suction effects between the object and the membrane. In the experiment, this can be achieved, e.g., by using a porous object or an object with a rough surface. With these preconditions, the remaining gripping mechanism is due to friction between the membrane and the object. According to Coulomb's law, the friction is limited by the normal force \(F_\text {n}\). In the following, we therefore derive the normal force the gripper exerts onto the object. This force results from the evacuation of the membrane. Due to the applied vacuum and the atmospheric pressure, there is a pressure difference \(p_\text {vac}\) acting onto the surface of the membrane. To compute the resulting deformation of the gripper, we assume that the bulk of the granular material inside the membrane behaves like a linear elastic material with Young's modulus E and Poisson's ratio \(\nu\). Further, we now consider the full spherical shell (gripper part plus dashed line in Fig. 4) to simplify the boundary conditions. The problem we have to solve now corresponds to a pressurized hollow sphere where the pressure \(p_\text {in}\) is acting onto the inner surface at \(r = r_\text {in}\) and \(p_\text {out}\) is acting onto the outer surface at \(r = r_\text {out}\) (see Fig. 5). For this spherically symmetric linear elastic problem, the displacement field \(\mathbf {u}\) in the sphere reads (e.g. [4])
$$\begin{aligned} {\bf{u}}(r)=&\frac{1}{2E(r_\text {out}^3-r_\text {in}^3)r^2}\nonumber \\& \times \left[ 2\left( p_\text {in}r_\text {in}^3-p_\text {out}r_\text {out}^3\right)(1-2\nu )r^3+(p_\text {in}-p_\text {out})(1+\nu )r_\text {out}^3r_\text {in}^3 \right] \hat{e}_r. \end{aligned}$$
Here we use spherical coordinates where r is the radial coordinate and \(\hat{e}_r\) is the unit vector in radial direction (see Fig. 5).
Sketch of a pressurized hollow sphere including mathematical notation
We now assume, that the sphere we grip (see Fig. 4) is ideally rigid, i.e., it does not deform under load. In our simplified, spherically symmetric geometry, Fig. 5, this corresponds to the case that the hollow sphere is filled by an undeformable smaller sphere. In this situation, we have to fulfill the constraint
$$\begin{aligned} {\bf{u}}(r=r_\text {in})=0. \end{aligned}$$
For finite Young's modulus, the hollow sphere will be deformed according to Eq. (1) due to the external pressure. The evacuation of the membrane corresponds to \(p_\text {vac} \equiv p_\text {out} = p_\text {in}\). To comply with the criterion Eq. (2), the object inside the hollow sphere has to react with a counterpressure \(p_\text {c}\) such that it does not get compressed and, therefore, \(p_\text {in}= p_\text {vac} + p_\text {c}\). If we solve Eq. (2) for the counterpressure \(p_\text {c}\) we find
$$\begin{aligned} p_\text {c} = p_\text {vac}\frac{\left( 2-\left( \frac{r_\text {out}}{r_\text {in}}\right) ^3\right) (2\nu -1)}{2(1-2\nu )+\left( \frac{r_\text {out}}{r_\text {in}}\right) ^3(1+\nu )}. \end{aligned}$$
Naively one would expect the counterpressure to depend on the Young's modulus of the material of the hollow sphere; however, this is not the case and \(p_\text {c}\) only depends on the vacuum pressure \(p_\text {vac}\), the geometry and the Poisson ratio \(\nu\) of the material.
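For illustration, Eq. (3) is easy to evaluate numerically. The following minimal Python sketch simply implements the formula; the function name and the numerical values are illustrative only.

def counterpressure(p_vac, ratio, nu):
    # Counterpressure p_c from Eq. (3); ratio = r_out / r_in, nu = Poisson ratio.
    return p_vac * (2.0 - ratio**3) * (2.0 * nu - 1.0) / (
        2.0 * (1.0 - 2.0 * nu) + ratio**3 * (1.0 + nu))

# Squeezing (p_c > 0) requires ratio > 2**(1/3) and nu < 0.5:
print(counterpressure(90e3, 1.7, 0.33))  # positive: the object is squeezed
print(counterpressure(90e3, 1.1, 0.33))  # negative: shell too thin, no squeezing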
Counterpressure \(\frac{p_\text {c}}{p_\text {vac}}\) (color coded) as a function of \(\frac{r_\text {out}}{r_\text {in}}\) and the Poisson ratio \(\nu\). The white lines indicate where \(p_\text {c}\) changes its sign, see text (colour figure online)
For \(p_\text {c}>0\), the object (inner rigid sphere) gets squeezed by the outer hollow sphere. This mechanism also applies for the hollow hemisphere shown in Fig. 4 and, hence, for the simple model of the granular gripping system sketched in Fig. 4. Figure 6 shows the counterpressure, \(p_\text {c}\), as a function of \(\frac{r_\text {out}}{r_\text {in}}\) and the Poisson ratio \(\nu\). The white lines at \(\frac{r_\text {out}}{r_\text {in}}=\root 3 \of {2}\) and \(\nu =0.5\) indicate where \(p_\text {c}\) changes its sign, and, thus, where the squeezing effect (\(p_\text {c}>0\)) occurs. In turn, the squeezing effect causes frictional forces between the membrane of the gripper and the object to be gripped. These frictional forces make it possible to hold and lift the object even in situations where neither interlocking nor vacuum effects between the gripper and the object are possible. In the following, we calculate the gripping force which results from the frictional forces between the object and the membrane.
The force on a small element of surface of the object \(\mathrm {d}\mathbf {A}\) resulting from the compression of the hollow hemisphere is given by \(-p_\text {c}\mathrm {d}\mathbf {A}\). In spherical coordinates, the element of surface reads \(\mathrm {d}\mathbf {A}=r^2\sin \theta \,\mathrm {d}\phi \,\mathrm {d}\theta \,\hat{e}_r\), where we use the notation defined in Fig. 5. This force that acts normal to the surface of the object results in a frictional force \(-p_\text {c}\mu \mathrm {d}A\,\hat{e}_\theta\) which is proportional to the normal force and points in negative \(\hat{e}_\theta\)-direction. Here, \(\mu\) denotes the coefficient of friction,
$$\begin{aligned} \hat{e}_r= \begin{pmatrix} \sin \theta \cos \phi \\ \sin \theta \sin \phi \\ \cos \theta \end{pmatrix}\quad \text {and}\quad \hat{e}_\theta = \begin{pmatrix} \cos \theta \cos \phi \\ \cos \theta \sin \phi \\ -\sin \theta \end{pmatrix}. \end{aligned}$$
Adding up both contributions the force on one element of the surface of the object is given by
$$\begin{aligned} \mathrm {d}{\bf {F}}=-p_\text {c}\mathrm {d}\mathbf {A}-p_\text {c}\, \mu \,\mathrm {d}A\,\hat{e}_\theta . \end{aligned}$$
Thus, the total force on the object reads
$$\begin{aligned} {\bf {F}}= -p_\text {c}r_\text {in}^2 \int _0^\frac{\pi }{2}\mathrm {d}\theta \int _0^{2\pi }\mathrm {d}\phi \sin \theta \begin{pmatrix} \sin \theta \cos \phi &{}+&{}\mu \cos \theta \cos \phi \\ \sin \theta \sin \phi &{}+&{}\mu \cos \theta \sin \phi \\ \cos \theta &{}-&{}\mu \sin \theta \end{pmatrix} \end{aligned}$$
where we integrate the upper hemisphere of the object. The first two components of the integral vanish as \(\int _0^{2\pi }\mathrm {d}\phi \sin \phi =\int _0^{2\pi }\mathrm {d}\phi \cos \phi =0\) and for the remaining z-component of the force we obtain
$$\begin{aligned} {\bf {F}} = -r_\text {in}^2p_\text {c}2\pi \left( \frac{1}{2}-\mu \frac{\pi }{4}\right) \hat{e}_z. \end{aligned}$$
To grip the object, \({\bf {F}}\) needs to point in positive z-direction, i.e. \(\mu >\frac{2}{\pi }\). For friction coefficients below the value \(\mu _c\equiv \frac{2}{\pi }\), the object will be pressed out of the gripper. For \(\mu >\mu _c\) we obtain a gripping force which is proportional to the applied vacuum pressure according to Eq. (3).
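A small numerical sketch of the z-component of Eq. (7), \(F_z = 2\pi r_\text {in}^2 p_\text {c}(\mu \pi /4 - 1/2)\), makes the role of the critical friction coefficient \(\mu _c = 2/\pi \approx 0.64\) explicit. The counterpressure value used below is an illustrative number of the order suggested by Eq. (3), not a measured one.

import math

def holding_force(p_c, r_in, mu):
    # z-component of Eq. (7); positive values point upwards (gripping).
    return 2.0 * math.pi * r_in**2 * p_c * (mu * math.pi / 4.0 - 0.5)

mu_c = 2.0 / math.pi          # critical friction coefficient, about 0.64
p_c = 1.2e4                   # Pa, illustrative counterpressure from Eq. (3)
print(holding_force(p_c, 0.02, 1.16))  # mu > 2/pi: lifting (gripping) force
print(holding_force(p_c, 0.02, 0.5))   # mu < 2/pi: object is pushed out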
The constraint Eq. (2) can also be fulfilled in a trivial way: for \(E\rightarrow \infty\), i.e. the case that the hollow sphere itself is made from an ideally rigid material, there is no deformation at all, independent of the value of the vacuum pressure \(p_\text {vac}\). As there are no materials with infinite Young's modulus, this solution seems artificial and not relevant at first glance. However, almost all experiments on granular grippers use granular particles from rather rigid materials such as glass beads, see e.g. [10, 14, 20, 22], and compared to the forces resulting from the vacuum, i.e. the ambient air pressure, dense packings of glass or steel particles may very well be considered undeformable and rigid. Thus, in this case, the applied vacuum does not lead to the squeezing effect which is present for soft granular particles and the holding forces are reduced. To our knowledge, there is only one publication where particles from soft materials were tested [33], without discussion of the resulting gripping forces.
Particle based simulations of the gripping process
To explain the gripping force enhancement, we so far assumed that the bulk of the granulate inside the membrane behaves like a linear elastic solid. Of course, a granulate does not obey this simple constitutive law in general. While from a fundamental theoretical point of view the continuum description of granulates is questionable [11], there are many examples where hydrodynamic descriptions of granular systems yield reliable results, e.g. [12, 13] and references therein. In these cases, however, other constitutive relations than those of a linear elastic solid are applied, and finding better constitutive laws for granular systems (especially for dense granular flows) is still a subject of ongoing research e.g. [2, 16, 17, 25, 31]. To justify our simplifying assumption of a linear elastic solid we, therefore, perform corresponding particle simulations in this section.
Simulation technique
Sketch of the mass spring model of the elastic membrane. The mass particles are ordered on a hexagonal unit cell. Distance springs are indicated. Two neighboring triangle faces enclose the relative angle \(\varTheta\) defined by the relative direction of their surface normals
The characteristic phases of the granular gripping process as obtained from the particle simulations. From left to right: initial state, immersion of the object into the granulate, evacuation of the gripper and release of the object
As described in the review article [10], to date there is a lack of tools for the detailed simulation of granular gripping processes. In this section we therefore describe a new simulation technique that includes the physics of the granulate on the particle level as well as the mechanics of the membrane and the interaction of the membrane with both the particles of the granulate and the object.
To simulate the particle dynamics we apply the discrete element method (DEM). The main idea of DEM is to solve Newton's equation of motion to obtain the trajectory, \(\mathbf {r}_i(t)\), for each particle,
$$\begin{aligned} m_i\frac{\text {d}^2\mathbf {r}_i}{\text {d}t^2} = {\bf {F}}_i = \sum \limits _{j=1,\,j\ne i}^{N} {\bf {F}}_{ij} + {\bf {F}}_i^{\,\text {ext}}\,, \end{aligned}$$
where \({\bf {F}}_i^{\,\text {ext}}\) is an external force, such as gravity, acting on the particle and \({\bf {F}}_{ij}\) is the force acting on particle i due to its contact with particle j. A similar equation applies to the rotational degrees of freedom. For a more in-depth introduction of DEM we refer to e.g. [26, 29]. Regarding the interactions, a variety of contact models including normal and tangential forces are available. In our work we choose the Hertz–Mindlin no-slip contact model [9].
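For orientation, the elastic (normal) part of the Hertz–Mindlin contact law reduces to the classical Hertz force between two spheres. The sketch below shows only this normal elastic contribution; the tangential (Mindlin) forces and the dissipative terms of the full model are omitted, and the parameter values are illustrative.

import math

def hertz_normal_force(delta, R1, R2, E1, E2, nu1, nu2):
    # Elastic Hertz normal force F_n = (4/3) * E_eff * sqrt(R_eff) * delta**1.5,
    # where delta is the mutual overlap of the two spheres.
    if delta <= 0.0:  # no contact
        return 0.0
    E_eff = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return 4.0 / 3.0 * E_eff * math.sqrt(R_eff) * delta**1.5

# Two beads of radius 2 mm with a 20-micrometre overlap (illustrative values):
print(hertz_normal_force(2e-5, 2e-3, 2e-3, 1e7, 1e7, 0.33, 0.33))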
In order to simulate the granular gripper's flexible membrane enclosing the granular specimen, we need a method that integrates well with the above described concept of DEM simulations and still yields accurate results. In order to achieve this, we employ a mass spring model which, by design, is easily incorporated into DEM simulations. Additionally, these mass spring models may be used to accurately simulate elastic materials both in 3D and 2D [19]. Here we chose the computationally cheaper 2D approach, because a thin membrane can be treated as a 2D object to a good approximation.
A mass spring model can be seen as a network consisting of particles carrying mass as vertices and springs as edges, which are used to interconnect the vertices, as shown in Fig. 7. This concept can be integrated easily into DEM simulations by adding a force term \({\bf {F}}_{ij}^{\,\text {spring}}\) for the springs to the equation of motion. With this term, which is non-zero only if particles i and j are connected, Equation (8) becomes
$$\begin{aligned} m_i\frac{\text {d}^2\mathbf {r}_i}{\text {d}t^2} = {\bf {F}}_i = \sum \limits _{j=1,\,j\ne i}^{N} \left( {\bf {F}}_{ij} + {\bf {F}}_{ij}^{\,\text {spring}} \right) + {\bf {F}}_i^{\,\text {ext}}\,. \end{aligned}$$
With correctly chosen parameters, this concept may be used to represent a membrane, where the interaction between the membrane and the enclosed granular specimen is automatically defined by the contacts between the vertex particles and the granulate. In fact, this approach has been used in recent literature, e.g. [8, 30], to simulate triaxial tests. However, this approach may lead to problems. It is, for example, possible that the vertex particles move far apart, due to stretching of the membrane, such that particles from the inside may slip through. More importantly for our application, this setup leads to a membrane with a potentially large thickness, depending on the size of the particles used. This would impede a proper simulation of the interaction between the granular gripper and an object that is gripped. Additionally, this approach does not yield a smooth surface, which is vital for an accurate simulation of the frictional forces between the membrane and the gripped object. In order to overcome these issues, the vertex particles do not interact with any particle outside the membrane itself. For calculating interactions between the membrane and other particles, we instead use additional wall elements, whose edges are defined by the springs that connect the virtual vertex particles. By tracking the collisions between the vertex particles and the wall elements, this approach additionally allows the detection of self-intersections of the membrane. To model the bending strength of the membrane, we added additional springs that penalize the bending between two neighboring wall elements. The bending penalty is added as an additional force term to the equation of motion and is quantified by the relative angle between two elements. For a mathematical description of this force, we kindly refer the reader to previous work on cloth simulations by Bridson et al. [5]. Figure 7 schematically displays the whole setup together with the definition of the angle. The dynamics of the membrane are mapped to the following steps for each time step iteration of the DEM simulation:
The position of the wall elements is updated according to the positions of the vertex particles. Similarly, the velocity is passed on to the wall elements as it is needed for the calculation of the (frictional) contact forces.
Contacts between wall elements and other objects in the simulation are detected and computed using the Hertz–Mindlin no-slip model.
The forces acting on the wall elements are distributed to the surrounding vertex particles according to the barycentric coordinates of the contact point.
Additional forces due to the angle between adjacent wall elements and the displacement of the springs are calculated and added to the node particles.
The equations of motion are integrated and the positions and velocities of the vertex particles are updated.
The vacuum, which is vital for the gripping process, causes a pressure difference \(\varDelta p\) between the inside and the outside of the membrane which, in turn, leads to forces acting on the membrane. In our model, these forces are determined on a per-wall-element basis: the magnitude is calculated by multiplying the element's surface area by the pressure difference \(\varDelta p\), and the direction is determined by the element's inwards-pointing normal vector. The resulting forces are then applied in step 3. This procedure reproduces the correct force magnitude; only the force direction is expected to deviate locally, due to the finite size of the flat wall elements.
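To make the coupling of the membrane to the DEM time stepping explicit, the spring and integration part of the above procedure (steps 4 and 5) can be written in a few lines. This is a schematic sketch only, not the implementation used for the simulations: the contact, vacuum and bending forces of steps 1-3 are assumed to have already been collected into the array passed as external_force.

import numpy as np

def membrane_step(pos, vel, mass, springs, k, dt, external_force=None):
    # One explicit time step for the membrane vertex particles.
    # pos, vel : (n, 3) arrays of vertex positions and velocities
    # mass     : (n,) array of vertex masses
    # springs  : list of (i, j, rest_length) tuples, i.e. the membrane edges
    # external_force : (n, 3) array with the contact, vacuum and bending forces
    #                  distributed to the vertices in steps 1-3 (zeros if None)
    force = np.zeros_like(pos) if external_force is None else external_force.copy()
    for i, j, rest in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length      # linear distance spring
        force[i] += f
        force[j] -= f
    vel = vel + force / mass[:, None] * dt        # semi-implicit Euler update
    pos = pos + vel * dt
    return pos, vel

# Two vertices joined by one slightly stretched spring (illustrative values):
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0012]])
vel = np.zeros_like(pos)
mass = np.array([1e-6, 1e-6])
pos, vel = membrane_step(pos, vel, mass, [(0, 1, 0.001)], k=325.0, dt=1e-6)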
Pressure onto the surface of the rigid sphere inside the granular gripper according to the model setup shown in Fig. 4. The blue circles show the data obtained from particle simulations, the dashed grey line shows a hyperbolic fit to the data (colour figure online)
Using this setup we were able to simulate the gripping process. Figure 8 shows simulation snapshots for the different phases of the gripping process. First, we reconsider the model scenario of the pressurized hollow sphere from the previous section (see Fig. 5). However, we now replace the linear elastic material by two spherical membranes of radius \(r_\text {in}\approx 0.02\,\text {m}\) and \(r_\text {out}\approx 0.034\,\text {m}\), where the space between the membranes is filled with granulate. Matching the inner membrane, the inner object also has a radius of \(0.02\,\text {m}\). The membranes are represented by 1280 triangles each, and the stiffness constant k for the springs that interconnect the vertices of the triangles is chosen to be \(k=325\frac{{\text {N}}}{{\text {m}}}\), such that a membrane of thickness \(t=0.3\,\text {mm}\) with elastic modulus \(E=\frac{2\sqrt{3}}{3t}k=1.25\,\times \,10^{6}\,\text {Pa}\) and Poisson ratio \(\nu =0.33\) is approximated [19]. The space between the two membranes is filled with particles up to random close packing. The parameters of the simulated particles are given in Table 1.
Table 1 Parameters used for the simulation
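The effective membrane modulus quoted above follows directly from the spring stiffness and the membrane thickness via the mapping of [19]; a one-line check with the stated values:

import math

k = 325.0    # spring stiffness of the membrane network, N/m
t = 0.3e-3   # membrane thickness, m
E = 2.0 * math.sqrt(3.0) / (3.0 * t) * k
print(E)     # approximately 1.25e6 Pa, as quoted in the text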
Now a pressure difference \(\varDelta p = 90\,\text {kPa}\) is applied in the simulation. Figure 9 shows the resulting pressure on a rigid spherical object inside the hollow sphere. If the stiffness of the material of the particles is larger than approximately \(10^8\,\text {Pa}\), the bulk of the granulate may be considered a rigid material and the pressure vanishes as described for the case of a linear elastic material in the previous section. If the stiffness E of the particle material decreases below \(10^8\,\text {Pa}\), the pressure increases as \(\frac{1}{E}\). This again coincides with the theoretical prediction for a linear elastic solid instead of the granulate (see Eq. 1). In our simulations, we assume spherical particles, and overlaps of the spheres are interpreted as mutual deformations of the spheres, which, in turn, cause repulsive forces. This interpretation of the overlap is only valid for overlaps which are small compared to the size of the particles. If the stiffness of the particle material is below \(\approx 5\,\times \,10^6\,\text {Pa}\), this is no longer valid and our simulation results are no longer reliable. We therefore limit Fig. 9 (and the later Fig. 10) to elastic moduli above \(E\approx 5.0\,\times \,10^6\,\text {Pa}\).
Next, we determine the maximal holding force for the granular gripping process shown in Fig. 8. For this purpose we fill a spherical bag of radius \(r=0.04\,\text {m}\) with granulate, evacuate it and thereby grip a spherical object. The parameters of the membrane and the particles are the same as for the hollow sphere simulations. Additionally, the membrane's friction coefficient is set to 1.16 and the particles' coefficient of restitution is changed to 0.926 to mimic the material properties of latex. The parameters of the gripped object are given in Table 2.
Table 2 Simulation parameters of the gripped sphere
Maximal holding force for the granular gripping process shown in Fig. 8 as a function of the stiffness of the material of the granular particles. The blue circles show the data obtained from particle simulations, the dashed grey line shows a hyperbolic fit to the data (colour figure online)
Consistent with the linear elastic theory of the previous section and in agreement with the course of the pressure curve shown in Fig. 9, we again identify the limit in which the bulk of the granulate behaves as ideally rigid for Young's modulus \(E>10^8\, \text {Pa}\), see Fig. 10. In this regime, the squeezing effect and, correspondingly, the holding force, is small. Note that unlike the case where a linear elastic solid is considered instead of the granulate, the holding force is small but does not vanish. Similarly to the pressure, for Young's modulus \(10^7\, \text {Pa}<E<10^8\, \text {Pa}\), the holding force increases like 1/E.
Summary and outlook
Most research on application examples for granular grippers uses particles of rather rigid material. This approach is obvious, as for stiff particles the gripper hardens well, which promises strong gripping performance. However, this only works well in situations where geometric interlocking between the object and the gripper or suction effects between the membrane and the object are possible. If this is not possible due to the shape of the object or its surface properties, friction is the only way to build up significant holding forces. Significant friction, however, is only possible if the membrane is distinctly pressed onto the object. In this work we have shown that this is not possible if rigid granular particles are used. By experiments, theory and particle simulations we have revealed that using soft instead of almost rigid particles unavoidably leads to a squeezing effect and, hence, to significant friction and holding forces. In the course of this work, we developed a new and promising method for the simulation of granular gripping processes.
Apparently, the holding force enhancement due to soft particles is limited: on the one hand, the soft particles increase the holding force, but on the other, they make the gripper deformable in its entirety, so a trade-off between both effects is necessary.
Amend, J.R., Brown, E., Rodenberg, N., Jaeger, H.M., Lipson, H.: A positive pressure universal gripper based on the jamming of granular material. IEEE Trans. Rob. 28, 341–350 (2012). https://doi.org/10.1109/TRO.2011.2171093
Barker, T., Gray, J.M.N.T.: Partial regularisation of the incompressible μ(I)-rheology for granular flow. J. Fluid Mech. 828, 5–32 (2017). https://doi.org/10.1017/jfm.2017.428
Biroli, G.: A new kind of phase transition? Nat. Phys. 3, 222–223 (2007). https://doi.org/10.1038/nphys580
Bower, A.: Applied Mechanics of Solids. CRC Press, Boca Raton (2009). https://doi.org/10.1201/9781439802489
Bridson, R., Marino, S., Fedkiw, R.: Simulation of clothing with folds and wrinkles. In: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '03, pp. 28–36. Eurographics Association, Goslar (2003)
Brown, E., Rodenberg, N., Amend, J., Mozeika, A., Steltz, E., Zakin, M.R., Lipson, H., Jaeger, H.M.: Universal robotic gripper based on the jamming of granular material. Proc. Natl. Acad. Sci. 107, 18809–18814 (2010). https://doi.org/10.1073/pnas.1003250107
Burckhardt, C., Rienmüller, T., Weissmantel, H.: A shape adaptive gripper finger for robots. In: Burckhardt, C. (ed) Proceedings 18th International Symposium on Industrial Robots, Lausanne, Switzerland, April 26–28, pp. 241–250. IFS Publications, Springer (1988)
de Bono, J., Mcdowell, G., Wanatowski, D.: Discrete element modelling of a flexible membrane for triaxial testing of granular material at high pressures. Geotech. Lett. 2, 199–203 (2012). https://doi.org/10.1680/geolett.12.00040
Di Renzo, A., Di Maio, F.P.: Comparison of contact-force models for the simulation of collisions in DEM-based granular flow codes. Chem. Eng. Sci. 59, 525–541 (2004). https://doi.org/10.1016/j.ces.2003.09.037
Fitzgerald, S.G., Delaney, G.W., Howard, D.: A review of jamming actuation in soft robotics. Actuators 9, 104 (2020). https://doi.org/10.3390/act9040104
Goldhirsch, I.: Scales and kinetics of granular flows. Chaos 9, 659–672 (1999). https://doi.org/10.1063/1.166440
Goldhirsch, I.: Granular gases: probing the boundaries of hydrodynamics. In: Pöschel, T., Luding, S. (eds) Granular Gases, Lecture Notes in Physics, vol. 564, pp. 79–99. Springer, Berlin (2001). https://doi.org/10.1007/3-540-44506-4_4
Goldhirsch, I.: Rapid granular flows. Ann. Rev. Fluid Mech. 35, 267–293 (2003). https://doi.org/10.1146/annurev.fluid.35.101101.161114
Gómez-Paccapelo, J.M., Santarossa, A.A., Bustos, H.D., Pugnaloni, L.A.: Effect of the granular material on the maximum holding force of a granular gripper. Granul. Matter 23, 4 (2020). https://doi.org/10.1007/s10035-020-01069-z
Jiang, Y., Amend, J.R., Lipson, H., Saxena, A.: Learning hardware agnostic grasps for a universal jamming gripper. In: 2012 IEEE International Conference on Robotics and Automation, pp. 2385–2391 (2012). https://doi.org/10.1109/ICRA.2012.6225049
Jop, P., Forterre, Y., Pouliquen, O.: A constitutive law for dense granular flows. Nature 441, 727–730 (2006). https://doi.org/10.1038/nature04801
Kamrin, K., Henann, D.L.: Nonlocal modeling of granular flows down inclines. Soft Matter 11, 179–185 (2015). https://doi.org/10.1039/C4SM01838A
Kapadia, J., Yim, M.: Design and performance of nubbed fluidizing jamming grippers. In: 2012 IEEE International Conference on Robotics and Automation, pp. 5301–5306 (2012). https://doi.org/10.1109/ICRA.2012.6225111
Kot, M., Nagahashi, H.: Mass spring models with adjustable Poisson's ratio. Visual Comput. 33, 283–291 (2017). https://doi.org/10.1007/s00371-015-1194-8
Li, Y., Chen, Y., Yang, Y., Li, Y.: Soft robotic grippers based on particle transmission. IEEE/ASME Trans. Mechatron. 24(3), 969–978 (2019). https://doi.org/10.1109/TMECH.2019.2907045
Licht, S., Collins, E., Badlissi, G., Rizzo, D.: A partially filled jamming gripper for underwater recovery of objects resting on soft surfaces. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6461–6468 (2018). https://doi.org/10.1109/IROS.2018.8593361
Licht, S., Collins, E., Ballat-Durand, D., Lopes-Mendes, M.: Universal jamming grippers for deep-sea manipulation. In: OCEANS 2016 MTS/IEEE Monterey, pp. 1–5 (2016). https://doi.org/10.1109/OCEANS.2016.7761237
Liu, A., Nagel, S. (eds.): Jamming and Rheology: Constrained Dynamics on Microscopic and Macroscopic Scales. CRC Press, London (2001)
Liu, A.J., Nagel, S.R.: Jamming is not just cool any more. Nature 396, 21–22 (1998). https://doi.org/10.1038/23819
Luding, S.: Towards dense, realistic granular media in 2d. Nonlinearity 22, R101 (2009). https://doi.org/10.1088/0951-7715/22/12/R01
Matuttis, H.G., Chen, J.: Understanding the Discrete Element Method: Simulation of Non-spherical Particles for Granular and Multi-body Systems. Wiley, New York (2014). https://doi.org/10.1002/9781118567210
Monkman, G.J., Hesse, S., Steinmann, R., Henrik, S.: Robot Grippers. Wiley-VCH, New York (2006). https://doi.org/10.1002/9783527610280
Perovskii, A.: Universal grippers for industrial robots. Russ. Eng. J. 60, 3–4 (1980)
Pöschel, T., Schwager, T.: Computational Granular Dynamics: Models and Algorithms. Springer, Berlin (2005). https://doi.org/10.1007/3-540-27720-X
Qu, T., Feng, Y.T., Wang, Y., Wang, M.: Discrete element modelling of flexible membrane boundaries for triaxial tests. Comput. Geotech. 115, 103154 (2019). https://doi.org/10.1016/j.compgeo.2019.103154
Schaeffer, D.G., Barker, T., Tsuji, D., Gremaud, P., Shearer, M., Gray, J.M.N.T.: Constitutive relations for compressible granular flow in the inertial regime. J. Fluid Mech. 874, 926–951 (2019). https://doi.org/10.1017/jfm.2019.476
Schmidt, I.: Flexible moulding jaws for grippers. Ind. Robot An Int. J. 5, 24–26 (1978). https://doi.org/10.1108/eb004491
Valenzuela-Coloma, H.R., Lau-Cortes, Y.s., Fuentes-Romero, R.E., Zagal, J.C., Mendoza-Garcia, R.F.: Mentaca: An universal jamming gripper on wheels. In: 2015 Chilean Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), pp. 817–823 (2015). https://doi.org/10.1109/Chilecon.2015.7404666
H.G., A.S. and T.P. gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project Number 411517575. P.M. gratefully acknowledges funding by the DFG under project number 398618334. All the authors thank W. Pucheanu for his contribution in the design and construction of the experimental setup. The work was also supported by the Interdisciplinary Center for Nanostructured Films (IZNF), the Central Institute for Scientific Computing (ZISC), and the Interdisciplinary Center for Functional Particle Systems (FPS) at Friedrich-Alexander University Erlangen-Nürnberg.
Open Access funding enabled and organized by Projekt DEAL.
Institute for Multiscale Simulations, Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 3, 91058, Erlangen, Germany
Holger Götz, Angel Santarossa, Achim Sack, Thorsten Pöschel & Patric Müller
Holger Götz
Angel Santarossa
Achim Sack
Thorsten Pöschel
Patric Müller
Correspondence to Thorsten Pöschel.
Götz, H., Santarossa, A., Sack, A. et al. Soft particles reinforce robotic grippers: robotic grippers based on granular jamming of soft particles. Granular Matter 24, 31 (2022). https://doi.org/10.1007/s10035-021-01193-4
Granular gripper
Granular jamming
Inelastic Boltzmann equation driven by a particle thermal bath
Global strong solutions in $ {\mathbb{R}}^3 $ for ionic Vlasov-Poisson systems
August 2021, 14(4): 599-638. doi: 10.3934/krm.2021017
Incompressible Navier-Stokes-Fourier limit from the Landau equation
Mohamad Rachid
Université de Nantes, Laboratoire de Mathematiques Jean Leray, 2, rue de la Houssinière, BP 92208 F-44322 Nantes Cedex 3, France
Received December 2020 Revised March 2021 Published August 2021 Early access May 2021
In this work, we provide a result on the derivation of the incompressible Navier-Stokes-Fourier system from the Landau equation for hard, Maxwellian and moderately soft potentials. To this end, we first investigate the Cauchy theory associated with the rescaled Landau equation for small initial data. Our approach is based on proving estimates of some adapted Sobolev norms of the solution that are uniform in the Knudsen number. These uniform estimates also allow us to obtain a result of weak convergence towards the fluid limit system.
Keywords: Landau equation, Cauchy problem, stability, perturbative solutions, hydrodynamical limit, Navier-Stokes.
Mathematics Subject Classification: 35Q20, 35K55, 45K05, 76P05, 47H20, 82C40.
Citation: Mohamad Rachid. Incompressible Navier-Stokes-Fourier limit from the Landau equation. Kinetic & Related Models, 2021, 14 (4) : 599-638. doi: 10.3934/krm.2021017
Valsartan blocks thrombospondin/transforming growth factor/Smads to inhibit aortic remodeling in diabetic rats
Hui Sun1,2,
Yong Zhao3,
Xiuping Bi1,2,
Shaohua Li3,
Guohai Su2,
Ya Miao1,
Xiao Ma1,
Yun Zhang1,
Wei Zhang1 &
Ming Zhong1
Diagnostic Pathology volume 10, Article number: 18 (2015) Cite this article
Angiotensin II (Ang II) and transforming growth factor β (TGFβ) are closely involved in the pathogenesis of diabetic complications. We aimed to determine whether an aberrant thrombospondin 1 (TSP1)–mediated TGFβ1/Smads signaling pathway specifically affects vascular fibrosis in diabetic rats and whether valsartan, an Ang II subtype 1 receptor blocker, has an anti-fibrotic effect.
Age-matched male Wistar rats were randomly divided into 3 groups: control (n = 8), diabetes (n = 16) and valsartan (30 mg/kg/day) (n = 16). Type 2 diabetes mellitus (T2DM) was induced by a high-calorie diet and streptozotocin injection. Morphological and biomechanical properties of the thoracic aorta were assessed by echocardiography and cardiac catheterization. Masson staining was used for histological evaluation of extracellular matrix (ECM). The expression of components in the TSP1–mediated TGFβ1/Smads signaling pathway was analyzed by immunohistochemistry and real-time quantitative reverse transcription polymerase chain reaction.
As compared with controls, diabetic aortas showed reduced distensibility and compliance, with excess ECM deposition. Components in the TSP1-mediated TGFβ1/Smads signaling pathway, including TSP1, TGFβ1, TGFβ type II receptor (TβRII), Smad2 and Smad3, were accumulated in vascular smooth muscle cytoplasm of diabetic aortas and their protein and mRNA levels were upregulated. All these abnormalities were attenuated by valsartan.
TSP1-mediated TGFβ1/Smads pathway activation plays an important role in macrovascular remodeling in T2DM in rats. Valsartan can block the pathway and ameliorate vascular fibrosis.
The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1053842818141195
Background
Diabetes is a serious health problem worldwide, and the total number of diabetic patients is projected to increase from 171 million in 2000 to 366 million in 2030 [1]. A growing body of evidence links diabetes to various cardiovascular complications, including macro- and microangiopathies. Hyperglycemia-induced unfavorable remodeling has been reported in the thoracic aorta [2], coronary artery [3], renal vasculature [4] and intestinal arterioles [5] of diabetic animal models. Vascular remodeling, characterized by alterations in the composition and assembly of extracellular matrix (ECM), is involved in accelerated atherosclerosis.
Transforming growth factor β (TGFβ) plays a critical role in modulating the synthesis and degradation of ECM. It is secreted as a latent complex (L-TGFβ), which contains a latency-associated peptide (LAP) and a C-terminal bioactive region. Upon stimulation by a variety of factors, through enzymatic cleavage of LAP or physical interaction with it, the active form of TGFβ (A-TGFβ) is released from its latent precursor. A-TGFβ exerts its effects on target genes by binding to specific receptors (TβRs) and subsequent phosphorylation of Smads [6,7]. Experimental and clinical studies indicate that hyperglycemia stimulates the production of TGFβ1, thrombospondin 1 (TSP1), and angiotensin II (Ang II) in the diabetic condition [8,9].
TSP1 is a matricellular protein involved in ECM formation. It can activate TGFβ1 endogenously by binding to the LAP and mature domain of TGFβ1 [10]. TSP1-mediated TGFβ1/Smads signaling contributes to target-organ damage in animals with diabetic nephropathy [11] and diabetic cardiomyopathy [12]. In addition, glucose or Ang II, alone or in combination, upregulates TSP1 and elevates TGFβ1 activity. These effects can be antagonized by Ang II subtype 1 receptor blockers (ARBs), which suggests a role for stimulation of the renin–angiotensin system (RAS) in the development and progression of renal and cardiac fibrosis [13].
However, we lack information on macrovascular lesions provoked by TSP1 in diabetes. Therefore, we hypothesized that hyperglycemia promotes the accumulation of ECM in the thoracic aorta through an Ang II-TSP1-TGFβ1/Smads pathway and examined whether valsartan, an ARB widely used in clinical practice, could reverse such arterial remodeling in rats.
Methods
Age-matched male Wistar rats (200–240 g, 48–50 days) obtained from Shandong University Laboratories Animal Center (Jinan, China) were randomly divided into 3 groups: control (n = 8), diabetic (n = 16) and valsartan (n = 16). Animals in the control group were fed normal chow (8% fat, 16% protein, 50% carbohydrate, and 22% other ingredients; total calories 14 kJ/g) and the other 2 groups a high-calorie diet (25% fat, 14% protein, 51% carbohydrate, and 10% other ingredients; total calories 20 kJ/g). Four weeks later, venous blood was sampled for measuring fasting plasma glucose (FPG) and fasting insulin (Ins). After another week, streptozotocin (STZ; Sigma, St. Louis, MO; 30 mg/kg, dissolved in ice-cold 10 mM citrate buffer, pH 4.4) was administered intraperitoneally to diabetic and valsartan groups, and an equivalent volume of citrate buffer was administered to the control group. One week after STZ injection, blood samples were collected from the tail vein for measuring FPG and Ins. Diabetes was defined as FPG ≥11.1 mmol/L and an insulin sensitivity index [ISI = ln(FPG × Ins)⁻¹] lower than that of controls. Rats in the valsartan group were given valsartan (30 mg/kg) via intragastric administration every day, and those in the control and diabetic groups received an equivalent volume of normal saline. Animals were maintained in individual air-filtered metabolic cages with free access to water for 16 weeks. FPG and Ins were measured at the end of the experiment, with ISI calculated. This study conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85–23, revised 1996). The protocol was approved by the institutional medical ethics review board.
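For concreteness, here is a minimal sketch of the diabetes criterion above, assuming the ISI expression is read as ln[(FPG × Ins)⁻¹] = −ln(FPG × Ins); the function names and numerical values are hypothetical and not taken from the study.

```python
import math

def insulin_sensitivity_index(fpg, ins):
    """ISI read as ln[(FPG x Ins)^-1], i.e. -ln(FPG x Ins); FPG in mmol/L (assumption)."""
    return -math.log(fpg * ins)

def meets_diabetes_definition(fpg, isi, control_isi):
    """Diabetes: FPG >= 11.1 mmol/L and ISI below the control-group value."""
    return fpg >= 11.1 and isi < control_isi

# Hypothetical values for illustration only (not data from the study)
fpg, ins = 15.2, 18.0
isi = insulin_sensitivity_index(fpg, ins)
print(isi, meets_diabetes_definition(fpg, isi, control_isi=-4.5))
```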
Echocardiography and cardiac catheterization
At the end of the experiment, rats were anesthetized with chloral hydrate (300 mg/kg, intraperitoneally). Transthoracic echocardiography was performed with use of a SONOS 7500 (Hewlett-Packard, Andover, MA, USA) with a 12 MHz transducer. The inner diameter of the thoracic aorta was measured in systolic and diastolic phases (Ds, Dd) in a long axis view. Subsequently, a catheter (PE-50) was introduced into the aortic arch via the right carotid artery and connected to a pressure transducer for measuring aortic systolic blood pressure (SBP) and diastolic blood pressure (DBP). Aortic distensibility and compliance were determined by calculating distensibility coefficient (DC) and compliance coefficient (CC), respectively, by the following formulas [14,15]:
$$ \mathrm{DC} = (\Delta A/A)/\Delta P = (2\,\Delta D \cdot D_d + \Delta D^2)/(\Delta P \cdot D_d^2) $$
$$ \mathrm{CC} = (\Delta V/\Delta L)/\Delta P = \Delta A/\Delta P = \pi\,(2\,\Delta D \cdot D_d + \Delta D^2)/(4\,\Delta P) $$
$$ \Delta P = \mathrm{SBP} - \mathrm{DBP}, \qquad \Delta D = D_d - D_s $$
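A small numerical sketch of these formulas follows; the helper name and input values are hypothetical, and ΔD is taken here as the magnitude of the systolic–diastolic diameter change (an assumption) so that DC and CC come out positive.

```python
import math

def aortic_dc_cc(ds, dd, sbp, dbp):
    """Distensibility (DC) and compliance (CC) coefficients from the formulas above.

    ds, dd   : systolic and diastolic inner aortic diameters (mm)
    sbp, dbp : aortic systolic and diastolic blood pressure (mmHg)
    """
    delta_p = sbp - dbp        # pulse pressure
    delta_d = abs(dd - ds)     # magnitude of the diameter change (assumption, see lead-in)
    dc = (2 * delta_d * dd + delta_d ** 2) / (delta_p * dd ** 2)
    cc = math.pi * (2 * delta_d * dd + delta_d ** 2) / (4 * delta_p)
    return dc, cc

# Hypothetical measurements, for illustration only
dc, cc = aortic_dc_cc(ds=2.9, dd=2.6, sbp=128, dbp=92)
print(f"DC = {dc:.4f} /mmHg, CC = {cc:.4f} mm^2/mmHg")
```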
Tissue preparation
Animals were killed by an overdose of chloral hydrate. The thoracic aorta was excised immediately and placed in ice-cold 0.9% NaCl. Tissue blocks (5 × 5 × 5 mm³ of aortic wall) for immunohistochemistry were fixed in 10% formaldehyde and paraffin embedded. The remaining aorta was cut into small tissue blocks and stored in foil packets at −80 °C for the following experiments.
Histological evaluation of extracellular matrix (ECM)
Sections 4 μm thick were deparaffinized and stained with Masson's trichrome for ECM. Ten successive microscopy fields were examined with use of the JD801 Imaging Analysis System (Jiangsu JEDA Science-Technology Development Co.). The content of aortic ECM was semi-quantified as the proportion of the total area occupied by Masson-positive staining.
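The area-ratio idea behind this semi-quantification can be illustrated with a short sketch; this is not the JD801 software's method, and the thresholded masks below are random stand-ins for segmented images.

```python
import numpy as np

def stained_area_fraction(mask):
    """Fraction of a field occupied by stain-positive pixels (mask is a boolean array)."""
    return mask.sum() / mask.size

# Ten stand-in fields, for illustration only; real masks would come from
# colour-thresholded Masson's trichrome images.
rng = np.random.default_rng(0)
fields = [rng.random((512, 512)) < 0.2 for _ in range(10)]
mean_fraction = float(np.mean([stained_area_fraction(f) for f in fields]))
print(f"mean ECM fraction ~ {mean_fraction:.1%}")
```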
Immunohistochemistry involved a microwave-based antigen-retrieval technique. After the removal of paraffin, endogenous peroxidase was neutralized with H2O2 (0.3% vol/vol) for 10 min. Sections were placed in phosphate-buffered saline (PBS) for 15 min and protein-blocking solution (Immunotech, Cedex, France) for another 30 min, incubated with primary antibodies overnight at 4°C, then with secondary antibodies for 1 hour at room temperature, and finally horseradish peroxidase–conjugated streptavidin (Dako; diluted 1:500) for visualization. The expression of TSP1, L-TGFβ1, A-TGFβ1, TβRII and p-Smad2/3 was evaluated by use of the JD801 imaging analysis system. The percentage positive staining in the vascular wall was semi-quantified under a microscope.
Real time quantitative reverse transcription polymerase chain reaction (RT-PCR)
Total RNA was extracted from aortic tissues by use of Trizol reagent and treated with DNase to avoid DNA contamination. After quantification by absorbance at 260 nm, total RNA was reverse-transcribed into cDNA following the manufacturer's instructions (TakaRa, Dalian, China), and real-time PCR involved an ABI Prism 7000 sequence detector system with the SYBR Green Reaction Kit. Primer sequences are listed in Table 1. Amplification products were analyzed by a melting curve, which confirmed a single PCR product in all reactions. The expression of specific genes was normalized to that of β-actin as the housekeeping gene.
Table 1 cDNA Primer sequences for real-time RT-PCR
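The paper states only that target-gene expression was normalized to β-actin. One common way to convert Ct values into such normalized levels is the 2^(−ΔΔCt) convention, sketched below under that assumption; the Ct numbers are hypothetical.

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change of a target gene vs. beta-actin, relative to a control sample,
    using the 2^(-ddCt) convention (an assumption; the paper does not state the formula)."""
    ddct = (ct_target - ct_actin) - (ct_target_ctrl - ct_actin_ctrl)
    return 2 ** (-ddct)

# Hypothetical Ct values, for illustration only
print(relative_expression(ct_target=24.1, ct_actin=17.3,
                          ct_target_ctrl=26.0, ct_actin_ctrl=17.5))
```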
Statistical analysis
Data are expressed as mean ± SD. Statistical analysis involved use of SPSS 11.0 (SPSS, Chicago, IL), with unpaired Student t test for comparisons between 2 groups and ANOVA followed by Scheffé's procedure for 3 groups. P < 0.05 was considered statistically significant.
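As a sketch of these comparisons, the snippet below runs an unpaired t test and a one-way ANOVA on simulated data loosely shaped like the reported ECM fractions and group sizes; the Scheffé post hoc step used in the paper is not included in this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated ECM fractions (%), loosely shaped like the reported group means/SDs
control   = rng.normal(17.1, 4.7, size=8)
diabetic  = rng.normal(25.7, 4.9, size=11)
valsartan = rng.normal(20.8, 5.4, size=13)

# Unpaired Student t test for a two-group comparison
t_stat, p_two_groups = stats.ttest_ind(diabetic, control)

# One-way ANOVA across the three groups (Scheffe's post hoc step not shown)
f_stat, p_three_groups = stats.f_oneway(control, diabetic, valsartan)
print(p_two_groups, p_three_groups)
```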
Results
Characteristics of experimental animals
During the experiment, 3 rats died in the diabetic group and 2 in the valsartan group. These deaths were attributable to ketoacidosis, infections or other complications induced by hyperglycemia. The FPG and ISI of 3 rats treated with the high-calorie diet and STZ did not meet the definition of diabetes. Finally, 8 rats were included in the control group, 11 in the diabetic group, and 13 in the valsartan group. Biochemical characteristics, including FPG, Ins, and ISI, were similar between the diabetic and valsartan groups throughout the experiment. However, as compared with controls, the diabetic and valsartan groups showed significantly elevated Ins before STZ injection (P < 0.05), higher FPG one week after STZ injection (P < 0.01), and consistently lower ISI (P < 0.01; Figure 1).
Biochemical characteristics of experimental animals. Measurements of fasting plasma glucose (A) and fasting insulin (B) before STZ injection (4 weeks), 1 week after STZ injection (6 weeks) and at the end of the experiment (22 weeks), with ISI (C) calculated. *P < 0.05 and **P < 0.01, vs controls. Abbreviations: ISI, insulin sensitivity index; STZ, streptozotocin.
Morphological and biomechanical properties of thoracic aortas
Compared with controls, diabetic aortas were enlarged in systolic and diastolic diameters (P < 0.01) but reduced in distensibility and compliance (P < 0.05), which suggested macrovascular remodeling (Table 2). As compared with diabetic aortas, valsartan aortas showed increased distensibility and compliance (P < 0.05) and reduced systolic and diastolic diameters, but not significantly, which indicated some improvement in remodeling with valsartan.
Table 2 Morphological and biomechanical properties of rat thoracic aorta with diabetes or valsartan treatment
Fibrosis in thoracic aortas
Masson staining demonstrated well-arranged aortic fibrous tissue in control rats (Figure 2A). The diabetic group showed disarranged fibers (Figure 2B). However, histological manifestations were attenuated in the valsartan group as compared with the diabetic group (Figure 2C).
Masson's staining of thoracic aortas. Control (A), diabetic (B) and valsartan (C) aortas showing extracellular matrix (green).
The content of ECM in the thoracic aorta was higher in diabetic rats than controls (25.73 ± 4.85% vs. 17.12 ± 4.65%; P < 0.01). As compared with diabetic aortas, valsartan aortas showed reduced ECM content (20.81 ± 5.41% vs. 25.73 ± 4.85%, P < 0.05).
Protein content of components in the TSP1-mediated TGF β1/Smads pathway
On immunohistochemistry (Figure 3), staining for TSP1, L-TGFβ1 and A-TGFβ1, TβRII, and phosphorylated Smad 2/3 (p-Smad2/3) in vascular smooth muscle cytoplasm was high in diabetic aortas, moderate in valsartan aortas and low in controls.
Immunohistochemistry of protein content of components in the TSP1-mediated TGFβ1/Smads signaling pathway in aortas. Staining for TSP1, A-TGFβ1, L-TGFβ1, TβRII, and p-Smad2/3 in aortic medial layer of control, diabetic and valsartan aortas and quantification (bottom). *P < 0.05, **P < 0.01, ***P < 0.001. Abbreviations: A-TGFβ1, active transforming growth factor β1; L-TGFβ1, latent transforming growth factor β1; p-Smad2/3, phosphorylated Smad2/3; TβRII, TGFβ type II receptor; TSP1, thrombospondin 1.
Transcription of components in TSP1-mediated TGF β1/Smads pathway
Real-time quantitative RT-PCR (Figure 4) demonstrated greatly upregulated mRNA levels of TSP1, TGFβ1, TβRII, Smad2 and Smad3 in diabetic compared with control aortas, with levels in valsartan aortas not significantly different from those in controls.
RT-PCR analysis of mRNA level of components in the TSP1-mediated TGF β1/Smads signaling pathway in aortas. *P < 0.05. Abbreviations: TGFβ1, transforming growth factor β1; TβRII, TGFβ type II receptor; TSP1, thrombospondin 1.
Discussion
We examined whether hyperglycemia in diabetic rats promotes the accumulation of ECM in the thoracic aorta through an Ang II-TSP1-TGFβ1/Smads pathway and whether valsartan, an ARB widely used in clinical practice, could reverse such arterial remodeling. Diabetic aortas showed reduced distensibility and compliance, with excess ECM deposition, as compared with controls. Components in the TSP1-mediated TGFβ1/Smads signaling pathway, including TSP1, TGFβ1, TGFβ type II receptor (TβRII), Smad2 and Smad3, were accumulated in diabetic vascular smooth muscle cytoplasm, and their protein and mRNA levels were upregulated. All these abnormalities were attenuated by valsartan. Thus, activation of a TSP1-mediated TGFβ1/Smads pathway plays an important role in macrovascular remodeling in T2DM, and valsartan may hold promise for blocking the pathway and ameliorating vascular fibrosis in diabetes.
Vascular complications of T2DM, including cardiovascular diseases, retinopathy and nephropathy, impose a substantial socioeconomic burden on public health. Approximately 50% of patients with T2DM die prematurely of a cardiovascular cause and 10% die of renal failure [16]. Abnormal arterial remodeling, paralleled by accelerated atherosclerosis, is responsible for the elevated incidence of ischemic complications in diabetes. This process extends to blood vessels of various caliber and leads to an excessive accumulation of ECM. At the macrovascular level, these alterations bring about narrowed lumen, increased stiffness and decreased vasomotion [17]. In the present and previous studies [2], we used a high-calorie diet and low-dose STZ injection to establish an animal T2DM model with specific metabolic characteristics and demonstrated structural and functional remodeling in the thoracic aorta.
The molecular mechanisms of arterial remodeling are not fully elucidated. Multifunctional cytokines seem to play a crucial role. TSP1-dependent TGFβ activation is involved in the development of cardiac fibrosis in rats with diabetes and elevated Ang II levels [18]. Our current study showed that TSP1-mediated TGFβ1/Smads signaling is closely involved in macrovascular fibrosis induced by hyperglycemia. In parallel with upregulated TSP1 mRNA in diabetic aortas, the mRNA levels of TGFβ1, TβRII, Smad2 and Smad3 were also upregulated, as was cellular staining for A-TGFβ1, TβRII and p-Smad2/3, which indicates active TGFβ1 signaling. TSP1 is an extracellular calcium-binding multifunctional protein first discovered in activated platelets. It is also secreted by endothelial cells and smooth muscle cells. TSP1 triggers the activation, but not the expression, of TGFβ1 by interacting with the LAP of latent TGFβ1 [19]. To initiate its cellular action, TGFβ1 binds to TβRII and TβRI in sequence. After activation, TβRI recruits and phosphorylates the ligand-specific receptor-activated Smads (R-Smads), Smad2 and Smad3, which then form heteromeric complexes with a co-Smad, Smad4, for subsequent nuclear signaling. Smad7 is an inhibitory Smad (I-Smad) and inactivates transcription by binding with R-Smads or a co-Smad [20,21]. TGFβ1/Smads signaling modulates ECM by stimulating fibrillar collagen genes and inhibiting matrix metalloproteinase genes [6]. Consistent with findings from rats with diabetic cardiomyopathy [12,18], we observed a significant increase in ECM content with activation of TGFβ1/Smads signaling.
Under high glucose, Ang II production is elevated, with disproportionate matrix deposition [22], which is related to a mechanism dependent on protein kinase C (PKC) [8]. Although we did not determine Ang II level, the increased TSP1-mediated TGFβ1/Smads signaling in diabetic aortas was inhibited by an ARB, valsartan, and the pathological features and biomechanical dysfunction of the diabetic thoracic aorta were substantially improved.
These results suggest an important role of RAS activation in diabetic fibrosis. Early experiments revealed that glucose itself stimulates enhanced TSP1 transcription in the aorta and carotid arteries [23], whereas in mesangial cells, glucose stimulates TSP1 expression and TGFβ activity through nuclear protein USF2 via PKC, p38 mitogen-activated protein kinase (p38 MAPK) and extracellular signal-regulated kinase (ERK) pathways [24]. In a hyperglycemic environment, Ang II stimulates TSP1 upregulation and promotes subsequent activation of TGFβ1. This process is facilitated by the canonical Ang II subtype 1 receptor (AT1R) through p38 MAPK and c-Jun NH2-terminal kinase (c-JNK) but not ERK1/2 [25]. Evidence from our current study and other reports [13,18] suggests that the synergistic effects of glucose and Ang II contribute to increased TSP1 expression and consequent TGFβ1 activation.
The finding in this study that unfavorable morphological and functional alterations in diabetic aortas may be partially reversed by inhibiting the detrimental effects of Ang II is important for clinical practice. It provides new insight into the mechanisms accounting for the vascular benefits of interventions that block RAS overactivation in diabetes. Recent clinical trials demonstrated that stringent control of glycemia decreased the rate of microvascular outcomes [26] but did not reduce major cardiovascular events as compared with standard therapy in high-risk patients with T2DM [27]. In addition, tight control of systolic blood pressure was not associated with improved cardiovascular outcomes as compared with usual control treatment [28,29]. However, treatment with an RAS antagonist-based regimen, including an angiotensin-converting enzyme (ACE) inhibitor or ARB, prevented more cardiovascular events than did other regimens in diabetic patients with or without hypertension [30,31]. Although numerous therapeutic strategies under development target the TGFβ1/Smads signaling pathway for treating fibrosis, only a few studies have been performed in humans [32,33]. Given the concern about unpredictable side effects of novel therapies, a practical approach for TGFβ1 antagonism is to extend the usefulness of available pharmaceuticals. Tranilast, a mast cell membrane-stabilizing agent used for treating bronchial asthma, suppresses collagen synthesis in early and advanced diabetic nephropathy by interfering with the actions of TGFβ1 [34,35]. Similarly, as a class of effective antihypertensive agents with favorable tolerability and safety, RAS inhibitors are promising for combating diabetic fibrosis.
Conclusions
TSP1-mediated TGFβ1/Smads signaling is activated and contributes to the excess accumulation of ECM induced by hyperglycemia in the rat diabetic thoracic aorta. Blocking the RAS inhibits the expression of signaling components and ameliorates the morphological and biomechanical features of large arteries in diabetes, which suggests an involvement of Ang II. Targeting the Ang II-TSP1-TGFβ1/Smads signaling pathway is a feasible therapeutic option to correct the aberrant macrovascular remodeling in diabetes.
References
Wild S, Roglic G, Green A, Sicree R, King H. Global prevalence of diabetes: estimates for the year 2000 and projections for 2030. Diabetes Care. 2004;27:1047–53.
Sun H, Zhong M, Miao Y, Ma X, Gong HP, Tan HW, et al. Impaired elastic properties of the aorta in fat-fed, streptozotocin-treated rats. Vascular remodeling in diabetic arteries. Cardiology. 2009;114:107–13.
McDonald TO, Gerrity RG, Jen C, Chen HJ, Wark K, Wight TN, et al. Diabetes and arterial extracellular matrix changes in a porcine model of atherosclerosis. J Histochem Cytochem. 2007;55:1149–57.
Turoni CM, Reynoso HA, Marañón RO, Coviello A, de Peral Bruno M. Structural changes in the renal vasculature in streptozotocin-induced diabetic rats without hypertension. Nephron Physiol. 2005;99:50–7.
Connors BA, Bohlen HG, Evan AP. Vascular endothelium and smooth muscle remodeling accompanies hypertrophy of intestinal arterioles in streptozotocin diabetic rats. Microvasc Res. 1995;49:340–9.
Verrecchia F, Mauviel A. Transforming growth factor-beta signaling through the Smad pathway: role in extracellular matrix gene expression and regulation. J Invest Dermatol. 2002;118:211–5.
Yang SN, Burch ML, Tannock LR, Evanko S, Osman N, Little PJ. Transforming growth factor-β regulation of proteoglycan synthesis in vascular smooth muscle: contribution to lipid binding and accelerated atherosclerosis in diabetes. J Diabetes. 2010;2:233–42.
Ikehara K, Tada H, Kuboki K, Inokuchi T. Role of protein kinase C-angiotensin II pathway for extracellular matrix production in cultured human mesangial cells exposed to high glucose levels. Diabetes Res Clin Pract. 2003;59:25–30.
Hohenstein B, Daniel C, Hausknecht B, Boehmer K, Riess R, Amann KU, et al. Correlation of enhanced thrombospondin-1 expression, TGF-beta signalling and proteinuria in human type-2 diabetic nephropathy. Nephrol Dial Transplant. 2008;23:3880–7.
Daniel C, Schaub K, Amann K, Lawler J, Hugo C. Thrombospondin-1 is an endogenous activator of TGF-beta in experimental diabetic nephropathy in vivo. Diabetes. 2007;56:2982–9.
Lu A, Miao M, Schoeb TR, Agarwal A, Murphy-Ullrich JE. Blockade of TSP1-dependent TGF-β activity reduces renal injury and proteinuria in a murine model of diabetic nephropathy. Am J Pathol. 2011;178:2573–86.
Tang M, Zhou F, Zhang W, Guo Z, Shang Y, Lu H, et al. The role of thrombospondin-1-mediated TGF-β1 on collagen type III synthesis induced by high glucose. Mol Cell Biochem. 2011;346:49–56.
Zhou Y, Poczatek MH, Berecek KH, Murphy-Ullrich JE. Thrombospondin 1 mediates angiotensin II induction of TGF-beta activation by cardiac and renal cells under both high and low glucose conditions. Biochem Biophys Res Commun. 2006;339:633–41.
van der Heijden-Spek JJ, Staessen JA, Fagard RH, Hoeks AP, Boudier HA, van Bortel LM. Effect of age on brachial artery wall properties differs from the aorta and is gender dependent, a population study. Hypertension. 2000;35:637–42.
Hermans MM, Henry R, Dekker JM, Kooman JP, Kostense PJ, Nijpels G, et al. Estimated glomerular filtration rate and urinary albumin excretion are independently associated with greater arterial stiffness: the Hoorn Study. J Am Soc Nephrol. 2007;18:1942–52.
van Dieren S, Beulens JW, van der Schouw YT, Grobbee DE, Neal B. The global burden of diabetes and its complications: an emerging pandemic. Eur J Cardiovasc Prev Rehabil. 2010;17 Suppl 1:S3–8.
Spinetti G, Kraenkel N, Emanueli C, Madeddu P. Diabetes and vessel wall remodelling: from mechanistic insights to regenerative therapies. Cardiovasc Res. 2008;78:265–73.
Belmadani S, Bernal J, Wei CC, Pallero MA, Dell'italia L, Murphy-Ullrich JE, et al. A thrombospondin-1 antagonist of transforming growth factor-beta activation blocks cardiomyopathy in rats with diabetes and elevated angiotensin II. Am J Pathol. 2007;171:777–89.
Lopez-Dee Z, Pidcock K, Gutierrez LS. Thrombospondin-1: multiple paths to inflammation. Mediators Inflamm. 2011;2011:296069.
Clarke DC, Liu X. Decoding the quantitative nature of TGF-beta/Smad signaling. Trends Cell Biol. 2008;18:430–42.
Euler-Taimor G, Heger J. The complex pattern of SMAD signaling in the cardiovascular system. Cardiovasc Res. 2006;69:15–25.
Ko SH, Hong OK, Kim JW, Ahn YB, Song KH, Cha BY, et al. High glucose increases extracellular matrix production in pancreatic stellate cells by activating the renin-angiotensin system. J Cell Biochem. 2006;98:343–55.
Stenina OI, Krukovets I, Wang K, Zhou Z, Forudi F, Penn MS, et al. Increased expression of thrombospondin-1 in vessel wall of diabetic Zucker rat. Circulation. 2003;107:3209–15.
Wang S, Skorczewski J, Feng X, Mei L, Murphy-Ullrich JE. Glucose up-regulates thrombospondin 1 gene transcription and transforming growth factor-beta activity through antagonism of cGMP-dependent protein kinase repression via upstream stimulatory factor 2. J Biol Chem. 2004;279:34311–22.
Naito T, Masaki T, Nikolic-Paterson DJ, Tanji C, Yorioka N, Kohno N. Angiotensin II induces thrombospondin-1 production in human mesangial cells via p38 MAPK and JNK, a mechanism for activation of latent TGF-beta1. Am J Physiol Renal Physiol. 2004;286:F278–87.
Ismail-Beigi F, Craven T, Banerji MA, Basile J, Calles J, Cohen RM, et al. Effect of intensive treatment of hyperglycaemia on microvascular outcomes in type 2 diabetes, an analysis of the ACCORD randomised trial. Lancet. 2010;376:419–30.
ACCORD Study Group, Gerstein HC, Miller ME, Genuth S, Ismail-Beigi F, Buse JB, et al. Long-term effects of intensive glucose lowering on cardiovascular outcomes. N Engl J Med. 2011;364:818–28.
ACCORD Study Group, Cushman WC, Evans GW, Byington RP, Goff Jr DC, Grimm Jr RH, et al. Effects of intensive blood-pressure control in type 2 diabetes mellitus. N Engl J Med. 2010;362:1575–85.
Cooper-DeHoff RM, Gong Y, Handberg EM, Bavry AA, Denardo SJ, Bakris GL, et al. Tight blood pressure control and cardiovascular outcomes among hypertensive patients with diabetes and coronary artery disease. JAMA. 2010;304:61–8.
Patel A, ADVANCE Collaborative Group, MacMahon S, Chalmers J, Neal B, Woodward M, et al. Effects of a fixed combination of perindopril and indapamide on macrovascular and microvascular outcomes in patients with type 2 diabetes mellitus (the ADVANCE trial), a randomised controlled trial. Lancet. 2007;370:829–40.
Lindholm LH, Ibsen H, Dahlöf B, Devereux RB, Beevers G, de Faire U, et al. Cardiovascular morbidity and mortality in patients with diabetes in the Losartan Intervention For Endpoint reduction in hypertension study (LIFE), a randomised trial against atenolol. Lancet. 2002;359:1004–10.
Denton CP, Merkel PA, Furst DE, Khanna D, Emery P, Hsu VM, et al. Scleroderma Clinical Trials Consortium. Recombinant human anti-transforming growth factor beta1 antibody therapy in systemic sclerosis, a multicenter, randomized, placebo-controlled phase I/II trial of CAT-192. Arthritis Rheum. 2007;56:323–33.
Trachtman H, Fervenza FC, Gipson DS, Heering P, Jayne DR, Peters H, et al. A phase 1, single-dose study of fresolimumab, an anti-TGF-β antibody, in treatment-resistant primary focal segmental glomerulosclerosis. Kidney Int. 2011;79:1236–43.
Soma J, Sato K, Saito H, Tsuchiya Y. Effect of tranilast in early-stage diabetic nephropathy. Nephrol Dial Transplant. 2006;21:2795–9.
Soma J, Sugawara T, Huang YD, Nakajima J, Kawamura M. Tranilast slows the progression of advanced diabetic nephropathy. Nephron. 2002;92:693–8.
This work was supported by grants from the National Natural Science Foundation of China [grant numbers 30900608, 30971215, 81070141, 81170087]; the Provincial Natural Science Foundation of Shandong [grant number ZR2010HQ048]; the Shandong Provincial Medicine and Health Science Technology Development Plan Program of China [grant number 2011BJZD05]; and the Jinan Science & Technology International Cooperation Project [grant number 201401356]. We thank Jifeng Bian and Tonggang Qi for superb technical assistance.
Author affiliations
1. The Key Laboratory of Cardiovascular Remodeling and Function Research, Chinese Ministry of Education and Chinese Ministry of Public Health, the Department of Cardiology, Shandong University, Qilu Hospital, No.107, Wen Hua Xi Road, Jinan, Shandong Province, 250012, China: Hui Sun, Xiuping Bi, Ya Miao, Xiao Ma, Yun Zhang, Wei Zhang and Ming Zhong
2. Department of Cardiology, Jinan Central Hospital Affiliated to Shandong University, Jinan, 250013, China: Hui Sun, Xiuping Bi and Guohai Su
3. Department of Geriatric Cardiology, Provincial Hospital Affiliated to Shandong University, Jinan, 250021, China: Yong Zhao and Shaohua Li
Correspondence to Ming Zhong.
None of the authors have any commercial or other association that might pose a conflict of interest. All authors are responsible for the content and writing of the paper.
SH and ZY carried out the experiments and drafted the manuscript. SGH and BXP performed data analysis. MY and MX performed the histological examination and collected imaging data. ZY reviewed and contributed to manuscript submission. ZW and ZM designed the study. All the authors read and approved the final manuscript.
Hui Sun and Yong Zhao are co-first authors and contributed equally to this work.
Sun, H., Zhao, Y., Bi, X. et al. Valsartan blocks thrombospondin/transforming growth factor/Smads to inhibit aortic remodeling in diabetic rats. Diagn Pathol 10, 18 (2015) doi:10.1186/s13000-015-0246-8
Keywords: Macrovascular remodeling; Thrombospondin 1; Transforming growth factor β1; Smads
Browse by Academic Unit (A-Z)
White Rose Consortium (91886)
The University of York (16894)
Faculty of Social Sciences (York) (4087)
Economics and Related Studies (York) (612)
School of Politics, Economics and Philosophy (York) (10)
Abadir, K.M. and Rockinger, M. (2003) Density functionals, with an option-pricing application. Econometric Theory. pp. 778-811. ISSN 0266-4666
Abhakorn, Pongrapeeporn, Smith, Peter Nigel orcid.org/0000-0003-2786-7192 and Wickens, Mike (2016) Can Stochastic Discount Factor Models Explain the Cross Section of Equity Returns? Review of Financial Economics. pp. 56-68. ISSN 1058-3300
Acemoglu, Daron, De Feo, Giuseppe and De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663 (2020) Weak States : Causes and Consequences of the Sicilian Mafia. Review of Economic Studies. 537–581. ISSN 0034-6527
Aina, Carmen and Nicoletti, Cheti orcid.org/0000-0002-7237-2597 (2018) The intergenerational transmission of liberal professions. Labour economics. pp. 108-120. ISSN 0927-5371
Alhalboni, Maryam orcid.org/0000-0001-8979-5724, Baldwin, Kenneth and Helmi, Mohamad Husam (2018) A Structural Model of "Alpha" for the Capital Adequacy Ratios of Islamic Banks. Journal of International Financial Markets, Institutions & Money. ISSN 1042-4431 (In Press)
Almeida-Santos, F. and Mumford, K. (2005) Employee training and wage compression in Britain. Manchester School, 73 (3). pp. 321-342. ISSN 1463-6786
Almeida-Santos, F. and Mumford, K.A. (2004) Employee training in Australia: evidence from the AWIRS. Economic Record, 80 (s1). S53-S64. ISSN 0013-0249
Alvarez-Cuadrado, F., Monteiro, G. and Turnovsky, S.J. (2004) Habit Formation, Catching up with the Joneses, and Economic Growth. Journal of Economic Growth, 9 (1). pp. 47-80. ISSN 1381-4338
Andersson, Tommy and Kratz, Jorgen orcid.org/0000-0002-8355-058X (2020) Pairwise Kidney Exchange over the Blood Group Barrier. Review of Economic Studies. 1091–1133. ISSN 0034-6527
Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220, Castelli, Adriana orcid.org/0000-0002-2546-419X and Gaughan, James Michael orcid.org/0000-0002-8409-140X (2015) Hospital trusts productivity in the English NHS : uncovering possible drivers of productivity variations. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220, Castelli, Adriana orcid.org/0000-0002-2546-419X, Chalkley, Martin John orcid.org/0000-0002-1091-8259 et al. (1 more author) (2019) Can productivity growth measures identify best performing hospitals? Evidence from the English National Health Service. Health Economics. ISSN 1057-9230
Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220, Castelli, Adriana orcid.org/0000-0002-2546-419X, Chalkley, Martin John orcid.org/0000-0002-1091-8259 et al. (1 more author) (2016) Hospital productivity growth in the English NHS 2008/09 to 2013/14. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York
Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220, Chalkley, Martin John orcid.org/0000-0002-1091-8259 and Rice, Nigel orcid.org/0000-0003-0312-823X (2016) Medical spending and hospital inpatient care in England : An analysis over time. Report. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Aragon Aragon, María Jose orcid.org/0000-0002-3787-6220, Castelli, Adriana orcid.org/0000-0002-2546-419X and Gaughan, James orcid.org/0000-0002-8409-140X (2017) Hospital Trusts productivity in the English NHS : Uncovering possible drivers of productivity variations. PLoS ONE. e0182253. ISSN 1932-6203
Atanasova, C.V. and Wilson, N. (2003) Bank borrowing constraints and the demand for trade credit: evidence from panel data. Managerial and Decision Economics, 24 (6-7). pp. 503-514. ISSN 0143-6570
Attema, Arthur E, Brouwer, Werner B. F. and Claxton, Karl orcid.org/0000-0003-2002-4694 (2018) Discounting in Economic Evaluations. PharmacoEconomics. ISSN 1179-2027
Augeraud-Veron, Emmanuelle, Bambi, Mauro orcid.org/0000-0002-9929-850X and Gozzi, Fausto (2017) Solving internal habits formation models through dynamic programming in infinite dimension. Journal of Optimization Theory and Applications. pp. 584-611. ISSN 0022-3239
Aygün, Orhan and Lanari Bo, Inacio (2020) College Admission with Multidimensional Privileges: The Brazilian Affirmative Action Case. American Economic Journal: Microeconomics. ISSN 1945-7669 (In Press)
Bailey, Natalia, Pesaran, M. Hashem and Smith, Lynette Vanessa orcid.org/0000-0003-0489-047X (2019) A multiple testing approach to the regularisation of large sample correlation matrices. Journal of Econometrics. pp. 507-534. ISSN 0304-4076
Balasko, Y. (2003) Economies with Price-dependent Preferences. Journal of Economic Theory, 109 (2). pp. 333-359. ISSN 0022-0531
Balasko, Y. (2003) Temporary financial equilibrium. Economic Theory, 21 (1). pp. 1-18. ISSN 0938-2259
Balasko, Y. (2004) The equilibrium manifold keeps the memory of individual demand functions. Economic Theory, 24 ( 3). pp. 493-501. ISSN 0938-2259
Balfoussia, C. and Wickens, M. (2007) Macroeconomic sources of risk in the term structure. Journal of Money Credit and Banking, 39 (1). pp. 205-236. ISSN 0022-2879
Balfoussia, H. and Wickens, M. (2006) Extracting Inflation Expectations from the Term Structure: the Fisher Equation in a Multivariate SDF Framework. Journal of Finance and Economics, 11 (3). pp. 261-277. ISSN 1076-9307
Bambi, Mauro orcid.org/0000-0002-9929-850X (2015) Time-to-build and the capital structure. Economics Letters. ISSN 0165-1765
Bambi, Mauro orcid.org/0000-0002-9929-850X, Gozzi, Fausto, Federico, Salvatore et al. (1 more author) (2016) Generically distributed investments on flexible projects and endogenous growth. Economic Theory. pp. 521-558. ISSN 1432-0479
Bambra, Clare L., Munford, Luke, Brown, Heather et al. (8 more authors) (2018) Health for Wealth : Building a Healthier Northern Powerhouse for UK Productivity. Research Report. Northern Health Sciences Alliance , Newcastle.
Bandi, F.M. and Phillips, P.C.B. (2003) Fully nonparametric estimation of scalar diffusion models. Econometrica, 71 (1). pp. 241-283. ISSN 0012-9682
Barbosa, Estela Capelas and Cookson, Richard orcid.org/0000-0003-0052-996X (2019) Multiple inequity in health care: An example from Brazil. Social Science & Medicine. 1 - 8. ISSN 1873-5347
Bask, Mikael and Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2020) Extrapolative Expectations and Macroeconomic Dynamics : Evidence from an Estimated DSGE Model. International Journal of Finance and Economics. ISSN 1099-1158
Basu, Anirban, Jones, Andrew Michael orcid.org/0000-0003-4114-1785 and Dias, Pedro Rosa (2018) Heterogeneity in the Impact of Type of Schooling on Adult Health and Lifestyle. Journal of Health Economics. pp. 1-14. ISSN 0167-6296
Baum, Christopher and Zerilli, Paola Z orcid.org/0000-0001-6589-5552 (2016) Jumps and stochastic volatility in crude oil futures prices using conditional moments of integrated volatility. Energy economics. pp. 175-181. ISSN 0140-9883
Baum, Christopher, Zerilli, Paola Z orcid.org/0000-0001-6589-5552 and Chen, Liyuan (2019) Stochastic volatility, jumps and leverage in energy and stock markets : evidence from high frequency data. Energy economics. 104481. ISSN 0140-9883
Bayrak, Oben and Hey, John Denis orcid.org/0000-0001-6692-1484 (2017) Expected utility theory with imprecise probability perception : explaining preference reversals. Applied Economics Letters. pp. 906-910. ISSN 1466-4291
Beacham, Matthew Ian and Datta, Bipasa orcid.org/0000-0002-8109-0786 (2013) Who becomes the winner? Effects of venture capital on firms' innovative incentives. Working Paper.
Beatty, T.K.M. (2007) Recovering the shadow value of nutrients. American Journal of Agricultural Economics, 89 (1). pp. 52-62. ISSN 0002-9092
Beatty, T.K.M. and LaFrance, J.T. (2005) United States demand for food and nutrition in the Twentieth Century. American Journal of Agricultural Economics, 87 ( 5). pp. 1159-1166. ISSN 0002-9092
Beatty, T.K.M. and Larsen, E.R. (2005) Using Engel curves to estimate bias in the Canadian CPI as a cost of living index. Canadian Journal of Economics, 38 (2). pp. 482-499. ISSN 0008-4085
Benzeval, Michaela, Kumari, Meena and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2016) How do biomarkers and genetics contribute to understanding society? Health Economics. pp. 1219-1222. ISSN 1057-9230
Berloffa, G. and Simmons, P. (2003) Unemployment Risk, Labour Force Participation and Savings. Review of Economic Studies, 70 (3). pp. 521-539. ISSN 0034-6527
Bernhardt, Dan, Koufopoulos, Kostas and Trigilia, Giulio (2020) Is there a paradox of pledgeability? Journal of Financial Economics. pp. 606-611. ISSN 0304-405X
Bhattacharya, A. (2002) Coalitional stability with a credibility constraint. Mathematical Social Sciences, 43 (1). pp. 27-44. ISSN 0165-4896
Bhattacharya, A. (2004) On the Equal Division Core. Social Choice and Welfare, 22 (22). pp. 391-399. ISSN 0176-1714
Bhattacharya, A. and Abderrahmane, Z. (2006) The Core as the Set of Eventually Stable Outcomes: A Note. Games and Economic Behaviour, 54 (1). pp. 25-30. ISSN 0899-8256
Bhattacharya, Anindya orcid.org/0000-0002-2853-8078, Brosi, Victoria Katharina Franziska and Ciardiello, Francesco (2018) THE UNCOVERED SET AND THE CORE: COX'S RESULT REVISITED. Journal of mechanism and institution design. ISSN 2399-844X
Bisceglia, Michele, Cellini, Roberto, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2020) Optimal dynamic volume-based price regulation. International journal of industrial organization. 102675. ISSN 0167-7187
Black, Nicole, Hughes, Robert and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2018) The health care costs of childhood obesity in Australia : an instrumental variables approach. Economics and Human Biology. pp. 1-13. ISSN 1570-677X
Bojke, Chris, Castelli, Adriana orcid.org/0000-0002-2546-419X, Grasic, Katja et al. (2 more authors) (2018) Accounting for the quality of NHS output. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Bojke, Chris, Grasic, Katja and Street, Andrew David orcid.org/0000-0002-2540-0364 (2017) How should hospital reimbursement be refined to supportconcentration of complex care services? Health Economics. ISSN 1057-9230
Bojke, Christopher orcid.org/0000-0003-2601-0314, Castelli, Adriana orcid.org/0000-0002-2546-419X, Grasic, Katja et al. (3 more authors) (2017) Productivity of the English NHS : 2014/15 Update. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Bojke, Laura orcid.org/0000-0001-7921-9109, Soares, Marta Ferreira Oliveira orcid.org/0000-0003-1579-8513, Fox, Aimee orcid.org/0000-0001-6944-7554 et al. (7 more authors) (2019) Developing a reference protocol for expert elicitation in healthcare decision making. Health Technology Assessment Reports. (In Press)
Bone, J. (2003) Simple Arrow-type propositions in the Edgeworth domain. Social Choice and Welfare, 20 (1). pp. 41-48. ISSN 0176-1714
Bone, J.D., Hey, J.D. and Suckling, J.R. (2003) Do People Plan Ahead? Applied Economics Letters, 10 (5). pp. 277-280. ISSN 1350-4851
Bono, J., Guo, L., Raue, B. A. et al. (107 more authors) (2018) First measurement of Ξ− polarization in photoproduction. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics. pp. 280-286. ISSN 0370-2693
Borm, Peter, Funaki, Yukihiko and Ju, Yuan orcid.org/0000-0002-7541-9856 (2020) The Balanced Threat Agreement for Individual Externality Negotiation Problems. Homo Oeconomicus. pp. 1-19. ISSN 2366-6161
Bos, Olivier, Roussillon, Beatrice and Schweinzer, Paul orcid.org/0000-0002-6437-7224 (2016) Agreeing on efficient emissions reduction. Scandinavian Journal of Economics. pp. 1-32. ISSN 1467-9442
Bouwmeester, S., Verkoeijen, P. P.J.L., Aczel, B. et al. (53 more authors) (2017) Registered Replication Report : Rand, Greene, and Nowak (2012). Perspectives on Psychological Science. pp. 527-542. ISSN 1745-6924
Bowden, S. (2002) Ownership responsibilities and corporate governance: The crisis at Rolls Royce, 1968-1971. Business History, 44 (3). pp. 31-62. ISSN 0007-6791
Bowden, S., Foreman-Peck, J. and Richardson, T. (2001) The Post-War Productivity Failure: Insights from Oxford (Cowley). Business History, 43 (3). pp. 54-78. ISSN 0007-6791
Bowden, S., Higgins, D.M. and Price, C. (2006) A Very Peculiar Practice: Underemployment in Britain during the Interwar Years. European Review of Economic History, 10 (1). pp. 89-108. ISSN 1361-4916
Bowden, S. and Tweedale, G. (2003) Poisoned by the Fluff; Compensation and Litigation for Byssinosis in the Lancashire Cotton Industry. Journal of Law and Society, 29 (29 4). pp. 560-79. ISSN 0263-323X
Bravo, F. (2006) Bartlett-type adjustments for empirical discrepancy test statistics. Journal of Statistical Planning and Inference, 136 (3). pp. 537-554. ISSN 0378-3758
Bravo, F. (2005) Blockwise Empirical Entropy Tests for Time Series Regressions. Journal of Time Series Analysis, 26 (2). pp. 185-210. ISSN 0143-9782
Bravo, F. (2002) Blockwise empirical Cressie-Read test statistics for a-mixing processes. Statistics & Probability Letters, 58 (3). pp. 319-325. ISSN 0167-7152
Bravo, F. (2003) Second-Order Power Comparisons for a Class of Nonparametric Likelihood-Based Tests. Biometrika, 90 (4). pp. 881-890. ISSN 0006-3444
Bravo, Francesco orcid.org/0000-0002-8034-334X (2016) Local information theoretic methods for smooth coefficients dynamic panel data models. Journal of Time Series Analysis. pp. 690-708. ISSN 1467-9892
Bravo, Francesco orcid.org/0000-0002-8034-334X (2019) Robust estimation and inference for general varying coefficients models with missing observations. TEST. pp. 1-23. ISSN 1863-8260
Bravo, Francesco orcid.org/0000-0002-8034-334X (2018) Semiparametric quantile regression with random censoring. Annals of the Institute of Statistical Mathematics. pp. 1-31. ISSN 0020-3157
Bravo, Francesco orcid.org/0000-0002-8034-334X (2020) Two-step combined nonparametric likelihood estimation of misspecified semiparametric models. Journal of Nonparametric Statistics. ISSN 1048-5252 (In Press)
Bravo, Francesco orcid.org/0000-0002-8034-334X, Chu, Ba and Jacho-Chávez, David (2016) Semiparametric estimation of moment conditions models with weakly dependent data. Journal of Nonparametric Statistics. pp. 1-29. ISSN 1048-5252
Bravo, Francesco orcid.org/0000-0002-8034-334X, Escanciano, Juan Carlos and van Keilegom, Ingrid (2020) Two-step semiparametric empirical likelihood inference. Annals of Statistics. pp. 1-26. ISSN 0090-5364
Bravo, Francesco orcid.org/0000-0002-8034-334X and Jacho-Chavez, David T. (2016) Semiparametric quasi-likelihood estimation with missing data. Communications in Statistics, Theory and Methods. pp. 1345-1369. ISSN 0361-0926
Bravo, Francesco orcid.org/0000-0002-8034-334X, Jacho-Chávez, D.T. and Chu, Ba (2017) Generalized Empirical Likelihood M Testing for Semiparametric Models with Time Series Data. Econometrics and Statistics. ISSN 2452-3062
Bravo, Francesco orcid.org/0000-0002-8034-334X, Li, Degui orcid.org/0000-0001-6802-308X and Tjostheim, Dag (2020) Robust Nonlinear Regression Estimation in Null Recurrent Time Series. Journal of Econometrics. ISSN 0304-4076 (In Press)
Brekke, Kurt R., Levaggi, Rosella, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2016) Patient Mobility and Health Care Quality when Regions and Patients Differ in Income. Journal of Health Economics. pp. 372-387. ISSN 0167-6296
Brekke, Kurt R., Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2017) Horizontal Mergers and Product Quality. Canadian Journal of Economics. ISSN 0008-4085
Brekke, Kurt R., Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2017) Hospital Mergers with Regulated Prices. Scandinavian Journal of Economics. 597–627. ISSN 1467-9442
Brekke, Kurt, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2017) Can competition reduce quality? Journal of Institutional and Theoretical Economics. ISSN 0932-4569
Brouwer, Werner B. F., Culyer, Anthony J., van Exel, N. Job A. et al. (1 more author) (2008) Welfarism vs. extra-welfarism. Journal of health economics. pp. 325-338. ISSN 0167-6296
Budd, J. and Mumford, K.A. (2004) Trade unions and family friendly work practices in Britain. Industrial and Labor Relations Review, 57 (2). pp. 204-222. ISSN 0019-7939
Burgess, Simon, Propper, Carol, Ratto, Marisa et al. (1 more author) (2017) Incentives in the Public Sector : Evidence from a Government Agency. The Economic Journal. F117-F141. ISSN 0013-0133
Burgess, Simon, Propper, Carol, Ratto, Marisa et al. (1 more author) (2012) Incentives in the public sector : some preliminary evidence from a government agency. Discussion Paper. IZA Discussion Papers . Institute for the Study of Labor (IZA) , Bonn.
Burridge, P. and Taylor, A.M.R. (2006) Additive Outlier Detection via Extreme-Value Theory. Journal of Time Series Analysis, 27 (5). pp. 685-701. ISSN 0143-9782
Burridge, P. and Taylor, A.M.R. (2004) Bootstrapping the HEGY seasonal unit root tests. Journal of Econometrics, 123 (1). pp. 67-87. ISSN 0304-4076
Burridge, P. and Taylor, A.M.R. (2004) On regression-based tests for seasonal unit roots in the presence of periodic heteroscedasticity. Journal of Econometrics, 104 (1). pp. 91-117. ISSN 0304-4076
Burridge, P. and Taylor, A.M.R. (2001) On the properties of regression-based tests for seasonal unit roots in the presence of higher-order serial correlation. Journal of Business and Economic Statistics, 19 (3). pp. 374-379. ISSN 0735-0015
Butcher, Tim, Mumford, Karen Ann orcid.org/0000-0002-0190-5544 and Smith, Peter Nigel orcid.org/0000-0003-2786-7192 (2019) The Gender Earnings Gap in British Workplaces: A Knowledge Exchange Report. Report.
Butcher, Tim, Mumford, Karen Ann orcid.org/0000-0002-0190-5544 and Smith, Peter Nigel orcid.org/0000-0003-2786-7192 (2016) Workplaces, Low pay and the Gender Earnings Gap in Britain : A Co-production with the Low Pay Commission. Research Report. Low Pay Commission (LPC)
Canepa, A. and Godfrey, L.G. (2007) Improvement of the quasi-likelihood ratio test in ARMA models: some results for bootstrap methods. Journal of Time Series Analysis, 28 (3). pp. 434-453. ISSN 0143-9782
Caputo, Michael and Forster, Martin orcid.org/0000-0001-8598-9062 (2016) Optimal plans and timing under additive transformations to rewards. Oxford Economic Papers. pp. 604-626. ISSN 0030-7653
Carbone, E. and Hey, J.D. (2004) The Effect of Unemployment on Consumption: An Experimental Analysis. The Economic Journal, 114 (497). pp. 660-683. ISSN 0013-0133
Carbone, Enrica, Dong, Xueqi and Hey, John Denis orcid.org/0000-0001-6692-1484 (2017) Elicitation of Preferences under Ambiguity. Journal of Risk and Uncertainty. pp. 87-102. ISSN 0895-5646
Carrieri, Vincenzo, Davillas, Apostolos and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2020) A latent class approach to inequity in health using biomarker data. Health Economics. p. 808. ISSN 1057-9230
Carrieri, Vincenzo and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2017) The Income-Health Relationship "Beyond the Mean" : New Evidence from Biomarkers. Health Economics. pp. 937-956. ISSN 1057-9230
Carrieri, Vincenzo and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2018) Inequality of opportunity in health: a decomposition-based approach. Health Economics. pp. 1981-1995. ISSN 1057-9230
Carrieri, Vincenzo and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2018) Intergenerational transmission of nicotine within families: have e-cigarettes influenced passive smoking? Economics and Human Biology. pp. 83-93. ISSN 1570-677X
Carrieri, Vincenzo and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2016) Smoking for the poor and vaping for the rich? : Distributional concerns for novel nicotine delivery systems. Economics Letters. pp. 71-74. ISSN 0165-1765
Carrieri, Vincenzo, Principe, Francesco and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2020) Productivity shocks and labour market outcomes for top earners: evidence from Italian Serie A. Oxford Bulletin of Economics and Statistics. pp. 549-576. ISSN 0305-9049
Castelli, Adriana orcid.org/0000-0002-2546-419X, Chalkley, Martin John orcid.org/0000-0002-1091-8259, Gaughan, James Michael orcid.org/0000-0002-8409-140X et al. (2 more authors) (2019) Productivity of the English National Health Service : 2016/17 update. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Cellini, Roberto, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2018) A dynamic model of quality competition with endogenous prices. Journal of Economic Dynamics and Control. pp. 190-206. ISSN 0165-1889
Chalkidou, K., Culyer, Anthony J, Li, Ryan et al. (1 more author) (2017) We need a NICE for global development spending. F1000research. ISSN 2046-1402
Chalkidou, Kalipso, Claxton, Karl orcid.org/0000-0003-2002-4694, Silverman, Rachel et al. (1 more author) (2020) Value-based tiered pricing for universal health coverage : an idea worth revisiting. Gates open research. p. 16. ISSN 2572-4754 (In Press)
Chalkley, Martin John orcid.org/0000-0002-1091-8259, Cremer, Helmuth and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2016) Editorial of Special issue "Industrial Organisation of the Health Sector and Public Policy". Journal of health economics. pp. 256-257. ISSN 0167-6296
Chalkley, Martin John orcid.org/0000-0002-1091-8259, Mirelman, Andrew orcid.org/0000-0002-7622-0937, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2016) Paying for performance for health care in low- and middle-income countries: : an economic perspective. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Chalkley, Martin John orcid.org/0000-0002-1091-8259, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 (2016) Policies Towards Hospital and GP Competition in Five European Countries. Health Policy. ISSN 0168-8510
Chambers, Marcus, Thornton, Michael Alan orcid.org/0000-0002-4470-809X and McCrorie, Roderick (2018) Continuous Time Modelling Based on an Exact Discrete Time Representation. In: van Montfort, Kees, Oud, Johan and Voelkle, Manuel, (eds.) Continuous Time Modeling in the Behavioral and Related Sciences. Springer , 317 - 357.
Chambers, Dustin, Collins, Courtney and Krause, Alan orcid.org/0000-0002-6334-5124 (2019) How do Federal Regulations affect Consumer Prices? An Analysis of the Regressive Effects of Regulation. Public Choice. pp. 57-90. ISSN 0048-5829
Chambers, Marcus and Thornton, Michael Alan orcid.org/0000-0002-4470-809X (2012) DISCRETE TIME REPRESENTATION OF CONTINUOUS TIME ARMA PROCESSES. Econometric Theory. 219 -238. ISSN 0266-4666
Chattopadhyay, S. (2006) Optimality in Stochastic OLG Models: Theory for Test. Journal of Economic Theory, 131 (1). pp. 282-294. ISSN 0022-0531
Chattopadhyay, S. (2001) The unit root property and optimality: a simple proof. Journal of Mathematical Economics, 36 (2). pp. 151-159. ISSN 0304-4068
Chattopadhyay, Subir Kumar orcid.org/0000-0003-2845-6272 (2018) The Unit Root Property and Optimality with a Continuum of States---Pure Exchange. Journal of Mathematical Economics. ISSN 0304-4068
Chattopadhyay, Subir Kumar orcid.org/0000-0003-2845-6272 and Mitka, Malgorzata (2019) Nash Equilibrium in Tariffs in a Multi-country Trade Model. Journal of Mathematical Economics. pp. 225-242. ISSN 0304-4068
Chen, Bo, Fujishige, Satoru and Yang, Zaifu orcid.org/0000-0002-3265-7109 (2016) Random Decentralized Market Processes for Stable Job Matchings with Competitive Salaries. Journal of Economic Theory. pp. 25-36. ISSN 0022-0531
Chen, Jia orcid.org/0000-0002-2791-2486 (2019) Estimating Latent Group Structure in Time-Varying Coefficient Panel Data Models. Econometrics Journal. ISSN 1368-4221
Chen, Jia orcid.org/0000-0002-2791-2486, Li, Degui orcid.org/0000-0001-6802-308X and Linton, Oliver (2019) A new semiparametric estimation approach for large dynamic covariance matrices with multiple conditioning variables. Journal of Econometrics. pp. 155-176. ISSN 0304-4076
Chen, Jia orcid.org/0000-0002-2791-2486, Li, Degui orcid.org/0000-0001-6802-308X, Linton, Oliver et al. (1 more author) (2016) Semiparametric Dynamic Portfolio Choice with Multiple Conditioning Variables. Journal of Econometrics. pp. 309-318. ISSN 0304-4076
Chen, Jia orcid.org/0000-0002-2791-2486, Li, Degui orcid.org/0000-0001-6802-308X, Linton, Oliver et al. (1 more author) (2018) Semiparametric Ultra-High Dimensional Model Averaging of Nonlinear Dynamic Time Series. Journal of the American Statistical Association. pp. 919-932. ISSN 0162-1459
Chen, Jia orcid.org/0000-0002-2791-2486, Li, Degui orcid.org/0000-0001-6802-308X and Xia, Yingcun (2019) Estimation of a rank-reduced functional-coefficient panel data model with serial correlation. Journal of Multivariate Analysis. pp. 456-479.
Chen, Likai, Wang, Weining and Wu, Wei Biao (2020) Dynamic Semiparametric Factor Model with Structural Breaks. Journal of Business and Economic Statistics. ISSN 0735-0015
Chen, Liyuan, Zerilli, Paola Z orcid.org/0000-0001-6589-5552 and Baum, Christopher (2018) Leverage effects and stochastic volatility in spot oil returns : A Bayesian approach with VaR and CVaR applications. Energy economics. ISSN 0140-9883
Cherchye, Laurens, Demuynck, Thomas, De Rock, Bram et al. (1 more author) (2020) Revealed Preference Analysis with Normal Goods : Application to Cost-of-Living Indices. American Economic Journal: Microeconomics. pp. 165-188. ISSN 1945-7669
Chetter, Ian, Arundel, Catherine Ellen orcid.org/0000-0003-0512-4339, Bell, Kerry Jane orcid.org/0000-0001-5124-138X et al. (18 more authors) (2020) Surgical wounds healing by secondary intention : characterising and quantifying the problem, identifying effective treatments, and assessing the feasibility of conducting a randomised controlled trial of negative pressure wound therapy versus usual care. Programme Grants for Applied Research. ISSN 2050-4322
Chick, Stephen, Forster, Martin orcid.org/0000-0001-8598-9062 and Pertile, Paolo (2017) A Bayesian decision-theoretic model of sequential experimentation with delayed response. JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY. pp. 1439-1462. ISSN 1369-7412
Chiodo, A.J., Chiodo, M., Owyang, M.T. et al. (1 more author) (2004) Subjective Probabilities: Psychological Theories and Economic Applications. The Federal Reserve Bank of St Louis Review, 86 (1). pp. 33-47.
Chirwa, Gowokani Chijere, Suhrcke, Marc Eckart orcid.org/0000-0001-7263-8626 and Moreno Serra, Rodrigo Antonio orcid.org/0000-0002-6619-4560 (2019) The impact of Ghana's national health insurance on psychological distress. Applied Health Economics and Health Policy. pp. 1-11. ISSN 1175-5652
Chirwa, Gowokani, Moreno Serra, Rodrigo orcid.org/0000-0002-6619-4560 and Suhrcke, Marc orcid.org/0000-0001-7263-8626 (2020) Socioeconomic Inequality in Premiums for a Community Based Health Insurance Scheme in Rwanda. Health Policy and Planning. ISSN 1460-2237
Clare, Andrew, Glover, Simon, Seaton, James et al. (2 more authors) (2020) Measuring Sequence Returns Risk. Journal of Retirement. pp. 65-79. ISSN 2326-6899
Clare, Andrew, Seaton, James, Smith, Peter Nigel orcid.org/0000-0003-2786-7192 et al. (1 more author) (2019) Can Sustainable Withdrawal Rates be Enhanced by Trend Following? International Journal of Finance and Economics. ISSN 1099-1158
Clare, Andrew, Seaton, James, Smith, Peter Nigel orcid.org/0000-0003-2786-7192 et al. (1 more author) (2016) The Trend is Our Friend : Risk Parity, Momentum and Trend Following in Global Asset Allocation. Journal of Behavioral and Experimental Finance. pp. 63-80. ISSN 2214-6350
Claxton, K. (2007) OFT, VBP: QED? Health Economics, 16 (6). pp. 545-558. ISSN 1057-9230
Claxton, K. orcid.org/0000-0003-2002-4694, McCabe, C., Tsuchiya, A. et al. (1 more author) (2006) Drugs for exceptionally rare diseases: a commentary on Hughes et al. Discussion Paper. QJM
Claxton, K. and Thompson, K. M. (2001) A Dynamic Programming Approach to Efficient Clinical Trial Design. Journal of Health Economics, 20 (5). pp. 797-822. ISSN 0167-6296
Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Martin, Stephen, Soares, Marta O orcid.org/0000-0003-1579-8513 et al. (6 more authors) (2013) Methods for the estimation of the NICE cost effectiveness threshold. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Claxton, K. orcid.org/0000-0003-2002-4694, McCabe, C. and Tsuchiya, A. (2005) Orphan drugs and the NHS: Should we value rarity. BMJ. pp. 1016-1019. ISSN 1756-1833
Claxton, K. orcid.org/0000-0003-2002-4694, Neumann, P.J., Araki, S. et al. (1 more author) (2001) Bayesian value-of-infomation analysis: an application to a policy model of Alzheimer's disease. International Journal of Technology Assessment in Health Care. pp. 38-55. ISSN 0266-4623
Claxton, K. orcid.org/0000-0003-2002-4694, Palmer, S. orcid.org/0000-0002-7268-2560, Bojke, L. orcid.org/0000-0001-7921-9109 et al. (2 more authors) (2006) Priority setting for research in health care: An application of value of information analysis to glycoprotein IIb/IIIa antagonists in non-ST elevation acute coronary syndrome. International Journal of Technology Assessment in Health Care. pp. 379-387. ISSN 0266-4623
Claxton, K. orcid.org/0000-0003-2002-4694, Palmer, S. orcid.org/0000-0002-7268-2560, Sculpher, M. orcid.org/0000-0003-3746-9913 et al. (1 more author) (2010) Appropriate perspectives for health care decisions. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Claxton, K. orcid.org/0000-0003-2002-4694, Raftery, J., McCabe, C. et al. (1 more author) (2006) Orphan drugs revisited : [comment]. QJM Monthly Journal of the Association of Physicians. discussion 350. pp. 341-5. ISSN 1460-2725
Claxton, K. orcid.org/0000-0003-2002-4694, Sculpher, M. orcid.org/0000-0003-3746-9913 and Carroll, S. (2011) Value-based pricing for pharmaceuticals : its role, specification and prospects in a newly devolved NHS. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York,UK.
Claxton, K. orcid.org/0000-0003-2002-4694, Sculpher, M. orcid.org/0000-0003-3746-9913 and Culyer, A. (2007) Mark versus Luke? Appropriate methods for the evaluation of public health interventions. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Claxton, Karl orcid.org/0000-0003-2002-4694 (2018) Comment : Positive tails and normative dogs. Health Economics. ISSN 1057-9230
Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2016) Pharmaceutical Pricing : Early Access, The Cancer Drugs Fund and the Role of NICE. Discussion Paper. Policy & Research Briefing . Centre for Health Economics, University of York
Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Griffin, Susan orcid.org/0000-0003-2188-8400, Koffijberg, H et al. (1 more author) (2013) Expected health benefits of additional evidence : Principles, methods and applications. A white paper for the Patient-Centered Outcomes Research Institute (PCORI). Research Report. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Palmer, Stephen John orcid.org/0000-0002-7268-2560, Longworth, L. et al. (6 more authors) (2011) Uncertainty, evidence and irrecoverable costs: informing approval, pricing and research decisions for health technologies. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Claxton, Karl orcid.org/0000-0003-2002-4694, Asaria, Miqdad orcid.org/0000-0002-3538-4417, Chansa, Collins et al. (4 more authors) (2019) Accounting for Timing when Assessing Health-Related Policies. Journal of Benefit-Cost Analysis. pp. 73-105. ISSN 2194-5888
Claxton, Karl orcid.org/0000-0003-2002-4694 and Culyer, Anthony J. (2007) Rights, responsibilities and NICE: a rejoinder to Harris. Journal of Medical Ethics. 462. -. ISSN 0306-6800
Claxton, Karl orcid.org/0000-0003-2002-4694, Lomas, James orcid.org/0000-0002-2478-7018 and Martin, Stephen (2018) The impact of NHS expenditure on health outcomes in England : Alternative approaches to identification in all-cause and disease specific models of mortality. Health Economics. ISSN 1057-9230
Claxton, Karl orcid.org/0000-0003-2002-4694, Martin, Steve, Soares, Marta orcid.org/0000-0003-1579-8513 et al. (6 more authors) (2015) Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold. Health technology assessment. pp. 1-542. ISSN 2046-4924
Claxton, Karl orcid.org/0000-0003-2002-4694, Palmer, Stephen orcid.org/0000-0002-7268-2560, Longworth, Louise et al. (5 more authors) (2016) A Comprehensive Algorithm for Approval of Health Technologies With, Without, or Only in Research : The Key Principles for Informing Coverage Decisions. Value in Health. pp. 885-891. ISSN 1524-4733
Conti, Stefano and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2008) Dimensions of Design Space: : A Decision-Theoretic Approach to Optimal Research Design. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Contoyannis, P., Jones, A.M. and Rice, N. (2004) The dynamics of health in the British household panel survey. Journal of Applied Econometrics, 19 (4). pp. 473-503. ISSN 0883-7252
Contoyannis, P. and Rice, N. (2001) The impact of health on wages: Evidence from the British Household Panel Survey. Empirical Economics, 26 (4). pp. 599-622. ISSN 0377-7332
Cookson, Richard Andrew orcid.org/0000-0003-0052-996X, Griffin, Susan orcid.org/0000-0003-2188-8400, Norheim, Ole F et al. (2 more authors) (2020) Distributional Cost-Effectiveness Analysis comes of age. Value in Health. ISSN 1524-4733
Cookson, Richard Andrew orcid.org/0000-0003-0052-996X, Gutacker, Nils orcid.org/0000-0002-2833-0621 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2015) Waiting time prioritisation: : evidence from England. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Cookson, Richard orcid.org/0000-0003-0052-996X, Mirelman, Andrew J orcid.org/0000-0002-7622-0937, Griffin, Susan orcid.org/0000-0003-2188-8400 et al. (5 more authors) (2017) Using Cost-Effectiveness Analysis to Address Health Equity Concerns. Value in Health. pp. 206-212. ISSN 1524-4733
Cooper, N.J., Claxton, K. orcid.org/0000-0003-2002-4694, Chilcott, J. et al. (4 more authors) (2003) Modelling the cost effectiveness of interferon beta and glatiramer acetate in the management of multiple sclerosis. BMJ. pp. 522-525. ISSN 1756-1833
Cornelissen, Thomas orcid.org/0000-0001-8259-5105 (2016) Do social interactions in the workplace lead to productivity spillover among co-workers? IZA World of Labor. pp. 1-10. ISSN 2054-9571
Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Beckmann, Michael and Kräkel, Matthias (2017) Self-Managed Working Time and Employee Effort: Theory and Evidence. Journal of Economic Behavior & Organization. pp. 285-302. ISSN 0167-2681
Cornelissen, Thomas orcid.org/0000-0001-8259-5105 and Dustmann, Christian (2019) Early School Exposure, Test Scores, and Noncognitive Outcomes. American Economic Journal: Economic Policy. pp. 35-63. ISSN 1945-774X
Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Dustmann, Christian, Raute, Anna et al. (1 more author) (2016) From LATE to MTE : Alternative methods for the evaluation of policy interventions. Labour Economics. pp. 47-60. ISSN 0927-5371
Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Dustmann, Christian, Raute, Anna et al. (1 more author) (2018) Who benefits from universal child care? : Estimating marginal returns to early child care attendance. Journal of Political Economy. 2356–2409. ISSN 1537-534X
Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Dustmann, Christian, Raute, Anna et al. (1 more author) (2018) Who benefits from universal child care? Estimating marginal returns to early child care attendance. Discussion Paper. CReaM (Centre for Research & Analysis of Migration)
Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Dustmann, Christian and Schönberg, Uta (2017) Peer effects in the workplace. American Economic Review. pp. 425-456. ISSN 0002-8282
Coroneo, Laura orcid.org/0000-0001-5740-9315, Corradi, Valentina and Santos Monteiro, Paulo orcid.org/0000-0002-2014-4824 (2013) Testing for optimal monetary policy via moment inequalities. Working Paper. Discussion Paper in Economics . Department of Economics and Related Studies, University of York , York.
Coroneo, Laura orcid.org/0000-0001-5740-9315, Corradi, Valentina and Santos Monteiro, Paulo orcid.org/0000-0002-2014-4824 (2018) Testing for Optimal Monetary Policy via Moment Inequalities. Journal of Applied Econometrics. ISSN 0883-7252
Coroneo, Laura orcid.org/0000-0001-5740-9315, Giannone, Domenico and Modugno, Michele (2014) Unspanned macroeconomic factors in the yield curve. Working Paper. Finance and Economics Discussion Series . Federal Reserve Board, Washington, D.C. , Brussels.
Coroneo, Laura orcid.org/0000-0001-5740-9315, Giannone, Domenico and Modugno, Michele (2015) Unspanned macroeconomic factors in the yield curve. Journal of Business and Economic Statistics. pp. 472-485. ISSN 0735-0015
Coroneo, Laura orcid.org/0000-0001-5740-9315, Jackson, Laura and Owyang, Michael (2020) International Stock Comovements with Endogenous Clusters. Journal of Economic Dynamics and Control. 103904. ISSN 0165-1889
Coroneo, Laura orcid.org/0000-0001-5740-9315 and Pastorello, Sergio (2020) European spreads at the interest rate lower bound. Journal of Economic Dynamics and Control. 103979. ISSN 0165-1889
Costa-Gomes, M.A. (2002) A suggested interpretation of some experimental results on preplay communication. Journal of Economic Theory, 104 (1). pp. 104-136. ISSN 0022-0531
Costa-Gomes, M.A. and Crawford, V.P. (2006) Cognition and Behavior in Two-Person Guessing Games: An Experimental Study. American Economic Review, 96 (5). pp. 1737-1768. ISSN 0002-8282
Costa-Gomes, M.A. and Zauner, K.G. (2003) Learning, non-equilibrium beliefs, and non-pecuniary payoffs in an experimental game. Economic Theory, 22 (2). pp. 263-288. ISSN 0938-2259
Costa-Gomes, Miguel, Ju, Yuan orcid.org/0000-0002-7541-9856 and Li, Jiawen (2018) Role-Reversal Consistency : An Experimental Study of the Golden Rule. Economic Inquiry. pp. 685-704. ISSN 1465-7295
Culyer, Anthony J and Bombard, Y (2011) An equity checklist : a framework for health technology assessments. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Culyer, A., McCabe, C., Briggs, A. et al. (5 more authors) (2007) Searching for a threshold, not setting one: The role of the National Institute for Health and Clinical Excellence. Journal of Health Services Research & Policy. pp. 56-58. ISSN 1758-1060
Culyer, A.J. (2001) Economics and ethics in health care. Journal of Medical Ethics. pp. 217-222. ISSN 0306-6800
Culyer, A.J. (2001) Equity - some theory and its policy implications. Journal of Medical Ethics. pp. 275-283. ISSN 0306-6800
Culyer, Anthony J (2017) Ethics, priorities and cancer. Journal of Cancer Policy. pp. 6-11. ISSN 2213-5383
Culyer, Anthony J (2016) HTA – algorithm or process? Comment on 'Expanded HTA: enhancing fairness and legitimacy'. International Journal of Health Policy and Management. 501–505. ISSN 2322-5939
Dagnelie, Olivier, De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663 and Maystadt, Jean-Francois (2018) Violence, Selection and Infant Mortality in Congo. Journal of Health Economics. pp. 153-177. ISSN 0167-6296
Dakin, Helen, Devlin, Nancy, Feng, Yan et al. (3 more authors) (2013) The influence of cost-effectiveness and other factors on NICE decisions. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , york, UK.
Daly, A., Meng, X., Kagawuchi, A. et al. (1 more author) (2006) The gender wage gap in four countries. Economic Record, 82 (257). pp. 165-176. ISSN 0013-0249
Datta, B. and Dixon, H. (2003) Free Internet Access and Regulation: A Note. Journal of Institutional and Theoretical Economics, 159 (3). pp. 594-598. ISSN 0932-4569
Datta, B. and Dixon, H. (2002) Technological Change, Entry and Stock Market Dynamics: An Analysis of Transition in a Monopolistic Economy. American Economic Review, 92 (2). pp. 231-235. ISSN 0002-8282
Datta, Bipasa orcid.org/0000-0002-8109-0786 and LO, Yu-Shan (2013) To block or not to block : Network Competition when Skype enters the mobile market. Working Paper.
Datta, Bipasa orcid.org/0000-0002-8109-0786 and Fraser, Clive D (2017) The Company You Keep: Qualitative Uncertainty in Providing a Club Good. Journal of Public Economic Theory. pp. 763-788. ISSN 1097-3923
Davies, P. J. orcid.org/0000-0002-9003-0603, Park, J., Grawe, H. et al. (77 more authors) (2019) Toward the limit of nuclear binding on the N=Z line : Spectroscopy of Cd 96. Physical Review C. 021302. ISSN 1089-490X
Davillas, Apostolos and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2020) Ex ante inequality of opportunity in health, decomposition and distributional analysis of biomarkers. Journal of Health Economics. 102251. ISSN 0167-6296
Davillas, Apostolos and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2018) Parametric models for biomarkers based on flexible size distributions. Health Economics. pp. 1617-1624. ISSN 1057-9230
Davillas, Apostolos and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2020) Regional inequalities in adiposity in England : distributional analysis of the of the contribution of individual-level characteristics and the small area obesogenic environment. Economics and Human Biology. 100887. ISSN 1570-677X
Dawson, D., Gravelle, H., O'Mahony, M. et al. (9 more authors) (2005) Developing new approaches to measuring NHS outputs and productivity. Research Report. CHE Research Paper (6). Centre for Health Economics , York, UK.
De Feo, Giuseppe and De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663 (2017) Mafia in the ballot box. American Economic Journal: Economic Policy. pp. 134-167. ISSN 1945-774X
De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663, Hodler, Roland, Raschky, Paul A. et al. (1 more author) (2018) Ethnic Favoritism : An Axiom of Politics? Journal of Development Economics. pp. 115-129. ISSN 0304-3878
De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663, Lisi, Domenico, Martorana, Marco et al. (1 more author) (2020) Does higher Institutional Quality improve the Appropriateness of Healthcare Provision? JOURNAL OF PUBLIC ECONOMICS. ISSN 0047-2727 (In Press)
De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663, Sekeris, Petros G. and Spengler, Dominic Emanuel (2018) Can Violence Harm Cooperation? Experimental Evidence. Journal of Environmental Economics and Management. pp. 342-359. ISSN 0095-0696
De Luca, Giacomo Davide orcid.org/0000-0001-5376-9663, Sekeris, Petros G. and Vargas, Juan Fernando (2018) Beyond Divide and Rule : Weak Dictators, Natural Resources and Civil Conflict. European Journal of Political Economy. pp. 205-221. ISSN 0176-2680
Del Boca, Daniela, Monfardini, Chiara and Nicoletti, Cheti orcid.org/0000-0002-7237-2597 (2017) Parental and Child Time Investments and the Cognitive Development of Adolescents. Journal of Labor Economics. pp. 565-608. ISSN 0734-306X
Di Giacomo, Marina, Piacenza, Massimiliano, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2017) Do Public Hospitals Respond to Changes in DRG Price Regulation? The Case of Birth Deliveries in the Italian NHS. Health Economics. pp. 23-37. ISSN 1057-9230
Diasakos, Theodoros and Koufopoulos, Konstantinos (2018) (Neutrally) Optimal Mechanism under Adverse Selection : The canonical insurance problem. Games and Economic Behaviour. 159–186. ISSN 0899-8256
Domenech, J. (2007) Working hours in the European periphery: The length of the working day in Spain, 1885 -1920. Explorations in Economic History, 44 (3). pp. 469-486. ISSN 0014-4983
Dusheiko, Mark Alan, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Martin, Stephen et al. (2 more authors) (2011) Does Better Disease Management in Primary Care Reduce Hospital Costs? Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Espinoza, Manuel Antonio, Manca, Andrea orcid.org/0000-0001-8342-8421, Claxton, Karl orcid.org/0000-0003-2002-4694 et al. (1 more author) (2017) Social value and individual choice: : The value of a choice-based decision-making process in a collectively funded health system. Health Economics. pp. 1-13. ISSN 1057-9230
Facchini, François, Melki, Mickael and Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 (2016) Labour Costs and the Size of Government. Oxford Bulletin of Economics and Statistics. pp. 251-275. ISSN 0305-9049
Fedotov, G. V., Skorodumina, Iu A., Burkert, V. D. et al. (48 more authors) (2018) Measurements of the γvp→p′π+π- cross section with the CLAS detector for 0.4 GeV2<Q2<1.0 GeV2 and 1.3 GeV<W<1.825 GeV. Physical Review C. 025203. ISSN 1089-490X
Feng, Yan, Kristensen, Soren Rud, Lorgelly, Paula et al. (4 more authors) (2019) Pay for Performance for Specialised Care in England: Strengths and Weaknesses. Health Policy. ISSN 1872-6054
Fenwick, E. (2006) An iterative Bayesian approach to health technology assessment: application to a policy of preoperative optimization for patients undergoing major elective surgery. Medical Decision Making, 26 (5). pp. 480-496. ISSN 0272-989X
Fenwick, E., Claxton, K. and Sculpher, M. (2005) The value of implementation and the value of information: combined and uneven development. Research Report. CHE Research Paper (5). Centre for Health Economics , York, UK.
Flavina, T.J. and Wickens, M.R. (2002) Macroeconomic Influences on Optimal Asset Allocation. Review of Financial Economics, 12 (2). pp. 207-231. ISSN 1058-3300
Flückiger, Matthias orcid.org/0000-0002-2242-0220 and Ludwig, Markus (2020) Malaria Suitability, Urbanization and Subnational Development in Sub-Saharan Africa. Journal of Urban Economics. ISSN 0094-1190
Forster, M. (2001) The Meaning of Death: Some Simulations of a Model of Healthy and Unhealthy Consumption. Journal of Health Economics, 20 (4). pp. 613-638. ISSN 0167-6296
Forster, M. and Jones, A.M. (2002) The role of tobacco taxes in starting and quitting smoking: duration analysis of British data. Journal of the Royal Statistical Soceity: Series A (Statistics in Society), 164 (3). pp. 517-547. ISSN 0964-1998
Fox, S. (2016) Results from the centers for disease control and prevention's predict the 2013-2014 Influenza Season Challenge. BMC Infectious Diseases. 357 (2016). ISSN 1471-2334
Fracasso, A. and Gulcin Ozkan, F. (2004) Fiscal policy, labour market structure and macroeconomic performance. Economics Letters, 83 (2). pp. 205-210. ISSN 0165-1765
French, Eric B, McCauley, Jeremy, Aragon, Maria et al. (25 more authors) (2017) End-Of-Life Medical Spending In Last Twelve Months Of Life Is Lower Than Previously Reported. Health affairs (Project Hope). pp. 1211-1217. ISSN 1544-5208
Ganelli, Giovanni and Rankin, Neil orcid.org/0000-0002-9140-2376 (2020) Fiscal Deficits as a Source of Boom and Bust under a Common Currency. Journal of International Money and Finance. 102149. ISSN 0261-5606
Garino, G. and Simmons, P. (2006) Costly State Verification with Varying Risk Preferences and Liability. Journal of Economic Surveys, 20 (1). pp. 71-110. ISSN 0950-0804
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2016) Delayed discharges and hospital type: : evidence from the English NHS. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gaughan, J. orcid.org/0000-0002-8409-140X, Gravelle, H. orcid.org/0000-0002-7753-4233, Santos, R. orcid.org/0000-0001-7953-1960 et al. (1 more author) (2013) Long term care provision, hospital length of stay and discharge destination for hip fracture and stroke patients : ESCHRU Report to Department of Health, March 2013. Working Paper. CHE Research Paper . Centre for Health Economics , York, UK.
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Santos, Rita orcid.org/0000-0001-7953-1960 et al. (1 more author) (2017) Long-term care provision, hospital bed blocking, and discharge destination for hip fracture and stroke patients. International Journal of Health Economics and Management. ISSN 2199-9023
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2017) Delayed discharges and hospital type : Evidence from the English NHS. Fiscal Studies. pp. 495-519.
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2014) Testing the bed-blocking hypothesis : does higher supply of nursing and care homes reduce delayed hospital discharges? Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gutacker, Nils orcid.org/0000-0002-2833-0621, Grasic, Katja et al. (3 more authors) (2019) Paying for Efficiency : Incentivising same-day discharge in the English NHS. Journal of health economics. ISSN 0167-6296
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Gutacker, Nils orcid.org/0000-0002-2833-0621, Grasic, Katja et al. (3 more authors) (2018) Paying for efficiency: Incentivising same-day discharges in the English NHS. Report. CHE Research Paper . Centre for Health Economics, University of York , York.
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Kasteridis, Panagiotis orcid.org/0000-0003-1623-4293, Mason, Anne orcid.org/0000-0002-5823-3064 et al. (1 more author) (2020) Why are there long waits at English Emergency Departments? European Journal of Health Economics. 209–218. ISSN 1618-7601
Gaughan, James Michael orcid.org/0000-0002-8409-140X, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 et al. (1 more author) (2020) Do Small Hospitals have Lower Quality? Evidence from the English NHS. Social Science & Medicine. 113500. pp. 1-9. ISSN 1873-5347
Gershkov, Alex, Li, Jianpei and Schweinzer, Paul orcid.org/0000-0002-6437-7224 (2014) How to share it out : the value of information in teams. Working Paper. DERS Discussion Papers in Economics . Department of Economics and Related Studies, University of York , York.
Ginnelly, L., Claxton, K., Sculpher, M.J. et al. (1 more author) (2005) Using value of information analysis to inform publicly funded research priorities. Applied Health Economics and Health Policy, 4 (1). pp. 37-46. ISSN 1175-5652
Giraitis, L. and Robinson, P.M. (2003) Edgeworth expansions for semiparametric Whittle estimation of long memory. Annals of Statistics, 31 (4). pp. 1325-1375. ISSN 0090-5364
Giraitis, L. and Surgailis, D. (2002) ARCH-type bilinear models with double long memory. Stochastic Processes and their applications, 100. pp. 275-300.
Glanville, Julie, Eyers, John, Jones, Andrew Michael orcid.org/0000-0003-4114-1785 et al. (5 more authors) (2017) Identifying quasi-experimental studies to inform systematic reviews. Journal of Clinical Epidemiology. ISSN 0895-4356
Godfrey, L.G. (2005) Controlling the Overall Significance Level of a Battery of Least Squares Diagnostic Tests. Oxford Bulletin of Economics and Statistics, 67 (2). pp. 263-279. ISSN 0305-9049
Godfrey, L.G., Orme, C. and Santos Silva, J.M.C. (2006) Simulation-based tests for heteroskedasticity in linear regression models: some further results. Econometrics Journal, 9 (1). pp. 76-97. ISSN 1368-4221
Godfrey, L.G. and Orme, C.D. (2002) Using bootstrap methods to obtain nonnormality robust Chow prediction tests. Economics Letters, 76 (3). pp. 429-436. ISSN 0165-1765
Godfrey, L.G. (2007) Alternative approaches to implementing Lagrange multiplier tests for serial correlation in dynamic regression models. Computation Statistics & Data Analysis. pp. 3282-3295.
Golinski, Adam orcid.org/0000-0001-8603-1171 and Spencer, Peter orcid.org/0000-0002-5595-5360 (2019) Estimating the term structure with linear regressions: Getting to the roots of the problem. Journal of Financial Econometrics. pp. 1-25. ISSN 1479-8409
Golinski, Adam orcid.org/0000-0001-8603-1171 (2021) Monetary Policy at the Zero Lower Bound: Information in the Federal Reserve's Balance Sheet. European economic review. 103613. ISSN 0014-2921
Golinski, Adam orcid.org/0000-0001-8603-1171 and Spencer, Peter orcid.org/0000-0002-5595-5360 (2017) The advantages of using excess returns to model the term structure. Journal of Financial Economics. pp. 163-181. ISSN 0304-405X
Golinski, Adam orcid.org/0000-0001-8603-1171 and Zaffaroni, Paolo (2016) Long Memory Affine Term Structure Models. Journal of Econometrics. pp. 33-56. ISSN 0304-4076
Grasic, Katja (2016) Agency staff in the NHS. In: Curtis, Lesley and Burns, Amanda, (eds.) Unit Costs of Health and Social Care 2016. Personal Social Services Research Unit, University of Kent at Canterbury , p. 7.
Gravelle, H. orcid.org/0000-0002-7753-4233, Santos, R. orcid.org/0000-0001-7953-1960, Siciliani, L. orcid.org/0000-0003-1739-7289 et al. (1 more author) (2012) Hospital quality competition under fixed prices. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gravelle, Hugh orcid.org/0000-0002-7753-4233, Moscelli, Giuseppe, Santos, Rita orcid.org/0000-0001-7953-1960 et al. (1 more author) (2014) Patient choice and the effects of hospital market structure on mortality for AMI, hip fracture and stroke patients. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Moscelli, Guiseppe (2020) Hospital competition and quality for non-emergency patients in the English NHS. RAND JOURNAL OF ECONOMICS. ISSN 0741-6261
Gravelle, H. orcid.org/0000-0002-7753-4233, Jacobs, R. orcid.org/0000-0001-5225-6321, Martin, S. et al. (2 more authors) (2005) The Effects on Waiting Times of Expanding Provider Choice: evidence from a policy experiment. Research Report. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Dusheiko, Mark Alan, Martin, Stephen et al. (3 more authors) (2011) Modelling Individual Patient Hospital Expenditure for General Practice Budgets. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Santos, Rita orcid.org/0000-0001-7953-1960 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2014) Does a hospital's quality depend on the quality of other hospitals? : A spatial econometrics approach. Regional Science and Urban Economics. pp. 203-216. ISSN 0166-0462
Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Santos, Rita orcid.org/0000-0001-7953-1960 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2013) Does a hospital's quality depend on the quality of other Hospitals? A spatial econometrics approach to investigating hospital quality competition. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Greene, William, Harris, Mark, Knott, Rachel et al. (1 more author) (2020) Specification and testing of hierarchical ordered response models with anchoring vignettes. Journal of the Royal Statistical Society: Series A (Statistics in Society). ISSN 1467-985X
Greenhalgh, R.M., Brown, L.C., Epstein, D. et al. (4 more authors) (2005) Endovascular aneurysm repair and outcome in patients unfit for open repair of abdominal aortic aneurysm (EVAR trial 2): randomised controlled trial. The Lancet, 365 (9478). pp. 2187-2192. ISSN 0140-6736
Greenhalgh, R.M., Brown, L.C., Epstein, D. et al. (4 more authors) (2005) Endovascular aneurysm repair versus open repair in patients with abdominal aortic aneurysm (EVAR trial 1): randomised controlled trial. The Lancet, 365 (9478). pp. 2179-2186. ISSN 0140-6736
Grieve, Richard, Abrams, Keith, Claxton, Karl Philip orcid.org/0000-0003-2002-4694 et al. (9 more authors) (2016) Cancer Drugs Fund requires further reform : Reliance on "real world" observational data undermines evidence base for clinical practice. British medical journal. i5090. ISSN 0959-535X
Guidolin, M. and Ono, S. (2006) Are the Dynamic Linkages Between the Macroeconomy and Asset Prices Time-Varying? Journal of Economics and Business, 58 (5-6). pp. 480-518. ISSN 0148-6195
Gutacker, Nils orcid.org/0000-0002-2833-0621, Bojke, Chris orcid.org/0000-0003-2601-0314, Daidone, Silvio et al. (3 more authors) (2011) Truly Inefficient or Providing Better Quality of Care? : Analysing the Relationship Between Risk-Adjusted Hospital Costs and Patients' Health Outcomes. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gutacker, Nils orcid.org/0000-0002-2833-0621, Bojke, Chris orcid.org/0000-0003-2601-0314, Daidone, Silvio et al. (2 more authors) (2012) Analysing hospital variation in health outcome at the level of EQ-5D dimensions. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Gutacker, Nils orcid.org/0000-0002-2833-0621, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Cookson, Richard orcid.org/0000-0003-0052-996X (2016) Waiting time prioritisation : Evidence from England. Social Science & Medicine. pp. 140-151. ISSN 1873-5347
Gutacker, Nils orcid.org/0000-0002-2833-0621, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Moscelli, Giuseppe et al. (1 more author) (2016) Choice of hospital : which type of quality matters? Journal of Health Economics. pp. 230-246. ISSN 0167-6296
Gutacker, Nils orcid.org/0000-0002-2833-0621, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Moscelli, Giuseppe et al. (1 more author) (2015) Do patients choose hospitals that improve their health? Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Halunga, Andreea, Orme, Chris and Yamagata, Takashi orcid.org/0000-0001-5949-8833 (2017) A Heteroskedasticity Robust Breusch-Pagan Test for Contemporaneous Correlation in Dynamic Panel Data Models. Journal of Econometrics. pp. 209-230. ISSN 0304-4076
Han, C. and Phillips, P.C.B. (2005) GMM with many moment conditions. Econometrica, 74 (1). pp. 147-192. ISSN 0012-9682
Harris, D. and Poskitt, D.S. (2004) Determination of Cointegrating Rank in Partially Nonstationary Processess via a Generalised Von-Neuman Criterion. The Econometrics Journal, 7 (1). pp. 191-217. ISSN 1368-4221
Harris, Mark, Knott, Rachel, Lorgelly, Paula et al. (1 more author) (2020) Using externally collected vignettes to account for reporting heterogeneity in survey self-assessment. Economics Letters. 109325. ISSN 0165-1765
Hartley, K. and Sandler, T. (2003) The Future of the Defence Firm. Kyklos, 56 (3). pp. 361-380. ISSN 0023-5962
Hauck, K., Morton, Alec, Chalkidou, K. et al. (9 more authors) (2019) How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Social science and medicine. pp. 141-149. ISSN 1873-5347
Hauck, Katharina, Martin, Stephen and Smith, Peter Charles orcid.org/0000-0003-0058-7588 (2016) Priorities for action on the social determinants of health : Empirical evidence on the strongest associations with life expectancy in 54 low-income countries, 1990–2012. Social science and medicine. pp. 88-98. ISSN 1873-5347
Henderson, J., Hackman, G, Ruotsalainen, P. et al. (25 more authors) (2018) Testing microscopically derived descriptions of nuclear collectivity : Coulomb excitation of 22Mg. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics. pp. 468-473. ISSN 0370-2693
Hey, J.D. and Knoll, J.A. (2007) How far ahead do people plan? Economics Letters, 96 (1). pp. 8-13. ISSN 0165-1765
Hey, J.D. and Morone, A. (2004) Do markets drive out lemmings - or vice versa? Economica, 71 (284). pp. 637-659. ISSN 0013-0427
Hey, John Denis orcid.org/0000-0001-6692-1484 and Di Cagno, Daniela (2016) Does Money Impede Convergence? Experimental Economics. pp. 595-612. ISSN 1386-4157
Hey, John Denis orcid.org/0000-0001-6692-1484, Permana, Yudistira Hendra and Rochanahastin, Nuttaporn (2017) When and how to satisfice: an experimental investigation. THEORY AND DECISION. ISSN 0040-5833
Higgin, D.M. and Toms, T. (2006) Financial institutions and corporate strategy: David Alliance and the transformation of British textiles, c.1950-c.1990. Business History, 48 (4). pp. 453-478. ISSN 0007-6791
Horvath, Michal orcid.org/0000-0002-4902-6637 (2018) Children of the crisis : Fiscal councils in Portugal, Spain and Ireland. In: Beetsma, Roel and Debrun, Xavier, (eds.) Independent Fiscal Councils. Centre for Economic Policy Research , London , pp. 125-133.
Horvath, Michal orcid.org/0000-0002-4902-6637 (2018) EU Independent Fiscal Institutions : An Assessment of Potential Effectiveness. JCMS-JOURNAL OF COMMON MARKET STUDIES. pp. 504-519. ISSN 0021-9886
Horvath, Michal orcid.org/0000-0002-4902-6637 and Belgibayeva, Adiya (2019) Real Rigidities and Optimal Stabilization at the Zero Lower Bound in New Keynesian Economies. Macroeconomic Dynamics. pp. 1371-1400. ISSN 1365-1005
Horvath, Michal orcid.org/0000-0002-4902-6637, Siebertova, Zuzana, Senaj, Matus et al. (2 more authors) (2019) The end of the flat tax experiment in Slovakia : An evaluation using behavioural microsimulation in a dynamic macroeconomic framework. Economic Modelling. pp. 171-184.
Hou, Aijun, Wang, Weining, Chen, Cathy et al. (1 more author) (2020) Pricing cryptocurrency options. Journal of Financial Econometrics. pp. 250-279. ISSN 1479-8409
Howdon, Daniel David Henry and Rice, Nigel orcid.org/0000-0003-0312-823X (2018) Health care expenditures, age, proximity to death and morbidity : implications for an ageing population. Journal of health economics. pp. 60-74. ISSN 0167-6296
Howdon, Daniel David Henry and Rice, Nigel orcid.org/0000-0003-0312-823X (2015) Health care expenditures, age, proximity to death and morbidity: implications for an ageing population. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Howdon, Daniel and Jones, Andrew M. orcid.org/0000-0003-4114-1785 (2015) A discrete latent factor model of smoking, cancer and mortality. Economics and Human Biology. pp. 57-73. ISSN 1570-677X
Iacone, Fabrizio orcid.org/0000-0002-2681-9036, Nielsen, Morten and Taylor, Robert (2021) Semiparametric Tests for the Order of Integration in the Possible Presence of Level Breaks. Journal of Business and Economic Statistics. ISSN 0735-0015 (In Press)
Iglesias, C.P. and Claxton, K. (2006) Comprehensive decision analytic model and Bayesian value of information analysis: pentoxifylline in the treatment of chronic venous leg ulcers. Pharmacoeconomics, 24 (5). pp. 465-478. ISSN 1170-7690
Isaranuwatchai, Wanrudee, Teerawattananon, Yot, Archer, Rachel A. et al. (16 more authors) (2020) Prevention of non-communicable disease : Best buys, wasted buys, and contestable buys. The BMJ. m141. ISSN 0959-8146
Ismihan, M. and Gulcin Ozkan, F. (2004) Does central bank independence lower inflation? Economics Letters, 84 (3). pp. 305-309. ISSN 0165-1765
Jackson, W A orcid.org/0000-0001-5194-7307 (1996) Cultural materialism and institutional economics. Review of Social Economy. pp. 221-244. ISSN 0034-6764
Jackson, W A orcid.org/0000-0001-5194-7307 (1992) The employment distribution and the creation of financial dependence. Journal of post keynesian economics. pp. 267-280. ISSN 0160-3477
Jackson, W.A. (2005) Capabilities, culture and social structure. Review of Social Economy, 63 (1). pp. 101-124. ISSN 0034-6764
Jackson, W.A. (2002) Functional explanation in economics: A qualified defence. Journal of Economic Methodology, 9 (2). pp. 169-189. ISSN 1350-178X
Jackson, W.A. (2006) Post-Fordism and population ageing. International Review of Applied Economics, 20 (4). pp. 449-467. ISSN 0269-2171
Jackson, W.A. (2003) Social structure in economic theory. Journal of Economic Issues, 37 (3). pp. 727-746. ISSN 0021-3624
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2007) Economic flexibility : a structural analysis. In: Ioannides, Stavros and Nielsen, Klaus, (eds.) Economics and the Social Sciences. Edward Elgar , Cheltenham , pp. 215-232.
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2013) The desocialising of economic theory. International Journal of Social Economics. pp. 809-825. ISSN 0306-8293
Jackson, W A orcid.org/0000-0001-5194-7307 (1999) Basic income and the right to work: a Keynesian approach. Journal of post keynesian economics. pp. 639-662. ISSN 0160-3477
Jackson, W A orcid.org/0000-0001-5194-7307 (2003) Social structure in economic theory. Journal of Economic Issues. pp. 727-746. ISSN 0021-3624
Jackson, W. orcid.org/0000-0001-5194-7307 (2009) Retirement policies and the life cycle : current trends and future prospects. Review of Political Economy. pp. 515-536. ISSN 0953-8259
Jackson, W.A. orcid.org/0000-0001-5194-7307 (2005) Capabilities, culture and social structure. Review of Social Economy. pp. 101-124. ISSN 0034-6764
Jackson, W.A. orcid.org/0000-0001-5194-7307 (2002) Functional explanation in economics: A qualified defence. Journal of Economic Methodology. pp. 169-189. ISSN 1350-178X
Jackson, W.A. orcid.org/0000-0001-5194-7307 (2006) Post-Fordism and population ageing. International Review of Applied Economics. pp. 449-467. ISSN 0269-2171
Jackson, William A. orcid.org/0000-0001-5194-7307 (2007) On the social structure of markets. CAMBRIDGE JOURNAL OF ECONOMICS. pp. 235-253. ISSN 0309-166X
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2019) Active and passive trading relations. Journal of Economic Issues. pp. 98-114. ISSN 0021-3624
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2001) Age, health and medical expenditure. In: Davis, John B., (ed.) The Social Economics of Health Care. Advances in Social Economics . Routledge , London , pp. 195-218.
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2016) Capabilities, culture and social structure. In: Dolfsma, W., Figart, D.M., McMaster, R., Mutari, E. and White, M.D., (eds.) Social Economics. Critical Concepts in Economics . Routledge , London .
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1993) Culture, society and economic theory. Review of Political Economy. pp. 453-469. ISSN 0953-8259
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1991) Dependence and population ageing. In: Hutton, John, Hutton, Sandra, Pinch, Trevor and Shiell, Alan, (eds.) Dependency to Enterprise. Routledge , London , pp. 132-140.
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2015) Distributive justice with and without culture. Journal of Cultural Economy. pp. 673-688. ISSN 1753-0369
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1999) Dualism, duality and the complexity of economic institutions. International Journal of Social Economics. pp. 545-558. ISSN 0306-8293
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2014) External capabilities and the limits to social policy. In: Otto, H.-U. and Ziegler, H., (eds.) Critical Social Policy and the Capability Approach. Barbara Budrich , Opladen , pp. 125-142.
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2012) Factor shares, business cycles and the distributive loop. Metroeconomica. pp. 493-511. ISSN 1467-999X
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1995) Naturalism in economics. Journal of Economic Issues. pp. 761-780. ISSN 0021-3624
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1991) On the treatment of population ageing in economic theory. Ageing and Society. pp. 59-68. ISSN 0144-686X
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1995) Population growth : a comparison of evolutionary views. International Journal of Social Economics. pp. 3-16. ISSN 0306-8293
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (2018) Strategic pluralism and monism in heterodox economics. Review of Radical Political Economics. pp. 237-251. ISSN 0486-6134
Jackson, William Anthony orcid.org/0000-0001-5194-7307 (1994) The economics of ageing and the political economy of old age. International Review of Applied Economics. pp. 31-45. ISSN 0269-2171
Jacob, Nikita orcid.org/0000-0001-5546-4521, Munford, Luke, Rice, Nigel orcid.org/0000-0003-0312-823X et al. (1 more author) (2019) The disutility of commuting? The effect of gender and local labor markets. Regional Science and Urban Economics. pp. 264-275. ISSN 0166-0462 (In Press)
Jacobs, R., Martin, S., Goddard, M. et al. (2 more authors) (2006) Exploring the determinants of NHS performance ratings: lessons for performance assessment systems. Journal of Health Services Research and Policy, 11 (4). pp. 211-217. ISSN 1355-8196
Jacobs, R. orcid.org/0000-0001-5225-6321, Chalkley, M. orcid.org/0000-0002-1091-8259, Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220 et al. (3 more authors) (2018) Funding approaches for mental health services : Is there still a role for clustering? BJPsych Advances. ISSN 2056-4686
Jacobs, Rowena orcid.org/0000-0001-5225-6321, Aylott, Lauren, Dare, Ceri et al. (12 more authors) (2020) The association between primary care quality and healthcare utilisation, costs and outcomes for people with serious mental illness: retrospective observational study. Health Services and Delivery Research. HS&DR 13/54/40. ISSN 2050-4357
Janiak, Alexandre and Santos Monteiro, Paulo orcid.org/0000-0002-2014-4824 (2016) Towards a quantitative theory of automatic stabilizers : The role of demographics. Journal of Monetary Economics. pp. 35-49. ISSN 0304-3932
Jobjornsson, Sebastian, Forster, Martin orcid.org/0000-0001-8598-9062, Pertile, Paolo et al. (1 more author) (2016) Late-Stage Pharmaceutical R&D and Pricing Policies under Two-Stage Regulation. Journal of health economics. pp. 298-311. ISSN 0167-6296
Johannesen, Kasper M, Claxton, Karl orcid.org/0000-0003-2002-4694, Sculpher, Mark J orcid.org/0000-0003-3746-9913 et al. (1 more author) (2017) How to design the cost-effectiveness appraisal process of new healthcare technologies to maximise population health : A conceptual framework. Health Economics. ISSN 1057-9230
Jones, A.M., Koolman, X. and Rice, N. (2006) Health-related non-response in the BHPS and ECHP: using inverse probability weighted estimators in nonlinear models. Journal of the Royal Statistical Soceity: Series A (Statistics in Society), 169 (3). pp. 543-569. ISSN 0964-1998
Jones, A.M. and Labeaga, J.M. (2002) Individual Heterogeneity and Censoring in Panel Data Estimates of Tobacco Expenditure. Journal of Applied Econometrics, 18 (2). pp. 157-177. ISSN 0883-7252
Jones, A.M. and López, A.N. (2004) Measurement and explanation of socioeconomic inequality in health with longitudinal data. Health Economics, 13 (10). pp. 1015-1030. ISSN 1057-9230
Jones, Andrew M orcid.org/0000-0003-4114-1785, Lomas, James orcid.org/0000-0002-2478-7018 and Rice, Nigel orcid.org/0000-0003-0312-823X (2015) Healthcare Cost Regressions : Going Beyond the Mean to Estimate the Full Distribution. Health Economics. pp. 1192-1212. ISSN 1057-9230
Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2017) Data visualization and health econometrics. Foundations and Trends in Econometrics.
Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2019) Equity, opportunity and health. Empirica. pp. 413-421. ISSN 1573-6911
Jones, Andrew Michael orcid.org/0000-0003-4114-1785 and Bijwaard, Govert (2019) An IPW estimator for mediation effects in hazard models: with an application to schooling, cognitive ability and mortality. Empirical Economics. pp. 129-175. ISSN 0377-7332
Jones, Andrew Michael orcid.org/0000-0003-4114-1785, Laporte, Audrey, Zucchelli, Eugenio et al. (1 more author) (2019) Dynamic panel data estimation of an integrated Grossman and Becker-Murphy model of health and addiction. Empirical Economics. pp. 703-733. ISSN 0377-7332
Jones, Andrew Michael orcid.org/0000-0003-4114-1785, Lomas, James orcid.org/0000-0002-2478-7018, Moore, Peter et al. (1 more author) (2015) A quasi-Monte Carlo comparison of developments in parametric and semi-parametric regression methods for heavy-tailed and non-normal data : with an application to healthcare costs. Journal of the Royal Statistical Society: Series A (Statistics in Society). pp. 951-974. ISSN 1467-985X
Jones, Andrew Michael orcid.org/0000-0003-4114-1785, Rice, Nigel orcid.org/0000-0003-0312-823X and Zantomio, Francesca (2019) Acute health shocks and labour market outcomes : evidence from the post crash era. Economics and Human Biology. ISSN 1570-677X
Ju, Y., Borm, P. and Ruys, P. (2007) The consensus value: a new solution concept for cooperative games. Social Choice and Welfare, 28 (4). pp. 685-703. ISSN 0176-1714
Ju, Yuan orcid.org/0000-0002-7541-9856 and Borm, Peter (2008) Externalities and compensation: Primeval games and solutions. Journal of Mathematical Economics. pp. 367-382. ISSN 0304-4068
Ju, Yuan orcid.org/0000-0002-7541-9856 (2012) Reject and Renegotiate : the Shapley value in multilateral bargaining. Journal of Mathematical Economics. pp. 431-436. ISSN 0304-4068
Justo, N., Espinoza, M., Ratto, B. et al. (7 more authors) (2019) Real-World Evidence in Healthcare Decision Making : Global trends and case studies from Latin America. Value in Health. pp. 739-749. ISSN 1524-4733
Juvenal, Luciana and Santos Monteiro, Paulo orcid.org/0000-0002-2014-4824 (2017) Trade and Synchronization in a Multi-Country Economy. European economic review. 385–415. ISSN 0014-2921
Kasteridis, Panagiotis orcid.org/0000-0003-1623-4293, Ride, Jemimah orcid.org/0000-0002-1820-5499, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (12 more authors) (2019) Association Between Antipsychotic Polypharmacy and Outcomes for People With Serious Mental Illness in England. Psychiatric services (Washington, D.C.). appips201800504. ISSN 1075-2730
Kasteridis, Panos orcid.org/0000-0003-1623-4293, Goddard, Maria Karen orcid.org/0000-0002-1517-7461, Jacobs, Rowena orcid.org/0000-0001-5225-6321 et al. (2 more authors) (2015) The impact of primary care quality on inpatient length of stay for people with dementia : An analysis by discharge destination. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Kennedy, A.D.M., Sculpher, M.J., Coulter, A. et al. (9 more authors) (2002) Effects of decision aids for menorrhagia on treatment choices, health outcomes, and costs: a randomized controlled trial. Journal of American Medical Association, 288 (21). pp. 2701-2708. ISSN 0098-7484
Kifmann, Mathias and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2016) Average-Cost Pricing and Dynamic Selection Incentives in the Hospital Sector. Health Economics. pp. 1-17. ISSN 1057-9230
Klien, Michael, Melki, Mickael and Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 (2020) Voter turnout and intergenerational redistribution. Journal of comparative economics. pp. 1-23. ISSN 0147-5967
Koufopoulos, Konstantinos and Biswas, Swarnava (2019) Bank competition and financing efficiency under asymmetric information. Journal of corporate finance. ISSN 1872-6313
Koufopoulos, Konstantinos and Boukouras, Aristotelis (2017) Efficient allocations in economies with asymmetric information when the realized frequency of types is common knowledge. Economic theory. 75–98. ISSN 0938-2259
Koufopoulos, Konstantinos and Kozhan, Roman (2016) Optimal insurance under adverse selection and ambiguity aversion. Economic theory. 659–687. ISSN 0938-2259
Koufopoulos, Konstantinos and Kozhan, Roman (2014) Welfare-improving ambiguity in insurance markets with asymmetric information. Journal of Economic Theory. 551 - 560. ISSN 0022-0531
Koufopoulos, Konstantinos, Kozhan, Roman and Trigilia, Giulio (2019) Optimal Security Design under Asymmetric Information and Profit Manipulation. Review of Corporate Finance Studies. 146–173.
Krause, A. (2006) Redistributive taxation and public education. Journal of Public Economic Theory, 8 (5). pp. 807-819. ISSN 1097-3923
Krause, Alan orcid.org/0000-0002-6334-5124 (2017) On Redistributive Taxation under the Threat of High-Skill Emigration. Social Choice and Welfare. 845–856. ISSN 0176-1714
Krause, Alan orcid.org/0000-0002-6334-5124 and Guo, Jang-Ting (2017) Changing Social Preferences and Optimal Redistributive Taxation. Oxford Economic Papers. pp. 73-92. ISSN 0030-7653
Kreutzberg, A. and Jacobs, R. orcid.org/0000-0001-5225-6321 (2020) Improving access to services for psychotic patients: does implementing a waiting time target make a diference. European Journal of Health Economics. ISSN 1618-7601
Kringos, Dionne, Nuti, Sabina, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (4 more authors) (2019) Re-thinking performance assessment for primary care : opinion of the Expert Panel on Effective Ways of Investing in Health. European Journal of General Practice. pp. 55-61. ISSN 1751-1402
Kristensen, Soren, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Sutton, Matt (2016) Optimal Price-Setting in Pay for Performance Schemes in Health Care. Journal of Economic Behavior & Organization. pp. 57-77. ISSN 0167-2681
Kronenberg, Christoph orcid.org/0000-0002-6659-306X, Jacobs, Rowena orcid.org/0000-0001-5225-6321 and Zucchelli, Eugenio (2017) The impact of the UK National Minimum Wage on mental health. SSM - Population Health. pp. 749-755. ISSN 2352-8273
LaFrance, J.T., Beatty, T.K.M., Pope, R.D. et al. (1 more author) (2002) Information theoretic measures of the income distribution in food demand. Journal of Econometrics, 107 (1-2). pp. 235-257. ISSN 0304-4076
Lanari Bo, Inacio (2016) Fair implementation of diversity in school choice. Games and Economic Behaviour. pp. 54-63. ISSN 0899-8256
Lanari Bo, Inacio and Hakimov, Rustamdjan (2020) Iterative Versus Standard Deferred Acceptance : Experimental Evidence. The Economic Journal. 356–392. ISSN 0013-0133
Lanari Bo, Inacio and Ko, Chiu Yu (2020) Competitive screening and information transmission. Journal of Public Economic Theory. ISSN 1097-3923
Larsson, R. and Abadir, K.M. (2001) The joint moment generating function of quadratic forms in multivariate autoregressive series - The case with deterministic components. Econometric Theory. pp. 222-246. ISSN 0266-4666
Laudicella, Mauro, Martin, Stephen, Li Donni, Paolo et al. (1 more author) (2018) Do reduced hospital mortality rates lead to increased utilization of inpatient emergency care? A population-based cohort study. Health services research. pp. 2324-2345. ISSN 1475-6773
Laudicella, Mauro, Walsh, Brendan, Burns, Elaine et al. (2 more authors) (2018) What is the impact of rerouting a cancer diagnosis from emergency presentation to GP referral on resource use and survival? Evidence from a population-based study. BMC Cancer. 394. ISSN 1471-2407
Lica, R., Rotaru, F., Borge, M. J.G. et al. (47 more authors) (2019) Normal and intruder configurations in Si 34 populated in the β- Decay of Mg 34 and Al 34. Physical Review C. 034306. ISSN 1089-490X
Licǎ, R., Rotaru, F., Borge, M. J G et al. (36 more authors) (2017) Identification of the crossing point at N=21 between normal and intruder configurations. Physical Review C. 021301(R). p. 21301. ISSN 1089-490X
Lin, Lihui (2021) Does the procedure matter? Journal of Behavioral and Experimental Economics, Volume 90. 101618. ISSN 2214-8043
Lin, Yu-Hsuan, Huang, Chunli, Offidani, Manuel orcid.org/0000-0002-3500-8198 et al. (2 more authors) (2019) Theory of Spin Injection in Two-dimensional Metals with Proximity-Induced Spin-Orbit Coupling. Phys. Rev. B. 245424.
Lisi, Domenico, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2020) Hospital Competition under Pay-for-Performance : Quality, Mortality and Readmissions. Journal of economics & management strategy. ISSN 1058-6407
Lomas, James Richard Scott orcid.org/0000-0002-2478-7018, Martin, Stephen and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2019) Estimating the marginal productivity of the English National Health Service from 2003 to 2012. Value in Health. pp. 995-1002. ISSN 1524-4733
Lomas, James Richard Scott orcid.org/0000-0002-2478-7018, Martin, Stephen and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2018) Estimating the marginal productivity of the English National Health Service from 2003/04 to 2012/13. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Lomas, James orcid.org/0000-0002-2478-7018, Claxton, Karl orcid.org/0000-0003-2002-4694, Martin, Stephen et al. (1 more author) (2018) Resolving the 'cost-effective but unaffordable' 'paradox': estimating the health opportunity costs of non-marginal budget impacts : Estimating the Health Opportunity Costs of Nonmarginal Budget Impacts. Value in Health. pp. 266-275. ISSN 1524-4733
Longo, Francesco orcid.org/0000-0002-1833-7328, Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Lomas, James orcid.org/0000-0002-2478-7018 et al. (1 more author) (2020) Does public long-term care expenditure improve care-related quality of life in England? Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Longo, Francesco orcid.org/0000-0002-1833-7328, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 et al. (1 more author) (2017) Do Hospitals Respond to Rivals' Quality and Efficiency? A Spatial Panel Econometric Analysis. Health Economics. pp. 38-62. ISSN 1057-9230
Longo, Francesco orcid.org/0000-0002-1833-7328, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 et al. (1 more author) (2017) Do hospitals respond to rivals' quality and efficiency? : A spatial econometrics approach. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Longo, Francesco orcid.org/0000-0002-1833-7328, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Moscelli, Giuseppe et al. (1 more author) (2019) Does Hospital Competition Improve Efficiency? The Effect of the Patient Choice Reform in England. Health Economics. ISSN 1057-9230
Longo, Francesco orcid.org/0000-0002-1833-7328, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Moscelli, Giuseppe et al. (1 more author) (2017) Does hospital competition improve efficiency? The effect of the patient choice reform in England. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Longo, Francesco orcid.org/0000-0002-1833-7328, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Street, Andrew David orcid.org/0000-0002-2540-0364 (2017) Are costs differences between specialist and general hospitals compensated by the prospective payment system? European Journal of Health Economics. ISSN 1618-7601
Love-Koh, James orcid.org/0000-0001-9009-5346, Cookson, Richard orcid.org/0000-0003-0052-996X, Claxton, Karl orcid.org/0000-0003-2002-4694 et al. (1 more author) (2020) Estimating Social Variation in the Health Effects of Changes in Health Care Expenditure. Medical Decision Making. 272989X20904360. pp. 170-182. ISSN 1552-681X
Luce, B.R., Shih, Y.T. and Claxton, K. orcid.org/0000-0003-2002-4694 (2001) Bayesian approaches to technology assessment and decision making. International Journal of Technology Assessment in Health Care. pp. 1-5. ISSN 0266-4623
Madeira, Carlos and Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2019) The effect of FOMC votes on financial markets. Review of economics and statistics. pp. 921-932. ISSN 0034-6535
Malone, Daniel C, Berg, Nancy S, Claxton, Karl orcid.org/0000-0003-2002-4694 et al. (8 more authors) (2016) International Society for Pharmacoeconomics and Outcomes Research Comments on the American Society of Clinical Oncology Value Framework. Journal of Clinical Oncology. pp. 2936-2937. ISSN 1527-7755
Maloney, John and Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 (2011) Voting and the macroeconomy : separating trend from cycle. Discussion Paper. University of York Discussion Papers in Economics . Department of Economics and Related Studies, University of York , York, UK.
Mariotti, Francesco, Dixon, Maria, Mumford, Karen Ann orcid.org/0000-0002-0190-5544 et al. (1 more author) (2016) Job Insecurity within the Household : Are Australian Householders Caring when it Comes to Risk Sharing? Australian Journal of Labour Economics. pp. 77-90. ISSN 1328-1143
Mariotti, Francesco, Mumford, Karen Ann orcid.org/0000-0002-0190-5544 and Pena-Boquete, Yolanda (2017) Education, Job Insecurity and the Within Country Migration of Couples. IZA Journal of Migration. ISSN 2193-9039
Marsh, P. (2007) Constructing Optimal tests on a Lagged dependent variable. Journal of Time Series Analysis, 28 (5). pp. 723-743. ISSN 0143-9782
Marsh, P. (2007) Goodness of fit tests via exponential series density estimation. Computational Statistics & Data Analysis, 51 (5). pp. 2428-2441. ISSN 0167-9473
Marsh, P.W. (2007) The available information for invariant tests of a unit root. Econometric Theory, 23 (4). pp. 686-710. ISSN 0266-4666
Marsh, P (2004) Transformations for multivariate statistics. Econometric Theory. pp. 963-987. ISSN 0266-4666
Marsh, P.W.N. (1998) Saddlepoint approximations for noncentral quadratic forms. Econometric Theory. pp. 539-559. ISSN 0266-4666
Martin, S., Smith, P.C. and Leatherman, S. (2006) Value for money in the English NHS: summary of the evidence. Research Report. CHE Research Paper (18). Centre for Health Economics , York, UK.
Martin, Stephen, Street, Andrew David orcid.org/0000-0002-2540-0364, Han, Lu orcid.org/0000-0001-7198-3380 et al. (1 more author) (2014) The impact of hospital financing on the quality of inpatient care in England. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Martin, Stephen, Lomas, James Richard Scott orcid.org/0000-0002-2478-7018 and Claxton, Karl orcid.org/0000-0003-2002-4694 (2019) Is an ounce of prevention worth a pound of cure? : Estimates of the impact of English public health grant on mortality and morbidity. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Martin, Stephen, Lomas, James orcid.org/0000-0002-2478-7018 and Claxton, Karl orcid.org/0000-0003-2002-4694 (2020) Is an ounce of prevention worth a pound of cure? : A cross-sectional study of the impact of English public health grant on mortality and morbidity. BMJ Open. e036411. ISSN 2044-6055
Martin, Stephen, Street, Andrew orcid.org/0000-0002-2540-0364, Han, Lu orcid.org/0000-0001-7198-3380 et al. (1 more author) (2016) Have hospital readmissions increased in the face of reductions in length of stay? Evidence from England. Health Policy. pp. 89-99. ISSN 0168-8510
Martin, Steve, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Smith, Peter Charles orcid.org/0000-0003-0058-7588 (2020) Socioeconomic Inequalities in Waiting Times for Primary Care across ten OECD countries. Social Science & Medicine. 113230. ISSN 1873-5347
Mason, A. orcid.org/0000-0002-5823-3064, Miraldo, M., Siciliani, L. orcid.org/0000-0003-1739-7289 et al. (2 more authors) (2008) Establishing a fair playing field for payment by results. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Mason, Anne orcid.org/0000-0002-5823-3064, Rodriguez Santana, Idaira De Las Nieves orcid.org/0000-0003-2022-3239, Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220 et al. (4 more authors) (2019) Drivers of Health Care Expenditure. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Matta, A., Catford, W. N., Orr, N. A. et al. (36 more authors) (2019) Shell evolution approaching the N=20 island of inversion : Structure of 29Mg. Physical Review C. 044320. ISSN 1089-490X
Mauskopf, Josephine, Standaert, Baudouin, Connolly, Mark P. et al. (7 more authors) (2018) Economic analysis of vaccination programs. Value in Health. pp. 1133-1149. ISSN 1524-4733
Mayston, D. (2007) Competition and Resource Effectiveness in Education. The Manchester School, 75 (1). pp. 47-64. ISSN 1463-6786
Mayston, D.J. (2003) Measuring and managing educational performance. Journal of the Operational Research Society, 54 (7). pp. 679-691. ISSN 0160-5682
Mayston, David John orcid.org/0000-0002-1157-3600 (2016) Convexity, quality and efficiency in education. Journal of the Operational Research Society. 446–455. ISSN 1476-9360
Mayston, David John orcid.org/0000-0002-1157-3600 (2016) Data Envelopment Analysis, Endogeneity and the Quality Frontier for Public Services. Annals of Operations Research. 185–203.
McCabe, Christopher, Claxton, Karl orcid.org/0000-0003-2002-4694 and O'Hagan, Anthony (2008) Why licensing authorities need to consider the net value of new drugs in assigning review priorities: Addressing the tension between licensing and reimbursement. International Journal of Technology Assessment in Health Care. pp. 140-145. ISSN 0266-4623
McDougall, Cynthia orcid.org/0000-0001-8038-1402, Pearson, Dominic A.S., Torgerson, David John orcid.org/0000-0002-1667-4275 et al. (1 more author) (2017) The effect of digital technology on prisoner behavior and reoffending : a natural stepped-wedge design. Journal of Experimental Criminology. 13. pp. 455-482. ISSN 1572-8315
McGuire, Finn, Revill, Paul orcid.org/0000-0001-8632-0600, Twea, Pakwanja et al. (3 more authors) (2020) Allocating resources to support universal health coverage : development of a geographical funding formula in Malawi. BMJ Global health. ISSN 2059-7908
McKenna, C. orcid.org/0000-0002-7759-4084, Chalabi, Z., Epstein, D. et al. (1 more author) (2008) Budgetary policies and available actions: a generalisation of decision rules for allocation and research decisions. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
McManus, Richard, Ozkan, Fatma Gulcin orcid.org/0000-0001-7652-1361 and Trzeciakiewicz, Dawid (2018) Expansionary contractions and fiscal free lunches : too good to be true. Scandinavian Journal of Economics. pp. 32-54. ISSN 1467-9442
McManus, Richard, Ozkan, Fatma Gulcin orcid.org/0000-0001-7652-1361 and Trzeciakiewicz, Dawid (2019) Why are fiscal multipliers asymmetric? : The role of credit constraints. Economica. ISSN 0013-0427
Meagher, K.J. and Zauner, K.G. (2005) Location-then-price competition with uncertain consumer tastes. Economic Theory, 25 (4). pp. 799-818. ISSN 0938-2259
Meagher, K.J. and Zauner, K.G. (2004) Product differentiation and location decisions under demand uncertainty. Journal of Economic Theory, 117 (2). pp. 201-216. ISSN 0022-0531
Meschi, Elena, Swaffield, Joanna Kate orcid.org/0000-0002-9157-6691 and Vignoles, Anna (2018) The role of local labour market conditions and pupil attainment on post-compulsory schooling decisions. International Journal of Manpower. ISSN 0143-7720 (In Press)
Miners, A.H., Garau, M., Fidan, D. et al. (1 more author) (2005) Comparing estimates of cost effectiveness submitted to the National Institute for Clinical Excellence (NICE) by different organisations: retrospective study. BMJ, 330 (7482). pp. 65-68. ISSN 0959-535X
Miraldo, Marisa, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Street, Andrew David orcid.org/0000-0002-2540-0364 (2008) Price adjustment in the hospital sector. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Moon, H.R. and Phillips, P.C.B. (2004) GMM estimation of autoregressive roots near unity with panel data. Econometrica, 72 (2). pp. 467-522. ISSN 0012-9682
Moran, V. and Jacobs, R. orcid.org/0000-0001-5225-6321 (2018) Investigating the relationship between costs and outcomes for English mental health providers : A bi-variate multi-level regression analysis. European Journal of Health Economics. pp. 709-718. ISSN 1618-7601
Moreno Serra, Rodrigo Antonio orcid.org/0000-0002-6619-4560, Anaya Montes, Misael and Smith, Peter Charles orcid.org/0000-0003-0058-7588 (2019) Potential determinants of health system efficiency: Evidence from Latin America and the Caribbean. PLoS ONE. ISSN 1932-6203
Morton, Alec, Adler, Amanda I, Bell, David et al. (7 more authors) (2016) Unrelated Future Costs and Unrelated Future Benefits : Reflections on NICE Guide to the Methods of Technology Appraisal. Health Economics. pp. 933-938. ISSN 1057-9230
Morton, Alec, Thomas, Ranjeeta and Smith, Peter C. orcid.org/0000-0003-0058-7588 (2016) Decision rules for allocation of finances to health systems strengthening. Journal of health economics. pp. 97-108. ISSN 0167-6296
Morys, Ingo Matthias orcid.org/0000-0002-6460-2575 (2017) A century of monetary reform in South-East Europe : From political autonomy to the gold standard, 1815-1910. Financial History Review. 1. pp. 3-21. ISSN 0968-5650
Morys, Matthias orcid.org/0000-0002-6460-2575 (2016) World War I and the emergence of modern central banks in South-East Europe. In: Banques centrales dans la grande guerre. Conference proceedings of central banks in the Great War, 13th and 14th November 2014 in Paris. UNSPECIFIED. Presses Sciences-Po . (In Press)
Morys, Matthias orcid.org/0000-0002-6460-2575 (2020) The gold standard, fiscal dominance and financial supervision in Greece and South-East Europe, 1841-1939. European Review of Economic History. ISSN 1361-4916
Moscelli, Giuseppe, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (1 more author) (2016) Location, Quality and Choice of Hospital : Evidence from England 2002–2013. Regional Science and Urban Economics. pp. 112-124. ISSN 0166-0462
Moscelli, G., Gravelle, H. orcid.org/0000-0002-7753-4233, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2018) The effect of hospital ownership on quality of care : evidence from England. Journal of Economic Behavior & Organization. pp. 322-344.
Moscelli, Giuseppe, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2019) Effects of market structure and patient choice on hospital quality for planned patients. Discussion Paper. CHE Research paper . Centre for Health Economics, University of York , York, UK.
Moscelli, Giuseppe, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2016) Market structure, patient choice and hospital quality for elective patients. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Moscelli, Giuseppe, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2017) The effect of hospital ownership on quality of care : evidence from England. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York
Moscelli, Giuseppe, Gravelle, Hugh orcid.org/0000-0002-7753-4233, Siciliani, Luigi orcid.org/0000-0003-1739-7289 et al. (1 more author) (2018) Heterogeneous effects of patient choice and hospital competition on mortality. Social science and medicine. pp. 50-58. ISSN 1873-5347
Moscelli, Giuseppe, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (1 more author) (2018) Socioeconomic Inequality of Access to Healthcare : Does Choice Explain the Gradient? Journal of Health Economics. pp. 290-314. ISSN 0167-6296
Moscelli, Giuseppe, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (1 more author) (2015) Socioeconomic inequality of access to healthcare : Does patients' choice explain the gradient? Evidence from the English NHS. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Moscelli, Giuseppe, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (1 more author) (2016) Location, quality and choice of hospital : Evidence from England 2002/3-2012/13. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Moscelli, Giuseppe, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Tonei, Valentina (2016) Do Waiting Times Affect Health Outcomes? : Evidence from Coronary Bypass. Social science and medicine. pp. 151-159. ISSN 1873-5347
Moscone, Francesco, Siciliani, Luigi orcid.org/0000-0003-1739-7289, Tosetti, Elisa et al. (1 more author) (2020) Do Public and Private Hospitals differ in Quality? Evidence from Italy. Regional Science and Urban Economics. pp. 1-12. ISSN 0166-0462
Mumford, K. and Smith, P.N. (2004) Job reallocation and average job tenure: theory and workplace evidence from Australia. Scottish Journal of Political Economy, 51 (3). pp. 402-421. ISSN 0036-9292
Mumford, K.A. (2004) Job tenure in Britain: individual versus workplace effects. Economica, 71 (282). pp. 275-297. ISSN 0013-0427
Mumford, Karen Ann orcid.org/0000-0002-0190-5544 (2016) On Gender, Research Discipline and being an Economics Journal Editor in the UK. Royal Economics Society Newsletter. pp. 15-19.
Mumford, Karen Ann orcid.org/0000-0002-0190-5544, Pena-Boquete, Yolanda and Parera-Nicolau, Antonia (2020) Labour supply and childcare : Allowing both parents to choose. Oxford Bulletin of Economics and Statistics. pp. 577-602. ISSN 0305-9049
Mumford, Karen Ann orcid.org/0000-0002-0190-5544 and Sechel, Cristina (2019) Job Satisfaction amongst Academic Economists in the UK. Economics Letters. pp. 55-58. ISSN 0165-1765
Mumford, Karen Ann orcid.org/0000-0002-0190-5544 and Sechel, Cristina (2020) Pay and Job Rank Among Academic Economists in the UK : Is Gender Relevant? British Journal of Industrial Relations. pp. 82-113. ISSN 1467-8543
Murota, Kazuo, Shioura, Akiyoshi and Yang, Zaifu orcid.org/0000-0002-3265-7109 (2016) Time bounds for iterative auctions : a unified approach by discrete convex analysis. discrete optimization. pp. 36-62.
Nakamura, Ryota, Lomas, James Richard Scott orcid.org/0000-0002-2478-7018, Claxton, Karl Philip orcid.org/0000-0003-2002-4694 et al. (3 more authors) (2016) Assessing the impact of health care expenditures on mortality using cross-country data. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Nelson, E.A., O'Meara, S., Craig, D. et al. (10 more authors) (2006) A series of systematic reviews to inform a decision analysis for sampling and treating infected diabetic foot ulcers. Health Technology Assessment, 10 (12). pp. 1-235. ISSN 1366-5278
Nicoletti, Cheti orcid.org/0000-0002-7237-2597 (2017) Housework share between partners : Experimental evidence on gender-specific preferences. Social Science Research. pp. 118-139. ISSN 0049-089X
Nicoletti, Cheti orcid.org/0000-0002-7237-2597 and Rabe, Birgitta (2016) Sibling spillover effects in school achievement. Discussion Paper. Discussion Papers .
Nicoletti, Cheti orcid.org/0000-0002-7237-2597 and Rabe, Birgitta (2019) Sibling spillover effects in school achievement. Journal of Applied Econometrics. pp. 482-501. ISSN 0883-7252
Nicoletti, Cheti orcid.org/0000-0002-7237-2597 and Rabe, Birgitta (2018) The effect of school spending on student achievement: addressing biases in value-added models. Journal of the Royal Statistical Society: Series A (Statistics in Society). pp. 487-515. ISSN 1467-985X
Nicoletti, Cheti orcid.org/0000-0002-7237-2597, Salvanes, Kjell and Tominey, Emma orcid.org/0000-0002-0287-3935 (2018) The Family Peer Effect on Mothers' Labor Supply. American Economic Journal: Applied Economics. pp. 206-234. ISSN 1945-7782
Nicoletti, Cheti orcid.org/0000-0002-7237-2597 and Tonei, Valentina (2020) Do parental time investments react to changes in child's skills and health? European economic review. 103491. ISSN 0014-2921
O'Hara, Jane Kathryn, Grasic, Katja, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (5 more authors) (2018) Identifying positive deviants in healthcare quality and safety: a mixed methods study. Journal of the Royal Society of Medicine. p. 276. ISSN 1758-1095
Ochalek, Jessica Marie orcid.org/0000-0003-0744-1178, Asaria, Miqdad, Chuar, Pei Fen et al. (3 more authors) (2019) Assessing health opportunity costs for the Indian health care systems. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , UK.
Ochalek, Jessica Marie orcid.org/0000-0003-0744-1178, Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Revill, Paul orcid.org/0000-0001-8632-0600 et al. (2 more authors) (2016) Supporting the development of an essential health package: principles and initial assessment for Malawi. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Ochalek, Jessica Marie orcid.org/0000-0003-0744-1178, Lomas, James orcid.org/0000-0002-2478-7018 and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2015) Cost per DALY averted thresholds for low- and middle-income countries : evidence from cross country data. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Ochalek, Jessica orcid.org/0000-0003-0744-1178, Abbas, Kaja, Claxton, Karl orcid.org/0000-0003-2002-4694 et al. (2 more authors) (2020) Assessing the value of human papillomavirus vaccination in Gavi-eligible low-income and middle-income countries. BMJ Global health. e003006. ISSN 2059-7908
Ochalek, Jessica orcid.org/0000-0003-0744-1178, Claxton, Karl orcid.org/0000-0003-2002-4694, Lomas, James orcid.org/0000-0002-2478-7018 et al. (1 more author) (2020) Valuing health outcomes : developing better defaults based on health opportunity costs. Expert Review of Pharmacoeconomics & Outcomes Research. pp. 1-8. ISSN 1744-8379
Ochalek, Jessica orcid.org/0000-0003-0744-1178, Lomas, James orcid.org/0000-0002-2478-7018 and Claxton, Karl orcid.org/0000-0003-2002-4694 (2018) Estimating health opportunity costs in low-income and middle-income countries : a novel approach and evidence from cross-country data. BMJ Global health. e000964. ISSN 2059-7908
Ochalek, Jessica orcid.org/0000-0003-0744-1178, Revill, Paul orcid.org/0000-0001-8632-0600, Manthalu, Gerald et al. (5 more authors) (2018) Supporting the development of a health benefits package in Malawi. BMJ Global health. e000607.
Olivella, Pau and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2017) Reputational Concerns with Altruistic Providers. Journal of Health Economics. pp. 1-13. ISSN 0167-6296
Orme, C.D. and Yamagata, T. (2006) The asymptotic distribution of the F-test statistic for individual effects. Econometrics Journal, 9 (3). pp. 404-422. ISSN 1368-4221
Oswald, A.J. and Powdthavee, N. (2008) Death, happiness, and the calculation of compensatory damages. Journal of Legal Studies. S217-S251. ISSN 0047-2530
Ozkan, F.G. (2005) Currency and financial crises in Turkey, 2000-2001: bad fundamentals or bad luck? The World Economy, 28 (4). pp. 541-572. ISSN 0378-5920
Ozkan, F.G. (2003) Explaining ERM realignments: insights from optimising models of currency crises. Journal of Macroeconomics, 25 (4). pp. 491-507. ISSN 0164-0704
Ozkan, Fatma Gulcin orcid.org/0000-0001-7652-1361 and McManus, Richard (2018) Who does better for the economy? : Presidents versus parliamentary democracies. Public Choice. ISSN 0048-5829
Ozkan, Fatma Gulcin orcid.org/0000-0001-7652-1361 and Unsal, D. Filiz (2017) It is not your fault but it is your problem : Global financial crisis and emerging markets. Oxford Economic Papers. pp. 1-21. ISSN 0030-7653
Palm, Willy, Webb, Erin, Hernandez-Quevedo, Cristina et al. (4 more authors) (2020) Gaps in Coverage and Access in the European Union. Health Policy. pp. 1-35. ISSN 1872-6054
Paulden, Mike and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2009) Budget Allocation and the Revealed Social Rate of Time Preference for Health. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Paulden, M. and Culyer, A.J. (2010) Does cost-effectiveness analysis discriminate against patients with short life expectancy? : Matters of logic and matters of context. Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Philips, Z., Ginnelly, L., Sculpher, M. et al. (5 more authors) (2004) Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technology Assessment, 8 (36). pp. 1-169. ISSN 1366-5278
Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 (2017) The Economic Consequences of Political Donation Limits. Economica. pp. 1-58. ISSN 0013-0427
Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 (2018) Sovereign Debt : Election Concerns and the Democratic Disadvantage. Oxford Economic Papers. ISSN 0030-7653
Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 and Melki, Mickael (2019) New Evidence on the Historical Growth of Government in Europe : The Role of Labor Costs. European Journal of Political Economy. pp. 445-460. ISSN 0176-2680
Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 and Melki, Mickael (2020) Polarization and Corruption in America. European economic review. 103397. ISSN 0014-2921
Pickering, Andrew Christopher orcid.org/0000-0003-1545-2192 and Rajput, Sheraz (2017) Inequality and the composition of taxes. International tax and public finance. ISSN 0927-5940
Ploberger, W. and Phillips, P.C.B. (2003) Empirical limits for time series econometric models. Econometrica, 71 (2). pp. 627-673. ISSN 0012-9682
Polito, Vito and Spencer, Peter orcid.org/0000-0002-5595-5360 (2016) The Optimal Control of Heteroscedastic Macroeconomic Models. Journal of Applied Econometrics. 1430–1444. ISSN 0883-7252
Poskitt, D.S. (2005) A Note on the Specification and Estimation of ARMAX Systems. Journal of Time Series Analysis, 26 (2). pp. 157-183. ISSN 0143-9782
Poskitt, D.S. (2006) On the identification and estimation of nonstationary and cointegrated ARMAX systems. Econometric Theory, 22 (6). pp. 1138-1175. ISSN 0266-4666
Poskitt, D.S. and Skeels, C.L. (2007) Approximating the Distribution of the Instrumental Variables Estimator when the Concentration Parameter is Small. Journal of Econometrics, 139 (1). pp. 217-236. ISSN 0304-4076
Qizilbash, M. (2005) The mere addition paradox, parity and critical-level utilitarianism. Social Choice and Welfare, 24 (3). pp. 413-431. ISSN 0176-1714
Rachet Jacquet, Laurie, Gutacker, Nils orcid.org/0000-0002-2833-0621 and Siciliani, Luigi orcid.org/0000-0003-1739-7289 (2019) The causal effect of hospital volume on health gains from hip replacement surgery. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Randon, E. and Simmons, P. (2007) Correcting market failure due to interdependent preferences: when is piecemeal policy possible? Journal of Public Economic Theory, 9 (5). pp. 831-866. ISSN 1097-3923
Rawlins, M.D. and Culyer, A.J. (2004) National Institute for Clinical Excellence and its value judgments. BMJ. pp. 224-227. ISSN 1756-1833
Realdon, M. (2006) Pricing the credit risk of secured debt and financial leasing. Journal of Business Finance and Accounting, 33 (7-8). pp. 1298-1320. ISSN 0306-686X
Realdon, M. (2006) Quadratic term structure models in discrete time. Finance Research Letters, 3 (4). pp. 277-289. ISSN 1544-6123
Realdon, M. (2004) Valuation of exchangeable convertible bonds. International Journal of Theoretical and Applied Finance, 7 (6). pp. 701-721. ISSN 0219-0249
Realdon, M. (2007) Valuation of the firm's liabilities when equity holders are also creditors. Journal of Business Finance and Accounting, 34 (5-6). pp. 950-975. ISSN 0306-686X
Reichert, A. and Jacobs, R. orcid.org/0000-0001-5225-6321 (2018) Socioeconomic inequalities in duration of untreated psychosis : evidence from administrative data in England. Psychological Medicine. pp. 822-833. ISSN 0033-2917
Reichert, A. and Jacobs, R. orcid.org/0000-0001-5225-6321 (2018) The impact of waiting time on patient outcomes : Evidence from Early Intervention in Psychosis services in England. Health Economics. pp. 1772-1787. ISSN 1057-9230
Revill, Paul orcid.org/0000-0001-8632-0600, Walker, Simon Mark orcid.org/0000-0002-5750-3691, Madan, Jason et al. (5 more authors) (2014) Using cost-effectiveness thresholds to determine value for money in low-and middle-income country healthcare systems: Are current international norms fit for purpose? Working Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Rice, Nigel orcid.org/0000-0003-0312-823X and Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220 (2018) The determinants of health care expenditure growth. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Richards, D.A., Godfrey, L., Richardson, G. orcid.org/0000-0002-2360-4566 et al. (4 more authors) (2002) Nurse telephone triage for same day appointments in general practice: multiple interrupted time series trial of effect on workload and costs. British medical journal. pp. 1214-1217. ISSN 0959-535X
Ride, Jemimah orcid.org/0000-0002-1820-5499, Kasteridis, Panagiotis orcid.org/0000-0003-1623-4293, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (12 more authors) (2019) Impact of family practice continuity of care on unplanned hospital use for people with serious mental illness. Health services research. pp. 1316-1325. ISSN 1475-6773
Ride, Jemimah orcid.org/0000-0002-1820-5499, Kasteridis, Panagiotis orcid.org/0000-0003-1623-4293, Gutacker, Nils orcid.org/0000-0002-2833-0621 et al. (13 more authors) (2018) Do care plans and annual reviews of physical health influence unplanned hospital utilisation for people with serious mental illness? : Analysis of linked longitudinal primary and secondary healthcare records in England. BMJ Open. e023135. ISSN 2044-6055
Robinson, P.M. and Iacone, F. (2005) Cointegration in Fractional Systems with Deterministic Trends. Journal of Econometrics, 129 (1-2). pp. 263-298. ISSN 0304-4076
Robson, Matthew orcid.org/0000-0003-4558-7637, Asaria, Miqdad orcid.org/0000-0002-3538-4417, Tsuchiya, Aki et al. (2 more authors) (2016) Eliciting the level of health inequality aversion in England. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2017) Assessing the empirical relevance of labor frictions to business cycle fluctuations. Oxford Bulletin of Economics and Statistics. pp. 1-21. ISSN 0305-9049
Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2015) Firm-specific capital, inflation persistence and the sources of business cycles. European economic review. pp. 229-243. ISSN 0014-2921
Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2014) Overtime Labor, Employment Frictions, and the New Keynesian Phillips Curve. Review of economics and statistics. pp. 767-778. ISSN 0034-6535
Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 (2013) Simulation and estimation of macroeconomic models in Dynare. In: Hashimzade, Nigar and Thornton, Michael, (eds.) Handbook of Research Methods and Applications in Empirical Macroeconomics. Handbooks of Research Methods and Application Series . Edward Elgar , Cheltenham , pp. 593-608.
Rodrigues Madeira, Joao Antonio orcid.org/0000-0002-7380-9009 and Palma, Nuno (2018) Measuring monetary policy deviations from the Taylor rule. Economics Letters. pp. 25-27. ISSN 0165-1765
Rodriguez Santana, Idaira orcid.org/0000-0003-2022-3239, Anaya Montes, Misael, Chalkley, Martin John orcid.org/0000-0002-1091-8259 et al. (3 more authors) (2020) The impact of extending nurse working hours on staff sickness absence : evidence from a large mental health hospital in England. International Journal of Nursing Studies. 103611. ISSN 0020-7489
Rodriguez Santana, Idaira orcid.org/0000-0003-2022-3239, Aragon Aragon, Maria Jose Monserratt orcid.org/0000-0002-3787-6220, Rice, Nigel orcid.org/0000-0003-0312-823X et al. (1 more author) (2020) Trends in and drivers of Healthcare Expenditure in the English NHS : a retrospective analysis. Health Economics Review. 20. ISSN 2191-1991
Rodriguez Santana, Idaira orcid.org/0000-0003-2022-3239 and Chalkley, Martin orcid.org/0000-0002-1091-8259 (2017) Getting the right balance? : A mixed logit analysis of the relationship between UK training doctors' characteristics and their specialties using the 2013 National Training Survey. BMJ Open. e015219. ISSN 2044-6055
Rohner, D. (2006) Beach holiday in Bali or East Timor? Why conflict can lead to under and overexploitation of natural resources. Economics Letters, 92 (1). pp. 113-117. ISSN 0165-1765
Rohner, D. and Frey, B.S. (2007) Blood and Ink! The Common-Interest-Game Between Terrorists and the Media. Public Choice, 133 (1-2). pp. 129-145. ISSN 0048-5829
Rojas Blanco, Laura Cristina and Mumford, Karen Ann orcid.org/0000-0002-0190-5544 (2010) Royal Economic Society Women's Committee survey on the gender and ethnic balance of academic economics 2010. Research Report. Royal Economic Society
Rothery, Claire orcid.org/0000-0002-7759-4084, Claxton, Karl orcid.org/0000-0003-2002-4694, Palmer, Stephen orcid.org/0000-0002-7268-2560 et al. (3 more authors) (2017) Characterising uncertainty in the assessment of medical devices and determining future research needs. Health Economics. pp. 109-123. ISSN 1057-9230
Roussillon, Beatrice and Schweinzer, Paul orcid.org/0000-0002-6437-7224 (2010) Efficient emissions reduction. Working Paper. University of Manchester Discussion Paper Series .
Roy, P., Akbar, Z., Crede, V. et al. (47 more authors) (2018) Measurement of the beam asymmetry Σ and the target asymmetry T in the photoproduction of ω mesons off the proton using CLAS at Jefferson Laboratory. Physical Review C. 055202. ISSN 1089-490X
Sa, Luis, Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2019) Dynamic Hospital Competition Under Rationing by Waiting Times. Journal of Health Economics. pp. 260-282. ISSN 0167-6296
Sandler, T. and Hartley, K. (2001) Economics of alliances: the lessons for collective action. Journal of Economic Literature, 39 (3). pp. 869-896. ISSN 0022-0515
Sandner, Malte, Cornelissen, Thomas orcid.org/0000-0001-8259-5105, Jungmann, Tanja et al. (1 more author) (2018) Evaluating the effects of a targeted home visiting program on maternal and child health outcomes. Journal of Health Economics. pp. 269-283. ISSN 0167-6296
Santos, Rita orcid.org/0000-0001-7953-1960, Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 and Propper, Carol (2013) Does quality affect patients' choice of doctor? Evidence from the UK. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Santos, Rita orcid.org/0000-0001-7953-1960, Rice, Nigel orcid.org/0000-0003-0312-823X and Gravelle, Hugh Stanley Emrys orcid.org/0000-0002-7753-4233 (2020) Patterns of emergency admissions for ambulatory care sensitive conditions : a spatial cross-sectional analysis of observational data. BMJ Open. ISSN 2044-6055
Saramago Goncalves, Pedro Rafael orcid.org/0000-0001-9063-8590, Claxton, Karl Philip orcid.org/0000-0003-2002-4694, Welton, N.J. et al. (1 more author) (2020) Bayesian econometric modelling of observational data for cost-effectiveness analysis : establishing the value of Negative Pressure Wound Therapy in the healing of open surgical wounds. Journal of the Royal Statistical Society: Series A (Statistics in Society). pp. 1575-1593. ISSN 1467-985X
Saramago, Pedro orcid.org/0000-0001-9063-8590, Espinoza, Manuel A, Sutton, Alex J et al. (2 more authors) (2019) The Value of Further Research : The Added Value of Individual-Participant Level Data. Applied Health Economics and Health Policy. ISSN 1175-5652
Schmidt, U. and Hey, J.D. (2004) Are Preference Reversals Errors: An Experimental Investigation. Journal of Risk and Uncertainty, 29 (3). pp. 207-218. ISSN 0895-5646
Schmitt, Laetitia Helene Marie orcid.org/0000-0003-1052-488X and Brugere, Cecile (2013) Capturing Ecosystem Services, Stakeholders' Preferences and Trade-Offs in Coastal Aquaculture Decisions : A Bayesian Belief Network Application. PLoS ONE. e75956. ISSN 1932-6203
Schurer, Stefanie, Shields, Michael and Jones, Andrew Michael orcid.org/0000-0003-4114-1785 (2014) Socioeconomic inequalities in bodily pain by age groups : longitudinal evidence from Australia, Britain and Germany. Journal of the Royal Statistical Society: Series A (Statistics in Society). pp. 783-806. ISSN 0035-9238
Sculpher, Mark orcid.org/0000-0003-3746-9913, Claxton, Karl orcid.org/0000-0003-2002-4694 and Pearson, Steven D (2017) Developing a Value Framework : The Need to Reflect the Opportunity Costs of Funding Decisions. Value in Health. pp. 234-239. ISSN 1524-4733
Seo, Myung Hwan and Shin, Yongcheol (2016) Dynamic panels with threshold effect and endogeneity. Journal of Econometrics. pp. 169-186. ISSN 0304-4076
Serlenga, Laura, Kapetanios, G. and Shin, Yongcheol (2020) Estimation and Inference for Multi-dimensional Heterogeneous Panel Datasets with Hierarchical Multi-factor Error Structure. Journal of Econometrics. ISSN 0304-4076
Sharma, Rajan, Gu, Yuanyuan, Sinha, Kompal et al. (2 more authors) (2019) Mapping the Strengths and Difficulties Questionnaire onto the Child Health Utility 9D in a large study of children. Quality of life research. 2429–2441. ISSN 1573-2649
Shimoji, M. (2002) On forward induction in money-burning games. Economic Theory, 19 (3). pp. 637-648. ISSN 0938-2259
Shimoji, M. (2003) On the equivalence of weak dominance and sequential best response. Games and Economic Behavior, 48 (2). pp. 385-402. ISSN 0899-8256
Shimoji, Makoto orcid.org/0000-0002-7895-9620 (2017) Revenue Comparison of Discrete Private-Value Auctions via Weak Dominance. REVIEW OF ECONOMIC DESIGN. pp. 231-252. ISSN 1434-4742
Shin, Yongcheol and Greenwood-Nimmo, Matthew (2013) Taxation and the asymmetric adjustment of selected retail energy prices in the UK. Economics Letters. pp. 411-416. ISSN 0165-1765
Shin, Yongcheol, Greenwood-Nimmo, Matthew and Nguyen, Viet (2020) Measuring the Connectedness of the Global Economy. International journal of forecasting. ISSN 0169-2070 (In Press)
Shin, Yongcheol, Kim, Minjoo, Cai, Charlie et al. (1 more author) (2019) FARVaR : Functional Autoregressive Value-at-Risk. Journal of Financial Econometrics. pp. 284-337. ISSN 1479-8409
Shin, Yongcheol and Serlenga, Laura (2020) Gravity models of interprovincial migration flows in Canada with hierarchical multifactor structure. Empirical Economics. ISSN 0377-7332
Siciliani, L. (2004) Does more choice reduce waiting times? Health Economics, 14 (1). pp. 17-23. ISSN 1057-9230
Siciliani, L. (2006) A Dynamic Model of Supply of elective surgery in the Presence of Waiting Times and Waiting Lists. Journal of Health Economics, 25 (6). pp. 891-907. ISSN 0167-6296
Siciliani, L. and Martin, S. (2007) An empirical analysis of the impact of choice on waiting times. Health Economics, 16 (18). pp. 763-779. ISSN 1057-9230
Siciliani, Luigi orcid.org/0000-0003-1739-7289, Fiorio, Carlo and Riganti, Andrea (2017) The Effect of Waiting Times on Demand and Supply for Elective Surgery : Evidence from Italy. Health Economics. pp. 92-105. ISSN 1057-9230
Siciliani, Luigi orcid.org/0000-0003-1739-7289 and Straume, Odd Rune (2019) Competition and Equity in Health Care Markets. Journal of Health Economics. pp. 1-14. ISSN 0167-6296
Siciliani, Luigi orcid.org/0000-0003-1739-7289, Wild, Claudia, McKee, Martin et al. (6 more authors) (2020) Strengthening Vaccination Programmes and Health Systems in the European Union : A Framework for Action. Health Policy. pp. 511-518. ISSN 1872-6054
Simmons, Peter Jeremy orcid.org/0000-0001-7960-981X and Kanabar, Ricky (2016) To defer or not defer? : UK state pension and work decisions in a life cycle model. Applied economics. pp. 5699-5716. ISSN 0003-6846
Simmons, Peter Jeremy orcid.org/0000-0001-7960-981X and Randon, Emanuela (2016) A Top Dog Tale with Preference Complementarities. Journal of Economics. 47–63. ISSN 1617-7134
Simonsen, Nicolai Fink, Oxholm, Anne Sophie, Kristensen, Soren Rud et al. (1 more author) (2020) What explains differences in waiting times for health care across socioeconomic status? Health Economics. ISSN 1057-9230
Sinclair, L., Wadsworth, R. orcid.org/0000-0002-4187-3102, Dobaczewski, J. orcid.org/0000-0002-4158-3770 et al. (31 more authors) (2019) Half-lives of 73Sr and 76Y and the consequences for the proton dripline. Physical Review C. 044311. ISSN 1089-490X
Smith, P. and Wickens, M. (2002) Asset pricing with observable stochastic discount factors. Journal of Economic Surveys, 16 (3). pp. 397-446. ISSN 0950-0804
Smith, P.C. and van Ackere, A. (2002) A note on the integration of system dynamics and economic models. Journal of Economic Dynamics and Control, 26 (1). pp. 1-10. ISSN 0165-1889
Smith, S.D. (2003) Merchants and planters revisited. Economic History Review, 55 (3). pp. 434-465. ISSN 0013-0117
Smith, Simon and Forster, Martin orcid.org/0000-0001-8598-9062 (2018) 'The Curse of the Caribbean'? Agency's impact on the productivity of sugar estates on St. Vincent and the Grenadines, 1814-1829. Journal of economic history. pp. 472-499. ISSN 0022-0507
Soares, Marta O orcid.org/0000-0003-1579-8513, Sculpher, Mark J orcid.org/0000-0003-3746-9913 and Claxton, Karl orcid.org/0000-0003-2002-4694 (2020) Health Opportunity Costs : Assessing the Implications of Uncertainty Using Elicitation Methods with Experts. Medical Decision Making. pp. 448-459. ISSN 0272-989X
Soares, Marta O orcid.org/0000-0003-1579-8513, Sculpher, Mark orcid.org/0000-0003-3746-9913 and Claxton, Karl Philip orcid.org/0000-0003-2002-4694 (2020) Authors' response to: "Health Opportunity Costs and Expert Elicitation: A Comment on Soares et al." by Sampson, Firth and Towse. Medical Decision Making. ISSN 1552-681X (In Press)
Soares, Marta O orcid.org/0000-0003-1579-8513, Sharples, Linda, Morton, Alec et al. (2 more authors) (2018) Experiences of structured elicitation for model based cost-effectiveness analyses. Value in Health. pp. 715-723. ISSN 1524-4733
Sorge, Marco, Zhang, Chendi and Koufopoulos, Konstantinos (2017) Short‐Term Corporate Debt around the World. Journal of Money Credit and Banking. pp. 997-1029. ISSN 0022-2879
Spencer, P. (2002) The Impact of ICT Investment on UK Productive Potential 1986-2000: New Statistical Methods and Tests. The Manchester School, 70 (s1). pp. 107-126. ISSN 1463-6786
Spencer, Peter orcid.org/0000-0002-5595-5360 and Liu, Zhuoshi (2010) An open-economy macro-finance model of international interdependence : The OECD, US and the UK. Journal of Banking and Finance. pp. 667-680. ISSN 1872-6372
Spencer, Peter orcid.org/0000-0002-5595-5360 (2016) US Bank Credit Spreads during the Financial Crisis. Journal of Banking & Finance. pp. 168-182. ISSN 1872-6372
Stamuli, Eugena orcid.org/0000-0003-4905-3704, Kesornsak, Withawin, Grevitt, Michael P et al. (2 more authors) (2017) A Cost-Effectiveness Analysis of Intradiscal Electrothermal Therapy (IDET) Compared with Circumferential Lumbar Fusion. Pain practice : the official journal of World Institute of Pain. ISSN 1533-2500
Stewart, M.B. and Swaffield, J.K. (2003) Using the BHPS wave 9 additional questions to evaluate the impact of the National Minimum Wage. Oxford Bulletin of Economics and Statistics, 64 (5). pp. 633-652. ISSN 0305-9049
Stokes, Jonathan, Guthrie, Bruce, Mercer, Stewart W et al. (2 more authors) (2021) Multimorbidity combinations, costs of hospital care and potentially preventable emergency admissions in England : A cohort study. Plos medicine. e1003514. ISSN 1549-1277
Sun, M.D., Liu, Z., Huang, T.H. et al. (29 more authors) (2020) Fine structure in the α decay of 223U. Physics Letters B. 135096. ISSN 0370-2693
Swaffield, J.K. (2001) Does measurement error bias fixed-effects estimates of the union wage effect? Oxford Bulletin of Economics and Statistics, 63 (4). pp. 437-457. ISSN 0305-9049
Swaffield, J.K. (2007) Estimates of the Impact of Labour Market Attachment and Attitudes on the Female Wage. The Manchester School, 75 (3). pp. 349-371. ISSN 1463-6786
Swaffield, Joanna Kate orcid.org/0000-0002-9157-6691, Snell, Carolyn Jane orcid.org/0000-0003-3448-5985, Bradshaw, Jonathan Richard orcid.org/0000-0001-9395-6754 et al. (1 more author) (2016) Identifying sustainable pathways out of in-work poverty : Main report on the 'Working Life in York' survey : ESRC Knowledge Exchange Scheme ES/L002086/1. Research Report. Commissioned Report for JRF/JRHT, CYC & YSJU
Swaffield, Joanna Kate orcid.org/0000-0002-9157-6691, Snell, Carolyn Jane orcid.org/0000-0003-3448-5985, Tunstall, Rebecca Katharine orcid.org/0000-0001-8095-8080 et al. (1 more author) (2018) An Evaluation of the Living Wage : Identifying Pathways Out of In-Work Poverty. Social Policy and Society. 3. pp. 379-392. ISSN 1475-3073
Talman, A.J.J. and Thijssen, J.J.J. (2006) Existence of equilibrium and price adjustments in a finance economy with incomplete markets. Journal of Mathematical Economics, 42 (3). pp. 255-268. ISSN 0304-4068
Thijssen, J.J.J., Huisman, K.J.M. and Kort, P.M. (2004) The Effect of Information Streams on Capital Budgeting Decisions. European Journal of Operational Research, 157 (3). pp. 759-774. ISSN 0377-2217
Thomas, Ranjeeta, Friebel, Rocco, Barker, Kerrie et al. (12 more authors) (2019) Work and home productivity of people living with HIV in Zambia and South Africa. Aids. pp. 1063-1071. ISSN 1473-5571
Thornton, Michael Alan orcid.org/0000-0002-4470-809X (2019) Exact Discrete Representations of Linear Continuous Time Models with Mixed Frequency Data. Journal of Time Series Analysis. 5. pp. 951-967. ISSN 1467-9892
Thornton, Michael Alan orcid.org/0000-0002-4470-809X (2013) Removing seasonality under a changing regime : Filtering new car sales. Computational Statistics & Data Analysis. 4. pp. 4-14. ISSN 0167-9473
Thornton, Michael Alan orcid.org/0000-0002-4470-809X and Chambers, Marcus (2017) Continuous Time ARMA Processes : Discrete Time Representation and Likelihood Evaluation. Journal of Economic Dynamics and Control. pp. 48-65. ISSN 0165-1889
Thornton, Michael Alan orcid.org/0000-0002-4470-809X and Chambers, Marcus (2016) The Exact Discretisation of CARMA Models with Applications in Finance. Journal of empirical finance. 739 - 761. ISSN 0927-5398
Tominey, Emma orcid.org/0000-0002-0287-3935 (2016) Female Labour Supply and Household Employment Shocks : Maternity Leave as an Insurance Mechanism. European economic review. pp. 256-271. ISSN 0014-2921
Tominey, Emma orcid.org/0000-0002-0287-3935, Carneiro, Pedro, Lopez Garcia, Italo et al. (1 more author) (2020) Intergenerational Mobility and the Timing of Parental Income. Journal of Political Economy. ISSN 1537-534X (In Press)
Tominey, Emma orcid.org/0000-0002-0287-3935, Nicoletti, Cheti orcid.org/0000-0002-7237-2597 and Salvanes, Kjell (2016) The Family Peer Effect on Mothers' Labour Supply. Discussion Paper.
Tonei, Valentina (2018) Mother's health after baby's birth : Does the delivery method matter? Journal of Health Economics. pp. 182-196. ISSN 0167-6296
Tu, J., Chen, X. B., Ruan, X. Z. et al. (10 more authors) (2020) Direct observation of hidden spin polarization in 2H-MoTe2. Physical Review B. 035102. ISSN 2469-9969
Tunstall, Rebecca Katharine orcid.org/0000-0001-8095-8080 and Swaffield, Joanna Kate orcid.org/0000-0002-9157-6691 (2016) Identifying sustainable pathways out of in-work poverty: Follow-up qualitative interviews report on the 'Working Life in York' survey : ESRC Knowledge Exchange Scheme ES/L002086/1. Research Report. Commissioned Report for JRF/JRHT, CYC & YSJU
Underwood, C. I.D., Gan, G., He, Z. H. et al. (4 more authors) (2020) Characterization of flowing liquid films as a regenerating plasma mirror for high repetition-rate laser contrast enhancement. Laser and Particle Beams. pp. 128-134. ISSN 0263-0346
Von Hinke Kessler Scholder, Stephanie Marees Ljuba orcid.org/0000-0002-8272-076X, Leckie, George and Nicoletti, Cheti orcid.org/0000-0002-7237-2597 (2019) The use of instrumental variables in peer effects models. Oxford Bulletin of Economics and Statistics. ISSN 0305-9049
van Doorslaer, E. and Jones, A.M. (2002) Inequalities in Self-reported Health: Validation of a New Approach to Measurement. Journal of Health Economics, 22 (1). pp. 61-87. ISSN 0167-6296
van Schalkwyk, May, Bourek, Aleš, Kringos, Dionne et al. (4 more authors) (2020) The best person (or machine) for the job : rethinking task shifting in healthcare. Health policy (Amsterdam, Netherlands). pp. 1379-1386. ISSN 1872-6054
van den Brink, Rene, Funaki, Yukihiko and Ju, Yuan orcid.org/0000-0002-7541-9856 (2013) Reconciling marginalism with egalitarianism : consistency, monotonicity, and implementation of egalitarian Shapley values. Social Choice and Welfare. pp. 693-714. ISSN 0176-1714
Wagner, Peter A. (2018) Who goes first? Strategic delay under information asymmetry. Theoretical Economics. pp. 341-375. ISSN 1555-7561
Walker, Simon Mark orcid.org/0000-0002-5750-3691, Sculpher, Mark orcid.org/0000-0003-3746-9913, Claxton, Karl Philip orcid.org/0000-0003-2002-4694 et al. (1 more author) (2012) Coverage with evidence development, only in research, risk sharing or patient access scheme? : A framework for coverage decisions. Report. CHE Research Paper, 77 . Centre for Health Economics, University of York , York UK.
Wamers, F., Marganiec, J., Aksouh, F. et al. (56 more authors) (2018) Comparison of electromagnetic and nuclear dissociation of 17Ne. Physical Review C. 34612. ISSN 1089-490X
Wang, F., Sun, B. H., Liu, Z. et al. (42 more authors) (2017) Spectroscopic factor and proton formation probability for the d3/2 proton emitter 151mLu. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics. pp. 83-87. ISSN 0370-2693
Wang, Weining (2020) LASSO-Driven Inference in Time and Space. Annals of Statistics. ISSN 0090-5364 (In Press)
Watson, N. and Woods, B. (2005) The origins and early developments of special/adaptive wheelchair seating. Social History of Medicine, 18 ( 3). pp. 459-474. ISSN 0951-631X
Wei, X. (2018) Beam-target helicity asymmetry e in K0 Λ and K0 Σ0 photoproduction on the neutron. Physical Review C. 045205. ISSN 1089-490X
Wei, X. (2018) Measurement of unpolarized and polarized cross sections for deeply virtual Compton scattering on the proton at Jefferson Laboratory with CLAS. Physical Review C. 045203. ISSN 1089-490X
White, Jonathan, Gutacker, Nils orcid.org/0000-0002-2833-0621, Jacobs, Rowena orcid.org/0000-0001-5225-6321 et al. (1 more author) (2014) Hospital admissions for severe mental illness in England: : Changes in equity of utilisation at the small area level between 2006 and 2010. Social Science & Medicine. pp. 243-251. ISSN 1873-5347
Wickens, M.R. and Motto, R. (2001) Estimating shocks and impulse response functions. Journal of Applied Econometrics, 16 (3). pp. 371-387. ISSN 0883-7252
Wilkinson, Thomas, Sculpher, Mark J orcid.org/0000-0003-3746-9913, Claxton, Karl orcid.org/0000-0003-2002-4694 et al. (8 more authors) (2016) The International Decision Support Initiative Reference Case for Economic Evaluation : An Aid to Thought. Value in Health. pp. 921-928. ISSN 1524-4733
Woods, Beth orcid.org/0000-0002-7669-9415, Revill, Paul orcid.org/0000-0001-8632-0600, Sculpher, Mark orcid.org/0000-0003-3746-9913 et al. (1 more author) (2016) Country-Level Cost-Effectiveness Thresholds : Initial Estimates and the Need for Further Research. Value in Health. pp. 929-935. ISSN 1524-4733
Woods, Beth orcid.org/0000-0002-7669-9415, Revill, Paul orcid.org/0000-0001-8632-0600, Sculpher, Mark orcid.org/0000-0003-3746-9913 et al. (1 more author) (2015) Country-level cost-effectiveness thresholds : initial estimates and the need for further research. Discussion Paper. CHE Research Paper . Centre for Health Economics, University of York , York, UK.
Woods, Beth orcid.org/0000-0002-7669-9415, Rothery, Claire orcid.org/0000-0002-7759-4084, Anderson, Sarah-Jane et al. (4 more authors) (2018) Appraising the value of evidence generation activities : an HIV modelling study. BMJ Global health. e000488. ISSN 2059-7908
Woods, Beth orcid.org/0000-0002-7669-9415, Schmitt, Laetitia orcid.org/0000-0003-1052-488X, Rothery, Claire orcid.org/0000-0002-7759-4084 et al. (4 more authors) (2020) Practical metrics for establishing the health benefits of research to support research prioritisation. BMJ Global health. ISSN 2059-7908
Woods, Bethan Sarah orcid.org/0000-0002-7669-9415, Revill, Paul orcid.org/0000-0001-8632-0600, Sculpher, Mark John orcid.org/0000-0003-3746-9913 et al. (1 more author) (2016) Country-level cost-effectiveness thresholds : initial estimates and the need for further research. Value in Health. 929–935. ISSN 1524-4733
Yamagata, T. (2006) The small sample performance of the Wald test in the sample selection model under the multicollinearity problem. Economics Letters, 93 (1). pp. 75-81. ISSN 0165-1765
Yamagata, T. and Orme, C.D. (2005) On testing sample selection bias under the multicollinearity problem. Econometric Reviews, 24 (4). pp. 467-481. ISSN 0747-4938
Yamagata, Takashi orcid.org/0000-0001-5949-8833, Norkute, Milda, Sarafidis, Vasilis et al. (1 more author) (2020) Instrumental Variable Estimation of Dynamic Linear Panel Data Models with Defactored Regressors and a Multifactor Error Structure. Journal of Econometrics. pp. 1-31. ISSN 0304-4076
Yang, Zaifu orcid.org/0000-0002-3265-7109 and Fujishige, Satoru (2017) On a spontaneous decentralized market process. Journal of mechanism and institution design. pp. 1-37. ISSN 2399-844X
Zauner, K.G. (2002) The existence of equilibrium in games with randomly perturbed payoffs and applications to experimental economics. Mathematical Social Sciences, 44 (1). pp. 115-120. ISSN 0165-4896
Zhou, Wenting and Hey, John Denis orcid.org/0000-0001-6692-1484 (2017) Context Matters. Experimental Economics. ISSN 1386-4157
Published Article: Bumpy Declining Light Curves Are Common in Hydrogen-poor Superluminous Supernovae
Recent work has revealed that the light curves of hydrogen-poor (Type I) superluminous supernovae (SLSNe), thought to be powered by magnetar central engines, do not always follow the smooth decline predicted by a simple magnetar spin-down model. Here we present the first systematic study of the prevalence and properties of "bumps" in the post-peak light curves of 34 SLSNe. We find that the majority (44%–76%) of events cannot be explained by a smooth magnetar model alone. We do not find any difference in supernova properties between events with and without bumps. By fitting a simple Gaussian model to the light-curve residuals, we characterize each bump with an amplitude, temperature, phase, and duration. We find that most bumps correspond with an increase in the photospheric temperature of the ejecta, although we do not see drastic changes in spectroscopic features during the bump. We also find a moderate correlation (ρ ≈ 0.5; p ≈ 0.01) between the phase of the bumps and the rise time, implying that such bumps tend to happen at a certain "evolutionary phase," (3.7 ± 1.4) t_rise. Most bumps are consistent with having diffused from a central source of variable luminosity, although sources further out in the ejecta are not excluded. With this evidence, we explore whether the cause of these bumps is intrinsic to the supernova (e.g., a variable central engine) or extrinsic (e.g., circumstellar interaction). Both cases are plausible, requiring low-level variability in the magnetar input luminosity, small decreases in the ejecta opacity, or a thin circumstellar shell or disk.
Hosseinzadeh, Griffin; Berger, Edo; Metzger, Brian D.; Gomez, Sebastian; Nicholl, Matt; Blanchard, Peter
Article No. 14
DOI PREFIX: 10.3847
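The bump measurement described in the abstract above reduces to two routine steps: a least-squares fit of a Gaussian excess to the post-peak light-curve residuals, and a rank correlation between bump phase and rise time across the sample. The following minimal Python sketch illustrates both steps; the arrays, function names, and numerical values are placeholders chosen for illustration, not the paper's data or pipeline.

```python
# Minimal sketch: fit a Gaussian "bump" to post-peak light-curve residuals and
# test whether bump phase correlates with rise time (Spearman rank correlation).
# All arrays below are synthetic placeholders, not data from the paper.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

def gaussian_bump(t, amplitude, phase, duration):
    """Gaussian excess (in mag) centred at `phase` days after peak."""
    return amplitude * np.exp(-0.5 * ((t - phase) / duration) ** 2)

# Hypothetical residuals: observed magnitudes minus a smooth magnetar model.
t_days = np.linspace(10.0, 120.0, 60)
residuals = gaussian_bump(t_days, amplitude=-0.3, phase=55.0, duration=8.0)
residuals += np.random.normal(0.0, 0.05, t_days.size)

popt, _ = curve_fit(gaussian_bump, t_days, residuals, p0=[-0.2, 50.0, 10.0])
print("fitted amplitude, phase, duration:", popt)

# Hypothetical per-event summaries: bump phase vs. rise time for a small sample.
bump_phase = np.array([40.0, 55.0, 90.0, 120.0, 70.0])   # days after peak
rise_time = np.array([12.0, 16.0, 25.0, 35.0, 20.0])     # days
rho, p_value = spearmanr(bump_phase, rise_time)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```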
SN 2017gci: a nearby Type I Superluminous Supernova with a bumpy tail
https://doi.org/10.1093/mnras/staa4035
Fiore, A ; Chen, T-W ; Jerkstrand, A ; Benetti, S ; Ciolfi, R ; Inserra, C ; Cappellaro, E ; Pastorello, A ; Leloudas, G ; Schulze, S ; et al. (February 2021, Monthly Notices of the Royal Astronomical Society)
ABSTRACT We present and discuss the optical spectrophotometric observations of the nearby (z = 0.087) Type I superluminous supernova (SLSN I) SN 2017gci, whose peak K-corrected absolute magnitude reaches Mg = −21.5 mag. Its photometric and spectroscopic evolution includes features of both slow- and fast-evolving SLSNe I, thus favoring a continuum distribution between the two SLSN-I subclasses. In particular, similarly to other SLSNe I, the multiband light curves (LCs) of SN 2017gci show two re-brightenings at about 103 and 142 d after the maximum light. Interestingly, this broadly agrees with a broad emission feature emerging around 6520 Å after ∼51 d from the maximum light, which is followed by a sharp knee in the LC. If we interpret this feature as Hα, this could support the fact that the bumps are the signature of late interactions of the ejecta with (hydrogen-rich) circumstellar material. We then fitted magnetar- and CSM-interaction-powered synthetic LCs onto the bolometric one of SN 2017gci. In the magnetar case, the fit suggests a polar magnetic field Bp ≃ 6 × 10^14 G, an initial period of the magnetar Pinitial ≃ 2.8 ms, an ejecta mass $M_{\rm ejecta}\simeq 9\, \mathrm{M}_\odot$ and an ejecta opacity $\kappa \simeq 0.08\, \mathrm{cm}^{2}\, \rm{g}^{-1}$. A CSM-interaction scenario would imply a CSM mass $\simeq 5\, \mathrm{M}_\odot$ and an ejecta mass $\simeq 12\, \mathrm{M}_\odot$. Finally, the nebular spectrum at phase +187 d was modeled, deriving a mass of $\sim 10\, {\rm M}_\odot$ for the ejecta. Our models suggest that either a magnetar or CSM interaction might be the power source for SN 2017gci and that its progenitor was a massive ($40\, {\rm M}_\odot$) star.
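For readers who want to check the order of magnitude of the quoted magnetar parameters, the sketch below evaluates the standard vacuum-dipole spin-down input, L(t) = (E_p/t_p)/(1 + t/t_p)^2, at the Bp and Pinitial values quoted in the abstract. The moment of inertia and neutron-star radius are generic fiducial values, and this is only an approximation to the kind of engine term used in such fits, not the authors' code.

```python
# Order-of-magnitude sketch of a vacuum-dipole magnetar spin-down luminosity,
# evaluated at the parameters quoted for SN 2017gci (Bp ~ 6e14 G, P ~ 2.8 ms).
# The neutron-star moment of inertia and radius below are assumed fiducial values.
import numpy as np

I_NS = 1.0e45      # moment of inertia [g cm^2] (assumption)
R_NS = 1.0e6       # neutron-star radius [cm] (assumption)
C_LIGHT = 3.0e10   # speed of light [cm/s]

def magnetar_luminosity(t_sec, b_gauss, p_init_sec):
    """Spin-down power L(t) = (E_rot / t_spin) / (1 + t/t_spin)^2."""
    omega = 2.0 * np.pi / p_init_sec
    e_rot = 0.5 * I_NS * omega**2                                # initial rotational energy [erg]
    l_0 = b_gauss**2 * R_NS**6 * omega**4 / (6.0 * C_LIGHT**3)   # initial dipole power [erg/s]
    t_spin = e_rot / l_0                                         # spin-down timescale [s]
    return (e_rot / t_spin) / (1.0 + t_sec / t_spin) ** 2

epochs = np.array([10.0, 51.0, 103.0, 142.0]) * 86400.0  # days after peak, in seconds
print(magnetar_luminosity(epochs, b_gauss=6.0e14, p_init_sec=2.8e-3))
```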
High-Cadence TESS and Ground-based Data of SN 2019esa, the Less Energetic Sibling of SN 2006gy
https://doi.org/10.3847/1538-4357/ac8ea7
Andrews, Jennifer E. ; Pearson, Jeniveve ; Lundquist, M. J. ; Sand, David J. ; Jencson, Jacob E. ; Bostroem, K. Azalee ; Hosseinzadeh, Griffin ; Valenti, S. ; Smith, Nathan ; Amaro, R. C. ; et al. (October 2022, The Astrophysical Journal)
We present photometric and spectroscopic observations of the nearby (D ≈ 28 Mpc) interacting supernova (SN) 2019esa, discovered within hours of explosion and serendipitously observed by the Transiting Exoplanet Survey Satellite (TESS). Early, high-cadence light curves from both TESS and the DLT40 survey tightly constrain the time of explosion, and show a 30 day rise to maximum light followed by a near-constant linear decline in luminosity. Optical spectroscopy over the first 40 days revealed a reddened object with the narrow Balmer emission lines seen in Type IIn SNe. The slow rise to maximum in the optical light curve combined with the lack of broad Hα emission suggests the presence of very optically thick and close circumstellar material (CSM) that quickly decelerated the SN ejecta. This CSM was likely created by a massive star progenitor with an Ṁ ∼ 0.2 M☉ yr⁻¹ lost in a previous eruptive episode 3–4 yr before eruption, similar to giant eruptions of luminous blue variable stars. At late times, strong intermediate-width Ca II, Fe I, and Fe II lines are seen in the optical spectra, identical to those seen in the superluminous interacting SN 2006gy. The strong CSM interaction masks the underlying explosion mechanism in SN 2019esa, but the combination of the luminosity, the strength of the Hα lines, and the mass-loss rate of the progenitor seems to be inconsistent with a Type Ia CSM model and instead points to a core-collapse origin.
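As a quick consistency check on the numbers quoted above, a mass-loss rate of ∼0.2 M☉ yr⁻¹ sustained over the 3–4 yr preceding the explosion corresponds to roughly 0.6–0.8 M☉ of CSM. The one-line sketch below (assuming a midpoint duration of 3.5 yr, which is not a value given in the abstract) just performs that arithmetic.

```python
# Back-of-the-envelope CSM mass implied by the quoted eruptive mass loss.
# The 3.5 yr duration is an assumed midpoint of the quoted 3-4 yr interval.
mdot_msun_per_yr = 0.2
duration_yr = 3.5
print(f"CSM mass ~ {mdot_msun_per_yr * duration_yr:.1f} Msun")
```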
Luminous Supernovae: Unveiling a Population between Superluminous and Normal Core-collapse Supernovae
https://doi.org/10.3847/1538-4357/ac9842
Gomez, Sebastian ; Berger, Edo ; Nicholl, Matt ; Blanchard, Peter K. ; Hosseinzadeh, Griffin ( December 2022 , The Astrophysical Journal)
Stripped-envelope core-collapse supernovae can be divided into two broad classes: the common Type Ib/c supernovae (SNe Ib/c), powered by the radioactive decay of 56Ni, and the rare superluminous supernovae (SLSNe), most likely powered by the spin-down of a magnetar central engine. Up to now, the intermediate regime between these two populations has remained mostly unexplored. Here, we present a comprehensive study of 40 luminous supernovae (LSNe), SNe with peak magnitudes of Mr = −19 to −20 mag, bound by SLSNe on the bright end and by SNe Ib/c on the dim end. Spectroscopically, LSNe appear to form a continuum between Type Ic SNe and SLSNe. Given their intermediate nature, we model the light curves of all LSNe using a combined magnetar plus radioactive decay model and find that they are indeed intermediate, not only in terms of their peak luminosity and spectra, but also in their rise times, power sources, and physical parameters. We subclassify LSNe into distinct groups that are either as fast evolving as SNe Ib/c or as slow evolving as SLSNe, and appear to be either radioactively or magnetar powered, respectively. Our findings indicate that LSNe are powered by either an overabundant production of 56Ni or by weak magnetar engines, and may serve as the missing link between the two populations.
The Diverse Properties of Type Icn Supernovae Point to Multiple Progenitor Channels
https://doi.org/10.3847/1538-4357/ac8ff6
Pellegrino, C. ; Howell, D. A. ; Terreran, G. ; Arcavi, I. ; Bostroem, K. A. ; Brown, P. J. ; Burke, J. ; Dong, Y. ; Gilkis, A. ; Hiramatsu, D. ; et al ( October 2022 , The Astrophysical Journal)
We present a sample of Type Icn supernovae (SNe Icn), a newly discovered class of transients characterized by their interaction with H- and He-poor circumstellar material (CSM). This sample is the largest collection of SNe Icn to date and includes observations of two published objects (SN 2019hgp and SN 2021csp) and two objects not yet published in the literature (SN 2019jc and SN 2021ckj). The SNe Icn display a range of peak luminosities, rise times, and decline rates, as well as diverse late-time spectral features. To investigate their explosion and progenitor properties, we fit their bolometric light curves to a semianalytical model consisting of luminosity inputs from circumstellar interaction and radioactive decay of 56Ni. We infer low ejecta masses (≲2 M⊙) and 56Ni masses (≲0.04 M⊙) from the light curves, suggesting that normal stripped-envelope supernova (SESN) explosions within a dense CSM cannot be the underlying mechanism powering SNe Icn. Additionally, we find that an estimate of the star formation rate density at the location of SN 2019jc lies at the lower end of a distribution of SESNe, in conflict with a massive star progenitor of this object. Based on its estimated ejecta mass, 56Ni mass, and explosion site properties, we suggest a low-mass, ultra-stripped star as the progenitor of SN 2019jc. For other SNe Icn, we suggest that a Wolf–Rayet star progenitor may better explain their observed properties. This study demonstrates that multiple progenitor channels may produce SNe Icn and other interaction-powered transients.
The Zwicky Transient Facility phase I sample of hydrogen-rich superluminous supernovae without strong narrow emission lines
https://doi.org/10.1093/mnras/stac2218
Kangas, T. ; Yan, Lin ; Schulze, S. ; Fransson, C. ; Sollerman, J. ; Lunnan, R. ; Omand, C. M. B. ; Andreoni, I. ; Burruss, R. ; Chen, T-W ; et al ( August 2022 , Monthly Notices of the Royal Astronomical Society)
We present a sample of 14 hydrogen-rich superluminous supernovae (SLSNe II) from the Zwicky Transient Facility (ZTF) between 2018 and 2020. We include all classified SLSNe with peaks Mg < −20 mag with observed broad but not narrow Balmer emission, corresponding to roughly 20 per cent of all hydrogen-rich SLSNe in ZTF phase I. We examine the light curves and spectra of SLSNe II and attempt to constrain their power source using light-curve models. The brightest events are photometrically and spectroscopically similar to the prototypical SN 2008es, while others are found spectroscopically more reminiscent of non-superluminous SNe II, especially SNe II-L. 56Ni decay as the primary power source is ruled out. Light-curve models generally cannot distinguish between circumstellar interaction (CSI) and a magnetar central engine, but an excess of ultraviolet (UV) emission signifying CSI is seen in most of the SNe with UV data, at a wide range of photometric properties. Simultaneously, the broad Hα profiles of the brightest SLSNe II can be explained through electron scattering in a symmetric circumstellar medium (CSM). In other SLSNe II without narrow lines, the CSM may be confined and wholly overrun by the ejecta. CSI, possibly involving mass lost in recent eruptions, is implied to be the dominant power source in most SLSNe II, and the diversity in properties is likely the result of different mass loss histories. Based on their radiated energy, an additional power source may be required for the brightest SLSNe II, however – possibly a central engine combined with CSI.
Bumpy Declining Light Curves Are Common in Hydrogen-poor Superluminous Supernovae
https://doi.org/10.3847/1538-4357/ac67dd
Hosseinzadeh, Griffin ; Berger, Edo ; Metzger, Brian D. ; Gomez, Sebastian ; Nicholl, Matt ; Blanchard, Peter ( 2022 , The Astrophysical Journal, 933 (1))
Recent work has revealed that the light curves of hydrogen-poor (Type I) superluminous supernovae (SLSNe), thought to be powered by magnetar central engines, do not always follow the smooth decline predicted by a simple magnetar spin-down model. Here we present the first systematic study of the prevalence and properties of "bumps" in the post-peak light curves of 34 SLSNe. We find that the majority (44%–76%) of events cannot be explained by a smooth magnetar model alone. We do not find any difference in supernova properties between events with and without bumps. By fitting a simple Gaussian model to the light-curve residuals, we characterize each bump with an amplitude, temperature, phase, and duration. We find that most bumps correspond with an increase in the photospheric temperature of the ejecta, although we do not see drastic changes in spectroscopic features during the bump. We also find a moderate correlation (ρ ≈ 0.5; p ≈ 0.01) between the phase of the bumps and the rise time, implying that such bumps tend to happen at a certain "evolutionary phase," (3.7 ± 1.4) trise. Most bumps are consistent with having diffused from a central source of variable luminosity, although sources further out in the ejecta are not excluded. With this evidence, we explore whether the cause of these bumps is intrinsic to the supernova (e.g., a variable central engine) or extrinsic (e.g., circumstellar interaction). Both cases are plausible, requiring low-level variability in the magnetar input luminosity, small decreases in the ejecta opacity, or a thin circumstellar shell or disk.
|
CommonCrawl
|
Why are imaginary numbers important
To understand and appreciate the need for imaginary numbers, it helps to start from the very basics of math: the numbers themselves. Even though this is a high-school-level topic, it is worth reviewing. Imaginary numbers are important because they go beyond a limitation on what we long took mathematics to be able to handle: the square root of a negative number. First, let us review the different sets of number types, where each fails, and where the set of complex numbers sits in that list. (Complex numbers include imaginary numbers as a special case.) Each set in the list can be seen to arise from the inadequacies of the one before it.
Part 1: What Are Imaginary Numbers? Why Do We Need Them
The square root of a negative number is not a real number. The unit for such quantities is denoted i, for "imaginary", because no such number exists within the ordinary concept of number. Complex numbers (which include both real and imaginary numbers) extend the real number system to accommodate them.
Learn about the imaginary unit i, about the imaginary numbers, and about square roots of negative numbers.
An imaginary number is a real number that has been multiplied by i, the imaginary unit, which is equivalent to the square root of -1. This means that the square of an imaginary number is a negative real number. Since it is otherwise impossible to obtain a negative number as a square through standard multiplication, imaginary numbers become necessary to express the square roots of negative numbers.
An imaginary number is a number that, when squared, has a negative result. Essentially, an imaginary number is the square root of a negative number and does not have a tangible value on its own. Imaginary numbers, like real numbers, are simply ideas without any physical existence. Both are very useful (though with real numbers it is much more obvious why that is so), and it is hard to see how one would convincingly argue that real numbers actually exist while imaginary numbers do not.
Why are negative numbers important? I can't hold -4 fingers up and I can't put -2 cookies in a jar, so in a sense they don't exist. The same goes for imaginary numbers: in a lot of contexts they aren't relevant, but if you are trying to do something very specific, like compute the impedance of an AC circuit, they are invaluable. Here comes an important point: imaginary numbers are called imaginary, but their existence is not imaginary; they really do exist. The name arose because it was left to people's imagination to conceive of a solution to the square root of a negative number and to use the letter i for it. The word "imaginary" can be misleading in that it implies imaginary numbers don't exist or aren't important. A better way to think about it is that normal (real) numbers can directly refer to actual quantities; for example, the number 3 can refer to 3 loaves of bread. Imaginary numbers run contrary to common sense on a basic level, but once you accept them as a system they make sense: remember that nothing makes 2+2=4 except the fact that we say so. Imaginary numbers are just another class of number, exactly like the other new classes of numbers introduced before them. Let's see why and how imaginary numbers came about.
Imaginary numbers are not imaginary. They were once thought to be impossible, and so they were called "imaginary" (to make fun of them). But then people researched them more and discovered they were actually useful and important, because they filled a gap in mathematics; the "imaginary" name simply stuck. That is also how the name "real numbers" came about (real as opposed to imaginary). Argand was a pioneer in relating imaginary numbers to geometry via the concept of complex numbers. Complex numbers are numbers with a real part and an imaginary part. For instance, 4 + 2i is a complex number with a real part equal to 4 and an imaginary part equal to 2i. It turns out that both real numbers and imaginary numbers are special cases of complex numbers.
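To make the "real part / imaginary part" vocabulary concrete, here is a minimal sketch using Python's built-in complex type (Python writes the imaginary unit with a j suffix rather than i); the values are just the 4 + 2i example from above plus an arbitrary second number.

    # Python uses j for the imaginary unit: 1j * 1j == -1
    z = 4 + 2j            # the complex number 4 + 2i from the example above

    print(1j ** 2)        # (-1+0j): squaring the imaginary unit gives -1
    print(z.real)         # 4.0 -> the real part
    print(z.imag)         # 2.0 -> the coefficient of the imaginary part
    print(z + (1 - 5j))   # (5-3j): complex numbers add component-wise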
Why are imaginary numbers important? Study
Imaginary Numbers Definition. Imaginary numbers are numbers that give a negative result when squared. In other words, an imaginary number is defined as the square root of a negative number, which has no definite value among the real numbers.
Complex numbers are broadly used in physics, normally as a calculation tool that makes things easier thanks to Euler's formula. In the end, either only the real component has physical meaning, or the two parts (real and imaginary) are treated separately as real quantities.
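As a small, hedged illustration of why Euler's formula makes complex numbers such a convenient calculation tool: cmath.exp(1j*theta) packages a cosine and a sine into one object, and the real part is typically what carries the physical meaning. This is a generic sketch with an arbitrary angle, not tied to any particular physics problem.

    import cmath
    import math

    theta = 0.75  # any angle in radians

    # Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta)
    z = cmath.exp(1j * theta)

    print(z.real, math.cos(theta))  # the real parts agree
    print(z.imag, math.sin(theta))  # the imaginary parts agree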
An international research team has proven that the imaginary part of quantum mechanics can be observed in action in the real world. For almost a century, physicists have been intrigued by the fundamental question: why are complex numbers, that is, numbers containing a component with the imaginary number i, so important in quantum mechanics?
The Story of Imaginary Numbers and why they are not imaginary: 1. Prelude, and a Challenge; 2. The Oath, the Sign, and the Discovery; 3. The Dilemma; 4. Bombelli's Breakthrough; 5. Geometric Progress: John Wallis; 6. Completing the Jigsaw. Girolamo Cardano (Pavia, Bologna).
Why are imaginary numbers important for electrical engineering? Without them, it is impossible to analyze AC electric circuits. We tell you why. https://www.iklea..
These are numbers we can't picture, numbers that normal human consciousness cannot comprehend. And when we add the imaginary numbers to the real numbers, we have the complex number system. The first number system in which it's possible to explain satisfactorily the crystal formation of ice. It's like a vast, open landscape. The horizons
Using complex numbers means you are trying to describe a value in a different domain and in complex number systems, the Imaginary number doesn't mean that the value of capacitor is imaginary. The imaginary number helps to signify the vector rotation when voltage is applied across it or when current flows through it
Why are imaginary numbers important? - Quora
In a general manner, imaginary numbers are used where, beyond the real numbers, a category is needed that is "less real". One important example is quantum observables, which are treated with complex (and thus imaginary) numbers. Only the measurable values of an observable, called eigenvalues, are real numbers, and conversely only real numbers appear as measurement outcomes.
Imaginary numbers are extremely useful in all areas of physics, because you can use the natural exponent and imaginary powers (exp(a+bi)) to represent sinusoids that grow or decay over time. That's everything from a weight on a spring to current through a wire (which is my field) in a single general representation
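A short sketch of the point above, using arbitrary example values for the rates: the single expression exp((a + bi)t) contains both the decay (or growth) rate a and the oscillation frequency b, and its real part is the damped sinusoid one would otherwise write as e^(at) cos(bt).

    import cmath
    import math

    a, b = -0.5, 3.0           # decay rate and angular frequency (example values)
    s = complex(a, b)          # s = a + bi

    for t in (0.0, 1.0, 2.0):
        z = cmath.exp(s * t)                         # e^(st)
        damped = math.exp(a * t) * math.cos(b * t)   # e^(at) * cos(bt)
        print(t, z.real, damped)                     # the two columns match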
-- but having to learn about imaginary numbers -- numbers that didn't even exist -- numbers that were made up by some guy back in the 16th century -- I just didn't understand that at all -- it may be important for some fields like quantum mechanics and electrical engineering, but for a farmer or for a wildlife biologist -- which is what I.
An imaginary number is the square root of a negative number. That is why they are called imaginary, which is what René Descartes called them, because he thought such a number could not exist. In this paper, I will discuss how complex numbers and imaginary numbers were discovered, the interesting mathematics of complex numbers, and how they are used in other fields.
A short and sweet video explaining why Complex Numbers are so interesting!
-The word imaginary was meant to be downgrading for these numbers, because at a certain point in time they were deemed useless. -Even after imaginary numbers were concluded as important and useful, mathematicians decided that it would be best to keep that name
Why is the unit important when you are dealing with imaginary numbers? When dealing with the theoretical concept of imaginary numbers, the term "unit" describes the basic element i, much as the numeral one is the first and most basic of the real numbers.
Why are imaginary numbers important? - Answers
But imaginary numbers, and the complex numbers they help define, turn out to be incredibly useful. They have a far-reaching impact in physics, engineering, number theory and geometry. And they are the first step into a world of strange number systems, some of which are being proposed as models of the mysterious relationships underlying our physical world.
The imaginary unit, i, is defined as i = sqrt(-1). Obviously there is no real number that is the square root of a negative number. However, in doing math problems you inevitably need to take square roots of negative numbers, which you cannot do within the real numbers alone.
Uh oh. This question makes most people cringe the first time they see it. You want the square root of a number less than zero? That's absurd.
Complex numbers are made up of two components, real and imaginary. They have the form a + bi, where the numbers a and b are real. The bi component is responsible for the specific features of complex numbers. The key role here is played by the imaginary number i, i.e. the square root of -1
Either part can be termed real or imaginary. Since complex numbers provide a ready means of describing numbers with two dimensions, they come in handy for describing the wave function; but there is no reason why an alternative mathematical structure that provides two dimensions to represent a number could not be employed to describe the wave function instead. Imaginary numbers live in a world of their own; the numbers are counted on an entirely different axis that is devised solely for them. However, imaginary numbers have acquired a somewhat nefarious reputation, considering that their discovery compounded the difficulty of problems that math was already replete with (as if the numbers we already had weren't enough). Representation of waves via complex numbers: in mathematics, the symbol $i$ is conventionally used to represent the square root of minus one, that is, the solution of $i^2 = -1$ (Riley 1974). A real number, $x$ say, can take any value in a continuum of values lying between $-\infty$ and $+\infty$. On the other hand, an imaginary number takes the general form $iy$, where $y$ is a real number. On the name of imaginary numbers: the name gives the impression that these numbers exist only in the imagination and do not actually exist. However, an imaginary number corresponds to a rotational transformation, and it exists in exactly the sense that rotations exist.
An imaginary number is a complex number that can be written as a real number multiplied by the imaginary unit i, which is defined by its property i² = −1. The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. By definition, zero is considered to be both real and imaginary. The term was originally coined in the 17th century by René Descartes as a derogatory label, since such numbers were regarded as fictitious or useless. For real numbers, a horizontal number line is used, with numbers increasing in value as you move to the right. John Wallis added a vertical line to represent the imaginary numbers. This is called the complex number plane, where the x-axis is named the real axis and the y-axis is named the imaginary axis. By the way, imaginary and complex numbers really became important with Cardano's formula for solving cubic equations: for some cubic equations having only real solutions, Cardano's formula requires working with complex numbers along the way (the imaginary parts cancel out at the end). I've been reading up on imaginary numbers and how they work. I even have a rudimentary understanding of how the complex plane works. But one thing that has eluded me is just why they were invented in the first place.
Imaginary numbers belong to the complex number system. All numbers of the form a + bi, where a and b are real numbers, are part of the complex number system. Imaginary Numbers at Work: imaginary numbers are used in a variety of fields and have many uses; without them you wouldn't be able to listen to the radio or talk on a mobile phone. If you become a mathematician, engineer or physicist, imaginary numbers become very important. Imaginary numbers are mainly used in mathematical modeling; they can affect values in models where the state of the model at a particular moment in time is affected by its state at an earlier time. Imaginary Number: the number i is the basis of any imaginary number, which, in general, is any real number times i. For example, 5i is an imaginary number and is equivalent to √-1 × 5. The real numbers are those numbers that can be expressed as terminating, repeating, or nonrepeating decimals; they include positive and negative numbers.
i is the most fundamental of the imaginary numbers, so called because, among the real numbers, no number can be multiplied by itself to produce a negative number (and, therefore, negative numbers have no real square roots). A complex number is a number comprising a real and an imaginary part. It can be written in the form a+ib, where a and b are real numbers, and i is the standard imaginary unit with the property i² = -1. The complex numbers contain the ordinary real numbers, but extend them by adding extra numbers and correspondingly expanding the understanding of addition and multiplication.
Imaginary numbers are not called real numbers - but this meaning of real is the mathematic definition, pertaining to cauchy sequences... and does not at all refer to the generic meaning of real, which you seem to be implying. we can easily see why imaginary and complex numbers are so very important to things like electronics and. An imaginary number is a complex number that can be defined as a real number multiplied by the imaginary number i. i is defined as the square root of negative one. If you've taken basic math, you know that the square of every real number is a positive number, and that the square root of every real number is, therefore, a positive number.
A complex number is a mathematical tool, and it is widely used in mechanics, electrodynamics, optics and other related fields of physics to provide an elegant formulation of the corresponding. The Nikola Tesla Numbers. This idea of a slowly increasing snowball is vitally important when we explore the 3 6 9 numerology. It relates to an idea known as vortex mathematics. In this form of math's, the number 1, 2, 4, 5, 7 and 8 are the numbers that represent the physical world while the 3 6 9 numerology belongs t We use imaginary numbers to represent time delays in circuits. That's all. There is a long story about what imaginary numbers mean in pure math and why they are called imaginary Why Imaginary Numbers are as Real as Real Numbers Imaginary numbers are a fine and wonderful refuge of the divine spirit almost an amphibian between being and non-being. ~ Gottfried Leibniz. For any positive number, one can find it's square root. For example, 2 2 = 4 and so the square root of 4 is 2. But what about negative numbers
Intro to the imaginary numbers (article) Khan Academy
Complex numbers are the sums and differences of real and imaginary numbers. In order to work with complex numbers, we must first understand imaginary numbers. Real numbers are the numbers that we are most familiar with, such as 1, 0.67, -5, etc. Every positive real number has two square roots, one positive and one negative.
Well, i^2 is a negative number, and in an ordered field every square is non-negative; therefore i cannot be a positive number, and the field of complex numbers cannot be ordered. Also, a negative number squared is a positive number, and i^2 is a negative number, so i is not a negative number either. Imaginary numbers are neither positive nor negative.
ed through imaginary polynomial equations by engineers
The imaginary part is an imaginary number, that is, the square root of a negative number. To keep things standardized, the imaginary part is usually reduced to an ordinary number multiplied by the square root of negative one. For example, a complex number such as $1 + \sqrt{-1.04}$ is first reduced to $1 + \sqrt{1.04}\,\sqrt{-1}$, and then written with $i$ in place of $\sqrt{-1}$.
Complex numbers can also have zero real or imaginary parts, such as Z = 6 + j0 or Z = 0 + j4. In this case the points are plotted directly onto the real or imaginary axis. Also, the angle of a complex number can be calculated using simple trigonometry on right-angled triangles, or measured anti-clockwise around the Argand diagram starting from the positive real axis.
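As a concrete version of the "simple trigonometry" just mentioned, here is a small sketch using the Z = 6 + j0 and Z = 0 + j4 examples above plus one generic point (an assumed 3 + j4); the magnitude is the hypotenuse and the angle is measured anti-clockwise from the positive real axis.

    import math

    def polar_form(re, im):
        magnitude = math.hypot(re, im)                 # sqrt(re^2 + im^2), the hypotenuse
        angle_deg = math.degrees(math.atan2(im, re))   # angle from the positive real axis
        return magnitude, angle_deg

    print(polar_form(6, 0))   # (6.0, 0.0)     -> lies on the positive real axis
    print(polar_form(0, 4))   # (4.0, 90.0)    -> lies on the positive imaginary axis
    print(polar_form(3, 4))   # (5.0, 53.13...) -> a general point in the plane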
What Are Real-Life Uses of Imaginary Numbers
The imaginary number i: $i \equiv \sqrt{-1}$, so $i^2 = -1$. (1) Every imaginary number is expressed as a real-valued multiple of i: $\sqrt{-9} = \sqrt{9}\,\sqrt{-1} = \sqrt{9}\,i = 3i$. A complex number $z = a + bi$ (2), where a and b are real, is the sum of a real and an imaginary number. The real part of z, Re{z} = a, is a real number. The imaginary part of z, Im{z} = b, is also a real number. Imaginary Numbers displays the fruits of this cross-fertilization by collecting the best creative writing about mathematical topics from the past hundred years. In this engaging anthology, we can explore the many ways writers have played with mathematical ideas. Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. It is known, for example, that I. Newton did not include imaginary quantities within the notion of number, and that G. Leibniz said that complex numbers are a fine and wonderful refuge of the divine spirit, as if it were an amphibian of existence and non-existence.
What Are Imaginary Numbers? Live Science
The course starts with the basics. You will get an in-depth understanding of the fundamentals of complex numbers. Fundamentals are the most important part of building expert knowledge and skills. You will learn everything from what the number axis is all the way up to the different representation forms of complex numbers and conversions between them. This video is going to be a quick review of complex numbers. If you studied complex numbers in the past, this will knock off some of the rust, and it will help explain why we use complex numbers in electrical engineering. If complex numbers are new to you, I highly recommend you go look at the Khan Academy videos that Sal has done on complex numbers (they are in the Algebra 2 section), so let's get started.
Q: What the heck are imaginary numbers, how are they
So the complex conjugate is just the act of multiplying the imaginary part of a complex number by negative one. Why is this important? The complex conjugate has a nice property that if you multiply a complex number by its conjugate, the imaginary parts will cancel out 1 Complex Numbers in Quantum Mechanics Complex numbers and variables can be useful in classical physics. However, they are not essential. To emphasize this, recall that forces, positions, momenta, potentials, electric and magnetic fields are all real quantities, and the equations describing them One: It's an important placeholder digit in our number system. Two: It's a useful number in its own right. The first uses of zero in human history can be traced back to around 5,000 years ago. Here giving a longer than normal introduction to imaginary and complex numbers because as a student I couldn't see why some lecturers and professors wanted to use complex numbers instead of tangents, sines and cosines which I knew from school and also because the explanations in Maths books are not always very helpful
An important concept with complex or imaginary numbers is the complex conjugate. If y = (2 + 4i), then y* = (2 - 4i) is the complex conjugate. The multiplication of y by y* yields a real rather than imaginary number: (4 - 8i + 8i + 16) = 20. This operation will be performed throughout the text to generate real from imaginary numbers. A complex number is expressed in standard form when written a + bi, where a is the real part and bi is the imaginary part. For example, $5+2i$ is a complex number. So, too, is $3+4i\sqrt{3}$. Imaginary numbers are distinguished from real numbers because a squared imaginary number produces a negative real number.
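A quick check of the conjugate calculation above (with the arithmetic written out as 4 + 16 = 20); Python's conjugate() method flips the sign of the imaginary part, and the product y·y* is real.

    y = 2 + 4j
    y_conj = y.conjugate()       # (2-4j)

    product = y * y_conj
    print(y_conj)                # (2-4j)
    print(product)               # (20+0j): the imaginary parts cancel
    print(abs(y) ** 2)           # ~20: the same value as |y|^2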
Why Do Electrical Engineers Use Imaginary Numbers
There are some important things to note about this. Firstly, technically speaking, all real and imaginary numbers are complex: for real numbers, b happens to be 0, and for imaginary numbers, a is 0. However, not all complex numbers are real or imaginary; 3+6i sits on neither axis but in the interior of the plane.
A number whose square is less than or equal to zero is termed an imaginary number. For example, √-5 is an imaginary number and its square is -5. An imaginary number can be written as a real number multiplied by the imaginary unit. In the complex number a+bi, i is called the imaginary unit and a is the real part of the expression.
Imaginary Numbers were originally laughed at, and so got the name imaginary. That's all of the most important number types in mathematics. From the Counting Numbers through to the Complex Numbers. There are other types of numbers, because mathematics is a broad subject, but that should do you for now
ELI5: Why imaginary numbers are important? : explainlikeimfive
The complex number, $6i$, only contains an imaginary part, so its position is expected to lie along the imaginary axis - in fact, $6$ units on the positive imaginary axis. From inspection alone, we can see that the distance of $6i$ from the origin is $6$
which has the form [2.53]. We call the set of numbers of the form [2.53] the complex numbers and denote this set ℂ. Given a complex number z = a + bi, we call the real number a the real part of z. We call the real number b the imaginary part of z. This motivates the Re and Im functions that map a complex number z = a + bi to its real and imaginary parts a and b, respectively.
At first, imaginary numbers were considered useless (an imaginary number is a number that, when squared, gives a negative result; e.g. (5i)² = -25). But by the Enlightenment era, thinkers began to take them seriously.
Why are imaginary numbers called imaginary? If they
Math has many important constants that give the discipline structure, like pi and i, the imaginary number equal to the square root of -1. But one constant that is equally important is perhaps less well known. Complex numbers with zero real part, of the form $bi$ or just a multiple of $i$, are called imaginary numbers. This somewhat problematic nomenclature is perhaps one reason why they can be viewed as mystical by some people. Having defined $i$, we can state that $\sqrt{-a} = i\sqrt{a}$, where $a$ is any positive real number. A complex number $z = a + bi$ can be seen to have a real part, $a$, and an imaginary part, $bi$. We write $a = \operatorname{Re}(z)$ and $b = \operatorname{Im}(z)$.
Imaginary Numbers - Maths Career
The most important thing to remember about imaginary numbers is the pattern of exponents: the powers of i cycle through i, -1, -i, 1 and then repeat every four powers. For the most part, dealing with imaginary numbers is pretty similar to dealing with polynomials (though do not mistake i for just another variable; it hates that). Just think of complex numbers as polynomials with a new set of rules to follow.
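To spell out that pattern of exponents, here is a tiny sketch showing that the powers of i repeat with period four.

    # Powers of i cycle with period 4: 1, i, -1, -i, 1, i, ...
    for n in range(8):
        print(n, 1j ** n)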
As we've discussed, every complex number is made by adding a real number to an imaginary number: a + b•i, where a is the real part and b is the imaginary part. We can plot a complex number on the complex plane: the position along the x-axis of this plane represents the real part of the complex number, and the position along the y-axis represents the imaginary part.
A common visualisation of complex numbers is the use of Argand Diagrams. To construct this, picture a Cartesian grid with the x-axis being real numbers and the y-axis being imaginary numbers
Remarks on the History of Complex Numbers. The study of numbers usually comes in succession. Children start with the counting numbers, move to the negative integers and fractions, dig into the decimal fractions, and sometimes continue to the real numbers. The complex numbers come last, if at all. Every expansion of the notion of number has a valid practical explanation.
a and b are real while i is imaginary. A new paper says that rather than complex numbers being a purely mathematical invention to facilitate calculations for physicists, quantum states and complex numbers are instead ironically and inextricably linked. They even can show it experimentally
this is just part of the code that matters and needs to be fixed. I don't know what i'm doing wrong here. all the variables are simple numbers, it's true that one is needed for that other, but there shouldn't be anything wrong with that. the answer for which I'm getting imaginary numbers is supposed to be part of a loop, so it's important I get it right. please ignore the variables that are.
There are many important numbers that have made this world what it currently is. But the following 10 are the most important numbers, or constants, in the entire world. Imaginary Unit: i. First of all, complex numbers (and imaginary numbers) do appear in real-world phenomena; they have lots of practical applications. But now, on to the philosophical portion of the problem. Numbers are abstractions. They don't exist in the same way that, say, physical objects exist. You can give me two apples, but you can't just give me a two Numbers are really *two dimensional*; and just like the integer 1 is the unit distance on the axis of the real numbers, i is the unit distance on the axis of the imaginary numbers
What use are imaginary numbers in the real world? Do they
Let's look at some of the important features of complex numbers using Python's cmath module functions. Phase of a complex number: the phase of a complex number is the angle between the positive real axis and the vector representing the complex number. Complex numbers are an important and useful extension of the real numbers. In particular, they can be thought of as an extension which allows us to take the square root of a negative number. We define the imaginary unit as the number which squares to \( -1 \): \[ i^2 = -1. \] Complex numbers and imaginary numbers surround us all the time and, as any mathematician will tell you, they are no less real (or less important) than numbers like 1, 2 and 3. Uses of complex numbers in our daily life almost always go unnoticed, but they surround you whenever you turn on a light, pick up a guitar or even watch a tree swaying in the wind.
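A minimal sketch of the cmath functions referred to above; the functions are Python's standard library, and the specific number 1 + 1j is just an example.

    import cmath

    z = 1 + 1j

    print(cmath.phase(z))        # 0.7853981... (pi/4): angle from the positive real axis
    print(abs(z))                # 1.4142135... : modulus of z
    print(cmath.polar(z))        # (modulus, phase) as a tuple
    print(cmath.rect(abs(z), cmath.phase(z)))  # back to (1+1j), up to rounding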
A Mathematical History: Imaginary Numbers
Imaginary numbers are sometimes jokingly "defined" as numbers so big you can't even think about how big they are, but that is parody rather than mathematics. A genuine question that often comes up is this: why do we need to represent complex numbers with an imaginary y-axis if we can simply represent them as pairs (x, y)? One answer is that multiplication by i has a geometric meaning: it is an anti-clockwise rotation by a quarter-circle about the origin, so multiplying 1 by i gives i.
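A small sketch of that rotation fact: multiplying by i sends the point (x, y) to (-y, x), an anti-clockwise quarter turn about the origin. The points used are arbitrary examples.

    points = [1 + 0j, 0 + 1j, 3 + 4j]

    for z in points:
        w = z * 1j                        # multiply by i
        print((z.real, z.imag), "->", (w.real, w.imag))
    # (1,0) -> (0,1), (0,1) -> (-1,0), (3,4) -> (-4,3): each point rotated 90 degrees anti-clockwise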
Imaginary Numbers - MAT
In India, negative numbers did not appear until about 620 CE in the work of Brahmagupta (598 - 670) who used the ideas of 'fortunes' and 'debts' for positive and negative.By this time a system based on place-value was established in India, with zero being used in the Indian number sytem. Brahmagupta used a special sign for negatives and stated the rules for dealing with positive and negative. These are much better described by complex numbers. Rather than the circuit element's state having to be described by two different real numbers V and I, it can be described by a single complex number z = V + i I. Similarly, inductance and capacitance can be thought of as the real and imaginary parts of another single complex number w = C + i L. The terms 'real' and 'imaginary' are not meant to refer to the legitimacy of the numbers involved. If you like, you could just as easily refer to the real numbers as 'happy numbers' and the imaginary numbers as 'super happy numbers.' The important point is, that the names we give to these numbers are just labels Its simple, apart from the magnitudes of current and voltage in AC circuits the relative phase of current and voltage is also very important, therefore the impedance is given in complex form In statistics, the average and the median are two different representations of the center of a data set and can often give two very different stories about the data, especially when the data set contains outliers. The mean, also referred to by statisticians as the average, is the most common statistic used to measure the [
This article provides insight into the importance of complex conjugates in electrical engineering. Complex Numbers. Complex numbers are numbers which are represented in the form $$ z = x + i y $$, where x and y are the real and imaginary parts (respectively) and $$ i =\sqrt{-1} $$.. Complex numbers can also be represented in polar form, which has a magnitude term and an angular term As a conclusion, the most important thing you should know is that they are composed of a real part and an imaginary part, that the number i is equal to the root of -1 and therefore, the number i squared is equal to -1 and how to obtain the conjugate of a complex number Or, one can expand this number system to include additional concepts, such as negative numbers, fractions, even the so-called imaginary numbers (which are not really imaginary at all). Each of these concepts exists provided we look for it in the context of a large enough number system
Imaginary numbers: a brief history to these complex
Plus, many consider the math involved to be a lot more elegant using complex arithmetic: for strictly real input, the cosine correlation (even component) of an FFT result is put in the real component, and the sine correlation (odd component) of the FFT result is put in the imaginary component of a complex number. Other authors have already discussed how important complex numbers can be for object rotation. Here I am adding a couple of other examples where we can see the use of complex and imaginary numbers. A really cool application of complex numbers is fractals, which are used in procedural generation.
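A hedged NumPy sketch of that FFT point: for a purely real input, the cosine-like (even) content of a bin shows up in the real part of the FFT coefficient and the sine-like (odd) content in the imaginary part (up to a sign convention and a factor of N/2 that depends on normalization). The signal parameters are arbitrary example values.

    import numpy as np

    n = 64
    t = np.arange(n)
    k = 5  # analysis bin

    cosine = np.cos(2 * np.pi * k * t / n)
    sine = np.sin(2 * np.pi * k * t / n)

    print(np.round(np.fft.fft(cosine)[k], 6))  # ~ (32+0j): purely real coefficient
    print(np.round(np.fft.fft(sine)[k], 6))    # ~ -32j   : purely imaginary coefficient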
Imaginary Numbers (Definition, Rules, Operations, & Examples)
Complex numbers include everyday real numbers like 3, -8, and 7/13, but in addition, we have to include all of the imaginary numbers, like i, 3i, and -πi, as well as combinations of real and imaginary.You see, complex numbers are what you get when you mix real and imaginary numbers together — a very complicated relationship indeed You may be wondering why it is even necessary to raise the numbers to the second power if we're just going to solve for the square root, anyway. As noted earlier, raising the real and imaginary numbers to the second power pulls them out of the complex number by eliminating all imaginary parts (j squared is -1)
Numbers for the greater part of history have been viewed alternately as concepts and as quantities. Now, this raises problems about many types of numbers, which include negative numbers and imaginary numbers, because these cannot be viewed as quantities although there are compelling theories that can treat them logically as concepts Simplifying Radicals with Imaginary Numbers Maze Activity Students will simplify 13 radicals which include negative coefficients, negative radicands and imaginary numbers. This resource allows for student self-checking and works well as independent work, homework assignment, or even to leave wit Why quadratic equation may have complex solutions? Anywhere you read you will learn that when you calculate the discriminant (the expression inside the square root) and if it is greater than 0 then you have two solutions, when it is equal to 0 than you have two equal solutions, but if it is less than 0 then there are no solutions among real numbers So you see, in its basic form, you need two numbers to represent the ratio of two sine waves: amplitude and phase. A complex number is a mathematical convenience to carry over those two values, although not directly amplitude and phase as such, but the x-y components of the related phasor or vector
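A short sketch of the discriminant cases described above, using an assumed example polynomial x² + 2x + 5 whose discriminant is negative; cmath.sqrt returns the complex square root, so the quadratic formula still goes through.

    import cmath

    a, b, c = 1, 2, 5                     # x^2 + 2x + 5: discriminant = 4 - 20 = -16
    disc = b * b - 4 * a * c

    root1 = (-b + cmath.sqrt(disc)) / (2 * a)
    root2 = (-b - cmath.sqrt(disc)) / (2 * a)

    print(disc)          # -16: negative, so no real solutions
    print(root1, root2)  # (-1+2j) (-1-2j): a conjugate pair of complex solutions
    print(a * root1 ** 2 + b * root1 + c)  # ~0j: check that it really solves the equation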
|
CommonCrawl
|
Decoding reactive structures in dilute alloy catalysts
Nicholas Marcella, Jin Soo Lim, Anna M. Płonka, George Yan, Cameron J. Owen, Jessi E. S. van der Hoeven, Alexandre C. Foucher, Hio Tong Ngan, Steven B. Torrisi, Nebojsa S. Marinkovic, Eric A. Stach, Jason F. Weaver, Joanna Aizenberg, Philippe Sautet, Boris Kozinsky & Anatoly I. Frenkel
Nature Communications volume 13, Article number: 832 (2022)
Catalytic mechanisms
Heterogeneous catalysis
Porous materials
Rational catalyst design is crucial toward achieving more energy-efficient and sustainable catalytic processes. Understanding and modeling catalytic reaction pathways and kinetics require atomic level knowledge of the active sites. These structures often change dynamically during reactions and are difficult to decipher. A prototypical example is the hydrogen-deuterium exchange reaction catalyzed by dilute Pd-in-Au alloy nanoparticles. From a combination of catalytic activity measurements, machine learning-enabled spectroscopic analysis, and first-principles based kinetic modeling, we demonstrate that the active species are surface Pd ensembles containing only a few (from 1 to 3) Pd atoms. These species simultaneously explain the observed X-ray spectra and equate the experimental and theoretical values of the apparent activation energy. Remarkably, we find that the catalytic activity can be tuned on demand by controlling the size of the Pd ensembles through catalyst pretreatment. Our data-driven multimodal approach enables decoding of reactive structures in complex and dynamic alloy catalysts.
In recent decades, worldwide energy demand has risen dramatically1, and nearly 30% of industrial energy use is tied to chemical production2 that relies heavily on heterogeneous catalysis. To increase the efficiency and sustainability of catalytic processes, multidisciplinary analytical strategies are required beyond the conventional trial-and-error approach to design more efficient catalysts3. Specifically, rational catalyst design aims to establish and apply fundamental structure-activity relationships at the nanoscale. Two central requirements are (i) the identification of the active sites and (ii) a theoretical framework for the simulation and prediction of reaction mechanisms and kinetics. Combined experimental characterizations and first-principles modeling have been employed to investigate predefined active site structures4,5,6. However, accurate identification of the active site remains a significant challenge due to the dynamic nature of surface composition and morphology evolving in time7,8,9.
The class of dilute alloy catalysts has tremendous industrial importance because they can enhance the activity and selectivity of chemical reactions while minimizing the use of precious metals10,11,12,13,14. Major advancements are sought in understanding the nature of their catalytically active sites14. In bimetallic nanoparticle systems, intra- and inter-particle heterogeneities give rise to a diverse population of sites and particle morphologies15. Furthermore, the active site can dynamically restructure under reaction conditions16,17,18,19. Although density functional theory (DFT) is widely used to investigate the thermodynamics of surface segregation in alloy systems20,21,22,23,24, the large computational cost precludes its use for direct sampling of the large underlying configurational space as well as the long timescale. As such, the relationship between the active site structure and the corresponding reaction pathways and kinetics remains hidden.
A useful tool for unlocking this relationship is an operando experiment, in which changes in structural and chemical features are measured simultaneously with the catalytic activity25,26,27. However, the small amount of active component in dilute alloys, compounded with low weight loading of the particles, require characterization tools with sufficient surface and chemical sensitivities. Among many scattering and absorption-based techniques, in situ X-ray absorption spectroscopy (XAS), in the form of extended X-ray absorption fine structure (EXAFS) and X-ray absorption near edge structure (XANES), has proven to be well-suited for studying dilute alloy catalysts28,29,30. In particular, the recently developed neural network-assisted XANES inversion method (NN-XANES) has enabled the extraction of local coordination numbers directly from XANES31. This technique makes the analysis of active sites in dilute alloys possible under reaction conditions, far exceeding the capabilities of conventional EXAFS fitting28. To the best of our knowledge, NN-XANES, while demonstrated to be useful for investigating the structure and local composition of bimetallic nanoparticles28,32, has not yet been applied to decoding active site structure and activity mechanisms in any functional nanomaterial systems.
Obtaining coordination numbers and other structural descriptors of a particular catalytic component is necessary but not sufficient to conclusively determine the active site geometry. To do so, two challenges must be resolved. First, one needs to decouple XAS signals originating from spectator species that contribute to the spectrum but not to the reaction. Second, given a set of possible active site geometries, more than one candidate structure may agree with the structural descriptors obtained from NN-XANES. We overcome these challenges by combining catalytic activity measurements, machine-learning enabled spectroscopic analysis, and first-principles-based kinetic modeling (Fig. 1). Through joint experimental and theoretical methods, we determine (i) the local structural descriptors of the catalyst and (ii) the apparent kinetic parameters of the reaction network. Both the structural and kinetic criteria must be satisfied to ascertain the dominant active site species. This multimodal approach is demonstrated for the prototypical HD exchange reaction on Pd8Au92/RCT-SiO2 (RCT = raspberry--colloid--templated), previously shown to have excellent catalytic performance and stability for this reaction33, CO oxidation29, and selective hydrogenation30,34. Remarkably, we find that the activity of HD exchange is determined by the size of surface Pd ensembles at the order of only a few atoms, which can be controlled directly through different catalyst treatments.
Fig. 1: Multipronged strategy for decoding reactive structures.
Measured X-ray absorption spectra are correlated directly to catalytic activity via theoretical modeling. (ML-XAS descriptors panel) Local structural descriptors, such as coordination number (C) and interatomic distance (R) between elements a (green) and b (yellow), are extracted from X-ray absorption spectra through machine learning (ML-XAS), via the Neural Network, to constrain the set of candidate active sites. (Computed activation energy panel) The corresponding reaction pathways, from states A to B through the transition state ‡, are modeled using first-principles-based microkinetic modeling. (Measured activation energy panel) Apparent activation energies from computations are compared to experimental measurements, where colors represent different reaction starting conditions of the hydrogen (H) deuterium (D) exchange reaction (H2 + D2 ⇋ 2HD), to further narrow down the dominant active sites. (Active structure panel) Three possible nanoparticle motifs are shown where the active sites are green and vary between 1 and 3 atoms.
Treatment alters the catalyst activity
The dilute alloy Pd8Au92/RCT-SiO2 catalyst was synthesized according to the established procedures33 (Methods). The catalyst is composed of 4.6 ± 0.6 nm (mean size) bimetallic nanoparticles with 8.3 at% Pd, embedded in RCT-SiO2. From electron microscopy and energy dispersive spectroscopy (EDS) measurements (Methods), the majority of Pd is homogeneously mixed with Au prior to any treatments and catalysis (Supplementary Fig. 1).
Previous experiments have established that treating the catalyst with O2 and H2 alters the activity30, implying treatment-induced changes of the catalyst surface structure. The exact kinetic mechanism of surface rearrangement in dilute alloys is still unknown, but it has long been established that oxidative environments result in a thermodynamic preference for Pd to reside on the surface of PdAu alloy nanoparticle and model surfaces35,36. To investigate the result of the treatment-induced restructuring of the surface, the HD exchange reaction was examined after three sequential treatments (A, B, C) of our sample (Fig. 2a; Methods). State 1 (S1) was obtained after O2 treatment at 500 °C for 30 min (treatment A), followed by State 2 (S2) after H2 treatment at 150 °C for 30 min (treatment B), followed by State 3 (S3) after another H2 treatment over 210 min with step-wise heating to 150 °C (treatment C). Throughout both heating and cooling phases, a steady-state HD formation rate was established at each temperature (Supplementary Fig. 3a–c).
Fig. 2: Pd8Au92/RCT-SiO2 is subjected to HD exchange reaction after three sequential pretreatments.
a Initial state S0 undergoes O2 treatment to starting state S1 (treatment A), followed by H2 treatments to starting states S2 and then S3 (treatments B and C respectively). The hydrogen regime is shaded blue, and the oxygen regime is shaded yellow. b Activity versus temperature for three HD exchange experiments, labeled HD-1 to HD-3, corresponding to three different starting states (S1-S3). The solid and open circles indicate the heating and cooling regimes, respectively. c Normalized XANES collected for the different samples (pink = S0, blue = S1, yellow = S2, orange = S3, and dashed black = Pd foil). The inset shows a shift in the spectral center of mass "c". d Coordination numbers (blue = Pd–Pd bonds and red = Pd–Au bonds) from NN-XANES are consistent with Pd atoms having more Pd neighbors after O2 treatment (yellow shade) and less Pd neighbors after H2 treatment (blue shade). Atomic Pd ensembles in (c) and (d) are depicted with bird's-eye-view schematics (yellow = Au; green = Pd).
The three starting states exhibited three distinct HD exchange reaction kinetics (numbered correspondingly) (Fig. 2b). The apparent activation energies (Ea) and axis intercepts (A) were obtained from the Arrhenius analysis (Supplementary Fig. 3d, e). The HD-1 heating step exhibited the largest Ea = 0.67 ± 0.05 eV and A = 30. Starting with the HD-1 cooling phase, the values decreased rapidly, settling to Ea = 0.34 ± 0.05 eV and A = 18 by HD-2. The pronounced hysteresis between HD-1 heating and cooling suggests a change occurred during the reaction in addition to that induced by the treatment. Overall, the changes in the kinetic parameters suggest that the number and the nature of the active species present in S1 have changed in S2 and S3. This hypothesis is further supported and elucidated by spectroscopy and theoretical modeling.
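To make the Arrhenius analysis explicit: the steady-state rate at each temperature is assumed to follow k = A·exp(−Ea/kB·T), so plotting ln k against 1/T gives a line whose slope is −Ea/kB and whose intercept is the quantity reported as A. The sketch below is a generic illustration with made-up rate data, not the measured HD-1 to HD-3 values.

    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    # Hypothetical steady-state HD formation rates (arbitrary units) vs temperature (K)
    temperature = np.array([340.0, 360.0, 380.0, 400.0, 420.0])
    rate = np.array([2.1e-3, 6.0e-3, 1.5e-2, 3.4e-2, 7.1e-2])

    # Linear fit of ln(rate) against 1/T
    slope, intercept = np.polyfit(1.0 / temperature, np.log(rate), 1)

    e_a = -slope * K_B  # apparent activation energy in eV
    print(f"Ea = {e_a:.2f} eV, intercept A = {intercept:.1f}")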
Treatment restructures the catalyst
Pd K-edge XAFS spectra were collected (Methods) for the catalyst in the initial state (S0), as well as S1 to S3 after sequential treatments A through C, respectively (Fig. 2a). Treatment-induced changes in the XANES were observed, quantified by the shift in the spectral center of mass (Fig. 2c, inset). After treatment A with O2 (S0 to S1), the center of mass shifts toward the limit defined by the bulk Pd reference, and, after treatment B with H2 (S1 to S2)—away from the bulk Pd past S0, remaining at the same position after further treatment C with H2 (S2 to S3). These shifts are direct evidence of the structural changes induced by the catalyst treatments. The shift away from the bulk Pd upon H2 treatment has been attributed to an increase in the number of Pd-Au neighbors resulting from Pd dissolution into the Au host28.
To understand these structural changes more precisely, quantitative analysis was performed using partial coordination numbers (CNs) (\(C_{\mathrm{Pd-Pd}}\) and \(C_{\mathrm{Pd-Au}}\)) obtained from the NN-XANES inversion method6 (Fig. 2d; Methods) and conventional EXAFS fitting (Supplementary Fig. 4; Methods). Because of the relatively low sensitivity of EXAFS to the Pd–Pd contribution in the dilute Pd limit, only a weak Pd–Pd contribution was detected in S0 and S1 and none in S2 and S3 (Supplementary Table 1). The NN-XANES analysis reveals the same trend in the CNs as the EXAFS fitting but with lower relative uncertainties, allowing us to detect Pd–Pd bonding in all states. In all samples, \(C_{\mathrm{Pd-Pd}}\) < 1.0, as expected for dilute alloys in which Pd thermodynamically prefers to be fully coordinated with Au30.
From S0 to S1, treatment A with O2 slightly increases \(C_{\mathrm{Pd-Pd}}\) and decreases \(C_{\mathrm{Pd-Au}}\), consistent with mild segregation of Pd to the surface. In contrast, from S1 to S2, treatment B with H2 decreases \(C_{\mathrm{Pd-Pd}}\) from 0.73 to 0.25 and increases \(C_{\mathrm{Pd-Au}}\) from 10.57 to 11.38, consistent with Pd dissolution into the subsurface. Further evidence of dissolution is seen in the increase of the total Pd CN from 11.30 to 11.63. Full 12-fold coordination would correspond to 100% of the Pd residing in the subsurface and being inaccessible for heterogeneous catalysis; a total Pd CN below 12 therefore indicates the presence of some undercoordinated Pd on the surface. From S2 to S3, the additional treatment C with H2 exhibits the same trend to a much lesser extent, i.e., \(C_{\mathrm{Pd-Pd}}\) decreases and the total Pd CN increases.
The extent of Pd mixing with Au was analyzed by comparing the ratio of coordination numbers \(C_{\mathrm{Pd-Pd}}:C_{\mathrm{Pd-Au}}\) to the ratio of compositions \(x_{\mathrm{Pd}}:x_{\mathrm{Au}}\) (0.083:0.917). In all states, \(C_{\mathrm{Pd-Pd}}:C_{\mathrm{Pd-Au}}\) is less than \(x_{\mathrm{Pd}}:x_{\mathrm{Au}}\), which corresponds to a tendency for Pd to disperse in Au, consistent with the EDS observations (Supplementary Fig. 1b, c). This dispersion tendency decreases after O2 treatment and increases after H2 treatment. These results are also consistent with DFT-computed segregation free energies (\(G_{\mathrm{seg}}\)) of representative surface Pd ensembles in the presence of chemisorbed oxygen and hydrogen, referenced to gas-phase molecules and subsurface Pd monomers (Supplementary Fig. 5; Sec. 1, Supplementary Methods). Globally, Pd prefers to remain dispersed in the subsurface (\(G_{\mathrm{seg}}=0\)), highlighting the metastable nature of the surface ensembles. The next most favorable structure is the extended surface Pd oxide model7 (\(G_{\mathrm{seg}}=0.17\ \mathrm{eV/Pd}\)), considered as a limiting case of larger ensembles, but it is excluded from our reactivity modeling because it is inconsistent with the EELS map (Supplementary Fig. 1d) as well as the observed \(C_{\mathrm{Pd-Pd}} < 1.0\) across all samples. The precise mechanistic and kinetic relevance of such oxide phases is beyond the scope of this work and would require more advanced atomistic modeling approaches.
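As a quick numerical check of the dispersion argument, the two ratios can be compared directly; the snippet below uses the S2 coordination numbers and the ICP-MS composition quoted in the text.

```python
C_PdPd, C_PdAu = 0.25, 11.38      # state S2, from NN-XANES
x_Pd, x_Au = 0.083, 0.917         # bulk composition from ICP-MS

cn_ratio = C_PdPd / C_PdAu        # ~0.022
comp_ratio = x_Pd / x_Au          # ~0.091
print(cn_ratio < comp_ratio)      # True -> Pd tends to disperse in Au
```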
The relative stability of the ensembles is inverted upon chemisorption. Across all cases, O2 provides the largest thermodynamic driving force to form larger metastable ensembles (\(G_{\mathrm{seg}}=0{-}0.45\ \mathrm{eV/Pd}\); Pd5O4 to trimer). Once O2 is removed after the initial pretreatment, these larger ensembles are expected to lower the surface free energy by fragmenting into smaller ensembles. Under H2, the driving force for segregation is less pronounced (\(G_{\mathrm{seg}}=0.35{-}0.4\ \mathrm{eV/Pd}\); Pd monolayer to trimer). Moreover, H2 chemisorption remains largely endergonic on Pd ensembles under the pretreatment conditions (\(G_{\mathrm{ads}}=0{-}0.5\ \mathrm{eV}\); trimer to monomer), which would favor H2 desorption and partial dissolution of Pd into even smaller ensembles.
Modeling resolves Pd ensemble reactivity
To ascertain the atomic-level structure of the active sites, HD exchange reaction pathways were characterized via transition state modeling using DFT calculations (Supplementary Figs. 6–8; Methods). On the basis of the quantitative CN analysis that established dilute alloy motifs, several types of model surfaces with close-packed facets were considered. These structures enable systematic investigation of the effect of (i) the atomic arrangement of Pd in the active site (ensemble effect) and (ii) the local coordination environment of the active site (facet effect). The zero-point-corrected energy barriers associated with H2/D2 dissociative adsorption and HD recombinative desorption are shown in Fig. 3. Spillover and migration of atomic H/D are relatively facile and of secondary importance in determining the overall catalytic kinetics (Sec. 2, Supplementary Methods). Moreover, migration into the subsurface37,38 was not considered in our dilute systems, where Pd largely remains dispersed as isolated atoms in the interior of Au. A recent study of Pd/Ag(111) showed that subsurface H can become metastable only in the presence of locally extended Pd under high H2 pressures39.
Fig. 3: DFT modeling establishes the Sabatier optimum with dilute Pd ensembles.
Three close-packed facets are considered: (111) for terrace; (211) and (331) for step edges. Five model surfaces are considered: pure Au, Pd monomer, dimer, trimer, and Pd monolayer. As indicated by the plotted energy barriers, there is a trade-off between H2/D2 dissociative adsorption (filled) and HD recombinative desorption (hatched) with increasing surface Pd content. Horizontal lines indicate experimentally measured apparent activation energies with uncertainties over shaded regions: the values for sample S1 (O2-treated) and S2/S3 (H2-treated) agree well with those computed for the Pd trimer and monomer/dimer, respectively. All DFT-computed values have an uncertainty of 0.05 eV.
The Sabatier optimum is clearly demonstrated by the dilute Pd ensembles, bounded by the chemisorption-limited Au surface on the left and the desorption-limited Pd monolayer on the right. As the surface Pd content increases, chemisorption becomes more facile, with barriers decreasing from ~0.3 to ~0.1 eV across Pd monomer to trimer, finally becoming barrierless on the Pd monolayer, which exhibits pure Pd-like behavior (Supplementary Fig. 8c). At the same time, desorption becomes more difficult, with barriers increasing from ~0.3 to ~0.6 eV across Pd monomer to trimer, finally exceeding 1.0 eV on the Pd monolayer. This trade-off is driven by the ensemble effect, with minimal influence of the facet effect, except in the case of pure Au. Moreover, there is good agreement between the computed energy barriers and the experimentally measured apparent activation energies (Fig. 3; Supplementary Fig. 3d). Specifically, the measured apparent activation energies of 0.29–0.34 eV for S2 and S3 (H2-treated) agree well with the computed energy barriers of ~0.3–0.4 eV for the Pd monomer and dimer, whereas the measured value of 0.67 eV for S1 (O2-treated) matches the computed value of ~0.65 eV for the Pd trimer. This trend is consistent with a previous study33 and points to Pd trimers and monomers/dimers as compelling active-site candidates after O2 and H2 treatment, respectively.
To bridge theory and experiment, we performed microkinetic simulations using DFT-derived kinetic parameters as inputs (Methods; Sec. 3-4, Supplementary Methods). This method circumvents the previously employed assumption of a single rate-limiting step33. Pd dimers have the highest activity for HD exchange, with a low apparent barrier of 0.22 eV at 50 °C (Fig. 4b). Although Pd monomers have a similarly low apparent barrier of 0.25 eV, their activity is lower by ~2 orders of magnitude due to the greater required loss of entropy of gas-phase H2/D233 (Fig. 4a). Owing to more difficult desorption, Pd trimers exhibit an intermediate level of activity, with a higher apparent activation energy of 0.52 eV at 50 °C (Fig. 4c). Although the degree of rate control analysis shows that there is no single rate-limiting transition state in any of the three cases, the transition states of D2 dissociation and HD recombination control the rate of H/D exchange over Pd monomers/dimers (Supplementary Fig. 9a–f), and desorption of the HD molecule controls the rate of H/D exchange over Pd trimers (Supplementary Fig. 9g–i), consistent with the DFT energetics of Fig. 3. These observations reinforce the catalytic predominance of Pd trimers in O2-treated samples, in contrast to Pd monomers/dimers in H2-treated samples.
Fig. 4: First-principles-based microkinetic modeling bridges theory and experiment.
a–c Computed activity, apparent activation energy (\(E_{\mathrm{a}}\)), and most abundant intermediates (insets; superscript = H-Pd coordination number) of the HD exchange reaction on dilute Pd ensembles. The activity increases in the order of Pd monomers, trimers, and dimers. At 50 °C, trimers have a higher \(E_{\mathrm{a}}\) (0.52 eV) compared to monomers and dimers (0.20–0.25 eV). d Pd speciation (at.%) within the surface (blue) and subsurface/bulk (light and dark orange) responsible for the observed coordination numbers in the initial state (S0), and after pretreatments A (S1), B (S2), and C (S3). On the basis of the theoretical modeling, the surfaces of S0/S1 are constrained to trimers, and those of S2 and S3 to either monomers or dimers. a–d Pd atoms are green and Au atoms are yellow.
Treatment controls the Pd distribution
To quantitatively analyze the distribution of Pd ensembles in terms of the structural descriptors obtained from NN-XANES, we parametrized the partial CNs in terms of the distribution of Pd in Au, i.e., the number of Pd monomers, dimers, and trimers on a model catalyst surface and in the interior (Methods). Figure 4d summarizes the distribution of the Pd ensembles responsible for the observed CNs in samples S0-S3 using a representative icosahedral Au nanoparticle model of size 4.6 nm (in agreement with the TEM measurements). The distributions were obtained with constraints on the type of surface species present, as inferred from theoretical modeling. Samples S0 and S1 (calcined in air and O2-treated, respectively) are characterized by a surface consisting of Pd trimers and an interior of monomers and dimers. In contrast, the H2-treated samples S2 and S3 are characterized by a surface consisting of a small number of Pd dimers and/or monomers with an interior dominated by monomers. The surface Pd content decreases from S2 to S3, consistent with H2-induced dissolution of Pd. This numerical analysis confirms that the dilute Pd ensembles satisfy both the kinetic and structural criteria of the active site.
A multipronged strategy has been developed to resolve the active site structure of a dilute bimetallic catalyst at the atomic level, a central requirement for advancing rational catalyst design. By combining catalysis, machine learning-enabled spectroscopic analysis, and first-principles-based kinetic modeling, we demonstrate the effect of catalyst treatment on its nanostructure, active site distribution, and reactivity toward the prototypical HD exchange reaction. A dilute Pd-in-Au alloy supported on raspberry-colloid-templated SiO2 was chosen for its excellent catalytic performance and stability29,30,33,34. Upon H2 treatment, the activity and the apparent activation energy decreased significantly. These observations are attributed to treatment-induced catalyst restructuring, quantitatively analyzed in terms of the coordination numbers extracted from neural network-assisted inversion of the X-ray absorption spectra28. The majority of Pd remained dispersed inside the Au host, with a small amount of Pd segregating toward the surface upon O2 treatment and dissolving into the bulk upon H2 treatment.
On the basis of this motif, theoretical modeling of the reaction network on several model surfaces has established dilute Pd ensembles as the catalytically predominant active sites. These ensembles numerically correspond to the observed coordination numbers, thereby satisfying both the structural and kinetic criteria of the active site. Remarkably, the reactivity is tuned by modulating the active site on the order of only a few atoms (n = 1–3) through catalyst treatment. Our multidisciplinary approach considerably narrows down the large configurational space to enable precise identification of the active site and can be applied to more complex reactions in related dilute alloy systems, such as selective hydrogenation of alkynes34,40 and CO oxidation29.
Catalyst synthesis
For the RCT (raspberry-colloid-templated) synthesis, we refer to the procedure reported by van der Hoeven et al.33. The gold nanoparticles were prepared using a procedure described by Piella et al.41 on a 450 mL scale at 343 K. The reaction mixture contained 0.3 mL 2.5 mM tannic acid, 3.0 mL 150 mM K2CO3, and 3.0 mL 25 mM HAuCl4 in H2O. The raspberry colloids were prepared by attaching the gold nanoparticles to the sacrificial polystyrene (PS) colloids (dPS = 393 nm). To 150 mL AuNPs, 1.5 mL aqueous PVP solution (0.1 g PVP per mL H2O) and 12 mL thiol-functionalized PS colloids (5.0 wt% in water) were added. After washing three times with MQ H2O, the colloids were redispersed in 12 mL MilliQ H2O. The Pd growth on the AuNPs attached to polystyrene colloids was performed at low pH to ensure sufficiently slow reaction rates and selective growth on the AuNPs42. To 12 mL raspberry colloid dispersion (5.0 wt% PS in water), 150 mL MQ H2O, 1.5 mL 0.1 M HCl, 270 µL 10 mM Na2PdCl4, and 270 µL 40 mM ascorbic acid were added to obtain the Pd8Au92 NPs. The raspberry colloids were washed twice, redispersed in 12 mL MQ H2O, and dried at 65 °C in air. Next, the colloidal crystal was infiltrated with a pre-hydrolyzed TEOS solution (33 vol% of a 0.10 M HCl in H2O solution, 33 vol% ethanol, 33 vol% TEOS). Finally, the samples were calcined to remove the PS colloids by heating in static air from room temperature to 773 K at 1.9 K/min and holding at 773 K for 2 h. Inductively coupled plasma mass spectrometry (ICP-MS, Agilent Technologies 7700x) was used for compositional analysis (metal composition and metal weight loading). The measured composition is 8.3 at% Pd at a total metal loading of 4.4 wt%.
HD exchange experiments
Catalysis experiments were performed using the same flow cell as the Synchrotron in situ XAS experiments to directly correlate the structural changes observed in XAS with the activity of the sample toward HD exchange. A total of three HD exchange experiments were performed for the Pd8Au92 sample after three different sequential treatments (A, B, C) (Fig. 2a). Treatment A consisted of 30 min heating at 500 °C in 20% O2 atmosphere (balance Ar); treatment B consisted of 30 min heating at 150 °C in 25% H2 atmosphere (balance Ar); and treatment C consisted of the same H2 treatment for 210 min, with temperatures maintained at 100, 120, 140, 120, 100 °C in sequence for 30 min at each temperature to allow full equilibration of the structural changes induced by the treatment. The temperature was increased and decreased at a rate of 10 °C/min.
The three sequential treatments resulted in three distinct starting states: State 1 (S1), State 2 (S2), and State 3 (S3) after treatments A, B, and C, respectively. In HD reaction 1 (HD-1) with starting state S1, HD exchange was monitored during the heating (HD-1 heating) and cooling (HD-1 cooling) stages (Fig. 2b). The same procedure was followed for HD reactions 2 and 3 (HD-2 and HD-3, respectively).
Each treatment resulted in different HD exchange activities, so the sample amount was varied to keep the conversion well below 50% (the maximum conversion, corresponding to a statistical 1H2:1D2:1HD mixture) over the entire temperature range, so as to accurately measure the catalytic performance (Supplementary Fig. 3e). The undiluted sample was loaded into a quartz capillary with an internal diameter of 1 mm. The ends of the sample bed were blocked with quartz wool to prevent powder displacement in the gas flow. The reactions were performed in a gas mixture of 12.5% H2, 12.5% D2, 72% Ar, and 3% N2, with a total flow of 15 mL/min. The temperature was varied between 30 °C and 150 °C in 10–20 °C steps and held at each step for 5–30 min to achieve equilibrium (Supplementary Fig. 3a–c). The reaction products were measured with an online mass spectrometer (RGA, Hiden Analytical).
The MS signals of H2 and D2 changed upon consumption, forming the basis for extracting the HD formation rate (Fig. 2b) and the apparent activation energy from the Arrhenius analysis (Supplementary Figs. 2, 3). The baseline signal, indicating the sensitivity of the MS toward H2/D2/HD, was obtained from the bypass. The conversion was calculated from both the H2 and D2 signals, from which the average activity was extracted. Activity values in the range of 1–20% from each temperature step were used for the Arrhenius analysis.
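A minimal sketch of how conversion could be extracted from the reactant MS signals relative to the bypass baseline is shown below; the signal values and names are hypothetical and do not reproduce any instrument-specific calibration.

```python
import numpy as np

def conversion(signal, baseline):
    """Fractional consumption of a reactant relative to the bypass baseline."""
    return 1.0 - np.asarray(signal) / baseline

# Hypothetical MS intensities (arbitrary units)
I_H2_bypass, I_D2_bypass = 1.00, 1.00
I_H2, I_D2 = 0.92, 0.90                 # during reaction at one temperature step

X_H2 = conversion(I_H2, I_H2_bypass)
X_D2 = conversion(I_D2, I_D2_bypass)
X_avg = 0.5 * (X_H2 + X_D2)             # average conversion used for the activity
print(f"average conversion = {X_avg:.2%}")
```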
Transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDS)
TEM was performed with a JEOL NEOARM operating at 200 kV. All images are high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) data. The condenser lens aperture diameter was 40 μm, the probe current was 150 pA, and the camera length was 4 cm. EDS was performed with two detectors provided by JEOL Ltd., and the maps were obtained with DigitalMicrograph, software developed by Gatan Inc.
In situ X-ray absorption fine structure spectroscopy (XAS)
XAS experiments at the Pd K-edge were carried out at the QAS beamline of the NSLSII. Ten milligrams of sample was loaded into a borosilicate capillary. The sample treatments were identical to those performed for activity measurements: treatment A was performed with O2 (500 °C, 20% O2 balance He, 30 min, cooled down in O2 to room temperature for data collection); treatment B with H2 (150 °C, 25% H2 balance N2, 30 min, cooled down in H2 to room temperature for data collection); and treatment C with H2 and slow step-wise heating up to 140 °C (25% H2 balance N2, 30 min, cooled down in H2 to room temperature for data collection). The entire experiment, including treatments and data collection steps, was performed with 15 scc gas flow. Data were collected at room temperature after each treatment. Each step was equilibrated for at least 30 min.
Extended X-ray absorption fine structure (EXAFS) analysis
The analysis of the EXAFS data was performed with the IFEFFIT package (Supplementary Fig. 4)43. The S02 amplitude reduction factor was obtained from the fit of a Pd foil previously measured at the same beamline; the resulting value of 0.78 was used in all subsequent fits of the S0-S3 spectra. For the fitting of the S0 and S1 data, the two nearest-neighbor photoelectron paths Pd–Pd and Pd–Au were used. For S2 and S3, only the Pd–Au path was used, as adding the Pd–Pd path made the fit unstable. Fitting k ranges were 2–12 Å−1, 2–11 Å−1, 2–9 Å−1, and 2–10 Å−1 for S0-S3, respectively. Fitting R ranges were 1.75–3.7 Å, 1.4–3.5 Å, 1.45–3.5 Å, and 1.4–3.6 Å for S0-S3, respectively. Both the k and R fitting ranges were optimized for each dataset separately in order to minimize the standard deviations. The best-fit results are summarized in Supplementary Table 1.
Neural network-assisted X-ray near edge structure inversion analysis (NN-XANES)
The development and validation of the NN-XANES inversion method for PdAu bimetallic nanoparticles are reported in our previous work28. Before applying the trained NN object in Mathematica 12, the experimental data were preprocessed with Athena. The catalyst Pd K-edge spectra of S0-S3 were aligned, edge-step normalized, and then interpolated onto a 95-point non-uniform energy mesh spanning Emin = 24339.8 eV to Emax = 24416.6 eV, with a step size of 0.6 eV for data points near the absorption edge that gradually increased to 1.7 eV for points approaching Emax. The same procedure was applied to a Pd K-edge spectrum of Pd foil collected at the same time as the catalyst spectra. The normalized and interpolated Pd foil spectrum was subtracted from the normalized and interpolated spectra of S0-S3. Finally, the energies and absorption coefficients were normalized between 0 and 1 using the standard min-max normalization procedure:
$$Z=\frac{X-\min(X)}{\max(X)-\min(X)}.$$
Here X is the training data input, min(X) is the smallest input, max(X) is the largest input, and Z is the normalized input. To estimate relative errors in the coordination number predictions, 10 independently trained NNs were applied to the processed data28,31. The coordination numbers and their uncertainties are presented in Supplementary Table 1.
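The preprocessing and ensemble-averaging steps can be sketched as follows. The trained networks are those of ref. 28 and are not reproduced here; they are represented by placeholder callables, so only the normalization and averaging logic is illustrative.

```python
import numpy as np

def min_max_normalize(x):
    """Z = (X - min(X)) / (max(X) - min(X)), applied to energies or mu(E)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def ensemble_predict(nets, spectrum):
    """Average Pd-Pd and Pd-Au CN predictions over independently trained NNs;
    the spread across the ensemble gives the relative uncertainty."""
    preds = np.array([net(spectrum) for net in nets])   # shape (n_nets, 2)
    return preds.mean(axis=0), preds.std(axis=0)

# Placeholders: 10 'networks' standing in for the trained models of ref. 28
rng = np.random.default_rng(0)
fake_nets = [lambda s, d=d: np.array([0.25, 11.4]) + d
             for d in rng.normal(0, 0.05, size=(10, 2))]
# Placeholder input standing in for a preprocessed 95-point spectrum
spectrum = min_max_normalize(np.linspace(24339.8, 24416.6, 95))
cn_mean, cn_std = ensemble_predict(fake_nets, spectrum)
print(cn_mean, cn_std)
```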
Density functional theory (DFT)
We perform DFT calculations using plane-wave basis sets and the projector augmented-wave (PAW) method44 as implemented in the Vienna Ab Initio Simulation Package (VASP)45. The plane-wave kinetic energy cutoff is set at 450 eV. The Methfessel-Paxton smearing scheme46 is employed with a broadening value of 0.2 eV. All structures are optimized via ionic relaxation, with the total energy and forces converged to 10−5 eV and 0.02 eV/Å, respectively. Gas-phase species are optimized in a 14 × 15 × 16 Å3 cell at the Γ-point with spin polarization. Lattice constants of bulk face-centered cubic Au and Pd are optimized according to the third-order Birch-Murnaghan equation of state, using a 19 × 19 × 19 k-point grid. We maintain the lattice constant of pure Au for all Pd-doped Au systems, given the dilute concentration of Pd in our model systems. All slab models are spaced by 16 Å of vacuum along the direction normal to the surface in order to avoid spurious interactions between adjacent unit cells. We fix the bottom layer(s) at bulk positions to mimic bulk properties. Supplementary Table 2 describes the computational set-up for the slab models of close-packed surfaces considered in our study.
We employ the Perdew-Burke-Ernzerhof (PBE) parametrization47 of the generalized gradient approximation (GGA) to the exchange-correlation functional. PBE gives Au and Pd lattice constants of 4.16 and 3.94 Å, within <0.1 Å of the experimental benchmarks of 4.08 and 3.88 Å, respectively48. We also examine the dissociative adsorption energy of H2 on Pd(111), defined as the energy change from an isolated slab plus gas-phase H2 to atomic H adsorbed on the slab:
$$E_{\mathrm{ads}}=E\left[\mathrm{H_{(ads)}/Pd(111)}\right]-E\left[\mathrm{Pd(111)}\right]-\frac{1}{2}E\left[\mathrm{H_{2(g)}}\right].$$
PBE gives an H adsorption energy of −0.57 eV at the low-coverage limit, within 0.1 eV of the experimental benchmark of −0.47 eV49. Based on these observations, we conclude that PBE is an appropriate reference functional that provides a reasonable comparison with experiments for H2 chemisorption on Pd/Au systems.
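The adsorption energy defined above is a simple difference of DFT total energies; the sketch below uses hypothetical slab and gas-phase energies chosen only to illustrate the bookkeeping.

```python
# Hypothetical DFT total energies (eV) for illustration only
E_H_on_Pd111 = -229.37   # slab with one adsorbed H atom
E_Pd111      = -225.40   # clean slab
E_H2_gas     =   -6.80   # gas-phase H2

E_ads = E_H_on_Pd111 - E_Pd111 - 0.5 * E_H2_gas
print(f"E_ads = {E_ads:.2f} eV")   # ~ -0.57 eV at the low-coverage limit
```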
We perform transition state modeling using the VASP Transition State Tools (VTST). Transition state pathways are first optimized via the climbing-image nudged elastic band (CI-NEB) method50, using three intermediate images generated by linear interpolation with a spring constant of 5 eV/Å2. The total forces, defined as the sum of the spring force along the chain and the true force orthogonal to the chain, are converged to 0.05 eV/Å. Then, the image with the highest energy is fully optimized to a first-order saddle point via the dimer method51, this time converging the total energy and forces to 10−7 eV and 0.01 eV/Å, respectively. We confirm that the normal modes of all transition states contain only one imaginary frequency by calculating the Hessian matrix within the harmonic approximation, using central differences of 0.01 Å at the same level of accuracy as the dimer method.
Vibrational frequencies associated with geometrically inequivalent configurations of all isotopic species (H2, D2, HD, H, D) are obtained by calculating the Hessian matrix in a similar manner. Given the large difference in the masses of the adsorbates and the metal substrate, only the adsorbate degrees of freedom are considered in the calculations.
Zero-point energy corrections (ZPC) are applied to all dissociative adsorption and recombinative desorption processes by correcting the electronic energies of the corresponding initial, transition, and final states with their respective zero-point energies:
$$E_{\mathrm{ZPC}}=\frac{1}{2}\sum_{i}h\nu_{i}.$$
Here, \(h\) is the Planck constant and \({\nu }_{i}\) is the non-imaginary vibrational frequency of normal mode \(i\).
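A minimal helper for this zero-point correction, assuming frequencies are supplied in cm−1 (the value printed is the familiar ~0.27 eV for gas-phase H2):

```python
import numpy as np

H_PLANCK = 4.135667696e-15   # eV*s
C_LIGHT  = 2.99792458e10     # cm/s

def zero_point_energy(freqs_cm1):
    """E_ZPC = 1/2 * sum_i h*nu_i over real (non-imaginary) modes."""
    nu = np.asarray(freqs_cm1) * C_LIGHT   # Hz
    return 0.5 * np.sum(H_PLANCK * nu)     # eV

print(zero_point_energy([4400.0]))         # ~0.27 eV for gas-phase H2
```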
Conversion from the electronic energy at 0 K (\(E_{\mathrm{DFT}}\)) to the ideal-gas Gibbs free energy (\(G\)) at a given temperature \(T\) and pressure \(P\) is given by:
$$G(T,P)=E_{\mathrm{DFT}}+E_{\mathrm{ZPC}}+k_{\mathrm{B}}T+\int_{0}^{T}c_{V}\,dT-T\,S(T,P)$$
where \(c_{V}\) is the constant-volume heat capacity. See Supplementary Table 3 for the statistical thermodynamics expressions of the integrated heat capacity and the entropy. Translational and rotational degrees of freedom are included for gas-phase molecules only.
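As an illustration of the harmonic part of this conversion, the sketch below evaluates the vibrational zero-point, thermal-energy, and entropy terms for a surface species; gas-phase molecules would additionally require the translational, rotational, and \(k_{\mathrm{B}}T\) terms of Supplementary Table 3. The frequencies are hypothetical.

```python
import numpy as np

K_B = 8.617333262e-5         # eV/K
H_PLANCK = 4.135667696e-15   # eV*s
C_LIGHT  = 2.99792458e10     # cm/s

def harmonic_free_energy(E_dft, freqs_cm1, T):
    """Free energy of a surface species in the harmonic approximation:
    E_DFT + ZPE + vibrational thermal energy - T*S_vib."""
    eps = H_PLANCK * C_LIGHT * np.asarray(freqs_cm1)   # mode energies, eV
    x = eps / (K_B * T)
    zpe = 0.5 * eps.sum()
    u_vib = np.sum(eps / (np.exp(x) - 1.0))            # thermal vibrational energy
    s_vib = K_B * np.sum(x / (np.exp(x) - 1.0) - np.log(1.0 - np.exp(-x)))
    return E_dft + zpe + u_vib - T * s_vib

# H adsorbed on a Pd ensemble: three real modes (hypothetical values, cm^-1)
print(harmonic_free_energy(E_dft=-0.57, freqs_cm1=[1100, 900, 850], T=323.15))
```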
Microkinetic modeling
The microkinetic models were parameterized using the energetics of the H/D exchange reaction network, computed with DFT using the PBE functional (Methods). For a dilute Pd/Au alloy, the effect of H coverage on reaction energetics can be considered to be small because the Pd sites are far apart. On Pd(111), the H coverage also does not significantly impact the adsorption energy52. The rate constant of an elementary step is given by the Eyring equation:
$$k=\frac{k_{\mathrm{B}}T}{h}\exp\left(-\frac{\Delta G^{\circ\ddagger}}{k_{\mathrm{B}}T}\right)$$
where \(k_{\mathrm{B}}\) is the Boltzmann constant, \(h\) is the Planck constant, \(T\) is the temperature, and \(\Delta G^{\circ\ddagger}\) is the Gibbs free energy of activation at standard pressure.
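A one-line implementation of the Eyring expression, evaluated here for a 0.52 eV free-energy barrier at 50 °C purely as an example:

```python
import numpy as np

K_B = 8.617333262e-5   # eV/K
H   = 4.135667696e-15  # eV*s

def eyring_rate_constant(dG_act_eV, T):
    """k = (k_B*T/h) * exp(-dG_act / (k_B*T)), dG_act at standard pressure."""
    return (K_B * T / H) * np.exp(-dG_act_eV / (K_B * T))

print(f"{eyring_rate_constant(0.52, 323.15):.2e} 1/s")
```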
The rate constant of adsorption for species \(i\) is given by the collision theory:
$$k_{\mathrm{ads},i}=\frac{\sigma A P^{\circ}}{\sqrt{2\pi m_{i}k_{\mathrm{B}}T}}\exp\left(-\frac{\Delta E_{\mathrm{ads}}^{\ddagger}}{k_{\mathrm{B}}T}\right)$$
where \(\sigma\) is the sticking coefficient, \(A\) is the surface area of the Pd ensemble, \(P^{\circ}\) is the standard pressure (1 bar), \(m_{i}\) is the mass of the adsorbate, and \(\Delta E_{\mathrm{ads}}^{\ddagger}\) is the activation energy of adsorption. In this work, the sticking coefficient is set to 1, and the molecular adsorption process is taken to be barrierless. The surface areas are calculated using the bulk lattice constants of Pd and Au optimized with the PBE functional (3.94 and 4.16 Å, respectively). The atomic fraction of Pd in the alloy is set to 10%, in line with the concentration of 8% in the experimental samples. From Vegard's law, the area occupied by one atom on the (111) facet is \(7.41\times 10^{-20}\ \mathrm{m}^{2}\), which is then multiplied by the number of Pd atoms in each ensemble.
The corresponding rate constant for desorption is given by
$$k_{\mathrm{des},i}=\frac{k_{\mathrm{ads},i}}{K_{\mathrm{ads},i}}$$
with the equilibrium constant of adsorption, \(K_{\mathrm{ads},i}\):
$$K_{\mathrm{ads},i}=\exp\left(-\frac{\Delta G_{\mathrm{ads},i}^{\circ}}{k_{\mathrm{B}}T}\right)$$
where \(\Delta G_{\mathrm{ads},i}^{\circ}\) is the Gibbs free energy of adsorption at standard pressure. The translational, rotational, and vibrational degrees of freedom are considered for gaseous species, whereas only the vibrational degrees of freedom are included for surface intermediates and transition states. All vibrational frequencies below 100 cm−1 are rounded up to 100 cm−1.
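The adsorption and desorption rate constants can be assembled as follows; the ensemble area uses the per-atom (111) area quoted above, while the barrier and adsorption free energy are hypothetical inputs.

```python
import numpy as np

K_B_J  = 1.380649e-23       # J/K
K_B_EV = 8.617333262e-5     # eV/K
AMU    = 1.66053907e-27     # kg
P_STD  = 1.0e5              # Pa (1 bar)

def k_adsorption(area_m2, mass_amu, T, E_act_eV=0.0, sticking=1.0):
    """Collision-theory rate constant for molecular adsorption on one ensemble."""
    flux = sticking * area_m2 * P_STD / np.sqrt(2.0 * np.pi * mass_amu * AMU * K_B_J * T)
    return flux * np.exp(-E_act_eV / (K_B_EV * T))   # events per second at P = P_std

def k_desorption(k_ads, dG_ads_eV, T):
    """k_des = k_ads / K_ads with K_ads = exp(-dG_ads / k_B T)."""
    return k_ads / np.exp(-dG_ads_eV / (K_B_EV * T))

area_dimer = 2 * 7.41e-20                    # m^2, two (111) atomic sites
k_ads = k_adsorption(area_dimer, mass_amu=2.016, T=323.15)   # H2 on a Pd dimer
print(f"{k_ads:.2e} 1/s, {k_desorption(k_ads, -0.3, 323.15):.2e} 1/s")
```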
The rate of elementary step \(j\) was computed as follows:
$$r_{j}=k_{j}^{\mathrm{fwd}}\prod_{i}\alpha_{i,\mathrm{IS}}^{\nu_{ij}^{\mathrm{fwd}}}\prod_{i}\alpha_{i,\mathrm{gas}}^{\nu_{ij}^{\mathrm{fwd}}}-k_{j}^{\mathrm{rev}}\prod_{i}\alpha_{i,\mathrm{IS}}^{\nu_{ij}^{\mathrm{rev}}}\prod_{i}\alpha_{i,\mathrm{gas}}^{\nu_{ij}^{\mathrm{rev}}}.$$
Here, \(k_{j}^{\mathrm{fwd}}\) and \(k_{j}^{\mathrm{rev}}\) are the forward and reverse rate constants, and \(\nu_{ij}^{\mathrm{fwd}}\) and \(\nu_{ij}^{\mathrm{rev}}\) are the stoichiometric coefficients of reactant \(i\) in the forward and reverse directions, respectively. The activity \(\alpha_{i}\) is taken as the surface coverage fraction \(\theta_{i}\) for intermediate states (labeled IS; including bare sites) and as the ratio of the partial pressure to the standard pressure, \(P_{i}/P^{\circ}\), for gaseous species53.
The time-dependent coverages of surface intermediates are obtained as the steady-state solution of the following system of ordinary differential equations:
$$\frac{d\theta_{i}}{dt}=-\sum_{j}\nu_{ij}^{\mathrm{fwd}}r_{j}+\sum_{j}\nu_{ij}^{\mathrm{rev}}r_{j}$$
Following Wang et al.54, the steady-state solution is obtained in two steps. Starting from a bare surface, the equations are first integrated over 50 s until they have approximately reached a steady state. The resulting coverages are then used as an initial guess for the numerical solution of the steady-state equations:
$$0=-\sum_{j}\nu_{ij}^{\mathrm{fwd}}r_{j}+\sum_{j}\nu_{ij}^{\mathrm{rev}}r_{j}$$
$$\theta_{\mathrm{Pd}_{n}}(t=0)=\sum_{i}\theta_{\mathrm{Pd}_{n},i}$$
$$1=n\sum_{i}\theta_{\mathrm{Pd}_{n},i}+\sum_{i}\theta_{\mathrm{Au},i}$$
Here, \(\theta_{\mathrm{Pd}_{n},i}\) and \(\theta_{\mathrm{Au},i}\) are the surface coverages of species \(i\) on Pdn and Au sites, respectively, and \(n\) is the number of Pd atoms in the ensemble.
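A heavily reduced sketch of this two-step procedure is shown below for a toy HD-exchange network on a single ensemble (dissociative H2 and D2 adsorption plus HD recombination). The partial pressures follow the simulated feed described in the text, but the rate constants are hypothetical and the real model contains the full set of intermediates and site-balance constraints.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Hypothetical rate constants (1/s) for a reduced network on one ensemble:
# H2 + 2* <-> 2H*, D2 + 2* <-> 2D*, H* + D* <-> HD(g) + 2*
k = dict(aH=1e6, dH=1e4, aD=1e6, dD=1e4, rHD=1e3, aHD=1e5)
p = dict(H2=0.099, D2=0.099, HD=0.002)      # partial pressures / P_std

def rates(theta):
    th_H, th_D = theta
    th_free = 1.0 - th_H - th_D
    r1 = k['aH'] * p['H2'] * th_free**2 - k['dH'] * th_H**2
    r2 = k['aD'] * p['D2'] * th_free**2 - k['dD'] * th_D**2
    r3 = k['rHD'] * th_H * th_D - k['aHD'] * p['HD'] * th_free**2
    return r1, r2, r3

def dtheta_dt(t, theta):
    r1, r2, r3 = rates(theta)
    return [2*r1 - r3, 2*r2 - r3]

# Step 1: integrate from a bare surface to approach the steady state
sol = solve_ivp(dtheta_dt, (0.0, 50.0), [0.0, 0.0], method='BDF')
# Step 2: polish with a root solve of d(theta)/dt = 0
theta_ss = fsolve(lambda th: dtheta_dt(0.0, th), sol.y[:, -1])
r_HD = rates(theta_ss)[2]
print(theta_ss, r_HD)     # steady-state coverages and HD formation rate
```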
The steady-state rates of HD formation over Pdn/Au(111) are computed at temperatures of 25–150 °C. The partial pressures of H2 and D2 are set to 9.9 kPa and that of HD to 0.2 kPa, with a balance of inert gas for a total pressure of 100 kPa. The reaction pathways are analyzed by computing the apparent activation energy, steady-state intermediate coverages, and the degrees of rate control for all surface intermediates and transition states55. The derivatives are evaluated numerically using step sizes of 0.1 °C and \(10^{-4}\ \mathrm{eV}\) for the apparent activation energy and the degree of rate control, respectively.
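Using one common definition of the apparent activation energy, \(E_{\mathrm{a}}=k_{\mathrm{B}}T^{2}\,d\ln r/dT\), a minimal central-difference implementation, checked here against an Arrhenius-form rate, might look like:

```python
import numpy as np

K_B = 8.617333262e-5   # eV/K

def apparent_activation_energy(rate_fn, T, dT=0.1):
    """E_a = k_B * T^2 * d ln(r) / dT, evaluated by central differences."""
    r_plus, r_minus = rate_fn(T + dT), rate_fn(T - dT)
    return K_B * T**2 * (np.log(r_plus) - np.log(r_minus)) / (2.0 * dT)

# An Arrhenius-form rate should return the input barrier (0.52 eV)
print(apparent_activation_energy(lambda T: 1e13 * np.exp(-0.52 / (K_B * T)), 323.15))
```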
Coordination number parameterization
For a bimetallic system of Pd and Au atoms, the average first nearest neighbor coordination numbers from the Pd perspective can be represented as the vector \(\{\widetilde{C}_{\mathrm{Pd}},\widetilde{C}_{\mathrm{Au}}\}\), where \(\widetilde{C}_{\mathrm{Pd}}\) is the average Pd–Pd coordination number and \(\widetilde{C}_{\mathrm{Au}}\) is the average Pd–Au coordination number. In general, the first nearest neighbor coordination numbers can be parametrized in terms of the Pd speciation with the following system of equations:
$$\left[\begin{array}{c}\widetilde{C}_{\mathrm{Pd}}\\ \widetilde{C}_{\mathrm{Au}}\end{array}\right]=\frac{1}{N}\left[\begin{array}{ccc}\tilde{C}_{\mathrm{Pd}_{1}} & \cdots & \tilde{C}_{\mathrm{Pd}_{i}}\\ \tilde{C}_{\mathrm{Au}_{1}} & \cdots & \tilde{C}_{\mathrm{Au}_{i}}\end{array}\right]\left[\begin{array}{ccc}N_{\mathrm{Pd}}^{(1)} & \cdots & 0\\ \vdots & \ddots & \vdots \\ 0 & \cdots & N_{\mathrm{Pd}}^{(i)}\end{array}\right]\left[\begin{array}{c}s_{1}\\ \vdots \\ s_{i}\end{array}\right]$$
where \(s_{i}\) refers to the speciation, i.e., the number of occurrences of a specific structural configuration of Pd (e.g., surface monomer, dimer, etc.), \(N_{\mathrm{Pd}}^{(i)}\) is the number of Pd atoms that are part of the motif \(s_{i}\), \(\tilde{C}_{\mathrm{Pd}_{i}}\) and \(\tilde{C}_{\mathrm{Au}_{i}}\) are the partial Pd–Pd and Pd–Au coordination numbers of each Pd atom in \(s_{i}\), respectively, and \(N\) is the total number of Pd atoms in the nanoparticle, i.e., \(N=x_{\mathrm{Pd}}\cdot N_{\mathrm{tot}}\), where \(x_{\mathrm{Pd}}\) is the atomic ratio of Pd and \(N_{\mathrm{tot}}\) is the total number of atoms in the particle.
To investigate the effect of surface monomers, dimers, and trimers, as well as subsurface dimers and monomers, we modify the system of equations:
$$\left[\begin{array}{c}\widetilde{C}_{\mathrm{Pd}}\\ \widetilde{C}_{\mathrm{Au}}\end{array}\right]=\frac{1}{N}\left[\begin{array}{ccccc}0 & 1 & 2 & 1 & 0 \\ 9 & 8 & 7 & 11 & 12\end{array}\right]\left[\begin{array}{ccccc}1 & 0 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 & 0\\ 0 & 0 & 3 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 0 & 1\end{array}\right]\left[\begin{array}{c}N_{\mathrm{m}}\\ N_{\mathrm{d}}\\ N_{\mathrm{t}}\\ N_{\mathrm{sd}}\\ N_{\mathrm{b}}\end{array}\right]$$
where \(N_{\mathrm{m}}\), \(N_{\mathrm{d}}\), \(N_{\mathrm{t}}\), \(N_{\mathrm{sd}}\), and \(N_{\mathrm{b}}\) are the numbers of surface monomers, surface dimers, surface trimers, subsurface dimers, and subsurface (bulk) monomers, respectively. The number of Pd atoms in each species is 1, 2, 3, 2, and 1, respectively, and the partial coordination numbers are determined by assuming a (111) surface orientation with bulk face-centered cubic packing. The assumption of a (111) facet is supported by DFT calculations showing that exposed Pd atoms are thermodynamically more stable at close-packed terrace sites. The total number of Pd atoms is estimated as \(N=x_{\mathrm{Pd}}\cdot N_{\mathrm{tot}}\), where \(x_{\mathrm{Pd}}\) is obtained from ICP-MS (0.083) and \(N_{\mathrm{tot}}\) from the TEM-derived particle size (4.6 ± 0.8 nm) as follows:
$$N_{\mathrm{tot}}=\frac{1}{3}\left(2n+1\right)\left(5n^{2}+5n+3\right)$$
Here, \(n\) is the side length of the icosahedron; n = 8.4 for a 4.6 nm icosahedron, resulting in approximately 2360 total atoms. We use the equation for an icosahedron because the original Au nanoparticles are icosahedra, and they are doped with a dilute amount of Pd, which we assume does not substantially distort the morphology of the nanoparticle.
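To illustrate how ensemble counts could be recovered from measured coordination numbers, the sketch below combines the icosahedral atom count with a nonnegative least-squares solve of the parameterization above. The measured CNs are the S2-like values quoted earlier; the paper's actual procedure additionally constrains the allowed surface species based on the theoretical modeling, so this is only a schematic.

```python
import numpy as np
from scipy.optimize import nnls

def n_total_icosahedron(n):
    """Total atoms in an icosahedral particle of size parameter n (Eq. above)."""
    return (2*n + 1) * (5*n**2 + 5*n + 3) / 3.0

N_tot = n_total_icosahedron(8.4)            # ~2360 atoms for a 4.6 nm particle
N_Pd = 0.083 * N_tot                        # total Pd atoms from the ICP-MS composition

# Columns: surface monomer, dimer, trimer, subsurface dimer, bulk monomer
C_partial = np.array([[0, 1, 2, 1, 0],      # Pd-Pd CN per Pd atom in each motif
                      [9, 8, 7, 11, 12]],   # Pd-Au CN per Pd atom in each motif
                     dtype=float)
n_Pd_per = np.array([1, 2, 3, 2, 1], dtype=float)   # Pd atoms per motif

C_meas = np.array([0.25, 11.38])            # measured average CNs (S2-like values)
A = C_partial * n_Pd_per                    # contribution of one motif to N * C
counts, resid = nnls(A, C_meas * N_Pd)      # nonnegative counts of each motif
print(np.round(counts), resid)
```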
All data generated or analyzed during this study are included in this published article (and its supplementary information files).
Dong, K., Dong, X. & Jiang, Q. How renewable energy consumption lower global CO2 emissions? Evidence from countries with different income levels. World Econ. 43, 1665–1698 (2020).
US Energy Information Administration, Energy use in industry. https://www.eia.gov/energyexplained/use-of-energy/industry.php (2021).
Personick, M. L. et al. Catalyst design for enhanced sustainability through fundamental surface chemistry. Philos. Trans. R. Soc., A 374, 20150077 (2016).
Dahl, S. et al. Role of Steps in N2 Activation on Ru(0001). Phys. Rev. Lett. 83, 8314 (1999).
Nørskov, J. K. et al. The nature of the active site in heterogeneous metal catalysis. Chem. Soc. Rev. 37, 2163–2171 (2008).
Yates, J. T. Surface chemistry at metallic step defect sites. J. Vac. Sci. Technol. A 13, 1359 (1995).
van Spronsen, M. A. et al. Dynamics of surface alloys: rearrangement of Pd/Ag(111) induced by CO and O2. J. Phys. Chem. C. 123, 8312–8323 (2019).
Tao, F., Zhang, S., Luan, N. & Zhang, X. Action of bimetallic nanocatalysts under reaction conditions and during catalysis: evolution of chemistry from high vacuum conditions to reaction conditions. Chem. Soc. Rev. 41, 7980–7993 (2012).
Lim, J. S. et al. Evolution of metastable structures at bimetallic surfaces from microscopy and machine-learning molecular dynamics. J. Am. Chem. Soc. 142, 15907–15916 (2020).
Wrasman, C. J. et al. Synthesis of colloidal Pd/Au dilute alloy nanocrystals and their potential for selective catalytic oxidations. J. Am. Chem. Soc. 140, 12930–12939 (2018).
Luneau, M. et al. Guidelines to achieving high selectivity for the hydrogenation of α,β-unsaturated aldehydes with bimetallic and dilute alloy catalysts: a review. Chem. Rev. 120, 12834–12872 (2020).
Giannakakis, G., Flytzani-Stephanopoulos, M. & Sykes, E. C. H. Single-atom alloys as a reductionist approach to the rational design of heterogeneous catalysts. Acc. Chem. Res. 52, 237–247 (2019).
Liu, L. & Corma, A. Metal catalysts for heterogeneous catalysis: from single atoms to nanoclusters and nanoparticles. Chem. Rev. 118, 4981–5079 (2018).
Hannagan, R. T., Giannakakis, G., Flytzani-Stephanopoulos, M. & Sykes, E. C. H. Single-atom alloy catalysis. Chem. Rev. 120, 12044–12088 (2020).
Gao, F. & Goodman, D. W. Pd–Au bimetallic catalysts: understanding alloy effects from planar models and (supported) nanoparticles. Chem. Soc. Rev. 41, 8009–8020 (2012).
Tao, F. et al. Reaction-driven restructuring of Rh-Pd and Pt-Pd core-shell nanoparticles. Science 322, 932–934 (2008).
Zafeiratos, S., Piccinin, S. & Teschner, D. Alloys in catalysis: phase separation and surface segregation phenomena in response to the reactive environment. Catal. Sci. Technol. 2, 1787–1801 (2012).
Vignola, E. et al. Acetylene adsorption on Pd-Ag alloys: evidence for limited island formation and strong reverse segregation from Monte Carlo simulations. J. Phys. Chem. C 122, 15456–15463 (2018).
Lim, J. S., Molinari, N., Duanmu, K., Sautet, P. & Kozinsky, B. Automated detection and characterization of surface restructuring events in bimetallic catalysts. J. Phys. Chem. C 123, 16332–16344 (2019).
Hirschl, R., Delbecq, F., Sautet, P. & Hafner, J. Adsorption of unsaturated aldehydes on the (111) surface of a Pt-Fe alloy catalyst from first principles. J. Catal. 217, 354–366 (2003).
Chun, H. et al. First-principle-data-integrated machine-learning approach for high-throughput searching of ternary electrocatalyst toward oxygen reduction reaction. Chem. Catal. 1, 855–869 (2021).
Kim, H. Y. & Henkelman, G. CO adsorption-driven surface segregation of Pd on Au/Pd bimetallic surfaces: role of defects and effect on CO oxidation. ACS Catal. 3, 2541–2546 (2013).
An, H., Ha, H., Yoo, M. & Kim, H. Y. Understanding the atomic-level process of CO-adsorption-driven surface segregation of Pd in (AuPd)147 bimetallic nanoparticles. Nanoscale 9, 12077–12086 (2017).
Yang, Y., Shen, X. & Han, Y.-F. Diffusion mechanisms of metal atoms in PdAu bimetallic catalyst under CO atmosphere based on ab initio molecular dynamics. Appl. Surf. Sci. 483, 991–1005 (2019).
Weckhuysen, B. M. Preface: recent advances in the in-situ characterization of heterogeneous catalysts. Chem. Soc. Rev. 39, 4557–4559 (2010).
Kondrat, S. A. & van Bokhoven, J. A. A perspective on counting catalytic active sites and rates of reaction using X-ray spectroscopy. Top. Catal. 62, 1218–1227 (2019).
Frenkel, A. I. et al. Critical review: effects of complex interactions on structure and dynamics of supported metal catalysts. J. Vac. Sci. Technol. A 32, 20801 (2014).
Marcella, N. et al. Neural network assisted analysis of bimetallic nanocatalysts using X-ray absorption near edge structure spectroscopy. Phys. Chem. Chem. Phys. 22, 18902–18910 (2020).
Luneau, M. et al. Dilute Pd/Au alloy nanoparticles embedded in colloid-templated porous SiO2: stable Au-based oxidation catalysts. Chem. Mater. 31, 5759–5768 (2019).
Luneau, M. et al. Enhancing catalytic performance of dilute metal alloy nanomaterials. Commun. Chem. 3, 46 (2020).
Timoshenko, J., Lu, D., Lin, Y. & Frenkel, A. I. Supervised machine-learning-based determination of three-dimensional structure of metallic nanoparticles. J. Phys. Chem. Lett. 8, 5091–5098 (2017).
Liu, Y. et al. Probing active sites in CuxPdy cluster catalysts by machine-learning-assisted X-ray absorption spectroscopy. ACS Appl. Mater. Interf. 13, 53363–53374 (2021).
van der Hoeven, J. E. S. et al. Entropic control of HD exchange rates over dilute Pd-in-Au alloy nanoparticle catalysts. ACS Catal. 11, 6971–6981 (2021).
Luneau, M. et al. Achieving high selectivity for alkyne hydrogenation at high conversions with compositionally optimized PdAu nanoparticle catalysts in Raspberry colloid templated SiO2. ACS Catal. 10, 441–450 (2020).
Gao, F., Wang, Y. & Goodman, D. W. Reaction kinetics and polarization-modulation infrared reflection absorption spectroscopy (PM-IRAS) investigation of CO oxidation over supported Pd-Au alloy catalysts. J. Phys. Chem. C 114, 4036–4043 (2010).
Gibson, E. K. et al. Restructuring of AuPd nanoparticles studied by a combined XAFS/DRIFTS approach. Chem. Mater. 27, 3714–3720 (2015).
Sen, I. & Gellman, A. J. Kinetic fingerprints of catalysis by subsurface hydrogen. ACS Catal. 8, 10486–10497 (2018).
Greeley, J. & Mavrikakis, M. Surface and subsurface hydrogen: adsorption properties on transition metals and near-surface alloys. J. Phys. Chem. B 109, 3460–3471 (2005).
O'Connor, C. R. et al. Facilitating hydrogen atom migration via a dense phase on palladium islands to a surrounding silver surface. Proc. Natl Acad. Sci. USA 117, 22657–22664 (2020).
Vignola, E. et al. Evaluating the risk of C–C bond formation during selective hydrogenation of acetylene on palladium. ACS Catal. 8, 1662–1671 (2018).
Piella, J., Bastús, N. G. & Puntes, V. Size-controlled synthesis of Sub-10-nanometer citrate-stabilized gold nanoparticles and related optical properties. Chem. Mater. 28, 1066–1075 (2016).
van der Hoeven, J. E. S. et al. Structural control over bimetallic core–shell nanorods for surface-enhanced Raman spectroscopy. ACS Omega 6, 7034–7046 (2021).
Ravel, B. & Newville, M. ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT. J. Synchrotron Radiat. 12, 537–541 (2005).
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994).
Kresse, G. Ab initio molecular dynamics for liquid metals. J. Non-Cryst. Solids 193, 222–229 (1995).
Methfessel, M. & Paxton, A. T. High-precision sampling for Brillouin-zone integration in metals. Phys. Rev. B 40, 3616–3621 (1989).
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
Csonka, G. I. et al. Assessing the performance of recent density functionals for bulk solids. Phys. Rev. B 79, 155107 (2009).
Wellendorff, J. et al. A benchmark database for adsorption bond energies to transition metal surfaces and comparison to selected DFT functionals. Surf. Sci. 640, 36–44 (2015).
Henkelman, G. & Jónsson, H. Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points. J. Chem. Phys. 113, 9978–9985 (2000).
Henkelman, G. & Jónsson, H. A dimer method for finding saddle points on high dimensional potential surfaces using only first derivatives. J. Chem. Phys. 111, 7010–7022 (1999).
Chen, B. W. J. & Mavrikakis, M. How coverage influences thermodynamic and kinetic isotope effects for H2/D2 dissociative adsorption on transition metals. Catal. Sci. Technol. 10, 671–689 (2020).
Cortright, R. D. & Dumesic, J. A. Kinetics of heterogeneous catalytic reactions: analysis of reaction schemes. Adv. Catal. 46, 161–264 (2001).
Wang, T. et al. Rational design of selective metal catalysts for alcohol amination with ammonia. Nat. Catal. 2, 773–779 (2019).
Mao, Z. & Campbell, C. T. Apparent activation energies in complex reaction mechanisms: a simple relationship via degrees of rate control. ACS Catal. 9, 9465–9473 (2019).
Towns, J. et al. XSEDE: accelerating scientific discovery. Comput. Sci. Eng. 16, 62–74 (2014).
The authors thank the Synchrotron Catalyst Consortium and the beamline scientists S. Ehrlich and L. Ma (QAS beamline at the NSLSII) for support during the beamline experiment. The authors acknowledge enlightening discussions with C. M. Friend, M. Aizenberg, and A. Boscoboinik and thank them for their comments on the completed manuscript. This project was primarily supported by Integrated Mesoscale Architectures for Sustainable Catalysis (IMASC), an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Award No. DE-SC0012573. C.J.O. was supported by the National Science Foundation Graduate Research Fellowship Program, Grant No. DGE1745303. S.B.T. was supported by the Department of Energy Computational Science Graduate Fellowship (DOE CSGF), Grant No. DE-FG02-97ER25308. J.S.L., C.J.O., and S.B.T. used the Odyssey Cluster, FAS Division of Science, Research Computing Group at Harvard University. J.S.L., C.J.O., and H.T.N. used the National Energy Research Scientific Computing Center (NERSC), a US Department of Energy Office of Science User Facility supported under Contract No. DE-AC02-05CH11231, through allocation m3275. G.Y. and H.T.N. used the HOFFMAN2 cluster at the UCLA Institute for Digital Research and Education (IDRE) and the Extreme Science and Engineering Discovery Environment (XSEDE)56 supported by National Science Foundation Grant No. ACI-1548562, through allocation TG-CHE170060. N.M., S.B.T., and A.I.F. used the Center for Functional Nanomaterials, a US Department of Energy Office of Science User Facility; and the Scientific Data and Computing Center, a component of the Computational Science Initiative, at Brookhaven National Laboratory under Contract No. DE-SC0012704. N.M., A.M.P., and A.I.F. used the Beamline 7-BM (QAS) of the National Synchrotron Light Source II, a US Department of Energy Office of Science User Facility operated by Brookhaven National Laboratory under Contract No. DE-SC0012704. N.S.M. was supported by the Synchrotron Catalysis Consortium funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Grant No. DE-SC0012335 STEM/EDS measurements were carried out at the Singh Center for Nanotechnology at the University of Pennsylvania, supported by the National Science Foundation National Nanotechnology Coordinated Infrastructure Program grant NNCI-1542153. Additional support to the Nanoscale Characterization Facility at the Singh Center has been provided by the Laboratory for Research on the Structure of Matter (MRSEC) supported by the National Science Foundation (DMR-1720530).
These authors contributed equally: Nicholas Marcella, Jin Soo Lim, Anna M. Płonka, George Yan.
Department of Materials Science and Chemical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
Nicholas Marcella, Anna M. Płonka & Anatoly I. Frenkel
Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, 02138, USA
Jin Soo Lim, Cameron J. Owen, Jessi E. S. van der Hoeven & Joanna Aizenberg
Department of Chemical and Biomolecular Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
George Yan, Hio Tong Ngan & Philippe Sautet
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, 02138, USA
Jessi E. S. van der Hoeven, Joanna Aizenberg & Boris Kozinsky
Department of Materials Science and Engineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
Alexandre C. Foucher & Eric A. Stach
Department of Physics, Harvard University, Cambridge, MA, 02138, USA
Steven B. Torrisi
Department of Chemical Engineering, Columbia University, New York, NY, 10027, USA
Nebojsa S. Marinkovic
Department of Chemical Engineering, University of Florida, Gainesville, FL, 32611, USA
Jason F. Weaver
Department of Chemistry and Biochemistry, University of California, Los Angeles, Los Angeles, CA, 90095, USA
Philippe Sautet
Robert Bosch LLC, Research and Technology Center, Cambridge, MA, 02139, USA
Boris Kozinsky
Chemistry Division, Brookhaven National Laboratory, Upton, NY, 11973, USA
Anatoly I. Frenkel
N.M., J.S.L., A.M.P., and G.Y. contributed equally to this work. J.E.Svd.H. synthesized the samples, advised by J.A. A.M.P. performed catalysis experiments with assistance from J.E.Svd.H., advised by A.I.F. N.M., A.M.P., and N.S.M. conducted X-ray absorption spectroscopy measurements, advised by A.I.F. A.C.F. conducted transmission electron microscopy and energy-dispersive X-ray spectroscopy measurements, advised by E.A.S. N.M. performed neural network analyses with assistance from S.B.T., advised by A.I.F. J.S.L. and C.J.O. performed DFT calculations with assistance from G.Y. and H.T.N., advised by B.K. and P.S. G.Y. performed microkinetic modeling with assistance from H.T.N., advised by P.S. and J.F.W. N.M., J.S.L., A.M.P., and G.Y. wrote the manuscript with input from all authors.
Correspondence to Boris Kozinsky or Anatoly I. Frenkel.
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Marcella, N., Lim, J.S., Płonka, A.M. et al. Decoding reactive structures in dilute alloy catalysts. Nat Commun 13, 832 (2022). https://doi.org/10.1038/s41467-022-28366-w
October 2019, 12(5): 1131-1162. doi: 10.3934/krm.2019043
On the blow-up criterion and global existence of a nonlinear PDE system in biological transport networks
Bin Li a,b,
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Department of Mathematics, Jincheng College of Sichuan University, Chengdu 611731, China
Received: January 2019; Revised: March 2019; Published: July 2019.
In this paper, we consider a parabolic-elliptic system of partial differential equations in the three-dimensional setting that arises in the study of biological transport networks. We establish the local existence of strong solutions and present a blow-up criterion. We also show that the solutions exist globally in time under some smallness conditions on the initial data and on the source.
Keywords: Parabolic-elliptic system, Cauchy problem, blow-up criterion, global solution, biological transport networks.
Mathematics Subject Classification: Primary: 35Q92; Secondary: 35K55; 35J15; 35D35.
Citation: Bin Li. On the blow-up criterion and global existence of a nonlinear PDE system in biological transport networks. Kinetic & Related Models, 2019, 12 (5) : 1131-1162. doi: 10.3934/krm.2019043
Nano Express
Nucleation mechanism of nano-sized NaZn13-type and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification
Xue-Ling Hou1,2,
Yun Xue1,2,
Chun-Yu Liu1,2,
Hui Xu1,2,
Ning Han3,
Chun-Wei Ma3 &
Manh-Huong Phan4
Nanoscale Research Letters volume 10, Article number: 143 (2015)
The nucleation mechanism involving rapid solidification of undercooled La-Fe-Si melts has been studied experimentally and theoretically. The classical nucleation theory-based simulations show a competitive nucleation process between the α-(Fe,Si) phase (size approximately 10 to 30 nm) and the cubic NaZn13-type phase (hereinafter 1:13 phase, size approximately 200 to 400 nm) during rapid solidification, and that the undercooled temperature change ∆T is an important factor in this process. The simulated nucleation rates of the α-(Fe,Si) and 1:13 phases in La-Fe-Si ribbons fabricated by a melt-spinner using a copper wheel with a surface speed of 35 m/s agree well with the XRD, SEM, and TEM studies of the phase structure and microstructure of the ribbons. Our study paves the way for designing novel La-Fe-Si materials for a wide range of technological applications.
La-Fe-Si alloys exhibiting a giant magnetocaloric effect (GMCE) near room temperature are one of the most promising candidate materials for advanced magnetic refrigeration technology [1-4]. In La-Fe-Si alloys, the NaZn13-type phase (1:13 phase), which undergoes a first-order magneto-structural transition accompanied by a typical itinerant electron metamagnetic transition and a large volume change in the vicinity of its Curie temperature T_C, has been reported to be a driving force for achieving the GMCE [5-8]. From a materials perspective, the 1:13 phase with a cubic NaZn13-type (Fm \( \overline{3} \) c) structure is very difficult to form directly from equilibrium solidification conditions. It has been shown that during an equilibrium solidification, α-(Fe,Si) phase (A2: Im \( \overline{3} \) m) dendrites first grow from the liquid as the primary phase and then a peritectic reaction with the surrounding liquid occurs to form the 1:13 phase (α-(Fe,Si) + L → 1:13 phase). Trace amounts of a La-rich or a LaFeSi phase are also found in the interdendritic region [9,10]. It is a major difficulty to produce the 1:13 phase because of its low phase stability at elevated temperatures and low atomic diffusivity [11,12]. Due to the incompleteness of the peritectic reaction, a large number of α-(Fe,Si) dendrites are preserved at room temperature. In as-cast conditions attained by conventional arc-melting techniques, La-Fe-Si alloys show a two-phase structure composed of α-(Fe,Si) and La-Fe-Si (Cu2Sb-type: P4/nmm) phases. It is therefore essential to anneal the as-cast alloys in vacuum at a high temperature for a long time (approximately 1,323 K, 30 days) to gain the desired 1:13 phase. Recently, the melt spinning technique has emerged as a more efficient approach for producing La(Fe,Si)13 materials, since the desired 1:13 phase can be obtained with a much shorter annealing time (approximately 1,273 K, 20 to 120 min) [11,12]. The primary contents of α-(Fe,Si) and 1:13 phases obtained from the melt spinning technique are entirely different from those obtained using conventional equilibrium solidification techniques [13]. However, the origin of this difference has remained an open question, and the nucleation mechanism of the 1:13 and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification has not yet been investigated, even though knowledge of it is key to exploiting their desirable properties for a wide range of technological applications.
To address these emerging and important issues, in the present work we have investigated theoretically and experimentally the nucleation mechanism of α-(Fe,Si) and 1:13 phases in melt-spun La-Fe-Si ribbons. Detailed microstructural studies of the wheel-side and free-side surfaces of the melt-spun ribbons are reported. Our simulated and experimental results consistently show that there exists a competitive nucleation process between the nano-sized α-(Fe,Si) and 1:13 phases during rapid solidification, and that the undercooled temperature change, ∆T, is a crucial factor in this process. A similar trend has also been reported in other peritectic alloys [14-17].
Button ingots with a nominal composition of LaFe11.5Si1.5 were prepared by arc-melting 99% La, 99.9% Fe, and 99.5% Si crystals in an argon gas atmosphere. The ingots were remelted four times and each time the button was turned over to obtain a homogeneous composition. The button was broken into pieces, and these pieces were then put into a quartz tube with a nozzle. The chamber of the quartz tube was evacuated to a vacuum of 3 to 5 × 10−3 Pa and then filled with high-purity Ar. The samples were melted by electromagnetic induction and then ejected through the nozzle by a pressure difference onto a rotating copper wheel. The surface speed of the Cu wheel was approximately 35 m/s, yielding ribbon samples with a thickness of about 25 μm. Here, we denote the surface of the ribbon far from the copper wheel as the free surface, while the surface of the ribbon in direct contact with the copper wheel is referred to as the cooled surface. The phases and crystal structures of the ribbons were characterized by powder X-ray diffraction (XRD) using Cu-Kα radiation. The microstructure analysis was carried out by a scanning electron microscope (SEM) with an energy dispersive spectrometer (EDS) (model JSM-6700 F, JEOL Ltd., Tokyo, Japan) and a transmission electron microscope (TEM, model JEM-2010 F, JEOL Ltd., Tokyo, Japan). The TEM specimen was prepared by a dual-beam focused ion beam (FIB, model 600 i, FEI Company, Oregon, USA).
The room temperature XRD patterns, SEM, and simulated results using the classical nucleation theory, as shown in Figure 1a and c, reveal the change in composition and volume fraction of the α-(Fe,Si) and 1:13 phases on the cooled and free surfaces of a melt-spun ribbon. As one can see clearly in Figure 1a, the XRD patterns show that the majority of the 1:13 phase is on the cooled surface of the ribbon, while this phase diminishes, even disappears, when crossing toward the other surface of the ribbon. The majority of the α-(Fe,Si) phase is found on the free surface. The as-cast microstructure appears to be very different between the cooled and free surfaces of the ribbon (see regions A and B of Figure 1c). These results indicate that the rapid solidification process favors a direct formation of the 1:13 phase from the liquid melt of La-Fe-Si. By contrast, under an equilibrium solidification condition, the 1:13 phase is formed via a peritectic reaction process between the nascent α-(Fe,Si) and liquid (L) phase (α-(Fe,Si) + L → 1:13 phase). It is worth noting that there is a distinct difference in the formed phase structure and microstructure between the cooled and free surfaces of the melt-spun ribbon. This can be attributed to the difference in the nucleation rates of the α-(Fe,Si) and 1:13 phases. According to the classical nucleation theory (CNT) [18,19], the heterogeneous nucleation rate can be determined by
$$ I=\frac{k_{\mathrm{B}}T N_{\mathrm{n}}}{3\pi \eta(T) a_0^3}\exp\left[-\frac{\Delta G^{*}}{k_{\mathrm{B}}T}\right], $$
where k_B, η(T), N_n, a_0, and ∆G* are the Boltzmann constant, the temperature-dependent viscosity of the undercooled melt, the number of potential nucleation sites, the average atomic distance, and the activation energy for forming a critical nucleus, respectively. ∆G* can be expressed as
$$ \Delta G^{*}=\frac{16\pi }{3}\frac{\sigma^3}{\Delta G_{\mathrm{V}}^2}f\left(\theta \right)=\frac{16}{3}\frac{\sigma^3\Delta S_{\mathrm{f}}T_{\mathrm{l}}^3}{\left(T_{\mathrm{l}}-T\right)^2}f\left(\theta \right), $$
where σ, ∆G_V, ∆S_f, T_l, T, and f(θ) are the interfacial energy, the Gibbs free energy difference between liquid and solid, the entropy of fusion, the liquidus temperature, the temperature, and the catalytic factor for nucleation, respectively. The interfacial energy, σ, can be estimated by the model developed by Spaepen [19,20]:
$$ \sigma =\alpha \frac{\Delta S_{\mathrm{f}}T}{\left(N_{\mathrm{l}}V_{\mathrm{m}}^2\right)^{1/3}}, $$
where α is the structure-dependent factor, N_l is the Avogadro constant, and V_m is the molar volume. By inserting the essential parameters listed in Table 1 into Eqs. 1 to 3, the heterogeneous nucleation rates for the α-(Fe,Si) and 1:13 phases can be simulated. The calculated results (Figure 1b) show a competing nucleation between the α-(Fe,Si) and 1:13 phases during rapid solidification. When the undercooled temperature change, ∆T, is less than 707 K, the α-(Fe,Si) phase has a higher nucleation rate compared to that of the 1:13 phase on the free surface of the ribbon, and it is therefore the primary phase in a slow solidification process. For ∆T > 707 K, however, the reverse situation is observed. The nucleation rate of the 1:13 phase is faster than that of the α-(Fe,Si) phase on the cooled surface of the ribbon, thus resulting in the 1:13 phase as a primary solidification phase. These calculated results are consistent with the obtained XRD data (Figure 1a). When ∆T = 707 K, the nucleation rate of the 1:13 phase is equal to that of the α-(Fe,Si) phase. This is seen as an intersection of the two curves of Figure 1b, where the microstructure is found to be an obvious watershed between the cooled and free surfaces of the ribbon (see Figure 1c and Figure 2a); this watershed matches with the green lines of Figure 2b and c. It can also be seen in Figure 1c that small-sized grains in region A are on the cooled surface, and its microstructure is very different from that of region B of the free surface. The 'transition' region from region A (the cooled surface) to region B (the free surface) can be seen in cross-sectional SEM images with higher magnifications (Figure 2b and c), where the small-sized grains appeared as rectangular black dots in region A (Figure 2c). The chemical compositions were observed to change between regions A and B during the rapid solidification process of La-Fe-Si. EDS analysis showed that the content of La and Si in region A was higher than those of region B and of the nominal composition (see Table 2). The content of Fe was lower than those of region B and of the nominal composition. The variations in the chemical compositions in regions A and B are likely associated with the varying contents of α-(Fe,Si) phase.
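As a rough numerical illustration of how Eqs. 1 to 3 lead to the competing nucleation rates of Figure 1b, the sketch below evaluates the two rates over a range of undercoolings. All material parameters in it (liquidus temperatures, entropies of fusion, interfacial energies, the catalytic factor, and the Arrhenius form assumed for η(T)) are hypothetical placeholders rather than the Table 1 values, so any crossover it finds is only qualitative and should not be read as the 707 K of the paper.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def log_nucleation_rate(T, T_l, dS_f, sigma, f_theta,
                        N_n=1e20, a0=3e-10, eta0=1e-3, E_eta=5e-20):
    """Natural log of the heterogeneous nucleation rate of Eqs. 1-2.

    T_l, dS_f (per unit volume), sigma and f_theta are placeholder values;
    eta(T) is an assumed Arrhenius law, not the viscosity model of the paper.
    """
    eta = eta0 * np.exp(E_eta / (k_B * T))
    dG_V = np.maximum(dS_f * (T_l - T), 1e-30)   # driving force, J/m^3
    dG_star = (16.0 * np.pi / 3.0) * sigma**3 / dG_V**2 * f_theta
    prefactor = k_B * T * N_n / (3.0 * np.pi * eta * a0**3)
    return np.log(prefactor) - dG_star / (k_B * T)

T_l_alpha = 1800.0                # assumed liquidus of the alpha-(Fe,Si) phase, K
dT = np.linspace(300.0, 900.0, 601)
T = T_l_alpha - dT
log_I_alpha = log_nucleation_rate(T, T_l_alpha, dS_f=1.0e6, sigma=0.30, f_theta=0.4)
log_I_1313  = log_nucleation_rate(T, 1550.0,    dS_f=1.1e6, sigma=0.25, f_theta=0.4)

diff = log_I_1313 - log_I_alpha
if np.any(diff > 0) and np.any(diff < 0):
    print("illustrative crossover at dT ~ %.0f K" % dT[np.argmin(np.abs(diff))])
else:
    print("no crossover for these placeholder parameters")
```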
The room temperature XRD patterns, SEM, and simulated results using the classical nucleation theory. (a) XRD patterns of both the cooled and free surfaces of a melt-spun La-Fe-Si ribbon; (b) nucleation rates of α-(Fe,Si) and La(Fe,Si)13 phases versus the undercooled temperature change ∆T of the La-Fe-Si alloy; and (c) a cross-sectional SEM image of the ribbon indicating the microstructural difference between the cooled surface region (region A) and the free surface region (region B).
Table 1 Physical parameters of the La-Fe-Si alloy [21] used in our calculations
Cross-sectional SEM images. Cross-sectional SEM images of a melt-spun La-Fe-Si ribbon (a) with different magnifications (b,c). There exist three different regions: region A (the cooled surface), region B (the free surface), and a transitional region between A and B.
Table 2 The chemical compositions determined by EDS for regions A and B of the melt-spun La-Fe-Si ribbon (Figure 2 ) relative to its nominal composition
Figure 3a shows the global microstructural morphology of the cooled surface of the ribbon for region A. Expanded views of the white circle and square areas of Figure 3a are shown in Figures 3b and d, respectively. The HRTEM images and corresponding Fourier transforms for 'C' of Figure 3b, a large dark spherical precipitate region and its adjacent matrix, are displayed in Figure 3c, where the matrix is indexed to the structure of the 1:13 phase, while the spherical precipitate 'C' is indexed to the α-(Fe,Si) phase with approximately 97.26 at.% Fe and 2.74 at.% Si as determined by EDS. Using the same analysis, the spherical precipitates 'E', 'F', and 'G' in Figure 3d are determined to be the α-(Fe,Si) phase with the chemical compositions of approximately 96 to 98 at.% Fe and 4 to 2 at.% Si. It can be observed that the Moiré fringes in Figure 3g correspond to two adjacent spherical precipitates of α-(Fe,Si) in 'E' of Figure 3d. These spherical α-(Fe,Si) phases are embedded in the 1:13 matrix. The sheet labeled 'D', with an adjacent 1:13 matrix, can be indexed to the α-(Fe,Si) phase by HRTEM images and corresponding Fourier transforms found in Figure 3e. Two types of shapes, the sphere and the sheet of α-(Fe,Si), existed on the cooled surface of the ribbon during rapid solidification. The majority of the 1:13 phase, a matrix with equiaxed crystals of approximately 200 to 400 nm, is observed on the cooled surface with some spherical precipitations of α-(Fe,Si) (size, approximately 20 to 100 nm) as a minor phase as seen in the upper half of Figure 3a. The shape and density of the α-(Fe,Si) phase evolve along the white long arrows on the cooled surface from near to far from the copper wheel, in which the volume fraction of the fine spherical α-(Fe,Si) phase is found to decrease while that of the sheet-like α-(Fe,Si) phase increased. The spherical shape of the α-(Fe,Si) phase is replaced by the coarse irregular shape of the α-(Fe,Si) phase, and a higher density of the latter is precipitated when the ribbon surface is far from the copper wheel (see short arrows and white triangles in Figure 3a). The corresponding selected area diffraction (SAED) pattern for the white triangle of Figure 3a is further identified as α-(Fe,Si) (Figure 3f). A higher degree of supercooling gave rise to a higher nucleation rate of the 1:13 phase and determined the density and shape of the primary α-(Fe,Si) phase on the cooled surface of the ribbon.
The global microstructural morphology. A cross-sectional TEM micrograph of the cooled surface region (region A) (a), with higher magnifications of the regions of the sample indicated by the dashed circle (b) and box (d). HRTEM micrographs for regions C, D and E and their corresponding FFT patterns in (c), (e) and (g); SAED pattern of the 'dashed triangle' of Figure 3a is shown in Figure 3f.
Figure 4a shows a TEM micrograph of the free surface of the ribbon for region B with a magnification shown in Figure 4b. The microstructure consists of grain clusters, which were formed during the rapid solidification stage. The cluster boundary is well defined (see the white dashes in Figure 4a). Some nano-sized worm-like morphology was observed in the internal region of the clusters (Figure 4b). EDS revealed that the chemical composition of the black matrix was consistent with the α-(Fe,Si) phase (2.09 at.% La, 92.90 at.% Fe, and 5.00 at.% Si). The HRTEM images and corresponding Fourier transforms for the white circle of Figure 4c (regions 'G' and 'H') in Figure 4d and e can be indexed to the α-(Fe,Si) phase. The TEM analyses further confirm that the majority of the α-(Fe,Si) phase is in region B of Figure 2b, and the 1:13 phase as a majority is in region A of Figure 2b and c during rapid (melt-spinning) solidification process in La-Fe-Si alloys. These results are in good agreement with the XRD data and the simulated results using the classical nucleation theory. It is important to point out that in the melt spinning method, due to the enhanced ∆T in the cooled surface of a melt-spun La-Fe-Si ribbon, the desired 1:13 phase can be directly formed from the melt during melt-spinning. This clear understanding of the competitive nucleation mechanism between the 1:13 and α-(Fe,Si) phases allows us to address the emerging and important question of why the 1:13 phase does not form directly from the melt under equilibrium solidification conditions or under arc-melting, but from the rapid (melt spinning) solidification. It provides good guidance to the development of La-Fe-Si materials with desirable magnetic properties for a wide range of technological applications, such as magnetic refrigerant materials for use in active magnetic refrigerators.
The TEM micrographs results. TEM micrographs of the free surface region for region B (a), with higher magnifications of the 'dashed region' (b) and the 'dashed box' (c). HRTEM images and corresponding FFT patterns of region G (d) and region H (e).
The nucleation mechanism involving rapid solidification of undercooled La-Fe-Si melts has been studied theoretically and experimentally. We find that for ∆T < 707 K, the α-(Fe,Si) phase has a higher nucleation rate compared to that of the 1:13 phase, and it is the primary phase in a slow solidification process. For ∆T > 707 K, the nucleation rate of the 1:13 phase is faster than that of the α-(Fe,Si) phase, resulting in a primary solidification of the 1:13 phase. At ∆T = 707 K, the 1:13 and α-(Fe,Si) phases have equal nucleation rates, but the microstructural morphology is distinctly different on the cooled and free surfaces of the ribbon. The desired nano-sized 1:13 phase can be directly formed from the melt during melt-spinning due to the enhanced ∆T.
EDS:
energy dispersive spectrometer
GMCE:
giant magnetocaloric effect
HRTEM:
high resolution transmission electron microscope
SAED:
selected area diffraction
SEM:
scanning electron microscope
TEM:
transmission electron microscope
XRD:
X-ray diffraction
Shen BG, Sun JR, Hu FX, Zhang HW, Cheng ZH. Recent progress in exploring magnetocaloric materials. Adv Mater. 2009;21:4545.
Lyubina J, Schäfer R, Martin N, Schultz L, Gutfleisch O. Novel design of La (Fe, Si)13 alloys towards high magnetic refrigeration performance. Adv Mater. 2010;22:3735.
Cheng X, Chen YG, Tang YB. High-temperature phase transition and magnetic property of LaFe11.6Si1.4 compound. J Alloys and Compd. 2011;509:8534.
Gutfleisch O, Yan A, Muller KH. Large magnetocaloric effect in melt-spun LaFe13-xSix. J Appl Phys. 2005;97:10M305.
Lyubina J, Gutfleisch O, Kuz'min MD, Richter M. La(Fe, Si)13-based magnetic refrigerants obtained by novel processing routes. J Magn Magn Mater. 2008;320:2252.
Yamada H. Metamagnetic transition and susceptibility maximum in an itinerant-electron system. Phys Rev B. 1993;47:11211.
Liu T, Chen YG, Tang YB, Xiao SF, Zhang EY, Wang JW. Structure and magnetic properties of shortly high temperature annealing LaFe11.6Si1.4 compound. J Alloys Compd. 2009;475:672.
Fujita A, Akamatsu Y, Fukamichi KJ. Itinerant electron metamagnetic transition in La(FexSi1−x)13 intermetallic compounds. J Appl Phys. 1999;85:4756.
Raghavan V. Fe-La-Si (Iron-Lanthanum-Silicon). J Phase Equilib. 2001;22:158.
Niitsu K, Kainuma R. Phase equilibria in the Fe-La-Si ternary system. Intermetallics. 2012;20:160.
Fujita A, Koiwai S, Fujieda S, Fukamichi K, Kobayashi T, Tsuji H. Magnetocaloric effect in spherical La(FexSi1-x)13 and their hydrides for active magnetic regenerator. J Appl Phys. 2009;105:07A936.
Liu J, Krautz M, Skokov K, Woodcock TG, Gutfleisch O. Systematic study of the microstructure, entropy change and adiabatic temperature change in optimized La-Fe-Si alloys. Acta Materialia. 2011;59:3602.
Liu XB, Altounian Z, Tu GH. The structure and large magnetocaloric effect in rapidly quenched LaFe11.4Si1.6 compound. J Phys Condens Matter. 2004;16:8043.
Boettinger WJ. The structure of directionally solidified two-phase Sn-Cd peritectic alloys. Metall Trans. 1974;5:2023.
Umeda T, Okane T, Kurz W. Phase selection during solidification of peritectic alloys. Acta Mater. 1996;44:4209.
Trivedi R. Theory of layered-structure formation in peritectic systems. Metall Trans. 1995;A26:1583.
St John DH, Hogan LM. A simple prediction of the rate of the peritectic transformation. Acta Metall. 1987;35:171.
Christian JW. The Theory of Transformations in Metals and Alloys. Oxford: Pergamon Press; 1981. p. 12.
Spaepen F. The temperature dependence of the crystal-melt interfacial tension: a simple model. Mater Sci Eng A. 1994;178:15.
Chen YZ, Liu F, Yang GC, Zhou YH. Nucleation mechanisms involving in rapid solidification of undercooled Ni80.3B19.7 melts. Intermetallics. 2011;19:221.
Gong XM. Nucleation Kinetics of Crystalline Phases in Undercooled La-Fe-Si Melts. Master Thesis. 2008;41–4.
The authors gratefully acknowledge the support from the Instrumental Analysis & Research Center, Shanghai University. This work was partially supported by Shanghai Education Commission Project (Grant No. 12ZZ085), Shanghai Natural Science Foundation of China (Grant No. 13ZR1415300), and Shanghai leading Academic Discipline Project (Grant No. S30107). M-HP acknowledges support from The Florida Cluster for Advanced Smart Sensor Technologies (FCASST).
Laboratory for Microstructures, Shanghai University, Shanghai, 200444, China
Xue-Ling Hou, Yun Xue, Chun-Yu Liu & Hui Xu
School of Materials Science and Engineering, Shanghai University, Shanghai, 200072, China
Shanghai University of Engineering Science, Shanghai, 201620, China
Ning Han & Chun-Wei Ma
Department of Physics, University of South Florida, Tampa, FL, 33620, USA
Manh-Huong Phan
Xue-Ling Hou
Yun Xue
Chun-Yu Liu
Hui Xu
Ning Han
Chun-Wei Ma
Correspondence to Xue-Ling Hou or Manh-Huong Phan.
X-LH conceived of the study and participated in its design and coordination. YX performed TEM measurements, C-YL and HX performed SEM and the theoretical simulations. NH and C-WM fabricated the ribbons. X-LH and M-HP analyzed the data and wrote the paper. All authors read and approved the final manuscript.
Hou, XL., Xue, Y., Liu, CY. et al. Nucleation mechanism of nano-sized NaZn13-type and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification. Nanoscale Res Lett 10, 143 (2015). https://doi.org/10.1186/s11671-015-0843-1
Accepted: 27 February 2015
La(Fe,Si)13 ribbons
Nucleation mechanism
Rapid solidification
EMN Meeting
The figure shows one period of the output voltage of an inverter.$\alpha$ should be chosen such that $60^{\circ}<\alpha <90^{\circ}$. If $rms$ value of the fundamental component is $50V$, then $\alpha$ in degree is__________
Figure $(i)$ shows the circuit diagram of a chopper. The switch $S$ in the circuit in figure $(i)$ is switched such that the voltage $v_D$ across the diode has the wave shape as shown in figure $(ii)$. The capacitance $C$ is large so that the voltage across it ... . If switch $S$ and the diode are ideal, the peak to peak ripple $(\text{in}\: A)$ in the inductor current is _________.
ripple-voltages
The figure shows the circuit diagram of a rectifier. The load consists of a resistance $10Ω$ and an inductance $0.05H$ connected in series. Assuming ideal thyristor and ideal diode, the thyristor firing angle (in degree) needed to obtain an average load voltage of $70V$ is ______
The $dc$ current flowing in a circuit is measured by two ammeters, one $PMMC$ and another electrodynamometer type, connected in series. The $PMMC$ meter contains $100$ turns in the coil, the flux density in the air gap is $0.2\: Wb/m^2,$ and ... The spring constants of both the meters are equal. The value of current, at which the deflections of the two meters are same, is ________
electrodynamometer
The reading of the voltmeter $(rms)$ in volts, for the circuit shown in the figure is __________
nodal-analysis
A system matrix is given as follows. $A=\begin{bmatrix} 0 & 1 & -1\\ -6 & -11 &6 \\ -6& -11& 5 \end{bmatrix}$ The absolute value of the ratio of the maximum eigenvalue to the minimum eigenvalue is _______
asked Feb 12, 2017 in Linear Algebra by makhdoom ghaya (9.3k points)
linear-algebra
eigen-values
eigen-matrix
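A quick numerical check of the eigenvalue question above (not part of the original question); numpy is assumed to be available.

```python
import numpy as np

# System matrix from the question above.
A = np.array([[0, 1, -1],
              [-6, -11, 6],
              [-6, -11, 5]], dtype=float)

eig = np.sort(np.linalg.eigvals(A).real)      # eigenvalues come out as -3, -2, -1
print("eigenvalues:", eig)
print("|max/min| (signed max and min):", abs(eig[-1] / eig[0]))
print("|lambda|_max / |lambda|_min:", abs(eig).max() / abs(eig).min())
```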
The Bode magnitude plot of the transfer function $G(s)=\dfrac{K(1+0.5s)(1+as)}{s(1+\frac{s}{8})(1+bs)(1+\frac{s}{36})}$ is shown below: Note that $-6dB/octave = -20 dB/decade.$ The value of $\dfrac{a}{bK}$ is ________.
For the given system, it is desired that the system be stable. The minimum value of $\alpha$ for this condition is ____________. .
Given that the op-amps in the figure are ideal, the output voltage $V_0$ is $(V_1-V_2)$ $2(V_1-V_2)$ $(V_1-V_2)/2$ $(V_1+V_2)$
Which of the following logic circuits is a realization of the function $F$ whose Karnaugh map is shown in figure
karnaugh
logic-gates
In the figure shown, assume the op-amp to be ideal. Which of the alternatives gives the correct Bode plots for the transfer function $\dfrac{V_o(\omega )}{V_i(\omega )}?$
An output device is interfaced with $8$-bit microprocessor $8085A$. The interfacing circuit is shown in figure The interfacing circuit makes use of $3$ Line to $8$ Line decoder having $3$ enable lines $E_1,\overline{E_2},\overline{E_3}$ . The address of the device is $50_H$ $5000_H$ $A0_H$ $A000_H$
The overcurrent relays for the line protection and loads connected at the buses are shown in the figure. The relays are $IDMT$ ... setting for relay $R_B$ to be $0.1$ and $5A$ respectively, the operating time of $R_B$ (in seconds) is ______________
A $15\: kW, 230\: V\:dc$ shunt motor has armature circuit resistance of $0.4\: Ω$ and field circuit resistance of $230\: Ω.$ At no load and rated voltage, the motor runs at $1400$ rpm and the line current drawn by the motor is $5\: A$. At full load, the motor draws a line current of $70\: A.$ Neglect armature reaction. The full load speed of the motor in rpm is _________.
revolution-per-minute
The following four vector fields are given in Cartesian co-ordinate system. The vector field which does not satisfy the property of magnetic flux density is $y^2a_x+z^2a_y+x^2a_z$ $y^2a_x+x^2a_y+z^2a_z$ $x^2a_x+y^2a_y+z^2a_z$ $y^2z^2a_x+x^2z^2a_y+x^2y^2a_z$
co-ordinate
The function shown in the figure can be represented as $u(t)-u(t-T)+\frac{(t-T)}{T}u(t-T)-\frac{(t-2T)}{T}u(t-2T)$ $u(t)+\frac{t}{T}u(t-T)-\frac{t}{T}u(t-2T)$ $u(t)-u(t-T)+\frac{(t-T)}{T}u(t)-\frac{(t-2T)}{T}u(t)$ $u(t)+\frac{(t-T)}{T}u(t-T)-2\frac{(t-2T)}{T}u(t-2T)$
impulse-function
Let $X(z)=\dfrac{1}{1-z^{-3}}$ be the $Z$ – transform of a causal signal $x[n]$ Then, the values of $x[2]$ and $x[3]$ are $0$ and $0$ $0$ and $1$ $1$ and $0$ $1$ and $1$
sausality
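A quick check of the causal inverse transform for the question above (not part of the original question): since $X(z)(1-z^{-3})=1$, the causal sequence satisfies $x[n]-x[n-3]=\delta[n]$.

```python
# First few samples of the causal x[n] with x[n] - x[n-3] = delta[n].
N = 8
x = [0] * N
for n in range(N):
    x[n] = (1 if n == 0 else 0) + (x[n - 3] if n >= 3 else 0)
print(x)   # [1, 0, 0, 1, 0, 0, 1, 0] -> x[2] = 0 and x[3] = 1
```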
The core loss of a single phase, $230/115\:V, 50\:Hz$ power transformer is measured from $230\:V$ side by feeding the primary $(230\: V \: \text{side})$ from a variable voltage variable frequency source while keeping the secondary open circuited. The core loss is measured to be $1050\:W$ ... $508\:W$ and $542\:W$ $468\:W$ and $582\:W$ $498\:W$ and $552\:W$ $488\:W$ and $562\:W$
primary-winding
A $3$ phase, $50$ $Hz$, six pole induction motor has a rotor resistance of $0.1\: Ω$ and reactance of $0.92\: Ω$. Neglect the voltage drop in stator and assume that the rotor resistance is constant. Given that the full load slip is $3\%$, the ratio of maximum torque to full load torque is $1.567$ $1.712$ $1.948$ $2.134$
induction-motor
winding-resistance
A distribution feeder of $1\:km$ length having resistance, but negligible reactance, is fed from both the ends by $400V$, $50\:Hz$ balanced sources. Both voltage sources $S_1$ and $S_2$ ... $25\:A$ $50\:A$ and $50\:A$ $25\:A$ and $75\:A$ $0\:A$ and $100\:A$
single-line-diagram
A two bus power system shown in the figure supplies load of $1.0+j0.5\: \text{p.u}.$ The values of $V_1$ in $\text{p.u.}$ and $\delta_2$ respectively are $0.95$ and $6.00^{\circ}$ $1.05$ and $-5.44^{\circ}$ $1.1$ and $-6.00^{\circ}$ $1.1$ and $-27.12^{\circ}$
The fuel cost functions of two power plants are Plant $P_1 : C_1=0.05Pg_1^2+APg_1+B$ Plant $P_2 : C_2=0.10Pg_2^2+3APg_2+2B$ where, $P_{g1}$ and $P_{g2}$ are the generated powers of two plants, and $A$ and $B$ are the constants. If the two plants optimally share $1000\:MW$ load ... cost of $100$ $Rs/MWh$, the ratio of load shared by plants $P_1$ and $P_2$ is $1:4$ $2:3$ $3:2$ $4:1$
power-generation
Let $f(t)$ be a continuous time signal and let $F(\omega)$ be its Fourier Transform defined by $F(\omega )=\displaystyle{}\int_{-\infty }^{\infty }f(t)e^{-j\omega t} dt$. Define $g(t)$ by $g(t)=\displaystyle{}\int_{-\infty }^{\infty }F(u)e^{-jut} du$. What is the ... . $g(t)$ would be proportional to $f(t)$ only if $f(t)$ is a sinusoidal function. $g(t)$ would never be proportional to $f(t)$.
continuous time
A three phase synchronous generator is to be connected to the infinite bus. The lamps are connected as shown in the figure for the synchronization. The phase sequence of bus voltage is $R-Y-B$ ... bus same as infinite bus and its frequency is more than infinite bus same as infinite bus and its frequency is less than infinite bus
In the figure, the value of resistor $R$ is $(25 + I/2)\: \text{ohms},$ where $I$ is the current in amperes. The current $I$ is ________
loop-analysis
dc-supply
An incandescent lamp is marked $40\:W, 240\: V$. If resistance at room temperature $(26^{\circ}C)$ is $120\:\Omega$, and temperature coefficient of resistance is $4.5\times 10^{-3} / ^{\circ}C$, then its $ON$ state filament temperature in $^{\circ}C$ is approximately _______
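A short worked check for the lamp question above (my own calculation, assuming the temperature coefficient is referenced to the $26^{\circ}C$ resistance):

```python
# 40 W, 240 V lamp: ON-state resistance from the rating, then solve
# R_hot = R_cold * (1 + alpha * (T_on - T_ref)) for T_on.
P, V = 40.0, 240.0
R_cold, T_ref, alpha = 120.0, 26.0, 4.5e-3
R_hot = V**2 / P                          # 1440 ohm
T_on = T_ref + (R_hot / R_cold - 1.0) / alpha
print(round(T_on))                        # roughly 2470 deg C
```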
Let g:$[0,\infty )\rightarrow [0,\infty )$ be a function defined by $g(x)=x-[x]$, where $[x]$ represents the integer part of $x.($That is, it is the largest integer which is less than or equal to $x).$ The value of the constant term in the Fourier series expansion of $g(x)$ is _______
The figure shows the circuit of a rectifier fed from a $230-V(\text{rms}), 50-Hz$ sinusoidal voltage source. If we want to replace the current source with a resistor so that the rms value of the current supplied by the voltage source remains unchanged, the value of the resistance $(\text{in ohms})$ is __________(Assume diodes to be ideal.)
A cascade of three identical modulo - $5$ counters has an overall modulus of $5$ $25$ $125$ $625$
asked Feb 12, 2017 in Numerical Ability by makhdoom ghaya (9.3k points)
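A small simulation of the counter question above (not part of the original question): each stage advances when the previous one wraps, so the cascade's state repeats only after $5^3$ pulses.

```python
# Count clock pulses until three cascaded modulo-5 counters return to (0, 0, 0).
state = [0, 0, 0]
pulses = 0
while True:
    pulses += 1
    carry = True
    for i in range(3):        # ripple the carry through the cascade
        if carry:
            state[i] = (state[i] + 1) % 5
            carry = (state[i] == 0)
    if state == [0, 0, 0]:
        break
print(pulses)                 # 125
```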
In the Wien Bridge oscillator circuit shown in figure, the bridge is balanced when $\frac{R_3}{R_4}=\frac{R_1}{R_2}\: \:, \:\:\:\: \omega =\frac{1}{\sqrt{R_1C_1R_2C_2}}$ $\frac{R_2}{R_1}=\frac{C_2}{C_1}\:\: , \:\:\:\: \omega =\frac{1}{R_1C_1R_2C_2}$ ... $\frac{R_3}{R_4}+\frac{R_1}{R_2}=\frac{C_2}{C_1}\:\: , \:\:\:\: \omega =\frac{1}{R_1C_1R_2C_2}$
The magnitude of the mid-band voltage gain of the circuit shown in figure is $($assuming $h_{fe}$ of the transistor to be $100)$ $1$ $10$ $20$ $100$
voltage-gain
mid-band-votage
In an unbalanced three phase system, phase current $I_a=1\angle (-90^{\circ})\: \text{pu}$ , negative sequence current $I_{b2}=4\angle (-150^{\circ})\: \text{pu}$, zero sequence current $I_{c0}=3\angle 90^{\circ}\: \text{pu}.$ The magnitude of phase current $I_b$ in $\text{pu}$ is $1.00$ $7.81$ $11.53$ $13.00$
unbalanced-system
sequence-networks
Figure shows four electronic switches $(i), (ii), (iii)$ and $(iv)$. Which of the switches can block voltages of either polarity $($applied between terminals $a$ and $b)$ when the active device is in the $OFF$ state? $(i), (ii)$ and $(iii)$ $(ii), (iii)$ and $(iv)$ $(ii)$ and $(iii)$ $(i)$ and $(iv)$
duality-switch
A fair coin is tossed $n$ times. The probability that the difference between the number of heads and tails is $(n-3)$ is $2^{-n}$ $0$ $^{n}C_{n-3}2^{-n}$ $2^{-n+3}$
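For the coin question above, note that heads minus tails equals $2H-n$, which always has the same parity as $n$, while $n-3$ has the opposite parity; a brute-force check (my own, not part of the question) confirms the probability is $0$.

```python
from itertools import product

# Exhaustive check for small n: count outcomes with |heads - tails| = n - 3.
for n in range(4, 9):
    hits = sum(1 for flips in product((0, 1), repeat=n)
               if abs(2 * sum(flips) - n) == n - 3)
    print(n, hits / 2**n)     # always 0: the parities never match
```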
The line integral of function $F = yzi$, in the counterclockwise direction, along the circle $x^2+y^2 = 1$ at $z = 1$ is $-2\pi$ $-\pi$ $\pi$ $2\pi$
line-integral
circle-equation
quadratic-function
A $2$-bus system and corresponding zero sequence network are shown in the figure. The transformers $T_1$ and $T_2$ are connected as
bus-system
A star connected $400\:V, 50\:Hz, 4$ pole synchronous machine gave the following open circuit and short circuit test results: Open circuit test: $V_{oc} = 400\: V (\text{rms}, \text{line-to-line})$ at field current, $I_f = 2.3 A$ ... $I_f = 1.5 A$ The value of per phase synchronous impedance in $Ω$ at rated voltage is __________.
short-ciruit-test
An $8$ – pole, $3$ – phase, $50\: Hz$ induction motor is operating at a speed of $700\:\text{rpm}.$ The frequency of the rotor current of the motor in $Hz$ is _________.
motor-winding
rotor-current
For a specified input voltage and frequency, if the equivalent radius of the core of a transformer is reduced by half, the factor by which the number of turns in the primary should change to maintain the same no load current is $1/4$ $1/2$ $2$ $4$
Find unknowns in multiplication and division problems (5,10)
Multiplication as repeated addition (10x10)
Interpreting products (10x10)
Arrays as products (10x10)
Multiplication and division using groups (10x10)
Multiplication and division using arrays (10x10)
Find unknowns in multiplication and division problems (0,1,2,4,8)
Find unknowns in multiplication and division problems (3,6,7,9)
Find unknowns in multiplication and division problems (mixed)
Complete multiplication and division facts in a table (10x10)
Multiplication and division (turn arounds and fact families) (10x10)
Find quotients (10x10)
Number sentences and word problems (10x10)
Multiplication and division by 10
Properties of multiplication (10x10)
Multiplication and division by 10 and 100
Distributive property for multiplication
Use the distributive property
Multiply a two digit number by a small single digit number using an area model
Multiply a two digit number by a small single digit number
Multiply a single digit number by a two digit number using an area model
Multiply a single digit number by a two digit number
Multiply a single digit number by a three digit number using an area model
Multiply a single digit number by a three digit number
Multiply a single digit number by a four digit number using an area model
Multiply a single digit number by a four digit number using algorithm
Multiply 2 two digit numbers using an area model
Multiply 2 two digit numbers
Multiply a two digit number by a 3 digit number
Multiply 3 numbers together
Divide a 2 digit number by a 1 digit number using area or array model
Divide a 2 digit number by a 1 digit number
Divide a 3 digit number by a 1 digit number resulting in a remainder
Multiply various single and double digit numbers
Extend multiplicative strategies to larger numbers
Divide various numbers by single digits
Solve division problems presented within contexts
Solve multiplication and division problems involving objects or words
Multiply various single, 2 and 3 digit numbers
Divide various 4 digit numbers by 2 digit numbers
We've already seen how to multiply a one digit number with another one digit number. So what do we do when it's a two digit number multiplied with another $2$ digit number?
When we multiply we are always talking about groups. $13\times24$ is actually saying $13$ groups of $24$ or $24$ groups of $13$. We could add up all these groups, but that is a lot of addition to do in our head! There is actually an easier way. Watch this video to see how.
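If you like to check the strategy with a few lines of code, here is one rough way to spell out the "split into tens and ones" idea (an area model); the function name is just made up for this example.

```python
# Split each factor into tens and ones and add the four partial products,
# e.g. 13 x 24 = 10x20 + 10x4 + 3x20 + 3x4 = 200 + 40 + 60 + 12 = 312.
def area_model(a, b):
    a_tens, a_ones = divmod(a, 10)
    b_tens, b_ones = divmod(b, 10)
    parts = [a_tens * 10 * b_tens * 10,
             a_tens * 10 * b_ones,
             a_ones * b_tens * 10,
             a_ones * b_ones]
    return parts, sum(parts)

print(area_model(13, 24))   # ([200, 40, 60, 12], 312)
print(area_model(11, 32))   # ([300, 20, 30, 2], 352)
```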
Evaluate $49\times10$
Find $11\times32$.
NA3-6
Record and interpret additive and simple multiplicative strategies, using words, diagrams, and symbols, with an understanding of equality
Subsistence and Self-perpetuating Inequality
Posted by dvollrath on July 09, 2014 · 6 mins read
It's common in development or growth to think about subsistence constraints. From a macro perspective, we think of them as an explanation for the low income-elasticity of food expenditures, and therefore a cause of structural change away from agriculture. The idea has been there in development for years. I don't know the full intellectual history here, but my understanding of it goes back to The Moral Economy of the Peasant by James C. Scott. He says you can understand a lot about the peasant mind-set by realizing they face subsistence constraints, and that this makes them incredibly risk averse. Kevin Donovan has a recent paper that looks at the macro consequences of this risk-aversion for agricultural development.
I think the concept of subsistence constraints and risk aversion is useful for thinking about inequality in general, even outside of developing countries. In particular, it offers an explanation for why inequality will be self-perpetuating and how mitigating inequality will be growth-enhancing.
If there is some subsistence constraint in your utility function, then your risk aversion is declining with income. You can take my word for it, or if you'd like to see it more clearly, let utility be
$$ U = \ln(y-\overline{y}) $$
and the coefficient of relative risk aversion is then
$$ \frac{-yU''}{U'} = \frac{y}{y-\overline{y}}. $$
Risk aversion approaches infinity as income approaches the subsistence constraint, meaning people will refuse to take any gamble that might put them below ${\overline{y}}$. Risk aversion approaches 1 as income rises. The value 1 itself isn't important; what's important is that as people get richer, they are willing to accept bigger and bigger gambles with their income, because they are in less and less danger of falling below ${\overline{y}}$. Richer people are more risk-tolerant.
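To make that concrete, here is a small numerical sketch (my own illustration, with made-up numbers, not anything from the underlying papers): with $U=\ln(y-\overline{y})$, the same positive-expected-value venture is refused at low incomes and accepted at high ones.

```python
import math

y_bar = 10.0                            # subsistence level (illustrative)

def utility(y):
    return math.log(y - y_bar)          # U = ln(y - y_bar), needs y > y_bar

def takes_gamble(y_safe, gain, loss, p_win=0.5):
    """Accept the risky venture only if expected utility beats the safe income."""
    if y_safe - loss <= y_bar:          # any chance of hitting subsistence -> refuse
        return False
    eu = p_win * utility(y_safe + gain) + (1 - p_win) * utility(y_safe - loss)
    return eu > utility(y_safe)

# Same venture (win 8 or lose 5, even odds, so expected gain +1.5) at different incomes:
for y in (14.0, 16.0, 20.0, 40.0, 100.0):
    print(y, takes_gamble(y, gain=8.0, loss=5.0))   # poorer agents refuse, richer accept
```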
Combine this conception of subsistence and risk aversion with the commonly understood relationship of risk and reward. In finance and entrepreneurship, we generally believe that those willing to absorb higher risks are able to reap higher rewards. Entrepreneurs earned that money by taking a risk in starting a company. Aside from entrepreneurship, big fixed investments in education -- quitting your job to go back to school -- are very risky moves that in turn have high expected rewards.
Together, subsistence and the risk/reward correlation imply that inequality will be self-perpetuating. Poor people will not take on big risks (starting a business) because they cannot handle even a small probability of failure that takes them below subsistence. So they stay in low-wage jobs and don't undertake investments that might make them better off.
Well-off people are able to tolerate bigger risks, and so also earn higher rewards. They start businesses, get more education, or move across country for a new job. If it doesn't work out, they won't starve, so it's worth the risk. And because they undertake big risks, they tend to earn high rewards. The rich can expand their incomes and/or wealth at a faster rate than the poor, because of their higher risk tolerance. This naturally acts to expand inequality over time.
The crucial point is that there is no pathology to the poor refusing to take big risks. With subsistence constraints, the poor don't remain poor because they are lazy or stupid, but because they are rationally avoiding big risks that might push them below subsistence.
The conceit of people who espouse a "just deserts" theory of inequality (Greg Mankiw, Sean Hannity, et al) is that they would have been well-off regardless of where they start in life. They believe that they possess qualities -- smarts, skills, work ethic -- that make them valuable to the market, and that they are rewarded for that. But start them off in a truly poor household, one where the next meal is uncertain, and with 99.99% certainty they would not end up a professor at Harvard or, well, whatever Hannity is. There were big risks taken in their lives that had big payoffs. Risks they or their family would not have been able to conceive of taking if they were truly poor.
Subsistence constraints also imply that acting to mitigate inequality -- whether by raising incomes of the poor or making their incomes less uncertain -- would have a distinct positive effect on economic growth. Ensuring that people won't fall to subsistence (or below) means that more people are willing to start a business, and we get more economic activity, more competition, and more innovation. If more people are willing to go for advanced training (college or vocational school) then we get a more skilled workforce. Acting to insure or support poor incomes has positive spillovers.
Won't providing income support just incent all the poor people to stop working? Remember that most of the people screaming loudest about this - Casey Mulligan - are tenured professors who cannot be fired. Despite having no incentives whatsoever to continue working on research or provide more than perfunctory teaching, Mulligan continues to work. Why? Why does he not rationally show up to collect his check and then go home to eat Cheetos and watch Dr. Phil? If Casey Mulligan continues to work despite a guaranteed income, I'm fairly confident that the vast, vast majority of people will continue to work even if they are no longer at risk of falling into destitution.
Geometry, compatibility and structure preservation in computational differential equations
Seminars (GCS)
Videos and presentation materials from other INI events are also available.
GCSW01 8th July 2019
10:00 to 11:00 Hans Munthe-Kaas Why B-series, rooted trees, and free algebras? - 1
"We regard Butcher's work on the classification of numerical integration methods as an impressive example that concrete problem-oriented work can lead to far-reaching conceptual results". This quote by Alain Connes summarises nicely the mathematical depth and scope of the theory of Butcher's B-series. The aim of this joined lecture is to answer the question posed in the title by drawing a line from B-series to those far-reaching conceptional results they originated. Unfolding the precise mathematical picture underlying B-series requires a combination of different perspectives and tools from geometry (connections); analysis (generalisations of Taylor expansions), algebra (pre-/post-Lie and Hopf algebras) and combinatorics (free algebras on rooted trees). This summarises also the scope of these lectures. In the first lecture we will outline the geometric foundations of B-series, and their cousins Lie-Butcher series. The latter is adapted to studying differential equations on manifolds. The theory of connections and parallel transport will be explained. In the second and third lectures we discuss the algebraic and combinatorial structures arising from the study of invariant connections. Rooted trees play a particular role here as they provide optimal index sets for the terms in Taylor series and generalisations thereof. The final lecture will discuss various applications of the theory in the numerical analysis of integration schemes.
11:30 to 12:30 Charlie Elliott PDEs in Complex and Evolving Domains I
14:00 to 15:00 Elizabeth Mansfield Introduction to Lie groups and algebras - 1
In this series of three lectures, I will give a gentle introduction to Lie groups and their associated Lie algebras, concentrating on the major examples of importance in applications. I will next discuss Lie group actions, their invariants, their associated infinitesimal vector fields, and a little on moving frames. The final topic will be Noether's Theorem, which yields conservations laws for variational problems which are invariant under a Lie group action. Time permitting, I will show a little of the finite difference and finite element versions of Noether's Theorem.
In this first talk, I will consider some of the simplest and most useful Lie groups. I will show how to derive their Lie algebras, and will further discuss the Adjoint action of the group on its Lie algebra, the Lie bracket on the algebra, and the exponential map.
15:00 to 16:00 Christopher Budd Modified error estimates for discrete variational derivative methods
My three talks will be an exploration of geometric integration methods in the context of the numerical solution of PDEs. I will look both at discrete variational methods and their analysis using modified equations, and also of the role of adaptivity in studying and retaining qualitative features of PDEs.
16:30 to 17:30 Paola Francesca Antonietti High-order Discontinuous Galerkin methods for the numerical modelling of earthquake ground motion
A number of challenging geophysical applications requires a flexible representation of the geometry and an accurate approximation of the solution field. Paradigmatic examples include seismic wave propagation and fractured reservoir simulations. The main challenges are i) the complexity of the physical domain, due to the presence of localized geological irregularities, alluvial basins, faults and fractures; ii) the heterogeneities in the medium, with significant and sharp contrasts; and iii) the coexistence of different physical models. The high-order discontinuous Galerkin FEM possesses the built-in flexibility to naturally accommodate both non-matching meshes, possibly made of polygonal and polyhedral elements, and high-order approximations in any space dimension. In this talk I will discuss recent advances in the development and analysis of high-order DG methods for the numerical approximation of seismic wave propagation phenomena. I will analyse the stability and the theoretical properties of the scheme and present some simulations of real large-scale seismic events in three-dimensional complex media.
09:00 to 10:00 Douglas Arnold Finite Element Exterior Calculus - 1
These lectures aim to provide an introduction and overview of Finite Element Exterior Calculus, a transformative approach to designing and understanding numerical methods for partial differential equations. The first lecture will introduce some of the key tools--chain complexes and their cohomology, closed operators in Hilbert space, and their marriage in the notion of Hilbert complexes--and explore their application to PDEs. The lectures will continue with a study of the properties needed to effectively discretize Hilbert complexes, illustrating the abstract framework on the concrete example of the de Rham complex and its applications to problems such as Maxwell's equation. The third lecture will get into differential forms and their discretization by finite elements, bringing in new tools like the Koszul complex and bounded cochain projections and revealing the Periodic Table of Finite Elements. Finally in the final lecture we will examine new complexes, their discretization, and applications.
I will discuss Lie group actions, their invariants, their associated infinitesimal vector fields, and a little on moving frames.
11:30 to 12:30 Kurusch Ebrahimi-Fard Why B-series, rooted trees, and free algebras? - 2
14:00 to 15:00 Charlie Elliott PDEs in Complex and Evolving Domains II
15:00 to 16:00 Christian Lubich Variational Gaussian wave packets revisited
The talk reviews Gaussian wave packets that evolve according to the Dirac-Frenkel time-dependent variational principle for the semi-classically scaled Schrödinger equation. Old and new results on the approximation to the wave function are given, in particular an $L^2$ error bound that goes back to Hagedorn (1980) in a non-variational setting, and a new error bound for averages of observables with a Weyl symbol, which shows the double approximation order in the semi-classical scaling parameter in comparison with the norm estimate.
The variational equations of motion in Hagedorn's parametrization of the Gaussian are presented. They show a perfect quantum-classical correspondence and allow us to read off directly that the Ehrenfest time is determined by the Lyapunov exponent of the classical equations of motion.
A variational splitting integrator is formulated and its remarkable conservation and approximation properties are discussed. A new result shows that the integrator approximates averages of observables with the full order in the time stepsize, with an error constant that is uniform in the semiclassical parameter.
The material presented here for variational Gaussians is part of an Acta Numerica review article on computational methods for quantum dynamics in the semi-classical regime, which is currently in preparation in joint work with Caroline Lasser.
GCSW01 10th July 2019
09:00 to 10:00 Christopher Budd Blow-up in PDES and how to compute it
11:30 to 12:30 Charlie Elliott PDEs in Complex and Evolving Domains III
09:00 to 10:00 Charlie Elliott PDEs in Complex and Evolving Domains IV
10:00 to 11:00 Beth Wingate An introduction to time-parallel methods
I will give an introduction to time-parallel methods and discuss how they apply to PDEs, in particular those with where resonance plays an important role. I'll give some examples from ODEs before finally discussing PDEs.
15:00 to 16:00 Elizabeth Mansfield Noether's conservation laws - smooth and discrete
The final topic will be Noether's Theorem, which yields conservation laws for variational problems which are invariant under a Lie group action. . I will show a little of the finite difference and finite element versions of Noether's Theorem.
16:30 to 17:30 Panel
09:00 to 10:00 Reinout Quispel Discrete Darboux polynomials and the preservation of measure and integrals of ordinary differential equations
Preservation of phase space volume (or more generally measure), first integrals (such as energy), and second integrals have been important topics in geometric numerical integration for more than a decade, and methods have been developed to preserve each of these properties separately. Preserving two or more geometric properties simultaneously, however, has often been difficult, if not impossible. Then it was discovered that Kahan's 'unconventional' method seems to perform well in many cases [1]. Kahan himself, however, wrote: "I have used these unconventional methods for 24 years without quite understanding why they work so well as they do, when they work." The first approximation to such an understanding in computational terms was: Kahan's method works so well because
1. It is very successful at preserving multiple quantities simultaneously, eg modified energy and modified measure.
2. It is linearly implicit
3. It is the restriction of a Runge-Kutta method
However, point 1 above raises a further obvious question: Why does Kahan's method preserve both certain (modified) first integrals and certain (modified) measures? In this talk we invoke Darboux polynomials to try and answer this question. The method of Darboux polynomials (DPs) for ODEs was introduced by Darboux to detect rational integrals. Very recently we have advocated the use of DPs for discrete systems [2,3]. DPs provide a unified theory for the preservation of polynomial measures and second integrals, as well as rational first integrals. In this new perspective the answer we propose to the above question is: Kahan's method works so well because it is good at preserving (modified) Darboux polynomials. If time permits we may discuss extensions to polarization methods.
[1] Petrera et al, Regular and Chaotic Dynamics 16 (2011), 245–289.
[2] Celledoni et al, arxiv:1902.04685.
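As a concrete illustration of the linearly implicit structure mentioned in point 2 above (my own sketch, not material from the talk): for a quadratic vector field, Kahan's method replaces each quadratic term by its symmetric polarization evaluated on the old and new states and averages the linear terms, so every step reduces to a linear solve. Below this is applied to a classical Lotka-Volterra system, with the standard invariant monitored as a rough check of the method's preservation properties.

```python
import numpy as np

def kahan_step_lv(x, y, h, a=1.0, b=1.0, c=1.0, d=1.0):
    """One step of Kahan's method for x' = x(a - b y), y' = y(c x - d).
    The product x*y is polarized as (x_n*y_{n+1} + x_{n+1}*y_n)/2 and the
    linear terms are averaged, giving a linear system for the new state."""
    M = np.array([[1 - h*a/2 + h*b*y/2, h*b*x/2],
                  [-h*c*y/2,            1 - h*c*x/2 + h*d/2]])
    rhs = np.array([x*(1 + h*a/2), y*(1 - h*d/2)])
    return np.linalg.solve(M, rhs)

def H(x, y, a=1.0, b=1.0, c=1.0, d=1.0):
    # Standard first integral of the continuous Lotka-Volterra system.
    return c*x - d*np.log(x) + b*y - a*np.log(y)

x, y = 2.0, 0.5
h, n_steps = 0.1, 500
H0 = H(x, y)
for _ in range(n_steps):
    x, y = kahan_step_lv(x, y, h)
print("relative change of H after integration:", abs(H(x, y) - H0) / abs(H0))
```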
10:00 to 10:30 James Jackaman Lie symmetry preserving finite element methods
Through the mathematical construction of finite element methods, the "standard" finite element method will often preserve the underlying symmetry of a given differential equation. However, this is not always the case, and while historically much attention has been paid to the preserving of conserved quantities the preservation of Lie symmetries is an open problem. We introduce a methodology for the design of arbitrary order finite element methods which preserve the underlying Lie symmetries through an equivariant moving frames based invariantization procedure.
10:30 to 11:00 Candan Güdücü Port-Hamiltonian Systems
The framework of port-Hamiltonian systems (PH systems) combines both the Hamiltonian approach and the network approach, by associating with the interconnection structure of the network model a geometric structure given by a Dirac structure. In this talk, I introduce port-Hamiltonian (pH) systems and their underlying Dirac structures. Then, a coordinate-based representation of PH systems and some properties are shown. A Lanczos method for the solution of linear systems with nonsymmetric coefficient matrices and its application to pH systems are presented.
11:30 to 12:30 Christopher Budd Adaptivity and optimal transport
GCS 17th July 2019
11:00 to 12:00 Christian Offen Detection of high codimensional bifurcations in variational PDEs
We derive bifurcation test equations for A-series singularities of nonlinear functionals and, based on these equations, we propose a numerical method for detecting high codimensional bifurcations in parameter-dependent PDEs such as parameter-dependent semilinear Poisson equations. As an example, we consider a Bratu-type problem and show how high codimensional bifurcations such as the swallowtail bifurcation can be found numerically.
Lisa Maria Kreusser, Robert I McLachlan, Christian Offen
GCS 22nd July 2019
14:00 to 15:00 Ralf Hiptmair Exterior Shape Calculus
GCS 23rd July 2019
14:00 to 15:00 Annalisa Buffa Dual complexes and mortaring for regular approximations of electromagnetic fields
15:00 to 16:00 Onno Bokhove A new wave-to-wire wave-energy model: from variational principle to compatible space-time discretisation
Amplification phenomena in a so-called bore-soliton-splash have led us to develop a novel wave-energy device with wave amplification in a contraction used to enhance wave-activated buoy motion and magnetically-induced energy generation. An experimental proof-of-principle shows that our wave-energy device works. Most importantly, we develop a novel wave-to-wire mathematical model of the combined wave hydrodynamics, wave-activated buoy motion and electric power generation by magnetic induction, from first principles, satisfying one grand variational principle in its conservative limit. Wave and buoy dynamics are coupled via a Lagrange multiplier, whose boundary value at the waterline is subtly solved explicitly by imposing incompressibility in a weak sense. Dissipative features, such as electrical wire resistance and nonlinear LED-loads, are added a posteriori. Also new is the intricate and compatible (finite-element) space-time discretisation of the linearised dynamics, guaranteeing numerical stability and the correct energy transfer between the three subsystems. Preliminary simulations of our simplified and linearised wave-energy model are encouraging, yet suboptimal, and involve a first study of the resonant behaviour and parameter dependence of the device.
14:00 to 15:00 Melvin Leok The Connections Between Discrete Geometric Mechanics, Information Geometry and Machine Learning
15:00 to 16:00 Benjamin Tapley Computational methods for simulating inertial particles in discrete incompressible flows.
GCS 31st July 2019
15:00 to 16:00 Anthony Bloch Optimal control and the geometry of integrable systems
In this talk we discuss a geometric approach to certain optimal control problems and discuss the relationship of the solutions of these problems to some classical integrable dynamical systems and their generalizations. We consider the so-called Clebsch optimal control problem and its relationship to Lie group actions on manifolds. The integrable systems discussed include the rigid body equations, geodesic flows on the ellipsoid, flows on Stiefel manifolds, and the Toda lattice flows. We discuss the Hamiltonian structure of these systems and relate our work to some work of Moser. We also discuss the link to discrete dynamics and symplectic integration.
GCS 7th August 2019
14:00 to 15:00 Arieh Iserles Fast approximation on the real line
Abstract: While approximation theory in an interval is thoroughly understood, the real line represents something of a mystery. In this talk we review the state of the art in this area, commencing from the familiar Hermite functions and moving to recent results characterising all orthonormal sets on $L_2(-\infty,\infty)$ that have a skew-symmetric (or skew-Hermitian) tridiagonal differentiation matrix and such that their first $n$ expansion coefficients can be calculated in $O(n \log n)$ operations. In particular, we describe the generalised Malmquist–Takenaka system. The talk concludes with a (too!) long list of open problems and challenges.
15:00 to 16:00 Robert McLachlan The Lie algebra of classical mechanics
Classical mechanical systems are defined by their kinetic and potential energies. They generate a Lie algebra under the canonical Poisson bracket which is useful in geometric integration. But because the kinetic energy is quadratic in the momenta, the Lie algebra obeys identities beyond those implied by skew symmetry and the Jacobi identity. Some Poisson brackets, or combinations of brackets, are zero for all choices of kinetic and potential energy. Therefore, we study and give a complete description of the universal object in this setting, the 'Lie algebra of classical mechanics' modelled on the Lie algebra generated by kinetic and potential energy of a simple mechanical system with respect to the canonical Poisson bracket.
Joint work with Ander Murua.
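As a small symbolic illustration of the extra identities referred to above (a sketch under the stated assumptions, not material from the talk): with $T = p^2/2$ and an arbitrary potential $V(q)$, the iterated canonical bracket $\{\{\{T,V\},V\},V\}$ vanishes identically, which can be checked with sympy.

    # Sketch: verify the identity {{{T,V},V},V} = 0 for T = p^2/2 and generic V(q).
    import sympy as sp

    q, p = sp.symbols('q p')
    V = sp.Function('V')(q)      # arbitrary potential energy
    T = p**2 / 2                 # kinetic energy, quadratic in the momentum

    def bracket(f, g):
        # canonical Poisson bracket {f, g} in one degree of freedom
        return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

    print(sp.simplify(bracket(bracket(bracket(T, V), V), V)))   # prints 0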
GCS 14th August 2019
14:00 to 15:00 David Garfinkle Numerical General Relativity
This talk will cover the basic properties of the equations of General Relativity, and issues involved in performing numerical simulations of those equations. Particular emphasis will be placed on three issues: (1) hyperbolicity of the equations; (2) preserving constraints; and (3) dealing with black holes and spacetime singularities.
15:00 to 16:00 Ari Stern Finite element methods for Hamiltonian PDEs
Hamiltonian ODEs satisfy a symplectic conservation law, and there are many advantages to using numerical integrators that preserve this structure. This talk will discuss how the canonical Hamiltonian structure, and its preservation by a numerical method, can be generalized to PDEs. I will also provide a basic introduction to the finite element method and, time permitting, discuss how some classic symplectic integrators can be understood from this point of view.
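As background for readers unfamiliar with symplectic integrators, a minimal ODE-level sketch (an illustration only; the talk itself concerns the PDE and finite element setting): the symplectic Euler method for a separable Hamiltonian $H(q,p)=p^2/2+V(q)$, here a pendulum, whose energy error stays bounded over long times instead of drifting.

    # Minimal sketch: symplectic Euler for H(q,p) = p^2/2 + V(q), with V(q) = 1 - cos(q).
    import numpy as np

    def symplectic_euler(q, p, h, steps, dV):
        out = [(q, p)]
        for _ in range(steps):
            p = p - h * dV(q)    # kick: update momentum with the current position
            q = q + h * p        # drift: update position with the new momentum
            out.append((q, p))
        return np.array(out)

    traj = symplectic_euler(q=1.0, p=0.0, h=0.1, steps=10000, dV=np.sin)
    H = traj[:, 1]**2 / 2 + 1 - np.cos(traj[:, 0])
    print(H.max() - H.min())     # stays small and bounded; no secular drift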
GCS 21st August 2019
14:00 to 15:00 Fernando Casas More on composition methods: error estimation and pseudo-symmetry
In this talk I will review composition methods for the time integration of differential equations, paying special attention to two recent contributions in this area. The first is the construction of a new local error estimator whose additional computational cost is almost insignificant. The second is related to a new family of high-order methods obtained from a basic symmetric (symplectic) scheme in such a way that they are time-symmetric (symplectic) only up to a certain order.
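For reference, the prototype of such a construction (a generic textbook composition, not the new estimators or pseudo-symmetric methods of the talk): the "triple jump" composes a symmetric second-order basic map with step sizes $\gamma_1 h$, $\gamma_2 h$, $\gamma_1 h$ to obtain a fourth-order method.

    # Sketch: triple-jump composition. basic_step(y, h) is any symmetric,
    # second-order one-step map (e.g. Strang splitting or implicit midpoint);
    # the composition below is then fourth-order accurate.
    def triple_jump(basic_step, y, h):
        g1 = 1.0 / (2.0 - 2.0**(1.0 / 3.0))
        g2 = 1.0 - 2.0 * g1              # note: the middle substep is negative
        for g in (g1, g2, g1):
            y = basic_step(y, g * h)
        return y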
15:00 to 16:00 Yajuan Sun Contact Hamiltonian system and its application in solving Vlasov-Poisson Fokker-Planck system
The Vlasov-Poisson Fokker-Planck system is a kinetic description of Brownian motion for a large number of particles in a surrounding bath. In solving this system, the contact structure is investigated. This motivates the study of contact systems and the construction of corresponding numerical methods. This talk will introduce the contact system and the construction of numerical methods based on generating functions and the variational principle.
14:00 to 15:00 Gerard Awanou Computational geometric optics: Monge-Ampère
I will review recent developments in the numerical resolution of the second boundary value problem for Monge-Ampère type equations and their applications to the design of reflectors and refractors.
15:00 to 16:00 Milo Viviani Lie-Poisson methods for isospectral flows and their application to long-time simulation of spherical ideal hydrodynamics
The theory of isospectral flows comprises a large class of continuous dynamical systems, particularly integrable systems and Lie–Poisson systems. Their discretization is a classical problem in numerical analysis. Preserving the spectra in the discrete flow requires the conservation of high order polynomials, which is hard to come by. Existing methods achieving this are complicated and usually fail to preserve the underlying Lie–Poisson structure. Here we present a class of numerical methods of arbitrary order for Hamiltonian and non-Hamiltonian isospectral flows, which preserve both the spectra and the Lie–Poisson structure. The methods are surprisingly simple, and avoid the use of constraints or exponential maps. Furthermore, due to preservation of the Lie–Poisson structure, they exhibit near conservation of the Hamiltonian function. As an illustration, we apply the methods to long-time simulation of the Euler equations on a sphere. Our findings suggest that, on the one hand, our structure-preserving algorithms perform at least as well as other popular methods (i.e. CLAM) without adding spurious hyperviscosity terms and, on the other hand, that the conservation of the Casimir functions can actually be used to predict the final state of the fluid.
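To illustrate the basic mechanism at stake (a generic sketch, not the Lie–Poisson-preserving methods of the talk): any update of the form $L \mapsto Q L Q^{-1}$ is a similarity transform and therefore preserves the spectrum of $L$ exactly; choosing $Q$ as a Cayley transform of $hB(L)$ gives a first-order consistent isospectral step.

    # Generic sketch of an isospectral step for dL/dt = [B(L), L]:
    # conjugation preserves the eigenvalues exactly, for any step size.
    import numpy as np

    def cayley(A):
        I = np.eye(A.shape[0])
        return np.linalg.solve(I - 0.5 * A, I + 0.5 * A)

    def isospectral_step(L, h, B):
        Q = cayley(h * B(L))                       # orthogonal whenever B(L) is skew
        return Q @ L @ np.linalg.inv(Q)

    # Toy Toda-like flow: B(L) = strictly lower triangle of L minus its transpose.
    B = lambda L: np.tril(L, -1) - np.triu(L, 1)
    L = np.array([[2.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, -2.0]])
    spec0 = np.sort(np.linalg.eigvalsh(L))
    for _ in range(200):
        L = isospectral_step(L, 0.05, B)
    print(np.sort(np.linalg.eigvalsh(L)) - spec0)  # spectrum drift ~ machine precision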
GCS 4th September 2019
14:00 to 15:00 Sina Ober-Blöbaum Variational formulations for dissipative systems
Variational principles are powerful tools for the modelling and simulation of conservative mechanical and electrical systems. As is well known, the fulfilment of a variational principle leads to the Euler-Lagrange equations of motion describing the dynamics of such systems. Furthermore, a variational discretisation directly yields unified numerical schemes with powerful structure-preserving properties. For many years there have been several attempts to provide a variational description also for dissipative mechanical systems, a task that is addressed in the talk in order to construct both Lagrangian and Hamiltonian pictures of their dynamics. One way of doing this is to use fractional terms in the Lagrangian or Hamiltonian function, which allows for a purely variational derivation of dissipative systems. Another approach followed in this talk is to embed the non-conservative systems in larger conservative systems. These concepts are used to develop variational integrators for which superior qualitative numerical properties, such as the correct energy dissipation rate, are demonstrated.
15:00 to 16:00 Sigrid Leyendecker Mixed order and multirate variational integrators for the simulation of dynamics on different time scales
Mechanical systems with dynamics on varying time scales, e.g. including highly oscillatory motion, pose challenging questions for numerical integration schemes. High resolution is required to guarantee a stable integration of the fast frequencies. However, for the simulation of the slow dynamics, integration with a lower resolution is accurate enough - and computationally cheaper, especially for costly function evaluations. Two approaches are presented, a mixed order Galerkin variational integrator and a multirate variational integrator, and analysed with respect to the preservation of invariants, computational costs, accuracy and linear stability.
14:15 to 15:15 Chus Sanz-Serna Numerical Integrators for the Hamiltonian Monte Carlo Method
GCS 11th September 2019
14:00 to 15:00 Peter Hydon Conservation laws and Euler operators
A (local) conservation law of a given system of differential or difference equations is a divergence expression that is zero on all solutions. The Euler operator is a powerful tool in the formal theory of conservation laws that enables key results to be proved simply, including several generalizations of Noether's theorems. This talk begins with a short survey of the main ideas and results. The current method for inverting the divergence operator generates many unnecessary terms by integrating in all directions simultaneously. As a result, symbolic algebra packages create over-complicated representations of conservation laws, making it difficult to obtain efficient conservative finite difference approximations symbolically. A new approach resolves this problem by using partial Euler operators to construct near-optimal representations. The talk explains this approach, which was developed during the GCS programme.
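As a tiny symbolic illustration of the central fact (a sketch, not the partial Euler operators developed during the programme): the Euler operator annihilates any total derivative, so a divergence expression has identically zero variational derivative.

    # Sketch: the Euler operator annihilates total derivatives.
    # Here L = D_x(u * u_x) is a divergence, so its Euler-Lagrange expression is zero.
    import sympy as sp
    from sympy.calculus.euler import euler_equations

    x = sp.symbols('x')
    u = sp.Function('u')
    L = sp.diff(u(x) * sp.diff(u(x), x), x)     # a total x-derivative
    print(euler_equations(L, [u(x)], [x]))      # the Euler-Lagrange expression vanishes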
15:00 to 16:00 Gianluca Frasca-Caccia Numerical preservation of local conservation laws
In the numerical treatment of partial differential equations (PDEs), the benefits of preserving global integral invariants are well-known. Preserving the underlying local conservation law gives, in general, a stricter constraint than conserving the global invariant obtained by integrating it in space. Conservation laws, in fact, hold throughout the domain and are satisfied by all solutions, independently of initial and boundary conditions. A new approach that uses symbolic algebra to develop bespoke finite difference schemes that preserve multiple local conservation laws has recently been applied to PDEs with polynomial nonlinearity. The talk illustrates this new strategy using some well-known equations as benchmark examples and shows comparisons between the obtained schemes and other integrators known in the literature.
14:05 to 14:50 Blanca Ayuso De Dios Constructing Discontinuous Galerkin methods for Vlasov-type systems
The Vlasov-Poisson and the Vlasov-Maxwell systems are two classical models in collisionless kinetic theory. They are both derived as mean-field limit descriptions of a large ensemble of particles interacting by electrostatic and electro-magnetic forces, respectively.
In this talk we describe how to design (semi-discrete!) discontinuous Galerkin finite element methods for approximating such Vlasov-type systems. We outline the error analysis of the schemes and discuss further properties of the proposed schemes, as well as their shortcomings. If time allows, we discuss further endeavours in alleviating the drawbacks of the schemes.
15:05 to 15:50 Linyu Peng Variational systems on the variational bicomplex
It is well known that symplecticity plays a fundamentally important role in Lagrangian and Hamiltonian systems. Numerical methods preserving symplecticity (or multisymplecticity for PDEs) have been greatly developed and applied during the last decades. In this talk, we will show how the variational bicomplex, a double cochain complex on jet manifolds, provides a natural framework for understanding multisymplectic systems. The discrete counterpart, discrete multisymplectic systems on the difference variational bicomplex, will briefly be introduced if time permits.
16:00 to 17:00 Marcus Webb Energy preserving spectral methods on the real line whose analysis strays into the complex plane
14:05 to 14:50 Martin Licht Newest Results in Newest Vertex Bisection
The algorithmic refinement of triangular meshes is an important component in numerical simulation codes. Newest vertex bisection is one of the most popular methods for geometrically stable local refinement. Its complexity analysis, however, is a fairly intricate recent result, and many combinatorial aspects of this method are not yet fully understood. In this talk, we approach newest vertex bisection from the perspective of theoretical computer science. We outline the amortized complexity analysis over generalized triangulations. An immediate application is the convergence and complexity analysis of adaptive finite element methods over embedded surfaces and singular surfaces. Moreover, we "combinatorialize" the complexity estimate and remove any geometry-dependent constants, which is only natural for this purely combinatorial algorithm and improves upon prior results. This is joint work with Michael Holst and Zhao Lyu.
GCSW02 30th September 2019
09:30 to 10:30 Deirdre Shoemaker Numerical Relativity in the Era of Gravitational Wave Observations
The birth and future of gravitational wave astronomy offers new opportunities and challenges for numerical methods in general relativity.
Numerical relativity in particular provides critical support to detect and interpret gravitational wave measurements. In this talk, I'll discuss the role numerical relativity is playing in the black hole binaries observed by LIGO and Virgo and its future potential for unveiling strong-field gravity in future detections, with an emphasis on the computational challenges. I'll frame a discussion about what demands will be placed on this field to maximize the science output of the new era.
11:00 to 12:00 Michael Holst Some Research Problems in Mathematical and Numerical General Relativity
The 2017 Nobel Prize in Physics was awarded to three of the key scientists involved in the development of LIGO and its eventual successful first detections of gravitational waves. How do LIGO (and other gravitational wave detector) scientists know what they are detecting? The answer is that the signals detected by the devices are shown, after extensive data analysis and numerical simulations of the Einstein equations, to be a very close match to computer simulations of wave emission from very particular types of binary collisions.
In this lecture, we begin with a brief overview of the mathematical formulation of Einstein (evolution and constraint) equations, and then focus on some fundamental mathematics research questions involving the Einstein constraint equations. We begin with a look at the most useful mathematical formulation of the constraint equations, and then summarize the known existence, uniqueness, and multiplicity results through 2009. We then present a number of new existence and multiplicity results developed since 2009 that substantially change the solution theory for the constraint equations. In the second part of the talk, we consider approaches for developing "provably good" numerical methods for solving these types of geometric PDE systems on 2- and 3-manifolds. We examine how one proves rigorous error estimates for particular classes of numerical methods, including both classical finite element methods and newer methods from the finite element exterior calculus.
This lecture will touch on several joint projects that span more than a decade, involving a number of collaborators. The lecture is intended both for mathematicians interested in potential research problems in mathematical and numerical general relativity, as well as physicists interested in relevant new developments in mathematical and numerical methods for nonlinear geometric PDE.
13:30 to 14:30 Pau Figueras numerical relativity beyond astrophysics: new challenges and new dynamics
Motivated by more fundamental theories of gravity such as string theory, in recent years there has been a growing interest in solving the Einstein equations numerically beyond the traditional astrophysical set up. For instance in spacetime dimensions higher than the four that we have observed, or in exotic spaces such as anti-de Sitter spaces. In this talk I will give an overview of the challenges that are often encountered when solving the Einstein equations in these new settings. In the second part of the talk I will provide some examples, such as the dynamics of unstable black holes in higher dimensions and gravitational collapse in anti-de Sitter spaces.
14:30 to 15:30 Ari Stern Structure-preserving time discretization: lessons for numerical relativity?
In numerical ODEs, there is a rich literature on methods that preserve certain geometric structures arising in physical systems, such as Hamiltonian/symplectic structure, symmetries, and conservation laws. I will give an introduction to these methods and discuss recent work extending some of these ideas to numerical PDEs in classical field theory.
16:00 to 17:00 Frans Pretorius Computational Challenges in Numerical Relativity
I will give a brief overview of some of the challenges in computational solution of the Einstein field equations. I will then describe the background error subtraction technique, designed to allow for more computationally efficient solution of scenarios where a significant portion of the domain is close to a known, exact solution. To demonstrate, I will discuss application to tidal disruption of a star by a supermassive black hole, and studies of black hole superradiance.
GCSW02 1st October 2019
09:30 to 10:30 Lee Lindblom Solving PDEs Numerically on Manifolds with Arbitrary Spatial Topologies
11:00 to 12:00 Melvin Leok Variational discretizations of gauge field theories using group-equivariant interpolation spaces
Variational integrators are geometric structure-preserving numerical methods that preserve the symplectic structure, satisfy a discrete Noether's theorem, and exhibit excellent long-time energy stability properties. An exact discrete Lagrangian arises from Jacobi's solution of the Hamilton-Jacobi equation, and it generates the exact flow of a Lagrangian system. By approximating the exact discrete Lagrangian using an appropriate choice of interpolation space and quadrature rule, we obtain a systematic approach for constructing variational integrators. The convergence rates of such variational integrators are related to the best approximation properties of the interpolation space.
Many gauge field theories can be formulated variationally using a multisymplectic Lagrangian formulation, and we will present a characterization of the exact generating functionals that generate the multisymplectic relation. By discretizing these using group-equivariant spacetime finite element spaces, we obtain methods that exhibit a discrete multimomentum conservation law. We will then briefly describe an approach for constructing group-equivariant interpolation spaces that take values in the space of Lorentzian metrics that can be efficiently computed using a generalized polar decomposition. The goal is to eventually apply this to the construction of variational discretizations of general relativity, which is a second-order gauge field theory whose configuration manifold is the space of Lorentzian metrics.
13:30 to 14:30 Oscar Reula Hyperbolicity and boundary conditions.
Abstract: (In collaboration with Fernando Abalos.) Very often in physics, the evolution systems we have to deal with are not purely hyperbolic, but contain also constraints and gauge freedoms. After fixing these gauge freedoms we obtain a new system with constraints which we want to solve subject to initial and boundary values. In particular, these values have to imply the correct propagation of constraints. In general, after fixing some reduction to a purely evolutionary system, this is ascertained by computing by hand what is called the constraint subsidiary system, namely a system which is satisfied by the constraint quantities when the fields satisfy the reduced evolution system.
If the subsidiary system is also hyperbolic, then for the initial data case the situation is clear: we need to impose the constraints on the initial data and then they will propagate correctly along the evolution. For the boundary data, we need to impose the constraint for all incoming constraint modes. This must be done by fixing some of the otherwise free boundary data, that is, the incoming modes. Thus, there must be a relation between some of the incoming modes of the evolution system and all the incoming modes of the constraint subsidiary system. Under certain conditions on the constraints, this relation is known and understood, but those conditions are very restrictive. In this talk, we shall review the known results and discuss what is known so far for the general case and what open questions still remain.
14:30 to 15:30 Snorre Christiansen Compatible finite element spaces for metrics with curvature
I will present some new finite element spaces for metrics with integrable curvature. These were obtained in the framework of finite element systems, developed for constructing differential complexes with adequate gluing conditions between the cells of a mesh. The new spaces have a higher regularity than those of Regge calculus, for which the scalar curvature contains measures supported on lower dimensional simplices (Dirac deltas). This is joint work with Kaibo Hu.
16:00 to 17:00 Second Chances
The formalism of discrete differential forms has been used very successfully in computational electrodynamics. It is based on the idea that only the observables (i.e., the electromagnetic field) should be discretised and that coordinates should not possess any relevance in the numerical method. This is reflected in the fact that Maxwell's theory can be written entirely in geometric terms using differential forms. Einstein's theory is entirely geometric as well and can also be written in terms of differential forms. In this talk I will describe an attempt to discretise Einstein's theory in a way similar to Maxwell's theory. I will describe the advantages and point out disadvantages. I will conclude with some remarks about more general discrete structures on manifolds.
GCSW02 2nd October 2019
09:30 to 10:30 Anil Hirani Discrete Vector Bundles with Connection and the First Chern Class
The use of differential forms in general relativity requires ingredients like the covariant exterior derivative and curvature. One potential approach to numerical relativity would require discretizations of these ingredients. I will describe a discrete combinatorial theory of vector bundles with connections. The main operator we develop is a discrete covariant exterior derivative that generalizes the coboundary operator and yields a discrete curvature and a discrete Bianchi identity. We test this theory by defining a discrete first Chern class, a topological invariant of vector bundles. This discrete theory is built by generalizing discrete exterior calculus (DEC) which is a discretization of exterior calculus on manifolds for real-valued differential forms. In the first part of the talk I will describe DEC and its applications to the Hodge-Laplace problem and Navier-Stokes equations on surfaces, and then I will develop the discrete covariant exterior derivative and its implications. This is joint work with Daniel Berwick-Evans and Mark Schubel.
11:00 to 12:00 Soeren Bartels Approximation of Harmonic Maps and Wave Maps
Partial differential equations with a nonlinear pointwise constraint defined by a manifold occur in a variety of applications: the magnetization of a ferromagnet can be described by a unit length vector field and the orientation of the rod-like molecules that constitute a liquid crystal is often modeled by a restricted vector field. Other applications arise in geometric modeling, nonlinear bending of solids, and quantum mechanics. Nodal finite element methods have to appropriately relax the pointwise constraint leading to a variational crime. Since exact solutions are typically nonunique and do not admit higher regularity properties, the correctness of discretizations has to be established by weaker means avoiding unrealistic conditions. The iterative solution of the nonlinear systems of equations can be based on linearizations of the constraint or by using appropriate constraint-preserving reformulations. The talk focusses on the approximation of harmonic maps and wave maps. The latter arise as a model problem in general relativity.
13:30 to 14:30 Helvi Witek New prospects in numerical relativity
Both observations and deeply theoretical considerations indicate that general relativity, our elegant standard model of gravity, requires modifications at high curvature scales. Candidate theories of quantum gravity, in their low-energy limit, typically predict couplings to additional fields or extensions that involve higher curvature terms.
At the same time, the breakthrough discovery of gravitational waves has provided a new channel to probe gravity in its most extreme, strong-field regime. Modelling the expected gravitational radiation in these extensions of general relativity enables us to search for - or place novel observational bounds on - deviations from our standard model. In this talk I will give an overview of the recent progress on simulating binary collisions in these situations and address renewed mathematical challenges such as well-posedness of the underlying initial value formulation.
GCSW02 3rd October 2019
09:30 to 10:30 Fernando Abalos On necessary and sufficient conditions for strong hyperbolicity in systems with differential constraints
In many physical applications, due to the presence of constraints, the number of equations in the partial differential equation systems is larger than the number of unknowns, and thus the standard Kreiss conditions cannot be directly applied to check whether the system admits a well-posed initial value formulation. In this work we show necessary and sufficient conditions such that there exists a reduced set of equations, of the same dimensionality as the set of unknowns, which satisfies the Kreiss conditions and so constitutes well-defined and properly behaved evolution equations. We do that by decomposing the systems using the Kronecker decomposition of matrix pencils and, once the conditions are met, we look for specific families of reductions. We show the power of the theory in the densitized, pseudo-differential ADM equations.
11:00 to 12:00 David Hilditch Putting Infinity on the Grid
I will talk about an ongoing research program relying on a dual frame approach to treat numerically the field equations of GR (in generalized harmonic gauge) on compactified hyperboloidal slices. These slices terminate at future-null infinity, and the hope is to eventually extract gravitational waves from simulations there. The main obstacle to their use is the presence of 'infinities' coming from the compactified coordinates, which have to somehow interact well with the assumption of asymptotic flatness so that we may arrive at regular equations for regular unknowns. I will present a new 'subtract the logs' regularization strategy for a toy nonlinear wave equation that achieves this goal.
13:30 to 14:30 Warner Miller General Relativity: One Block at a Time
This talk will provide an overview and motivation for Regge calculus (RC). We will highlight our insights into unique features of building GR on a discrete geometry with regard to structure preservation, and highlight some relative strengths and weaknesses of RC. We will review some numerical applications of RC, including our more recent work on discrete Ricci flow.
14:30 to 15:30 Ragnar Winther Finite element exterior calculus as a tool for compatible discretizations
The purpose of this talk is to review the basic results of finite element exterior calculus (FEEC) and to illustrate how the set up gives rise to compatible discretizations of various problems. In particular, we will recall how FEEC, combined with the Bernstein-Gelfand-Gelfand framework, gave new insight into the construction of stable schemes for elasticity methods based on the Hellinger-Reissner variational principle.
16:00 to 17:00 Douglas Arnold FEEC 4 GR?
The finite element exterior calculus (FEEC) has proven to be a powerful tool for the design and understanding of numerical methods for solving PDEs from many branches of physics: solid mechanics, fluid flow, electromagnetics, etc. Based on preserving crucial geometric and topological structures underlying the equations, it is a prime example of a structure-preserving numerical method. It has organized many known finite element methods resulting in the periodic table of finite elements. For elasticity, which is not covered by the table, it led to new methods with long sought-after properties. Might the FEEC approach lead to better numerical solutions of the Einstein equations as well? This talk will explore this question through two examples: the Einstein–Bianchi formulation of the Einstein equations based on Bel decomposition of the Weyl tensor, and the Regge elements, a family of finite elements inspired by Regge calculus. Our goal in the talk is to raise questions and inspire future work; we do not purport to provide anything near definitive answers.
GCSW02 4th October 2019
11:00 to 12:00 Pablo Laguna Inside the Final Black Hole from Black Hole Collisions
Modeling black hole singularities as punctures in space-time is common in binary black hole simulations. As the punctures approach each other, a common apparent horizon forms, signaling the coalescence of the black holes and the formation of the final black hole. I will present results from a study that investigates the fate of the punctures and in particular the dynamics of the trapped surfaces on each puncture.
Co-authors: Christopher Evans, Deborah Ferguson, Bhavesh Khamesra and Deirdre Shoemaker
13:30 to 14:30 Charalampos Markakis On numerical conservation of the Poincaré-Cartan integral invariant in relativistic fluid dynamics
The motion of strongly gravitating fluid bodies is described by the Euler-Einstein system of partial differential equations. We report progress on formulating well-posed, acoustical and canonical hydrodynamic schemes, suitable for binary inspiral simulations and gravitational-wave source modelling. The schemes use a variational principle by Carter-Lichnerowicz stating that barotropic fluid motions are conformally geodesic, a corollary to Kelvin's theorem stating that initially irrotational flows remain irrotational, and Christodoulou's acoustic metric approach adapted to numerical relativity, in order to evolve the canonical momentum of a fluid element via Hamilton or Hamilton-Jacobi equations. These mathematical theorems leave their imprints on inspiral waveforms from binary neutron stars observed by the LIGO-Virgo detectors. We describe a constraint damping scheme for preserving circulation in numerical general relativity, in accordance with Helmholtz's third theorem.
14:30 to 15:30 Luis Lehner tba
16:00 to 17:00 David Garfinkle Tetrad methods in numerical relativity
Most numerical relativity simulations use the usual coordinate methods to put the Einstein field equations in the form of partial differential equations (PDE), which are then handled using more or less standard numerical PDE methods, such as finite differences. However, there are some advantages to instead using a tetrad (orthonormal) basis rather than the usual coordinate basis. I will present the tetrad method and its numerical uses, particularly for simulating the approach to a spacetime singularity. I will end with open questions about which tetrad systems are suitable for numerical simulations.
GCS 9th October 2019
14:05 to 14:50 Richard Falk Numerical Computation of Hausdorff Dimension
We show how finite element approximation theory can be combined with theoretical results about the properties of the eigenvectors of a class of linear Perron-Frobenius operators to obtain accurate approximations of the Hausdorff dimension of some invariant sets arising from iterated function systems.
The theory produces rigorous upper and lower bounds on the Hausdorff dimension. Applications to the computation of the Hausdorff dimension of some Cantor sets arising from real and complex continued fraction expansions are described.
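For a much simpler point of comparison (a sketch, not the finite element transfer-operator method of the talk): for a self-similar IFS of contractions with ratios $r_i$ satisfying the open set condition, the Hausdorff dimension is the root of the Moran equation $\sum_i r_i^s = 1$, which bisection finds easily.

    # Sketch: Hausdorff (similarity) dimension from the Moran equation
    # sum_i r_i**s = 1, valid for self-similar IFS under the open set condition.
    def moran_dimension(ratios, tol=1e-12):
        f = lambda s: sum(r**s for r in ratios) - 1.0
        lo, hi = 0.0, 1.0
        while f(hi) > 0:          # find an upper bracket for the root
            hi *= 2.0
        while hi - lo > tol:      # bisection: f is strictly decreasing in s
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    print(moran_dimension([1/3, 1/3]))   # middle-thirds Cantor set: log 2 / log 3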
15:05 to 15:50 Daniele Boffi Approximation of eigenvalue problems arising from partial differential equations: examples and counterexamples
We discuss the finite element approximation of eigenvalue problems arising from elliptic partial differential equations. We present various examples of non-standard schemes, including mixed finite elements, approximation of operators related to the least-squares finite element method, and parameter dependent formulations such as those produced by the virtual element method. Each example is studied theoretically; advantages and disadvantages of each approach are pointed out.
GCS 21st October 2019
16:00 to 17:00 Donatella Marini Kirk Lecture: A recent technology for Scientific Computing: the Virtual Element Method
The Virtual Element Method (VEM) is a recent technology for the numerical solution of boundary value problems for Partial Differential Equations. It could be seen as a generalization of the Finite Element Method (FEM). With FEM the computational domain is typically split into triangles/quads (tetrahedra/hexahedra). VEM responds to the recent interest in using decompositions into polygons/polyhedra of very general shape, whenever more convenient for the approximation of problems of practical interest. Indeed, the possibility of using general polytopal meshes opens up a new range of opportunities in terms of accuracy, efficiency and flexibility. This is for instance reflected by the fact that various (commercial and free) codes have recently included, and keep developing, polytopal meshes, showing in selected applications an improved computational efficiency with respect to tetrahedral or hexahedral grids. In this talk, after a general description of the use and potential of Scientific Computing, the basic ideas of conforming VEM will be described on a simple model problem. Numerical results on more general problems in two and three dimensions will be shown. Hints on Serendipity versions will be given at the end. These procedures allow one to decrease significantly the number of degrees of freedom, that is, to reduce the dimension of the final linear system.
GCS 22nd October 2019
09:05 to 09:50 Franco Brezzi Serendipity Virtual Elements
After a brief reminder of classical ("plain vanilla") Virtual Elements we will see the general philosophy of "enhanced Virtual Elements" and the various types of Serendipity spaces as particular cases. The construction will always be conceptually simple (and extremely powerful, in particular for polygons with many edges), but a code exploiting the full advantage of having many edges might become difficult in the presence of non convex polygons, and in particular for complicated shapes. We shall also discuss different choices ensuring various advantages for different amounts of work.
09:55 to 10:40 Andrea Cangiani A posteriori error estimation for discontinuous Galerkin methods on general meshes and adaptivity
The application and a priori error analysis of discontinuous Galerkin (dG) methods for general classes of PDEs under general mesh assumptions is by now well developed. dG methods naturally permit local mesh and order adaptivity. However, deriving robust error estimators allowing for curved/degenerating mesh interfaces, as well as developing adaptive algorithms able to exploit such flexibility, is non-trivial. In this talk we present recent work on a posteriori error estimates for dG methods of interior penalty type which hold in general mesh settings, including elements with degenerating and curved boundaries. The exploitation of general meshes within mesh adaptation algorithms applied to a few challenging problems will also be discussed.
11:10 to 11:55 Emmanuil Georgoulis Discontinuous Galerkin methods on arbitrarily shaped elements.
We extend the applicability of the popular interior-penalty discontinuous Galerkin (dG) method discretizing advection-diffusion-reaction problems to meshes comprising extremely general, essentially arbitrarily-shaped elements. In particular, our analysis allows for curved element shapes, without the use of (iso-)parametric elemental maps. The feasibility of the method relies on the definition of a suitable choice of the discontinuity-penalization parameter, which turns out to be essentially independent of the particular element shape. A priori error bounds for the resulting method are given under very mild structural assumptions restricting the magnitude of the local curvature of element boundaries. Numerical experiments are also presented, indicating the practicality of the proposed approach. Moreover, we shall discuss a number of perspectives on the possible applications of the proposed framework in parabolic problems on moving domains as well as on multiscale problems. The above is an overview of results from joint works with A. Cangiani (Nottingham, UK), Z. Dong (FORTH, Greece / Cardiff UK) and T. Kappas (Leicester, UK).
13:05 to 13:50 Paul Houston An Agglomeration-Based, Massively Parallel Non-Overlapping Additive Schwarz Preconditioner for High-Order Discontinuous Galerkin Methods on Polytopic Grids
In this talk we design and analyze a class of two-level non-overlapping additive Schwarz preconditioners for the solution of the linear system of equations stemming from high-order/hp version discontinuous Galerkin discretizations of second-order elliptic partial differential equations on polytopic meshes. The preconditioner is based on a coarse space and a non-overlapping partition of the computational domain where local solvers are applied in parallel. In particular, the coarse space can potentially be chosen to be non-embedded with respect to the finer space; indeed it can be obtained from the fine grid by employing agglomeration and edge coarsening techniques. We investigate the dependence of the condition number of the preconditioned system with respect to the diffusion coefficient and the discretization parameters, i.e., the mesh size and the polynomial degree of the fine and coarse spaces. Numerical examples are presented which confirm the theoretical bounds.
13:55 to 14:40 Jinchao Xu UPWIND FINITE ELEMENT METHODS FOR H(grad), H(curl) AND H(div) CONVECTION-DIFFUSION PROBLEMS
This talk is devoted to the construction and analysis of finite element approximations for the H(grad), H(curl) and H(div) convection-diffusion problems. An essential feature of these constructions is to properly average the PDE coefficients on sub-simplexes from the underlying simplicial finite element meshes. The schemes are of the class of exponential fitting methods that result in special upwind schemes when the diffusion coefficient approaches zero. Their well-posedness is established for sufficiently small mesh size, assuming that the convection-diffusion problems are uniquely solvable. Convergence of first order is derived under minimal smoothness of the solution. Some numerical examples are given to demonstrate the robustness and effectiveness for general convection-diffusion problems. This is joint work with Shounan Wu.
GCS 28th October 2019
10:00 to 10:45 Erwan Faou High-order splitting for the Vlasov-Poisson equation
We consider the Vlasov–Poisson equation in a Hamiltonian framework and derive time splitting methods based on the decomposition of the Hamiltonian functional between the kinetic and electric energy. We also apply a similar strategy to the Vlasov–Maxwell system. These are joint works with N. Crouseilles, F. Casas, M. Mehrenberger and L. Einkemmer.
10:45 to 11:30 Katharina Kormann On structure-preserving particle-in-cell methods for the Vlasov-Maxwell equations
Numerical schemes that preserve the structure of the kinetic equations can provide new insight into the long time behavior of fusion plasmas. An electromagnetic particle-in-cell solver for the Vlasov–Maxwell equations that preserves at the discrete level the non-canonical Hamiltonian structure of the Vlasov–Maxwell equations has been presented in [1]. In this talk, the framework of this geometric particle-in-cell method will be presented and extension to curvilinear coordinates will be discussed. Moreover, various options for the temporal discretizations will be proposed and compared.
[1] M. Kraus, K. Kormann, P. J. Morrison, and E. Sonnendrücker. GEMPIC: geometric electromagnetic particle-in-cell methods. Journal of Plasma Physics, 83(4), 2017.
12:00 to 12:45 Ernst Hairer Numerical treatment of charged particle dynamics in a magnetic field
Combining the Lorentz force equations with Newton's law gives a second order differential equation in space for the motion of a charged particle in a magnetic field. The most natural and widely used numerical discretization is the Boris algorithm, which is explicit, symmetric, volume-preserving, and of order 2.
In a first part we discuss geometric properties (long-time behaviour, and in particular near energy conservation) of the Boris algorithm. This is achieved by applying standard backward error analysis. Near energy conservation can be obtained also in situations where the method is not symplectic.
In a second part we consider the motion of a charged particle in a strong magnetic field. Backward error analysis can no longer be applied, and the accuracy (order 2) breaks down. To improve accuracy we modify the Boris algorithm in the spirit of exponential integrators. Theoretical estimates are obtained with the help of modulated Fourier expansions of the exact and numerical solutions.
This talk is based on joint work with Christian Lubich and Bin Wang. Related publications (2017–2019) can be downloaded from
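For reference, a minimal sketch of the classical Boris update discussed in the first part (with the charge-to-mass ratio absorbed into the fields; this is the textbook scheme, not the modified integrators of the second part):

    # Sketch: one Boris step for dx/dt = v, dv/dt = E(x) + v x B(x)  (q/m = 1).
    import numpy as np

    def boris_step(x, v, h, E, B):
        En, Bn = E(x), B(x)
        v_minus = v + 0.5 * h * En                  # half electric kick
        t = 0.5 * h * Bn
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)     # magnetic rotation; |v| unchanged
        v_new = v_plus + 0.5 * h * En               # second half electric kick
        return x + h * v_new, v_new

    # usage: uniform field B = e_z, E = 0; the particle gyrates and the speed is preserved
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        x, v = boris_step(x, v, 0.05, lambda x: np.zeros(3),
                          lambda x: np.array([0.0, 0.0, 1.0]))
    print(np.linalg.norm(v))                        # 1.0 up to round-off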
14:00 to 14:45 Wayne Arter Challenges for modelling fusion plasmas
Modelling fusion plasmas presents many challenges, so that it is reasonable that many modelling codes still use simple finite difference representations that make it relatively easy to explore new physical processes and preserve numerical stability [1]. However, October 2019 announcements by the UK government have given UKAEA the challenge of designing a nuclear fusion reactor in the next 5 years, plus a funding element for upgrading existing software both for Exascale and to meet the design challenge. One option under examination is the use of high order 'spectrally accurate' elements.
The biggest modelling problem is still that of turbulence, mostly at relatively low plasma collisionality. Specifically non-dissipative issues are the tracking of plasma particle orbits between collisions, sometimes reducing to tracing lines of divergence-free magnetic field. These particles then build into a Maxwell–Vlasov solver, for which many different numerical representations, exploiting low collisionality and the presence of a strong, directed magnetic field, have been explored [2]. One such is ideal MHD, where I have explored the use of the Lie derivative [3, 4]. Some further speculations as to the likely role of Lie (and spectral accuracy) in solving Vlasov–Maxwell, its approximations and their ensembles, and interactions between the different approximations, in the Exascale era will be presented.
This work was funded by the RCUK Energy Programme and the European Communities under the contract of Association between EURATOM and CCFE.
[1] B.D. Dudson, A. Allen, G. Breyiannis, E. Brugger, J. Buchanan, L. Easy, S. Farley, I. Joseph, M. Kim, A.D. McGann, et al. BOUT++: Recent and current developments. Journal of Plasma Physics, 81(01):365810104, 2015.
[2] W. Arter. Numerical simulation of magnetic fusion plasmas. Reports on Progress in Physics, 58:1–59, 1995.
[3] W. Arter. Potential Vorticity Formulation of Compressible Magnetohydrodynamics. Physical Review Letters, 110(1):015004, 2013.
[4] W. Arter. Beyond Linear Fields: the Lie–Taylor Expansion. Proc Roy Soc A, 473:20160525, 2017.
15:15 to 16:00 Jitse Niesen Spectral deferred correction in particle-in-cell methods
Particle-in-cell methods solve the Maxwell equations for the electromagnetic field in combination with the equation of motion for the charged particles in a plasma. The motion of charged particles is usually computed using the Boris algorithm, a variant of Störmer–Verlet for Lorentz force computations, which has impressive performance and order two (like Störmer–Verlet). Spectral deferred correction is an iterative time stepping method based on collocation, which in each time step performs multiple sweeps of a low-order method (here, the Boris method) in order to obtain a high-order approximation. This talk describes the ongoing efforts of Kristoffer Smedt, Daniel Ruprecht, Steve Tobias and the speaker to embed a spectral deferred correction time stepper based on the Boris method in a particle-in-cell method.
14:05 to 15:05 Jinchao Xu Deep Neural Networks and Multigrid Methods
In this talk, I will first give an introduction to several models and algorithms from two different fields: (1) machine learning, including logistic regression, support vector machine and deep neural networks, and (2) numerical PDEs, including finite element and multigrid methods. I will then explore mathematical relationships between these models and algorithms and demonstrate how such relationships can be used to understand, study and improve the model structures, mathematical properties and relevant training algorithms for deep neural networks. In particular, I will demonstrate how a new convolutional neural network known as MgNet, can be derived by making very minor modifications of a classic geometric multigrid method for the Poisson equation and then explore the theoretical and practical potentials of MgNet.
16:00 to 17:00 Patrick Farrell A Reynolds-robust preconditioner for the 3D stationary Navier-Stokes equations
When approximating PDEs with the finite element method, large sparse linear systems must be solved. The ideal preconditioner yields convergence that is algorithmically optimal and parameter robust, i.e. the number of Krylov iterations required to solve the linear system to a given accuracy does not grow substantially as the mesh or problem parameters are changed. Achieving this for the stationary Navier-Stokes has proven challenging: LU factorisation is Reynolds-robust but scales poorly with degree of freedom count, while Schur complement approximations such as PCD and LSC degrade as the Reynolds number is increased. Building on ideas of Schöberl, Xu, Zikatanov, Benzi & Olshanskii, in this talk we present the first preconditioner for the Newton linearisation of the stationary Navier–Stokes equations in three dimensions that achieves both optimal complexity and Reynolds-robustness. The scheme combines augmented Lagrangian stabilisation to control the Schur complement, the convection stabilisation proposed by Douglas & Dupont, a divergence-capturing additive Schwarz relaxation method on each level, and a specialised prolongation operator involving non-overlapping local Stokes solves. The properties of the preconditioner are tailored to the divergence-free CG(k)-DG(k-1) discretisation and the appropriate relaxation is derived from considerations of finite element exterior calculus. We present 3D simulations with over one billion degrees of freedom with robust performance from Reynolds numbers 10 to 5000.
GCS 11th November 2019
14:00 to 15:00 Shi Jin Random Batch Methods for Interacting Particle Systems and Consensus-based Global Non-convex Optimization in High-dimensional Machine Learning
We develop random batch methods for interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, thus the computational cost is reduced from O(N^2) per time step to O(N), for a system with N particles with binary interactions. For one of the methods, we give a particle-number-independent error estimate for some special interactions. Then, we apply these methods to some representative problems in mathematics, physics, social and data sciences, including the Dyson Brownian motion from random matrix theory, Thomson's problem, distribution of wealth, opinion dynamics and clustering. Numerical results show that the methods can capture both the transient solutions and the global equilibrium in these problems.
We also apply this method and improve the consensus-based global optimization algorithm for high dimensional machine learning problems. This method does not require taking gradients when finding global minima of non-convex functions in high dimensions.
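A minimal sketch of the random-batch idea for a first-order particle system (batches of size 2 and an antisymmetric interaction kernel are assumed; an illustration, not the speaker's implementation):

    # Sketch: one random-batch step for dx_i/dt = (1/(N-1)) sum_{j!=i} K(x_i - x_j),
    # with interactions evaluated only within random batches of size 2, so the
    # cost per step is O(N) rather than O(N^2). K is assumed antisymmetric.
    import numpy as np

    def random_batch_step(x, h, K, rng):
        perm = rng.permutation(len(x))
        for a in range(0, len(x) - 1, 2):       # batches {perm[a], perm[a+1]}
            i, j = perm[a], perm[a + 1]
            f = K(x[i] - x[j])                  # in a batch of size 2 the
            x[i] = x[i] + h * f                 # prefactor 1/(p-1) equals 1
            x[j] = x[j] - h * f
        return x

    rng = np.random.default_rng(0)
    x = rng.standard_normal((1000, 2))
    for _ in range(100):
        x = random_batch_step(x, 0.01, lambda d: -d, rng)   # attractive linear kernel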
16:00 to 17:00 Sergio Blanes Magnus, splitting and composition techniques for solving non-linear Schrödinger equations
In this talk I will consider several non-autonomous non-linear Schrödinger equations (the Gross-Pitaevskii equation, the Kohn-Sham equation and a quantum optimal control equation) and some of the numerical methods that have been used to solve them. With a proper linearization of these equations we end up with non-autonomous linear systems where many of the algebraic techniques from Magnus, splitting and composition algorithms can be used. This will be an introductory talk to stimulate some collaboration between participants of the program at the INI.
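As background on the kind of splitting alluded to, a minimal split-step (Strang) Fourier sketch for the one-dimensional cubic Schrödinger/Gross-Pitaevskii equation $i u_t = -\tfrac12 u_{xx} + |u|^2 u$ with periodic boundary conditions (an illustration under these assumptions, not the methods of the talk):

    # Sketch: Strang splitting (split-step Fourier) for i u_t = -0.5 u_xx + |u|^2 u.
    # The linear part is diagonal in Fourier space; the nonlinear part is solved
    # exactly because it conserves |u| pointwise.
    import numpy as np

    N, Lbox, h = 256, 2 * np.pi, 1e-3
    x = Lbox * np.arange(N) / N
    k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
    u = np.exp(-4 * (x - np.pi)**2).astype(complex)    # smooth initial datum

    half_linear = np.exp(-0.25j * k**2 * h)            # exp(-i k^2/2 * h/2)
    for _ in range(2000):
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half step of the linear part
        u = u * np.exp(-1j * np.abs(u)**2 * h)         # full step of the nonlinear part
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half step of the linear part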
16:00 to 17:00 Erwan Faou Some results in the long time analysis of Hamiltonian PDEs and their numerical approximations
I will review some results concerning the long time behavior of Hamiltonian PDEs, and address similar questions for their numerical approximation. I will show that numerical resonances can appear both in space and in time. I will also discuss the long time stability of solitary waves evolving on a discrete set of lattice points.
14:00 to 15:00 Caroline Lasser What it takes to catch a wave packet
Wave packets describe the quantum vibrations of a molecule. They are highly oscillatory, highly localized and move in high dimensional configuration spaces. The talk addresses three meshless numerical methods for catching them: single Gaussian beams, superpositions of them, and the so-called linearized initial value representation.
16:00 to 17:00 Alexander Ostermann Low-regularity time integrators
Nonlinear Schrödinger equations are usually solved by pseudo-spectral methods, where the time integration is performed by splitting schemes or exponential integrators. Notwithstanding the benefits of this approach, its successful application requires additional regularity of the solution. For instance, second-order Strang splitting requires four additional derivatives for the solution of the cubic nonlinear Schrödinger equation. Similar statements can be made about other dispersive equations like the Korteweg-de Vries or the Boussinesq equation.
In this talk, we introduce low-regularity Fourier integrators as an alternative. They are obtained from Duhamel's formula in the following way: first, a Lawson-type transformation eliminates the leading linear term and second, the dominant nonlinear terms are integrated exactly in Fourier space. For cubic nonlinear Schrödinger equations, first-order convergence of such methods only requires the boundedness of one additional derivative of the solution, and second-order convergence the boundedness of two derivatives. Similar improvements can also be obtained for other dispersive problems.
This is joint work with Frédéric Rousset (Université Paris-Sud), Katharina Schratz (Heriot-Watt, UK), and Chunmei Su (Technical University of Munich).
13:05 to 13:45 CANCELLED
13:50 to 14:30 Tony Lelièvre title tba
Various applications require the sampling of probability measures restricted to submanifolds defined as the level set of some functions, in particular in computational statistical physics. We will present recent results on so-called Hybrid Monte Carlo methods, which consist in adding an extra momentum variable to the state of the system and discretizing the associated Hamiltonian dynamics with some stochastic perturbation in the extra variable. In order to avoid biases in the invariant probability measures sampled by discretizations of these stochastically perturbed Hamiltonian dynamics, a Metropolis rejection procedure can be considered. The so-obtained scheme belongs to the class of generalized Hybrid Monte Carlo (GHMC) algorithms, and we will discuss how to ensure that the sampling method is unbiased in practice.
- T. Lelièvre, M. Rousset and G. Stoltz, Langevin dynamics with constraints and computation of free energy differences, Mathematics of Computation, 81(280), 2012.
- T. Lelièvre, M. Rousset and G. Stoltz, Hybrid Monte Carlo methods for sampling probability measures on submanifolds, to appear in Numerische Mathematik, 2019.
- E. Zappa, M. Holmes-Cerfon, and J. Goodman. Monte Carlo on manifolds: sampling densities and integrating functions. Communications in Pure and Applied Mathematics, 71(12), 2018.
15:05 to 15:35 Jonathan Goodman Step size control for Newton type MCMC samplers
MCMC sampling can use ideas from the optimization community. Optimization via Newton's method can fail without line search, even for smooth strictly convex problems. Affine invariant Newton based MCMC sampling uses a Gaussian proposal based on a quadratic model of the potential using the local gradient and Hessian. This can fail (conjecture: give a transient Markov chain) even for smooth strictly convex potentials. We describe a criterion that allows a sequence of proposal distributions from X_n with decreasing "step sizes" until (with probability 1) a proposal is accepted. "Very detailed balance" allows the whole process to preserve the target distribution. The method works in experiments but the theory is missing.
15:40 to 16:10 Miranda Holmes-Cerfon A Monte Carlo method to sample a Stratification
Many problems in materials science and biology involve particles interacting with strong, short-ranged bonds that can break and form on experimental timescales. Treating such bonds as constraints can significantly speed up sampling their equilibrium distribution, and there are several methods to sample subject to fixed constraints. We introduce a Monte Carlo method to handle the case when constraints can break and form. Abstractly, the method samples a probability distribution on a stratification: a collection of manifolds of different dimensions, where the lower-dimensional manifolds lie on the boundaries of the higher-dimensional manifolds. We show several applications in polymer physics, self-assembly of colloids, and volume calculation.
GCS 21st November 2019
13:05 to 13:45 Alessandro Barp Hamiltonian Monte Carlo on Homogeneous Manifolds for QCD and Statistics.
13:50 to 14:30 Benedict Leimkuhler Some thoughts about constrained sampling algorithms
I will survey our work on algorithms for sampling diffusions on manifolds, including isokinetic methods and constrained Langevin dynamics methods. These have mostly been introduced and tested in the setting of molecular dynamics. It is interesting to consider possible uses of these ideas in other types of sampling computations, like neural network parameterization and training of generative models.
15:05 to 15:45 Elena Celledoni Deep learning as optimal control problems and Riemannian discrete gradient descent.
We consider recent work where deep learning neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first order conditions for optimality, and the conditions ensuring optimality after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters such as the time discretisation. We explore this extension alongside natural constraints (e.g. time steps lie in a simplex). We compare these deep learning algorithms numerically in terms of induced flow and generalisation ability. References - M Benning, E Celledoni, MJ Ehrhardt, B Owren, CB Schönlieb, Deep learning as optimal control problems: models and numerical methods, JCD.
16:00 to 17:00 Peter Clarkson Symmetric Orthogonal Polynomials
In this talk I will discuss symmetric orthogonal polynomials on the real line. Such polynomials give rise to orthogonal systems which have important applications in spectral methods, with several important advantages if their differentiation matrix is skew-symmetric and highly structured. Such orthogonal systems, where the differentiation matrix is skew-symmetric, tridiagonal and irreducible, have recently been studied by Iserles and Webb. The symmetric orthogonal polynomials studied will include generalisations of the classical Hermite weight and generalisations of the Freud weight.
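For concreteness, the prototype of such a system (a standard fact, sketched here as an aside rather than taken from the talk): the Hermite functions satisfy $\varphi_n' = \sqrt{n/2}\,\varphi_{n-1} - \sqrt{(n+1)/2}\,\varphi_{n+1}$, so their differentiation matrix is skew-symmetric, tridiagonal and irreducible.

    # Sketch: the (truncated) differentiation matrix of the Hermite functions,
    # built from phi_n' = sqrt(n/2) phi_{n-1} - sqrt((n+1)/2) phi_{n+1}.
    import numpy as np

    def hermite_diff_matrix(n):
        off = np.sqrt(np.arange(1, n) / 2.0)
        return np.diag(off, 1) - np.diag(off, -1)

    D = hermite_diff_matrix(6)
    print(np.allclose(D, -D.T))      # True: skew-symmetric and tridiagonal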
14:05 to 14:50 Evelyne Hubert Symmetry Preserving Interpolation
In this talk I choose to present the PhD work of Erick Rodriguez Bazan. We address multivariate interpolation in the presence of symmetry as given by a finite group. Interpolation is a prime tool in algebraic computation, while symmetry is a qualitative feature that can be more relevant to a mathematical model than the numerical accuracy of the parameters. Besides its preservation, symmetry shall also be exploited to alleviate the computational cost.
We revisit minimal degree and least interpolation spaces [de Boor & Ron 1990] with symmetry adapted bases (rather than the usual monomial bases). In these bases, the multivariate Vandermonde matrix (a.k.a. collocation matrix) is block diagonal as soon as the set of nodes is invariant. These blocks capture the inherent redundancy in the computations. Furthermore, any equivariance an interpolation problem might have will be automatically preserved: the output interpolant will have the same equivariance property.
The special case of multivariate Hermite interpolation leads us to question the representation of polynomial ideals. Gröbner bases, the preferred tool for algebraic computations, break any kind of symmetry. The prior notion of H-bases, introduced by Macaulay, appears more suitable.
https://dl.acm.org/citation.cfm?doid=3326229.3326247
https://hal.inria.fr/hal-01994016 Joint work with Erick Rodriguez Bazan
15:05 to 16:05 Anders Hansen On the Solvability Complexity Index (SCI) hierarchy - Establishing the foundations of computational mathematics
There are four areas in computational mathematics that have been intensely investigated over more than half a century: spectral problems, PDEs, optimisation and inverse problems. However, despite the maturity of these fields, the foundations are far from known. Indeed, despite almost 90 years of quantum mechanics, it is still unknown whether it is possible to compute the spectrum of a self-adjoint Schrödinger operator with a bounded smooth potential. Similarly, it is not known which time dependent Schrödinger equations can be computed (despite well-posedness of the equation). Linear programs (LP) can be solved with rational inputs in polynomial time, but can LPs be solved with irrational inputs? Problems in signal and image processing tend to use irrational numbers, so what happens if one plugs the discrete cosine transform into one's favourite LP solver? Moreover, can one always compute the solutions to well-conditioned infinite-dimensional inverse problems, and if not, which inverse problems can then be solved? In this talk we will discuss solutions to many of the questions above, and some of the results may seem paradoxical. Indeed, despite being an open problem for more than half a century, computing spectra of Schrödinger operators with a bounded potential is not harder than computing spectra of infinite diagonal matrices, the simplest of all infinite-dimensional spectral problems. Moreover, computing spectra of compact operators, for which the method has been known for decades, is strictly harder than computing spectra of such Schrödinger operators. Regarding linear programs (and basis pursuit, semidefinite programs and LASSO) we have the following. For any integer K > 2 and any norm, there exists a family of well-conditioned inputs containing irrational numbers such that no algorithm can compute K correct digits of a minimiser; however, there exists an algorithm that can compute K-1 correct digits. But any algorithm producing K-1 correct digits will need arbitrarily long time. Finally, computing K-2 correct digits can be done in polynomial time in the number of variables. As we will see, all of these problems can be solved via the Solvability Complexity Index (SCI) hierarchy, which is a theoretical program for establishing the boundaries of what computers can achieve in the sciences.
16:00 to 17:00 Elizabeth Mansfield On the nature of mathematical joy
Elizabeth Mansfield will discuss seven levels of mathematical joy based on her mathematical travels. This is a talk for a general audience.
GCS 4th December 2019
14:05 to 14:50 Balázs Kovács Energy estimates: proving stability for evolving surface PDEs and geometric flows
In this talk we will give some details on the main steps and ideas behind energy estimates used to prove stability of backward difference semi- and full discretisations of parabolic evolving surface problems, or geometric flows (e.g. mean curvature flow). We will give details on how the G-stability result of Dahlquist and the multiplier techniques of Nevanlinna and Odeh will be used.
13:30 to 14:15 Vanessa Styles Numerical approximations of a tractable mathematical model for tissue growth
We consider a free boundary problem representing one of the simplest mathematical descriptions of the growth and death of a tumour. The mathematical model takes the form of a closed interface evolving via forced mean curvature flow where the forcing depends on the solution of a PDE that holds in the domain enclosed by the interface. We derive sharp interface and diffuse interface finite element approximations of this model and present some numerical results.
14:15 to 15:00 Bjorn Stinner Phase field modelling of free boundary problems
Diffuse interface models based on the phase field methodology have been developed and investigated in various applications such as solidification processes, tumour growth, or multi-phase flow. The interfaces are represented by thin layers, across which quantities rapidly but smoothly change their values. These interfacial layers are described in terms of order parameters, the equations for which can be solved using relatively straightforward methods, such as finite elements with adaptive mesh refinement, as no tracking of any interface is required. The interface motion is usually coupled to other fields and equations adjacent to or on the interface, for instance, diffusion equations in alloys or the momentum equation in fluid flow. We discuss how such systems can be incorporated into phase field models in a generic way. Furthermore, we present a computational framework where specific models can be implemented and later on conveniently amended, if desired, in a high-level language, and which then binds to efficient software backends. A couple of code listings and numerical simulations serve to illustrate the approach.
15:15 to 16:00 Bertram Düring Structure-preserving variational schemes for nonlinear partial differential equations with a Wasserstein gradient flow structure
A wide range of diffusion equations can be interpreted as gradient flows of an energy functional with respect to the Wasserstein distance. Examples include the heat equation, the porous medium equation, and the fourth-order Derrida-Lebowitz-Speer-Spohn equation. When it comes to solving equations of gradient flow type numerically, schemes that respect the equation's special structure are of particular interest. The gradient flow structure gives rise to a variational scheme by means of the minimising movement scheme (also called the JKO scheme, after the seminal work of Jordan, Kinderlehrer and Otto), which constitutes a time-discrete minimization problem for the energy. While the scheme was originally used for analytical aspects, more recently a number of authors have explored the numerical potential of this scheme. In this talk we present some results on Lagrangian schemes for Wasserstein gradient flows in one spatial dimension and then discuss extensions to higher approximation order and to higher spatial dimensions.
16:00 to 17:00 Chus Sanz-Serna Rothschild Lecture: Hamiltonian Monte Carlo and geometric integration
Many application fields require samples from an arbitrary probability distribution. Hamiltonian Monte Carlo is a sampling algorithm that originated in the physics literature and has since gained much popularity among statisticians. This is a talk addressed to a general audience, where I will describe the algorithm and some of its applications. The exposition requires basic ideas from different fields, from statistical physics to geometric integration of differential equations and from Bayesian statistics to Hamiltonian dynamics, and I will provide the necessary background, albeit superficially.
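For readers unfamiliar with the algorithm, a minimal sketch of a single Hamiltonian Monte Carlo step (leapfrog integration followed by a Metropolis accept/reject test; variable names are illustrative, not from the talk) is:

```python
# Minimal Hamiltonian Monte Carlo step: U is the negative log-density of the
# target, grad_U its gradient; a leapfrog (Stormer-Verlet) integrator is used.
import numpy as np

def hmc_step(q, U, grad_U, step_size, n_leapfrog, rng):
    p = rng.standard_normal(q.shape)              # resample momentum ~ N(0, I)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step_size * grad_U(q_new)      # initial half step in momentum
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new                # full step in position
        p_new -= step_size * grad_U(q_new)        # full step in momentum
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(q_new)      # final half step in momentum
    # accept or reject using the change in the Hamiltonian (total energy)
    h_old = U(q) + 0.5 * np.dot(p, p)
    h_new = U(q_new) + 0.5 * np.dot(p_new, p_new)
    return q_new if np.log(rng.uniform()) < h_old - h_new else q
```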
GCS 11th December 2019
14:05 to 14:50 Antonella Zanna On the construction of some symplectic P-stable additive Runge–Kutta methods
Symplectic partitioned Runge–Kutta methods can be obtained from a variational formulation treating all the terms in the Lagrangian with the same quadrature formula. We construct a family of symplectic methods allowing the use of different quadrature formulas for different parts of the Lagrangian. In particular, we study a family of methods using Lobatto quadrature (with the corresponding Lobatto IIIA-IIIB symplectic method) and Gauss–Legendre quadrature combined in an appropriate way. The resulting methods are similar to additive Runge–Kutta methods. The IMEX method using the Verlet and IMR combination is a particular case of this family. The methods have the same favourable implicitness as the underlying Lobatto IIIA-IIIB pair. Differently from the Lobatto IIIA-IIIB methods, which are known not to be P-stable, we show that the new methods satisfy the requirements for P-stability.
15:05 to 15:50 Brynjulf Owren Equivariance and structure preservation in numerical methods; some cases and viewpoints
Our point of departure is the situation when there is a group of transformations acting both on our problem space and on the space in which our computations are produced. Equivariance happens when the map from the problem space to the computation space, i.e. our numerical method, commutes with the group action. This is a rather general and vague definition, but we shall make it precise and consider a few concrete examples in the talk. In some cases, the equivariance property is natural, in other cases it is something that we want to impose in the numerical method in order to obtain computational schemes with certain desired structure preserving qualities. Many of the examples we present will be related to the numerical solution of differential equations and we may also present some recent examples from artificial neural networks and discrete integrable systems. This is work in progress and it summarises some of the ideas the speaker has been discussing with other participants this autumn.
If energy is only defined up to a constant, can we really claim that ground state energy has an absolute value?
Sorry if this is really naive, but we learned in Newtonian physics that the total energy of a system is only defined up to an additive constant, since you can always add a constant to the potential energy function without changing the equation of motion (since the force is the negative gradient of the potential energy).
Then in Quantum Mechanics we showed how the ground state of a system with potential energy $V(x) = \frac{1}{2} m \omega^{2} x^{2}$ has an energy $E_{0}=\frac{1}{2} \hbar \omega $.
But if we add a constant to $V(x)$ won't that just shift the ground state energy by the same constant? So in what sense can we actually say that the ground state energy has an absolute value (as opposed to just a relative value)? Is there some way to measure it?
I ask this in part because I have heard that Dark Energy might be the ground state energy of quantum fields, but if this energy is only defined up to a constant, how can we say what its value is?
gravity energy energy-conservation casimir-effect
Qmechanic♦
$\begingroup$ The clearest example of vacuum energy being meaningful is the Casimir effect, but its interpretation is somewhat tricky. $\endgroup$
– Jerry Schirmer
In non-relativistic and non-gravitational physics (both conditions have to be satisfied simultaneously for the following proposition to hold), energy is only defined up to an arbitrary additive shift. In this restricted context, the choice of the additive shift is an unphysical, unobservable convention.
However, in special relativity, energy is the time component of a 4-vector and it matters a great deal whether it is zero or nonzero. In particular, the energy of the empty Minkowski space has to be exactly zero because if it were nonzero, the state wouldn't be Lorentz-invariant: Lorentz transformations would transform the nonzero energy (time component of a vector) to a nonzero momentum (spatial components).
In general relativity, the additive shifts to energy also matter because energy is a source of spacetime curvature. A uniform shift of energy density in the Universe is known as the cosmological constant, and it will curve the vacuum. So it's important to know what it is - and it is not just a convention. Also, in general relativity, the argument from the previous paragraph may be circumvented: dark energy, regardless of its value, preserves the Lorentz (or de Sitter or anti de Sitter, which are equally large) symmetry because the stress energy tensor is proportional to the metric tensor (because $p=-\rho$). However, as long as there is gravity, the additive shift matters.
In practice, we don't measure the zero-point energy by its gravitational effects, and the value of the cosmological constant remains largely mysterious. So I surely have a different, more observationally relevant answer.
Casimir energy, comparison of situations
The additive shifts to the energy are also important when one can compare the energy in two different situations. In particular, the Casimir effect may be measured. The Casimir force arises because in between two metallic plates, the electromagnetic field has to be organized to standing waves - because of the different boundary conditions. By summing the $\hbar\omega/2$ zero-point energies of these standing waves (each wavelength produces a harmonic oscillator), and by subtracting a similar "continuous" calculation in the absence of the metallic plates, one may discover that the total zero-point energy depends on the distance of the metallic plates if they're present, and experiments have verified that the corresponding force $dE/dr$ exists and numerically agrees with the prediction.
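For orientation, the standard textbook result (not part of the original answer, added as a worked illustration) for two ideal, perfectly conducting parallel plates at separation $a$ is a regularized zero-point energy per unit area of

$$\frac{E}{A} = -\frac{\pi^2 \hbar c}{720\, a^3}, \qquad \frac{F}{A} = -\frac{\partial}{\partial a}\frac{E}{A} = -\frac{\pi^2 \hbar c}{240\, a^4},$$

so the measurable quantity is the dependence of the energy on the plate separation, not its overall additive offset.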
There are many other contexts in which the zero-point energy may be de facto measured. For example, there exist metastable states that behave like the harmonic oscillator for several low-lying states. The energy of these metastable states may be compared with the energy of the free particle at infinity, and the result is $V_{\rm local\,minimum}-V_{\infty}+\hbar\omega/2$. This is somewhat analogous to calculating the energies of the bound state in a Hydrogen atom - which may be measured (think about the ionization energy).
So yes, whenever one adds either special relativity or gravity or comparisons of configurations where the structure and frequencies of the harmonic oscillators differ, the additive shift becomes physical and measurable.
Luboš Motl
$\begingroup$ Thanks for this! I still have some nagging uncertainties, probably because I got so used to Newtonian mechanics, but hopefully those will clear up when I read your answer (and Marek's below) a few more times. $\endgroup$
It's quite correct that you can additively shift energy, even in quantum mechanics, and one can always make the ground state carry zero energy. Nevertheless, you can still measure some other energy even in the ground state: the kinetic energy. Because $T = {p^2 \over 2m}$, the expectation of kinetic energy in a given energy state is essentially its uncertainty in momentum (because the average value of momentum is zero). So even in the ground state of the oscillator there is some intrinsic movement present (of course only in this sense, the state is still stationary w.r.t. evolution), notwithstanding that it has zero energy.
From another point of view, consider your potential $V(x) = {1\over 2} m\omega^2 x^2 - E_0$: it will intersect the $x$-axis. But the ground state energy lies at $E=0$. So it's not found at the bottom of your potential (as one would expect for a ground state in classical physics). This relative position of $E_0$ and $V(x)$ is independent of any shifts in energy.
Marek
$\begingroup$ Right, @Marek, $E_0-V_{\rm min}$ is independent of conventions. However, one may always imagine that $V(x)$ was different by $\hbar\omega/2$ than we thought and we will produce the same energy levels. Of course, then we must ask whether $E_0$ and $V_{\rm min}$ may be measured independently. It depends what tools we have to measure them. You have to assume that we can - $V_{\rm min}$ may be measured by localizing the electron, except that then it has a huge kinetic energy. $\endgroup$
– Luboš Motl
$\begingroup$ Note that if you calculate the energy as $T(p)+V(x)$ out of measured values of $x$ and $p$, the uncertainty principle makes the error of the energy exceed $\hbar\omega/2$ or so, anyway. In this sense, $V_{\rm min}$ cannot be measured separately from $E_0$. $\endgroup$
$\begingroup$ @Luboš: well, measuring these values is certainly a problem. Nevertheless, QM tells us that $E > V_{\rm min}$ for any bound state localized around $V_{\rm min}$, right? It's no problem that it's not verifiable. Theory can (and must) surely produce lots of results we can never measure. $\endgroup$
– Marek
$\begingroup$ Dear @Marek, right, physics contains many important non-measurable concepts. But one must distinguish whether a quantity is unmeasurable just "directly" - but it has physical consquences - from the case when it's unmeasurable in principle. In the latter case, it's literally unphysical. In non-relativistic non-gravitating quantum mechanics with a fixed potential etc., the additive energy shift is unmeasurable even in principle because it may be incorporated into a redefinition of $V$. This is not the case in SR; GR; or when we may change $V$ or $H$ and compare the energies. $\endgroup$
$\begingroup$ The question whether "it's verifiable" was really the original question of the OP. If it were not verifiable even in principle - and in non-relativistic non-gravitating QM with a fixed potential, it's not - then the OP would be right that we can't really claim that there is a physical zero-point energy because it depends on the way how we write it. $\endgroup$
The detection of an extremely bright fast radio burst in a phased array feed survey
We report the detection of an ultra-bright fast radio burst (FRB) from a modest, 3.4-day pilot survey with the Australian Square Kilometre Array Pathfinder. The survey was conducted in a wide-field fly's-eye configuration using the phased-array-feed technology deployed on the array to instantaneously observe an effective area of 160 deg$^2$, and achieve an exposure totaling 13200 deg$^2$ hr. We constrain the position of FRB 170107 to a region $8'\times8'$ in size (90% containment) and its flu...
Bannister, K. W., Shannon, R. M., Macquart, J.-P., Flynn, C., Edwards, P. G., O'Neill, M., Osłowski, S., Bailes, M., Zackay, B., Clarke, N., D'Addario, L. R., Dodson, R., Hall, P. J., Jameson, A., Jones, D., Navarro, R., Trinh, J. T., Allison, J., Anderson, C. S., … Westmeier, T. (2017). The detection of an extremely bright fast radio burst in a phased array feed survey. Astrophysical Journal Letters, 841, Article: L12.
10.3847/2041-8213/aa71ff
Grant:
CE110001020; FT150100415
Publisher: IOP Publishing
Journal: Astrophysical Journal Letters
Article: L12
radiation mechanisms: non-thermal
methods: data analysis
instrumentation: interferometers
© 2017 American Astronomical Society. All rights reserved. The final version is also available online at: 10.3847/2041-8213/aa71ff
Swiss Journal of Economics and Statistics
Original article | Open | Published: 17 November 2018
Unbiased weighted variance and skewness estimators for overlapping returns
Stephen Taylor ORCID: orcid.org/0000-0003-4648-67311 &
Ming Fang1
Swiss Journal of Economics and Statistics, volume 154, Article number: 21 (2018)
This article develops unbiased weighted variance and skewness estimators for overlapping return distributions. These estimators extend the variance estimation methods constructed in Bod et al. (Applied Financial Economics 12:155-158, 2002) and Lo and MacKinlay (Review of Financial Studies 1:41-66, 1988). In addition, they may be used in overlapping return variance or skewness ratio tests as in Charles and Darné (Journal of Economic Surveys 23:503-527, 2009) and Wong (Cardiff Economics Working Papers, 2016). An example using synthetic overlapping returns from a model fit to data from the SPY S&P 500 exchange traded fund is given in order to demonstrate under which circumstances the unbiased correction becomes significant in skewness estimation. Finally, we compare the effect of the HAC weighting schemes of Andrews (Econometrica 59:817-858, 1991) as a function of sample size and overlapping return window length.
Overlapping returns are used in many contexts in the finance and econometrics literature. Applications include variance ratio tests, regression parameter error estimation, and alternative resampling methods. Standard statistical inference and estimation techniques applied to overlapping return financial time series are typically biased. In addition, for such series, recent data is regularly viewed as more relevant than past information, which has resulted in the creation of weighted generalizations of estimation methodologies. This motivates the development of unbiased analogues of such estimators which we explore in the cases of the variance and skewness statistics. Our central aim is to construct unbiased weighted variance and skewness estimators for overlapping return distributions.
Several estimation procedures and hypothesis testing frameworks have been improved through the utilization of overlapping returns. In financial overlapping return applications, Lo and MacKinlay (1988) and Hansen and Hodrick (1980) demonstrate how overlapping returns may be used to increase the efficiency of statistics used in variance ratio tests. Dunis and Keller (1995) developed a panel regression method based on overlapping returns, and Müller (1993) concludes that utilizing overlapping returns in most applications will result in an overall increase in the estimation precision of statistics that are functions of the overlapping returns when compared with their analogues for simple returns. In Jackwerth (2000), the author discovered that the overlapping return distribution for the S&P 500 is left-skewed, examined differences between risk-neutral and realized distributions for overlapping and non-overlapping returns of the S&P 500 index, and observed how associated risk aversion functions changed dramatically around the 1987 stock market crash. Wong (2016) develops skewness and kurtosis ratio tests for overlapping returns. The new weighted unbiased skewness estimator constructed below may be used as an input into any of these applications.
The idea of assigning greater weight to recent data and less weight to past data has been discussed in a number of econometric and financial studies. Past economic data may have little impact or be entirely irrelevant for present projections. In addition, by placing additional weight on recent data, associated estimation procedures tend to react more strongly to structural changes in the underlying assumption about the distribution the sample is drawn from than their uniformly weighted counterparts. For example, Tsokos (2010) shows that under nonstationary economic realization, weighted moving average models perform significantly better than the classical ARIMA model in forecasting stock prices. Andrews (1991) develops weighting schemes used in the estimation of covariance matrices assuming the underlying time series exhibits nontrivial autocorrelation and heteroskedasticity which we will utilize below. Weighted estimators are routinely used in practice as well. In particular, in Longerstaey and Spencer (1996), it is demonstrated that exponentially weighted moving average estimators incorporate external shocks more readily than equally weighted moving averages, thus providing a more realistic measure of current volatility.
Volatility and skewness estimation of financial return distributions has been the subject of a number of articles. Early examples include using maximum likelihood estimation to fit a model distribution to observed data and computing the associated model statistics in Fama (1965) and Mandelbrot (1963). More recent work has focused on the estimation of stochastic volatility models in Broto (2004). Time series techniques have also been widely applied to this task, cf. Tsay (2010). Measuring the asymmetry of financial return distributions has also been the central theme of many references. Grigoletto and Lisi (2006) and Wen and Yang (2009) find persistent non-trivial skewness is present in the simple daily return distributions of nearly every major international equity index. Xu (2007) shows that equity return distribution skewness is positively correlated with simultaneous returns and negatively correlated with lagged returns.
When working with overlapping returns, especially when encountering small sample sizes, bias effects from standard estimators, such as the sample variance, become important. In Lo and MacKinlay (1988), the authors provide a consistent but biased overlapping return variance estimator that has been used in several subsequent references, including Liu and He (1991), Fong et al. (1997), and Charles and Darné (2009). This estimator was improved in Bod et al. (2002), where the authors constructed an unbiased variance estimator for unweighted overlapping returns. Kluitman and Franses (2002) extended this work to develop an estimator that includes the case where the returns have nontrivial autocorrelation. Our main contribution is to extend these results by developing weighted unbiased variance and skewness estimators for overlapping return time series.
This article is organized as follows. We first fix notation and then derive an unbiased weighted estimator for the variance of a time series of overlapping returns. We give reduced expressions for this estimator in the cases of uniform and exponential weights. Next, we construct a similar weighted unbiased estimator for the skewness of an overlapping return distribution. We then demonstrate the difference between a normalized version of the skewness estimator and the standard normalized sample skewness in a simulation which models the overlapping return distribution of the S&P 500 index, and then summarize our results. We finally compare the estimation of the weighted volatility and skewness of the overlapping return distribution of the S&P 500 index for various weighting schemes, sample sizes, and overlapping lengths and conclude with potential additional questions to explore.
We begin by establishing notation. Given integers $n, q > 0$ with $q < n$, let $p_t > 0$ for $t = 0,\ldots,n+q-1$ denote an asset price time series, and let $r_t = p_t/p_{t-1} - 1$ be the associated simple returns for $t = 1,\ldots,n+q-1$. Following Bod et al. (2002) and Lo and MacKinlay (1988), we assume that the $r_t$ have zero mean, $\mathbb{E}[r_t] = 0$, zero covariance, $\mathbb{E}[r_t r_s] = 0$ for any $t > s$, and equal finite variance, $\text{Var}(r_t) = \sigma^2 < \infty$. We note that in Lo and MacKinlay (1988) and subsequent references, the authors show that these assumptions, referred to as the random walk version of the martingale hypothesis, do not hold for a variety of financial time series. This is achieved by assuming a null hypothesis that they hold and then demonstrating how variance ratios of overlapping returns may be used to reject this assertion. In this spirit, we proceed by defining the $q$-period overlapping returns $y_t$ associated with $r_s$ by
$$ y_{t} = \sum_{s=t}^{t+q-1}{r_{s}},\quad\text{for}\quad t = 1,\ldots,n. $$
We construct weighted unbiased estimators of the variance and skewness of $y_t$ and pair a weight $w_t$ with each $y_t$ such that $w_t > 0$ and $\sum_{t=1}^{n} w_t = 1$. Let $W^{ts} = \sum_{k=t}^{s} w_k$ be the sum of the $t$-th through $s$-th weights, and note $W^{1n} = 1$.
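For concreteness, this construction can be written in a few lines of code; the sketch below uses our own helper names (not from the paper) and NumPy:

```python
import numpy as np

def overlapping_returns(r, q):
    """y_t = r_t + ... + r_{t+q-1} for t = 1,...,n, given n+q-1 simple returns."""
    r = np.asarray(r, dtype=float)
    n = len(r) - q + 1
    return np.array([r[t:t + q].sum() for t in range(n)])

def normalized_weights(n, alpha=None):
    """Uniform weights by default; exponential weights w_s proportional to alpha^(n-s) if alpha is given."""
    w = np.ones(n) if alpha is None else alpha ** (n - np.arange(1, n + 1))
    return w / w.sum()
```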
We first derive an unbiased weighted overlapping return variance estimator $\hat {\sigma }_{y}^{2}$. We seek an estimator of the form
$$ \hat{\sigma}^{2}_{y} = C_{1}(n,q,w)^{-1}\sum_{t=1}^{n} w_{t} \left(y_{t}-\bar{y}^{w}\right)^{2}, \quad \bar{y}^{w} \equiv \sum_{t=1}^{n} w_{t} y_{t}, $$
where we find $C_1$ such that $\hat{\sigma}_y^2$ is an unbiased estimator of $q\sigma^2$. Note that since the $r_t$ are independent, the true variance of each $y_t$ is $\text{Var}(y_t) = q\sigma^2$, so that $\hat{\sigma}_y^2$ is an unbiased estimator of the variance of the random variable from which the $y_t$ are sampled if
$$ \mathbb{E}\left[\hat{\sigma}^{2}_{y}\right] = q\sigma^{2}. $$
This may be viewed as a constraint that defines the constant $C_1(n,q,w)$, which we determine by computing
$$ \begin{aligned} &\mathbb{E}\left[\sum_{t} w_{t}\left(y_{t}-\bar{y}^{w}\right)^{2}\right] =\\[-4pt]&\qquad\qquad\qquad \sum_{t} w_{t}\left(\mathbb{E}y_{t}^{2}-2\mathbb{E}\left(y_{t}\bar{y}^{w}\right)+ \mathbb{E}\left[\left(\bar{y}^{w}\right)^{2}\right]\right), \end{aligned} $$
and noting that
$$ \sum_{t} w_{t}\mathbb{E}\left(y_{t}\bar{y}^{w}\right) =\mathbb{E}\left(\bar{y}^{w} \sum_{t} w_{t}y_{t} \right) =\mathbb{E}\left[\left(\bar{y}^{w}\right)^{2}\right], $$
$$ \sum_{t} w_{t}\left(-2\mathbb{E}\left(y_{t}\bar{y}^{w}\right)+\mathbb{E}\left[\left(\bar{y}^{w}\right)^{2}\right]\right) =-\mathbb{E}\left[\left(\bar{y}^{w}\right)^{2}\right]. $$
Combining the above, we find
$$ \begin{aligned} \mathbb{E}\left[\sum_{t} w_{t}\left(y_{t}-\bar{y}^{w}\right)^{2}\right] &= \sum_{t} w_{t}\mathbb{E}y_{t}^{2} - \mathbb{E}\left[\left(\bar{y}^{w}\right)^{2}\right] \\&\quad= q\sigma^{2} - \text{Var}\left(\bar{y}^{w}\right), \end{aligned} $$
where we note that $\mathbb {E}(y_{t})=\mathbb {E}\left (\bar {y}^{w}\right)=0$, and the last equality follows from $\mathbb {E}\left (y_{t}^{2}\right)=\text {Var}(y_{t})=q\sigma ^{2}$ as well as $\text {Var}\left (\bar {y}^{w}\right)=\mathbb {E}\left [\left (\bar {y}^{w}\right)^{2}\right ]-\mathbb {E}\left [\bar {y}^{w}\right ]^{2}=\mathbb {E}\left [\left (\bar {y}^{w}\right)^{2}\right ]$.
In order to compute the variance term in the previous equation, we decompose the weighted average of $y_t$ as
$$ \bar{y}^{w} = \sum_{t=1}^{q-1}W^{1t}r_{t} + \sum_{t=q}^{n} W^{(t-q+1)t}r_{t} + \sum_{t=n+1}^{n+q-1} W^{(t-q+1)n}r_{t}. $$
One may arrive at this decomposition by viewing the individual return terms of the sum $\bar{y}^{w} = \sum_{t} w_{t}y_{t}$ as a table with values $w_t r_s$, whose first row has elements $w_1 r_1, w_1 r_2, \ldots, w_1 r_q$ and whose final row is given by $w_n r_n, w_n r_{n+1}, \ldots, w_n r_{n+q-1}$. Note that $\bar{y}^{w}$ is equivalent to the sum of all values in this table. The first term in this decomposition corresponds to grouping all elements of this table above the diagonal whose edge is formed by the $w_1 r_q$ and $w_q r_q$ entries and factoring out common returns multiplied into varying weights. The final term can be arrived at by aggregating all terms below the diagonal formed by the $w_{n-q+2} r_{n+1}$ and $w_n r_{n+1}$ entries. The middle sum is computed by combining the remaining terms in the table.
Since each individual sum is composed of returns that are independent of all the returns in the other two sums, we find that
$$ \begin{aligned} \text{Var}\left(\bar{y}^{w}\right) =\sigma^{2}\left[\sum_{t=1}^{q-1}\left(W^{1t}\right)^{2} + \sum_{t=q}^{n}\left(W^{(t-q+1)t}\right)^{2}+\sum_{t=n+1}^{n+q-1}\left(W^{(t-q+1)n}\right)^{2}\right]. \end{aligned} $$
Solving for the unbiased constant from Eqs. (7) and (9), we arrive at
$$ \begin{aligned} C_{1}(n,q,w)& = \frac{1}{q\sigma^{2}}\mathbb{E}\left[\sum w_{t}\left(y_{t}-\bar{y}^{w}\right)^{2}\right] \\&\,=\,1\,-\,\frac{1}{q} \left[\sum_{t=1}^{q-1}\!\left(W^{1t}\right)^{2} \,+\,\sum_{t=q}^{n}\!\left(W^{(t-q+1)t}\right)^{2}\,+\,\sum_{t=n+1}^{n+q-1}\!\left(W^{(t-q+1)n}\right)^{2}\!\right]. \end{aligned} $$
We note that in the case of uniform weights $w_t = 1/n$, this result reduces to that of Bod et al. (2002), where $C_{B}^{-1}=nC_{1}(n,q,w)$. Secondly, in the case of exponential weights $w_s = \alpha^{n-s}/C$ with $C = (\alpha^n - 1)/(\alpha - 1)$, one can show
$$ {} C_{1}(n,q,w) \,=\, \frac{2\alpha}{q}\frac{\alpha^{q}-\alpha^{2n-q}-1+\alpha^{2n}- q\alpha^{n-1}\left(\alpha^{2}-1\right)}{\left(\alpha^{2}-1\right)(\alpha^{n}-1)^{2}}. $$
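A direct implementation of Eq. (10) makes it easy to evaluate $C_1$ for an arbitrary weight vector and, for exponential weights, to cross-check the closed form in Eq. (11). The sketch below uses our own helper names and is not taken from the paper:

```python
import numpy as np

def C1(w, q):
    """Unbiasing constant of Eq. (10) for weights w_1,...,w_n summing to 1."""
    n = len(w)
    W = np.concatenate(([0.0], np.cumsum(w)))        # W[s] = w_1 + ... + w_s
    head = sum(W[t] ** 2 for t in range(1, q))                       # (W^{1t})^2, t = 1..q-1
    body = sum((W[t] - W[t - q]) ** 2 for t in range(q, n + 1))      # (W^{(t-q+1)t})^2
    tail = sum((W[n] - W[t - q]) ** 2 for t in range(n + 1, n + q))  # (W^{(t-q+1)n})^2
    return 1.0 - (head + body + tail) / q

# e.g., for the exponential weights of Eq. (11) with alpha = 2, n = 2, q = 2:
# w = normalized_weights(2, alpha=2.0); C1(w, 2)  # -> 2/9, matching Eq. (11)
```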
We now derive an unbiased skewness estimator in a similar manner. Assume that $\mathbb {E}\left (r_{t}^{3}\right)=\gamma $, so that $\mathbb {E}\left (y_{t}^{3}\right)=q\gamma $, and consider an estimator of the skewness of the overlapping return distribution of the form
$$ \hat{\gamma}_{y} = C_{2}(n,q,w)^{-1}\sum_{t=1}^{n} w_{t} \left(y_{t} - \bar{y}^{w}\right)^{3}. $$
We would like to construct an unbiased estimator of $q\gamma$, which requires
$$ \mathbb{E}\left[\hat{\gamma}_{y}\right]=q\gamma. $$
To determine $C_2$, we compute
$$ {{} \begin{aligned} \mathbb{E}\left[\sum_{t=1}^{n} w_{t}\left(y_{t}-\bar{y}^{w}\right)^{3}\right] &= \sum_{t=1}^{n} w_{t} \mathbb{E}\Big[y_{t}^{3} -3y_{t}^{2}\bar{y}^{w} +\\ &\quad\quad\quad\quad\quad\left. 3y_{t}\left(\bar{y}^{w}\right)^{2}-\left(\bar{y}^{w}\right)^{3}\right]. \end{aligned}} $$
Decomposing $y_t$ into sums of independent $r_t$ allows one to calculate the first term via
$$ {\begin{aligned} \mathbb{E}\left(y_{t}^{3}\right) &= \mathbb{E}\left[\sum_{s=t}^{t+q-1}r_{s}^{3} +3\sum_{s\neq k} r_{s}^{2} r_{k} + \sum_{s\neq k \neq l} r_{s}r_{k}r_{l}\right] \\&= \sum_{s=t}^{t+q-1}\mathbb{E}\left(r_{s}^{3}\right) = q\gamma. \end{aligned}} $$
For the second term, we note that
$$ \mathbb{E}\left[y_{t}^{2}\bar{y}^{w}\right] = \sum_{s} w_{s}\mathbb{E}\left(y_{t}^{2}y_{s}\right). $$
Computing the expectation, we find,
$$ \begin{aligned} \mathbb{E}\left[y_{t}^{2}y_{s}\right] &= \mathbb{E}\left[\sum_{k=s}^{s+q-1}r_{k} \left(\sum_{l=t}^{t+q-1}r_{l}\right)^{2} \right] \\&=\mathbb{E}\left[\sum_{k=s}^{s+q-1}r_{k} \left(\sum_{l=t}^{t+q-1}r_{l}^{2} + \sum_{p\neq m}^{t+q-1}r_{m}r_{p}\right) \right] \end{aligned} $$
$$ = \mathbb{E}\left[ \sum_{k=t}^{t+q-1}\left(r_{s}+\cdots+r_{s+q-1}\right)r_{k}^{2}\right]. $$
Note that only terms of the form $r_{t}^{3}$ are non-trivial in expectation. Splitting into two cases, we have
$$ \mathbb{E}\left[y_{t}^{2}y_{s}\right] = \left\{ \begin{array}{ll} (s-t+q)\gamma & s \leq t \leq s + q-1 \\ (t-s+q)\gamma & t \leq s \leq t + q-1 \\ \end{array}\right. $$
$$ \hspace{35pt} = (q-|t-s|)\gamma,\quad 0\leq |t-s|\leq q-1. $$
Combining Eqs. (16) and (20), we find that
$$ \sum_{t=1}^{n} w_{t}\mathbb{E}\left[y_{t}^{2}\bar{y}^{w}\right] = \gamma\sum_{t,s=1}^{n} w_{t}w_{s}(q-|t-s|)1_{\{|t-s|\in[0,q-1]\}}. $$
For the final two terms, note
$$ \begin{aligned} \mathbb{E}\left(y_{t}\left(\bar{y}^{w}\right)^{2}\right) &= \sum_{s,k}w_{s}w_{k}\mathbb{E}(y_{t}y_{s}y_{k}), \quad \mathbb{E}\left(\bar{y}^{w}\right)^{3}\\&=\sum_{\text{\textit{t,s,k}}}w_{t}w_{s}w_{k}\mathbb{E}(y_{t}y_{s}y_{k}), \end{aligned} $$
so we may simplify
$$ \sum_{t=1}^{n} w_{t}\,\mathbb{E}\left[3y_{t}\left(\bar{y}^{w}\right)^{2}-\left(\bar{y}^{w}\right)^{3}\right] = 2\,\mathbb{E}\left[\left(\bar{y}^{w}\right)^{3}\right]. $$
Using the decomposition in Eq. (8) and the independence of its terms, we obtain
$$ \begin{aligned} \mathbb{E}\left[\left(\bar{y}^{w}\right)^{3}\right] &= \gamma \left[\sum_{t=1}^{q-1}\left(W^{1t}\right)^{3} + \sum_{t=q}^{n} \left(W^{(t-q+1)t}\right)^{3} \right. \\[-4pt] & \quad \left.+\sum_{t=n+1}^{n+q-1}\left(W^{(t-q+1)n}\right)^{3}\right]. \end{aligned} $$
Combining results from Eqs. (15), (21), and (24), we arrive at an expression for $C_2$ given by
$$ \begin{aligned} C_{2}(n,q,w) &= 1 - \frac{3}{q}\sum_{\substack{t,s=1\\ |t-s|<q}}^{n} w_{t}w_{s}(q-|t-s|) \\ &\quad + \frac{2}{q} \left[\sum_{t=1}^{q-1}\left(W^{1t}\right)^{3} \!\,+\,\! \sum_{t=q}^{n} \left(W^{(t-q+1)t}\right)^{3} \,+\,\sum_{t=n+1}^{n+q-1}\left(\!W^{(t-q+1)n}\right)^{3}\!\right]\!. \end{aligned} $$
In the case of uniform weights $w_t = 1/n$, the terms in this expression simplify to
$$ \begin{aligned} \sum_{\substack{t,s=1\\ |t-s|<q}}^{n} w_{t}w_{s}(q-|t-s|) &= \frac{q}{n}+\frac{2}{n^{2}}\sum_{t=1}^{q-1}(n-t)(q-t)\\ &=\frac{q}{3n^{2}}\left(1+3nq-q^{2}\right) \end{aligned} $$
$$ \begin{aligned} \sum_{t=1}^{q-1}\left(W^{1t}\right)^{3} &+ \sum_{t=q}^{n} \left(W^{(t-q+1)t}\right)^{3} +\sum_{t=n+1}^{n+q-1}\left(W^{(t-q+1)n}\right)^{3} \\&\quad= \frac{1}{n^{3}}\left[\sum_{t=1}^{q-1}t^{3} \,+\, \sum_{t=q}^{n} q^{3} \,+\, \sum_{t=n+1}^{n+q-1}(n-t+q)^{3}\right] \end{aligned} $$
$$ = \frac{q^{2}}{2n^{3}}\left(1+2nq-q^{2}\right), $$
which yields a simple form for $C_2$ given by
$$ C_{2}(n,q,w) = \frac{(n-q+1)(n-q)(n-q-1)}{n^{3}}. $$
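Eqs. (25) and (29) can be checked numerically in the same way; the following sketch (again with our own helper names, not the authors' code) implements the general-weight constant and the uniform-weight closed form:

```python
import numpy as np

def C2(w, q):
    """Unbiasing constant of Eq. (25) for weights w_1,...,w_n summing to 1."""
    n = len(w)
    W = np.concatenate(([0.0], np.cumsum(w)))
    t = np.arange(1, n + 1)
    lag = np.abs(t[:, None] - t[None, :])
    middle = np.sum(np.outer(w, w) * (q - lag) * (lag < q))          # Eq. (21) term
    head = sum(W[s] ** 3 for s in range(1, q))
    body = sum((W[s] - W[s - q]) ** 3 for s in range(q, n + 1))
    tail = sum((W[n] - W[s - q]) ** 3 for s in range(n + 1, n + q))
    return 1.0 - 3.0 * middle / q + 2.0 * (head + body + tail) / q

def C2_uniform(n, q):
    """Closed form of Eq. (29) for uniform weights."""
    return (n - q + 1) * (n - q) * (n - q - 1) / n ** 3

# e.g., C2(np.full(4, 0.25), 2) and C2_uniform(4, 2) both give 3/32.
```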
We finally note that it is possible to derive a closed form expression for $C_2$ in the case of the exponential weights previously considered for the variance estimator; however, the expression is quite lengthy, so we omit it here, although it is available upon request. We now turn to a simulation in order to understand for which parameter pairs $(n,q)$ the effects of the unbiased skewness estimator are most significant.
Empirical studies and results
We now develop a simulation to compare the relative error of the uniformly weighted unbiased skewness estimator $\hat {\gamma }_{y}$ and the standard unbiased sample skewness estimator which may be found in Zwillinger and Kokoska (2000). We first construct a dataset of end of day simple returns calculated from closing prices for the SPY exchange traded fund from January 1, 2012, to December 31, 2016. This was achieved using Bloomberg's Python API and an associated wrapper package named tia. We downloaded historical end of day closing prices identified with Bloomberg's PX_LAST field that are both split and dividend adjusted. This time series was fully populated with data, and hence, there was no need to fill in missing values.
Next, we fit a model distribution to this data in order to establish a framework for testing the weighted unbiased variance and skewness estimators in a setting where the true values of these statistics are known and which closely approximates actual market data. To this end, we let $X$ denote a skew normal distribution whose probability density function is given by
$$ p(x;a,b,c) = \frac{2}{b}\phi\left(\frac{x-a}{b}\right)\Phi\left(c\frac{x-a}{b}\right), $$
where here $\phi$ and $\Phi$ are the probability and cumulative distribution functions of a standard normal random variable and $b > 0$. The skew normal distribution has mean $\mu$ and variance $\sigma^2$ defined by
$$ {\begin{aligned} \mu &= a+bd\sqrt{\frac{2}{\pi}}, \\ \sigma^{2} &= b^{2}\left(1-\frac{2d^{2}}{\pi}\right),\quad \text{where}\quad d = \frac{c}{\sqrt{1+c^{2}}}. \end{aligned}} $$
The normalized skewness $\gamma$ of this distribution is given by
$$ \gamma = \frac{\text{Skew}(X)}{\text{Var}(X)^{3/2}}=\frac{4-\pi}{2}\frac{(d\sqrt{2/\pi})^{3}}{(1-2d^{2}/\pi)^{3/2}}. $$
We fit this distribution to the SPY daily return data using maximum likelihood estimation. Let the likelihood function of this model be denoted by $\mathcal {L}$. We find the model parameters by directly maximizing the log-likelihood function
$$ (\hat{a},\hat{b},\hat{c}) = \underset{a,b,c}{\text{argmax}}\;\ln\mathcal{L}, \qquad \ln\mathcal{L} = n\ln\left(\frac{2}{b}\right)+\sum_{t=1}^{n}\left[\ln\phi\left(\frac{x_{t}-a}{b}\right) + \ln\Phi\left(c\,\frac{x_{t}-a}{b}\right)\right], $$
using a BFGS optimizer developed in Fletcher (1987) and determine that the best-fit parameters are given by $\hat{a}=0.00640$, $\hat{b}=0.01006$, and $\hat{c}=-1.1029$, which have associated mean, variance, and normalized skewness given by $4.528\times 10^{-4}$, $6.5789\times 10^{-5}$, and $-0.1689$, respectively. Returns will be drawn from this distribution in the Monte Carlo simulation considered below. Specifically, this simulation consists of sampling $n$ values from the MLE distribution, computing the $q$-period overlapping returns of these time series, then calculating the normalized unbiased sample skewness $\tilde{\gamma}_{s}$ (see Zwillinger and Kokoska (2000)) and the normalized unbiased skewness $\tilde{\gamma}_{y}$ with uniform weights given by $\tilde{\gamma}_{y}= \hat{\gamma}_{y}/\hat{\sigma}_{y}^{3}$. We then compute the relative error $|\tilde{\gamma}_{s}-\tilde{\gamma}_{y}|/\tilde{\gamma}_{y}$ between the different estimators for each simulation, repeat the above process 10,000 times, and display the mean percentage errors in Table 1 for distinct $(n,q)$ pairs.
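A condensed version of this experiment might look as follows. This is a sketch rather than the authors' code; it reuses the C1 and C2 helpers sketched above, takes the absolute value of the denominator in the relative error, and relies on scipy's skewnorm sharing the parameterization of Eq. (30) with loc = a, scale = b, and shape = c:

```python
import numpy as np
from scipy.stats import skewnorm, skew

a_hat, b_hat, c_hat = 0.00640, 0.01006, -1.1029   # fitted parameters reported in the text
rng = np.random.default_rng(0)

def mean_relative_error(n, q, n_sims=10_000):
    w = np.full(n, 1.0 / n)                                         # uniform weights
    errs = []
    for _ in range(n_sims):
        # n + q - 1 simple returns give n overlapping returns, as in the Methods section
        r = skewnorm.rvs(c_hat, loc=a_hat, scale=b_hat, size=n + q - 1, random_state=rng)
        y = np.convolve(r, np.ones(q), mode="valid")                # q-period overlapping returns
        ybar = np.dot(w, y)
        sig2 = np.dot(w, (y - ybar) ** 2) / C1(w, q)                # Eq. (2)
        gam = np.dot(w, (y - ybar) ** 3) / C2(w, q)                 # Eq. (12)
        gamma_y = gam / sig2 ** 1.5                                 # normalized unbiased skewness
        gamma_s = skew(y, bias=False)                               # standard sample skewness
        errs.append(abs(gamma_s - gamma_y) / abs(gamma_y))
    return np.mean(errs)
```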
Table 1 Mean relative percentage errors between the normalized unbiased sample skewness $\tilde {\gamma }_{s}$ and the normalized unbiased skewness $\tilde {\gamma }_{y}$ for varying sample sizes n=32,…,16384, and overlapping return periods q=2,…,128
We omit cases where the number of overlapping returns satisfies $n - q \leq q$, and first note that as the sample size increases, the error between the two estimators decreases for any fixed $q$ value. However, when $q/n$ is relatively large, say greater than 5%, there are significant differences between the two estimators.
Next, we explore several weighting schemes described in Andrews (1991) which are widely used for covariance matrix estimation in the presence of heteroskedasticity and autocorrelation. Specifically, we consider weights constructed from the Bartlett (Oppenheim et al. 1999), Parzen (White 1980), and Tukey–Hanning (Blackman and Tukey 1958) kernels, and the Quadratic Spectral weights of Priestley (1962) and Epanechnikov (1969). These weights are defined in terms of a kernel function $k(\cdot)$ and are given by $w_t = k(bt/T)$, where $T$ is a bandwidth parameter and $b$ is a scaling constant, as in Zeileis (2004) and Zwillinger (2000). There are many references that study the problem of optimal bandwidth selection, cf. (Lazarus et al. 2017; Newey and West 1994; Stock and Watson 2011; Wooldridge 2006); however, we are interested in constructing reasonable weighting schemes that place importance on recent over prior data. We found that setting the bandwidth to the sample size and $b = 1.2$ achieves this aim.
In Fig. 1, we plot the kernel functions associated with these weighting schemes given in Andrews (1991), which are defined by
$$ k_{BT}(x) = \left\{ \begin{array}{ll} 1 - |x| & |x|\leq 1 \\ 0 & |x| > 1, \\ \end{array}\right. $$
Fig. 1: Plot of the unnormalized HAC kernel functions $k_{BT}$, $k_{PR}$, $k_{TH}$, $k_{QS}$ and the uniform kernel
$$ k_{PR}(x) = \left\{ \begin{array}{ll} 1 - 6x^{2} + 6|x|^{3} & |x|\leq 1/2 \\ 2(1-|x|)^{3} & 1/2 \leq |x| \leq 1 \\ 0 & |x| > 1, \\ \end{array}\right. $$
$$ k_{TH}(x) = \left\{ \begin{array}{ll} (1+\cos(\pi x))/2 & |x|\leq 1 \\ 0 & |x| > 1, \\ \end{array}\right. $$
$$ k_{QS}(x) = \frac{25}{12\pi^{2}x^{2}}\left(\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right). $$
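The kernels in Eqs. (34)-(37) translate directly into code. The sketch below (our own naming, not from the paper) produces weights $w_t = k(bt/T)$, normalized to sum to one, with the bandwidth set to the sample size as described in the text:

```python
import numpy as np

def k_bartlett(x):
    x = np.abs(x)
    return np.where(x <= 1, 1 - x, 0.0)

def k_parzen(x):
    x = np.abs(x)
    return np.where(x <= 0.5, 1 - 6 * x**2 + 6 * x**3,
                    np.where(x <= 1, 2 * (1 - x) ** 3, 0.0))

def k_tukey_hanning(x):
    x = np.abs(x)
    return np.where(x <= 1, 0.5 * (1 + np.cos(np.pi * x)), 0.0)

def k_quadratic_spectral(x):
    x = np.asarray(x, dtype=float)
    z = 6 * np.pi * x / 5
    with np.errstate(invalid="ignore", divide="ignore"):
        k = 25 / (12 * np.pi**2 * x**2) * (np.sin(z) / z - np.cos(z))
    return np.where(x == 0, 1.0, k)            # k_QS(0) = 1 by continuity

def hac_weights(kernel, n, b=1.2, T=None):
    T = n if T is None else T                  # bandwidth set to the sample size, as in the text
    t = np.arange(1, n + 1)                    # t = 1 taken as the most recent observation
    w = kernel(b * t / T)
    return w / w.sum()
```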
Note that when using weighted estimators, one effectively reduces the original sample size. For example, in the extreme case of binary zero or one valued weights, only the weights with value one contribute to the estimator, which reduces the sample size to the percentage of one-valued weights. In this example, we can find the effective sample size reduction by approximating the area under each weight curve. Relative to the uniform weights, which we take to have normalized area 1, the HAC weights effectively reduce the sample size to approximately PR: 31%, BT, TH: 41%, and QS: 54%, so that uniform weights have approximately two to three times the effective sample size of these weighting schemes.
Next, we examine how the unbiased weighted standard deviation and skewness estimates vary as a function of the overlapping return period $q$, sample size $n$, and weighting scheme using the SPY dataset previously described. We consider overlapping return periods of 5, 21, and 63 sample points, which correspond to weekly, monthly, and quarterly aggregation windows for our example daily return data. Next, we truncate the sample size to 256, 512, and 1024 data points, which roughly correspond to 1, 2, and 5-year time periods, using a trailing truncation window on the SPY returns.
Estimation results are presented in Table 2, where overlapping return standard deviations are displayed as percentages. We first note that estimates for uniform weights tend to be outliers, as they include two to three times as large an effective sample size as the other weighting schemes. Next, note that overall standard deviation values are greatest in the $n=512$ case, slightly lower for $n=256$, and considerably lower for $n=1024$. This is due to the historical volatility of the S&P 500 being relatively high during the mid-2015 to early-2016 period and lower in prior years over the five-year window being considered. One may also note that the overlapping return distribution becomes increasingly negatively skewed as a result.
Table 2 Comparison of unbiased overlapping return standard deviation and skewness estimators as a function of weighting scheme, sample size n, and overlapping return period q
Finally, we compare how skewness estimation varies over time for different weighting schemes. In particular, we consider a 4-year window from 1/1/2012 to 1/1/2016 and estimate the unbiased skewness using a 252-day lookback period for each of the HAC estimators and a 126-day lookback period for the uniformly weighted estimator, which is selected to ensure that the effective sample sizes are on par with one another. This procedure is carried out on a rolling basis, and results are plotted in Fig. 2.
Rolling n=252 day overlapping return unbiased skewness for HAC weighting schemes and a n=126 lookback period for uniform weights with a q=21 overlapping period for SPY daily returns
We note that the general forms of the time series in Fig. 2 tend to be similar for the majority of dates displayed. The HAC weighted estimators are more reactive than the uniform estimator and do not exhibit single-day jumps as large in magnitude as those of the uniform estimator.
In summary, we have derived closed form expressions for weighted unbiased variance and skewness estimators. We also developed simplified expressions for these estimators in the case of exponential weights for the variance estimator and uniform weights for both estimators. The differences between the standard unbiased sample skewness and the new normalized unbiased skewness estimator were demonstrated to be significant in the case of skewness estimation for SPY end of day return data under HAC weighting schemes.
We note that, as in Bod et al. (2002) and Lo and MacKinlay (1988), we assume returns satisfy the random walk version of the martingale hypothesis, which generally does not hold for financial time series. An interesting future application of the skewness estimator would be to develop a hypothesis test for this assumption, which may complement the results in Lo and MacKinlay (1988). For additional future work, it would be of interest to consider, as in Kluitman and Franses (2002), analogues of the weighted unbiased variance and skewness estimators under the assumption that the return process satisfies an AR(1), MA(1), or more general time series model. This will require repeating the above derivations and retaining terms of the form $\mathbb{E}\left[r_{t}r_{s}\right]$ and $\mathbb{E}\left[r_{t}r_{s}r_{k}\right]$ for $t \neq s \neq k$, which are no longer trivial but depend on the underlying model one assumes for the return process. Then, one could fit such models to market data and compare the values of the two estimators. We anticipate the estimator will depend strongly on the sign of the AR(1) lag parameter and the white noise parameter of the MA(1) model, as shown in Kluitman and Franses (2002), but we leave this for a future study.
Andrews, DWK (1991). Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59(3), 817–858.
Bod, P, Blitz, D, Franses, PH, Kluitman, R (2002). An Unbiased Variance Estimator for Overlapping Returns. Applied Financial Economics, 12(3), 155–158.
Broto, C (2004). Estimation methods for stochastic volatility models: a survey. Journal of Economic Surveys, 18(5), 613–649.
Charles, A, & Darné, O (2009). Variance-ratio tests of random walk: An overview. Journal of Economic Surveys, 23(3), 503–527.
Dunis, C, & Keller, A (1995). Efficiency Tests with Overlapping Data: An Application to the Currency Option Market. European Journal of Finance, 1, 345–66.
Epanechnikov, VA (1969). Non-parametric Estimation of a Multivariate Probability Density. Theory of Probability and Its Applications, 14, 153–158.
Fama, E (1965). The behavior of stock prices. Journal of Business, 38, 34–105.
Fletcher, R. (1987). Practical Methods of Optimization.Wiley. https://www.amazon.com/Practical-Methods-Optimization-R-Fletcher/dp/0471494631.
Fong, WM, Koh, SK, Ouliaris, S (1997). Joint variance-ratio tests of the martingale hypothesis for exchange rates. Journal of Business and Economic Statistics, 15, 51–59.
Grigoletto, M, & Lisi, F (2006). Looking for skewness in financial time series. Working Paper Series, 7. http://paduaresearch.cab.unipd.it/7094/1/2006_7_20070123084924.pdf.
Hansen, L, & Hodrick, RJ (1980). Forward exchange rates as optimal predictors of future spot rates: an econometric analysis. Journal of Political Economy, 88, 829–853.
Jackwerth, JC (2000). Recovering Risk Aversion from Options Prices and Realized Returns. Review of Financial Studies, 13(2), 433–451.
Xu, J (2007). Price Convexity and Skewness. The Journal of Finance, 62(5), 2521–2552.
Kluitman, R, & Franses, PH (2002). Estimating volatility on overlapping returns when returns are autocorrelated. Applied Mathematical Finance, 9(3), 179–188.
Lazarus, E, Lewis, DJ, Stock, JH, Watson, MW (2017). HAR Inference: Recommendations for Practice. JBES Invited Paper. https://scholar.harvard.edu/elazarus/publications/har-inference-recommendationspractice-jbes-invited-paper.
Liu, CY, & He, J (1991). A variance ratio test of random walks in foreign exchange rates. Journal of Finance, 46, 773–785.
Lo, AW, & MacKinlay, AC (1988). Stock Market Prices do not Follow Random Walks; Evidence from a Simple Specification Test. Review of Financial Studies, 1, 41–66.
Mandelbrot, B (1963). The variation of certain speculative prices. Journal of Business, 36(4), 394–419. https://web.williams.edu/Mathematics/sjmiller/public_html/341Fa09/econ/Mandelbroit_VariationCertainSpeculativePrices.pdf.
Müller, UA (1993). Statistics of variables observed over overlapping intervals. https://ideas.repec.org/p/wop/olaswp/_010.html.
Newey, WK, & West, KD (1994). Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631–653.
Oppenheim, AV, Schafer, RV, Buck, JR (1999). Discrete-Time Signal Processing. Prentice-Hall Signal Processing Series. https://www.amazon.com/Discrete-Time-Signal-Processing-3rd-Prentice-Hall/dp/0131988425.
Priestley, MB (1962). Basic Considerations in the Estimation of Spectra. Technometrics, 4, 551–564.
Longerstaey, J, & Spencer, M (1996). RiskMetrics Technical Document. Fourth Edition. https://www.msci.com/documents/10199/5915b101-4206-4ba0-aee2-3449d5c7e95a.
Stock, JH, & Watson, MW. (2011). Introduction to Econometrics Third Edition.Addison-Wesley. Pearson Education Limited, Edinburgh Gate, Harlow, Essex CM20 2JE, England.
Tsay, RS. (2010). Analysis of Financial Time Series. 111 River St. Hoboken: Wiley.
Tsokos, CP (2010). K-th Moving, Weighted and Exponential Moving Average for Time Series Forecasting Models. European Journal of Pure and Applied Mathematics, 3(3), 406–416.
Blackman, RB, & Tukey, JW. (1958). The measurement of power spectra. Dover: Dover Publications.
White, H (1980). A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test of Heteroskedasticity. Econometrica, 48, 817–838.
Wong, W (2016). Skewness and Kurtosis Ratio Tests: With Applications to Multiperiod Tail Risk Analysis. Cardiff Economics Working Papers. https://www.econstor.eu/handle/10419/174121.
Wooldridge, JM. (2006). Introductory Econometrics: A Modern Approach Third Edition. Mason: Thomson.
Wen, F, & Yang, X (2009). Skewness of Return Distribution and Coefficient of Risk Premium. Jrl Syst Sci & Complexity, 22, 360–371.
Zeileis, A (2004). Econometric Computing with HC and HAC Covariance Matrix Estimators. Journal of Statistical Software, 11(10), 1–17.
Zwillinger, D, & Kokoska, S. (2000). CRC Standard Probability and Statistics Tables and Formulae. Boca Raton: Chapman & Hall.
We thank the referees for their comments which greatly improved both the language and content of this article. ST would like to thank Michael Ehrlich for comments that improved the exposition of this article.
Martin Tuchman School of Management at the New Jersey Institute of Technology, Newark, NJ, USA
& Ming Fang
ST developed and derived the main estimators in this article. MF performed the literature review, simulations, and a verification of calculations. Both authors read and approved the final manuscript.
Correspondence to Stephen Taylor.
ST is an Assistant Prof. of Finance at the New Jersey Institute of Technology. MF is an Assistant Prof. of Accounting at the New Jersey Institute of Technology. Both ST and MF are new faculty members in the first years of their academic career.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Overlapping returns
Variance and skewness estimation
Asset returns
Weighted estimators
Fabrication and characterization of zeolitic imidazolate framework-embedded cellulose acetate membranes for osmotically driven membrane process
Teayeop Kim1,
Moon-ki Choi1,
Hyun S. Ahn ORCID: orcid.org/0000-0002-6014-09162,
Junsuk Rho ORCID: orcid.org/0000-0002-2179-28903,
Hyung Mo Jeong4 &
Kyunghoon Kim ORCID: orcid.org/0000-0001-8034-947X1
Scientific Reports, volume 9, Article number: 5779 (2019)
Zeolitic imidazolate framework-302 (ZIF-302)-embedded cellulose acetate (CA) membranes for osmotically driven membrane processes (ODMPs) were fabricated using the phase inversion method. We investigated the effects of different fractions of ZIF-302 in the CA membrane to understand their influence on ODMP performance. Osmotic water transport was evaluated using different draw solution concentrations to investigate the effects of ZIF-302 content on the performance parameters. CA/ZIF-302 membranes showed fouling resistance to sodium alginate, as indicated by a reduced water flux decline and an increased recovery ratio in the pressure retarded osmosis (PRO) mode. The results show that the hydrothermally stable ZIF-302-embedded CA/ZIF-302 composite membrane is expected to be durable under water and alginate-fouling conditions.
Water purification is an important issue because growing populations and energy demands cause increasing water scarcity and environmental pollution1,2,3,4. To overcome global water scarcity, many researchers focus on water purification technologies such as distillation, sedimentation, and filtration5. Among these technologies, filtration using membranes is of much interest because of its low energy consumption6. Within membrane separation technology, the osmotically driven membrane process (ODMP) is an effective strategy that harvests clean water across a semipermeable membrane and rejects a wide range of pollutants, including small ions, molecules, and microorganisms. As a type of ODMP for desalination, the forward osmosis (FO) process has a lower fouling propensity attributed to the low or absent hydraulic pressure7,8,9,10. Beyond this advantage, ODMP is not limited to desalination; it has also been proposed as a pretreatment for reverse osmosis (RO)9 and thermal desalination11,12, which does not require salt consumption. On the other hand, as an energy harvesting application of ODMP, the pressure retarded osmosis (PRO) process produces renewable energy from the osmotic gradient between seawater and fresh water13. In order to improve the productivity of ODMPs, various studies have focused on high water permeability, low solute flux, anti-fouling properties, and water stability. In this regard, different kinds of membranes, such as thin film composite, thin film nanocomposite, and cellulose acetate (CA) membranes, have been developed14. One of the conventional polymeric membranes, the CA membrane, has several advantages such as low cost, high fouling resistance, and naturally occurring monomers15. Many studies have optimized the process for fabricating CA membranes15,16,17,18. To enhance the membrane's performance and ion-rejection ratio, many researchers have focused on incorporating nanomaterials into the membrane. The emergence of various nanomaterials serving as nanosized particles (graphene oxide, carbon nanotubes, porous frameworks, titanate nanotubes, and SiO2) enables enhanced membrane water permeability and can provide new solutions for fabricating advanced membranes for water purification19,20,21,22,23.
Among these nanomaterials, metal organic frameworks (MOFs) have emerged as promising candidates for various applications in gas separation and storage, owing to their highly selective gas adsorption through their nanopores24,25,26,27,28. In MOFs, various types of inorganic and organic linkers can be combined into 3D particle structures, allowing a myriad of pore sizes and functionality types29. Compared with other organic and inorganic porous nanomaterials, MOFs offer ordered, uniform pore sizes and functionality types, which enables selective molecular transport30. This selectivity has also been applied to desalination and separation technologies31,32,33,34. Various MOF functionalities can improve membrane permeation in different ways. However, one of the main problems with MOFs is their limited stability in water35. Among the MOF family, zeolitic imidazolate frameworks (ZIFs) show significant hydrothermal stability in water35,36,37,38 and have therefore been applied to various types of desalination membranes. Furthermore, hydrophobic ZIF membranes show substantial permeability in molecular dynamics simulations because their aperture size and hydrophobic functional groups help water pass through the membrane30.
In this study, we focus on incorporating ZIF-302 crystals within the CA membrane to improve its osmotic water flux (Fig. 1a). ZIF-302 can filter gases and other substances under both dry and humid conditions36. In addition, the hydrophobicity of ZIF-302 leads to faster water transport through its hydrophobic channels, owing to weak interactions with water39 and rapid decay of hydrogen bonding30, while charged ions are rejected by an energy barrier40 (Fig. 1b). The chabazite structure of ZIF-302, with a channel size of ~0.79 nm41, corresponds to the pore diameter of osmosis membranes42 (0.6–0.8 nm), which enables selective transport of water molecules (0.28 nm) from brackish water. This approach differs from previous studies that used ZIF-8, whose channel size (0.34 nm) is much smaller than that of ZIF-302 (~0.79 nm).
(a) Nano-enhanced ZIF-302/CA membrane. (b) Effects of hydrophobic pores in ZIF-302 (c) a schematic description of ZIF-302/CA membrane fabrication.
The pore hydrophobicity enables fast water transport through weak interactions with water, and the substantial hydrothermal resistance of ZIF-302 plays a key role in improving the CA membranes36. We investigated the effects of different ZIF-302 fractions and sizes in CA membranes. The CA/ZIF-302 composite membranes were fabricated simply by dispersion in solution followed by a phase inversion process (Fig. 1c). The fabricated CA/ZIF-302 composite membranes were evaluated in laboratory-scale forward osmosis tests for water flux and reverse ion flux. Additional tests evaluated the effects of the ZIF-302 content on the viability of the membrane under harsh alginate-fouling conditions. Our study aims to establish ZIF-302 as an additive that improves the osmotic water transport of commercial CA membranes.
Characterization of ZIF-302 crystals and composite membranes
The water stability of MOFs has been an important issue in separation technologies under humid or wet conditions. Most types of MOFs are unstable in water because of interactions between their metal oxide clusters and water35,43. In contrast, ZIF-302 shows hydrothermal stability and maintains its crystallinity for 7 days in water at 100 °C36. In this study, the FTIR spectra and XRD patterns of ZIF-302 were analyzed before and after the ultrasonic process. Both showed similar peaks, indicating that the ZIF-302 nanocrystals maintained their chemical structure and crystallinity after ultrasonication (Fig. 2). As-synthesized ZIF-302 crystals, which have sizes of up to a few hundred micrometers, were split into nanocrystals after 4 h of ultrasonic processing. Using these ZIF-302 nanocrystals, CA/ZIF-302 composite membranes were fabricated with the material compositions listed in Table 1.
(a) FTIR spectra and (b) the XRD pattern of ZIF-302 crystals and (c) particle size distribution by intensity of ZIF-302 after ultrasonic process.
Table 1 Material composition of CA and CA/ZIF-302 membranes.
All membranes showed a dense, ion-excluding top layer formed by rapid evaporation of acetone and solvent outflow during the coagulation process15. This dense layer, known as the selective active layer of the CA membrane, conducts osmotic water transport through its nanopores. The pure water permeability and ion permeability are determined mainly by the active layer on the top surface, while the structural parameter, which governs internal concentration polarization (ICP) in the ODMP, is determined by the microstructure of the porous support layer. The structural parameter indicates the effective diffusion length through the membrane support layer44; a lower structural parameter is needed to achieve a higher water flux by reducing the ICP. In phase inversion membranes, the void microstructures fall into two cases: a sponge-like structure, which intensifies the ICP because of its high tortuosity, and finger-like macro-voids (FLVs), which reduce the ICP. In the bare CA membrane, the sponge-like structure occupied most of the support layer and only a few FLVs were present (Fig. 3a). In contrast, more numerous and longer vertical FLVs formed in the composite membrane support layers (Fig. 3b–d). The large ZIF particles found on the surface of the FLVs (Fig. 3b–d) appear to be agglomerates of non-incorporated ZIF-302 nanocrystals arising from their poor dispersion in water.
SEM image of (a–d) cross section and (e–h) top surface of membranes (a,e: CA, b,f: CZ5, c,g: CZ10, d,h: CZ15).
The bare CA membrane shows a uniform and dense top surface (Fig. 3e), whereas ZIF-302 particles are found on the top surfaces of the composite membranes (Fig. 3f–h). Figure 3f shows the top surface of the CZ5 membrane with many well-dispersed ZIF-302 nanocrystals and some microcrystals significantly larger than those indicated by the DLS analysis in Fig. 2c. Particles incorporated into the active layer improve water permeability by providing alternative flow paths for water molecules45. Because larger agglomerates were detected in the composite membranes with higher ZIF-302 concentrations, the number of incorporated ZIF particles is not proportional to the ZIF-302 content. Moreover, defects around the large ZIF-302 particles could severely degrade membrane performance by allowing direct ion permeation. Excessive ZIF-302 content spoiled the membrane integrity by hindering solution transport during the coagulation process.
Performance of ZIF-302/CA composite membranes
For ODMPs, membranes are required to achieve a higher water flux and a lower reverse ion flux. First, both fluxes were measured for each membrane in PRO mode with 1 M NaCl DS to investigate the influence of the ZIF-302 content on the membrane (Fig. 4). The CZ5 membrane, with 5 wt% ZIF-302, showed 57% and 54% enhancements in water flux compared with the CA membrane in 1 M NaCl and 1 M MgCl2 DS, respectively. The water flux of the CZ5 membrane is improved by the nanocrystal-incorporated active layer and by the microstructure of the support layer. As shown in Fig. 3e, individual crystals incorporated in the active layer could improve osmotic water transport by acting as alternative flow paths21,33. In the support layer of the CZ5 membrane, a larger number of FLVs formed than in the CA membrane, and they extended in the vertical direction. These FLVs improve the water flow by reducing the ICP, and long vertical FLVs are especially effective46,47. In contrast, the CZ10 and CZ15 membranes with higher ZIF-302 contents did not show further water flux enhancements; they only exhibited higher reverse ion fluxes. Furthermore, there was no osmotic water flux in the CZ15 membrane. Excessive loading of ZIF-302 nanocrystals might interfere with the formation of a uniform top layer and block mass transport. Previous work has reported that excessive nanomaterial content reduces the ion rejection ratio of the active layer21. An active layer with low ion exclusion intensifies the ICP and thus causes a low osmotic water flux. As shown in Fig. 3g,h, more than 5 wt% ZIF-302 results in larger numbers of aggregated crystals, and the CZ15 membrane has large defects in its top surface. Such undesired defect formation via aggregation has also been reported in previous studies with excessive contents of nanomaterials such as Ag nanoparticles48 and CNTs49. These aggregated crystals and large defects in the active layer decrease ion exclusion and osmotic water flux. Although CZ10 and CZ15 contain a large number of FLVs in the support layer, which reduces the ICP propensity, their active layers with low ion rejection cause a low osmotic water flux. The CZ15 membrane showed no active-layer functionality and little osmotic water flux because of the large agglomerations and defects, through which ions permeate freely.
Pure water flux and reverse ion flux of the membrane in PRO mode with (a) 1 M NaCl and (b) 1 M MgCl2 DS (FS: DI water).
There are two main modes for ODMPs, the PRO mode and the FO mode. Although the PRO mode gives a higher water flux than the FO mode, the FO mode has the advantage of a lower fouling propensity. Therefore, we investigated the osmotic water flux of the CA and CZ5 membranes using NaCl and MgCl2 DSs at different concentrations (Fig. 5). As a draw solute, MgCl2 showed a higher rejection ratio than NaCl (Fig. 4). In the ODMP, the higher ion rejection drove a higher water flux by reducing the ICP. As shown in Fig. 5, 1 M MgCl2 has the same osmotic pressure as 1.5 M NaCl, but the water flux was higher for 1 M MgCl2 DS than for 1.5 M NaCl in both the CA and CZ5 membranes. The pure water flux enhancement for the CZ5 membrane was higher with 1 M MgCl2 DS (48% in PRO mode, 47% in FO mode) than with 1.5 M NaCl DS (37% in PRO mode, 33% in FO mode).
Pure water flux of membranes with (a) NaCl and (b) MgCl2 draw solutions at different concentrations.
Table 2 shows the transport properties of the CA and CZ5 membranes calculated from the experimental results. The CA membrane showed a pure water permeability of 1.18 L m−2 h−1 bar−1 and reverse salt permeabilities of 10.7 and 6.4 L m−2 h−1. Compared with the bare CA membrane, the CZ5 membrane containing 5 wt% ZIF-302 showed a 47% higher pure water permeability. This improvement can be explained by the additional alternative water transport paths provided by the incorporated ZIF-302 nanocrystals. The CZ5 membrane also showed a structural parameter reduced by 35% (336 ± 28 μm) compared with the CA membrane (520 ± 28 μm). The reduced structural parameter may be attributed to the longer and more numerous FLVs in the CZ5 membrane, consistent with previous work47,50. The CZ5 membrane thus followed the previously reported tendency of nanoparticle additions to the casting solution to reduce the structural parameter by modifying the support-layer geometry21,47.
Table 2 Transport properties of membranes in terms of A, B, and S.
Alginate fouling test
Sodium alginate is a hydrophilic natural organic substance used to investigate membrane fouling propensity. Figure 6 shows the normalized water flux after the corresponding fraction of sodium alginate was added to the FS. The water flux was evaluated in the PRO mode using 1 M NaCl DS. The corresponding amount of sodium alginate was added to the feed solution, and the declining water flux was normalized to its initial value. To measure the recovery propensity, the sodium alginate feed solution was replaced with DI water after 300 min. After 300 min, the CZ5 membrane retained a higher fraction of its water flux (47% with 250 mg/L sodium alginate, 42% with 1000 mg/L sodium alginate) than the CA membrane (37% with 250 mg/L, 29% with 1000 mg/L). After the feed was changed to DI water for 90 min, the CZ5 membrane recovered 74% and 65% of its initial flux, while the CA membrane recovered 53% and 44%, in 250 mg/L and 1000 mg/L sodium alginate, respectively. The modified surface properties may reduce the propensity for alginate fouling by lowering the adhesion force. In addition, the microstructure of the CZ5 membrane, reflected by its reduced structural parameter, may contribute to fouling resistance by hindering sodium alginate accumulation in the porous support layer.
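As a minimal illustration (not using data from this study), the normalized flux and flux recovery ratio of the kind plotted in Fig. 6 can be computed as follows; all flux values below are assumed:

```python
# Minimal sketch with assumed flux values: normalizing the water flux during the
# 300-min fouling run to its initial value and computing the recovery ratio after
# the 90-min DI-water rinse.
import numpy as np

flux_during_fouling = np.array([12.0, 10.1, 8.9, 7.6, 6.3, 5.3])  # L m^-2 h^-1, assumed readings
flux_after_rinse = 8.6                                            # L m^-2 h^-1, assumed

normalized_flux = flux_during_fouling / flux_during_fouling[0]    # fraction of initial flux retained
recovery_ratio = flux_after_rinse / flux_during_fouling[0]        # fraction of initial flux recovered

print(normalized_flux.round(2), f"recovery = {recovery_ratio:.0%}")
```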
Normalized water flux of (a) the CA membrane and (b) the CZ5 membrane under alginate fouling conditions (PRO mode, DS: 1 M NaCl, FS: sodium alginate solution).
In this work, we applied ZIF-302 nanoparticles to fabricate a CA-based composite membrane with enhanced osmotic water flux. ZIF-302 provided water selectivity in brackish water, as evidenced by the improved osmotic water flux. A 5 wt% ZIF-302 loading enhanced the pure water flux while maintaining water/ion selectivity, whereas higher loadings spoiled the water/ion selectivity. The enhanced osmotic water flux was attributed to a higher pure water permeability as well as a reduced structural parameter. Furthermore, the CA/ZIF-302 membrane showed enhanced alginate fouling resistance, with a higher remaining water flux in the PRO mode. The CA/ZIF-302 composite membrane was easily fabricated using the phase inversion method used in commercial membrane fabrication. These results show that ZIF-302 can be used to enhance the performance and viability of CA membranes for ODMPs.
CA (39.8 wt% acetyl, Mw ~30,000 g/mol) and sodium alginate (99%) were obtained from Sigma-Aldrich (USA). Sodium chloride (NaCl, 99.9% purity), magnesium chloride (MgCl2, 99.9%) and n-methyl-2-pyrrolidone (NMP, 99.9%) were purchased from Alfa-Aesar. Acetone (99.9%) was purchased from Daejung Chemical Co. Ltd. (South Korea). ZIF crystal synthesis was conducted as presented in previous work36. 5(6)-methylbenzimidazole (mbImH) and N,N-dimethylformamide (DMF) were purchased from Sigma-Aldrich (USA), and anhydrous methanol was obtained from Daejung Chemical Co. Ltd. (South Korea). 2-methylimidazole (2-mImH) and zinc nitrate tetrahydrate were purchased from Merck Chemical Co. (Germany). All chemicals were used without further purification, and all experiments were performed in air.
Synthesis of ZIF nanoparticles and characterization
To synthesize ZIF-302, we used a mixed-linker method, as described in previous work36. 2-mImH (9.9 mg, 0.120 mmol), mbImH (21.4 mg, 0.140 mmol), and Zn(NO3)2·4H2O (29.8 mg, 0.114 mmol) were added to a mixture of DMF (3.5 mL) and water (0.5 mL) in a 10-mL tightly capped vial. The solution was heated at 120 °C for 3 days. The precipitated colorless particles were washed with 5 mL of DMF three times over one day.
After the DMF washing, as-synthesized ZIF samples were solvent exchanged 3 times per day with methanol at room temperature over 3 days. Collected samples were dried under vacuum at room temperature for 24 h, followed by heating at 180 °C for 2 h for activation.
As-synthesized ZIF-302 crystals were transferred into 20 ml glass vials with DI water and bath-sonicated for 10 min so that the ZIF particles were fully submerged in the water. After bath sonication, the mixture was probe-sonicated for 4 h at 110 W with a 3 s pulse and 1 s pause. The vials were placed on aluminum racks with a continuous supply of ice water to prevent overheating. The ZIF solution was transferred to disposable plastic micro-cuvettes and characterized using a dynamic light scattering instrument (Zetasizer Nano-ZS90, Malvern Instruments). For membrane preparation, the ZIF particles were dried in an oven for 2 days at 80 °C to remove water and moisture. The crystal structure of the ZIF particles was characterized by powder X-ray diffraction (PXRD, D8 ADVANCE, Bruker Corporation, USA), and the molecular bonding was characterized by FTIR spectroscopy (IFS-66/S, Bruker Corporation, USA).
Membrane fabrication and characterization
The ZIF-302/CA composite membranes were prepared by the phase inversion method. Following the material contents in Table 1, the CA/ZIF-302 mixture (18 wt%) was dissolved in NMP (77 wt%) and acetone (5 wt%) to obtain the casting solution. The ZIF-302 nanocrystals were added to the solvent and bath-sonicated for 30 min to achieve a homogeneous dispersion before the CA powder was dissolved. The fully dissolved casting solution was left under ambient conditions for 24 h to remove air bubbles and prevent defect formation. The degassed casting solution was poured onto a flat glass plate and spread with a casting knife set to a 60 μm depth. The cast solution was partially evaporated for 30 s under atmospheric conditions and coagulated in tap water. The coagulated membranes were stored in tap water overnight to extract residual organic solvents.
To characterize the morphology, each membrane was fractured in liquid nitrogen to obtain clean fracture surfaces. The fractured membranes were freeze-dried for 2 days to completely remove all water content. Cross sections were imaged by scanning electron microscopy (SEM, S-4000H, Hitachi, Japan) and top surfaces by field emission scanning electron microscopy (FESEM, JSM7500F, JEOL, Japan).
Lab scale membrane performance evaluation
The effective area of the membranes was 12.56 cm2, and the feed solution (FS) and draw solution (DS) were circulated by commercial hydraulic pumps at a constant flow rate of 150 ml/min. The FO mode used the active layer facing feed solution (AL-FS) configuration, with the FS and DS flowing over the active and support layers, respectively. The pressure retarded osmosis (PRO) mode used the active layer facing draw solution (AL-DS) configuration, with the FS and DS flowing over the support and active layers, respectively. The two modes were compared to investigate the ICP, whereby ions retained in the support layer reduce the osmotic pressure gradient across the active layer.
NaCl and MgCl2 solutions at different concentrations (0.5, 1, 1.5, 2.0 M) were used as DS, and deionized (DI) water was used as FS. For the alginate fouling experiments, the corresponding amounts of sodium alginate were dissolved in the FS and bath-sonicated for 12 h. The pure water flux (Jw) was determined from the permeate volume per unit time and effective area.
$${J}_{w}=\frac{{\rm{\Delta }}{V}_{draw}}{{A}_{m}\cdot t}$$
where ΔVdraw is the volume change of the DS, Am is the effective membrane area in the cross-flow cell, and t is time. The reverse ion flux (Js) was calculated from the initial and final FS ion concentrations. The FS ion concentration was determined from the electrical conductivity measured with a conductivity meter (PCS Testr 35, Eutech, Japan).
$${J}_{s}=\frac{{c}_{1}{V}_{1}-{c}_{0}{V}_{0}}{{A}_{m}\cdot t}$$
where c0 and c1 are the initial and final FS concentrations, respectively, and V0 and V1 are the initial and final FS volumes, respectively. Each evaluation was repeated 3 times for each membrane to obtain an average value. Each membrane sample was used 4 times at different draw solution concentrations (0.5, 1, 1.5, 2.0 M) with a single draw salt (NaCl or MgCl2) and a single mode (PRO or FO). In the fouling experiments, each membrane sample was used only once, starting from a clean state, and then disposed of.
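Both flux definitions reduce to simple arithmetic. The sketch below evaluates them for assumed values of the volume change, concentrations and run time (only the membrane area matches the 12.56 cm2 test cell above); it is an illustration, not a reproduction of the study's measurements:

```python
# Minimal sketch with assumed inputs: osmotic water flux J_w and reverse ion flux J_s
# from the two equations above.
A_m = 12.56e-4          # effective membrane area (m^2)
t = 1.0                 # duration of the run (h), assumed

dV_draw = 0.015         # volume gained by the draw solution (L), assumed
J_w = dV_draw / (A_m * t)                 # L m^-2 h^-1

c0, c1 = 0.0, 0.010     # initial / final feed-side salt concentrations (g/L), assumed
V0, V1 = 1.000, 0.985   # initial / final feed volumes (L), assumed
J_s = (c1 * V1 - c0 * V0) / (A_m * t)     # g m^-2 h^-1

print(f"J_w = {J_w:.1f} L m^-2 h^-1, J_s = {J_s:.1f} g m^-2 h^-1")
```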
The membrane performance parameters were characterized using an RO test. In the RO test, membranes loaded into a commercial filter holder (HP 4750, Sterlitech, Kent, WA) filtered a 10 mM NaCl aqueous solution under 5 bar of applied hydraulic pressure. From the results of the RO tests, the water permeability (A), ion rejection ratio (R) and ion permeability (B) were calculated using the following equations:
$$A=\frac{{\rm{\Delta }}V}{{A}_{m}\cdot {\rm{\Delta }}P\cdot {\rm{\Delta }}t}$$
$$R=(1-\frac{{C}_{p}}{{C}_{f}})$$
$$\frac{1-R}{R}=\frac{B}{A({\rm{\Delta }}P-{\rm{\Delta }}\pi )}$$
where Am is the effective area of the membrane, ΔP is the applied hydraulic pressure, Δt is time, Δπ is osmotic pressure across the membrane, and Cp and Cf are the ion concentrations of the permeated solution and feed solution, respectively.
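As an illustration of how A, R and B follow from these three relations, here is a short sketch with assumed RO-test readings (the 5 bar pressure and 10 mM feed match the protocol above; the permeate volume, permeate concentration, cell area and osmotic pressure difference are assumed):

```python
# Minimal sketch with assumed RO-test readings: water permeability A, rejection R and
# ion permeability B from the equations above.
dV = 0.010              # permeate volume (L), assumed
A_m = 14.6e-4           # effective membrane area of the RO cell (m^2), assumed
dP = 5.0                # applied hydraulic pressure (bar)
dt = 1.0                # filtration time (h), assumed
C_f, C_p = 10.0, 1.2    # feed / permeate NaCl concentrations (mM); permeate value assumed
d_pi = 0.4              # osmotic pressure difference across the membrane (bar), assumed

A = dV / (A_m * dP * dt)            # L m^-2 h^-1 bar^-1
R = 1 - C_p / C_f                   # ion rejection ratio
B = A * (dP - d_pi) * (1 - R) / R   # L m^-2 h^-1, rearranged from (1-R)/R = B / (A(dP - d_pi))

print(f"A = {A:.2f}, R = {R:.2f}, B = {B:.2f}")
```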
Based on the ICP theory introduced in previous work51, the structural parameter was calculated using the following equation:
$$S=\frac{D}{{J}_{v}}\,\mathrm{ln}\,\frac{A\cdot {\pi }_{draw}-{J}_{v}+B}{A\cdot {\pi }_{feed}+B}\,$$
where D is the diffusion coefficient of the solute and πdraw and πfeed are the osmotic pressures of the draw solution and the feed solution, respectively.
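The structural parameter calculation (PRO-mode form of the ICP model) can be sketched as follows. A and B are taken at the order of magnitude reported in Table 2, while the water flux, diffusion coefficient and draw-solution osmotic pressure are assumed values:

```python
# Minimal sketch with assumed inputs: structural parameter S from the equation above,
# with everything converted to SI units so that S comes out in meters.
import math

D = 1.48e-9                        # NaCl diffusion coefficient in water (m^2/s), assumed
J_v = 20.0 * 1e-3 / 3600           # water flux: 20 L m^-2 h^-1 (assumed) converted to m/s
A = 1.18 * 1e-3 / 3600 / 1e5       # water permeability: 1.18 L m^-2 h^-1 bar^-1 in m s^-1 Pa^-1
B = 6.4 * 1e-3 / 3600              # ion permeability: 6.4 L m^-2 h^-1 in m/s
pi_draw = 46e5                     # osmotic pressure of a 1 M NaCl draw solution (Pa), assumed
pi_feed = 0.0                      # DI-water feed

S = (D / J_v) * math.log((A * pi_draw - J_v + B) / (A * pi_feed + B))
print(f"S = {S * 1e6:.0f} um")     # a few hundred micrometers for these assumed inputs
```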
Goh, P. S., Matsuura, T., Ismail, A. F. & Hilal, N. Recent trends in membranes and membrane processes for desalination. Desalination 391, 43–60 (2016).
Lau, W. J. et al. A review on polyamide thin film nanocomposite (TFN) membranes: History, applications, challenges and approaches. Water Res. 80, 306–324 (2015).
Mohammad, A. W. et al. Nanofiltration membranes review: Recent advances and future prospects. Desalination 356, 226–254 (2015).
Tang, C. Y., Wang, Z. N., Petrinic, I., Fane, A. G. & Helix-Nielsen, C. Biomimetic aquaporin membranes coming of age. Desalination 368, 89–105 (2015).
Wang, L. K., Hung, Y.-T. & Shammas, N. K. Physicochemical treatment processes. Vol. 3 (Springer, 2005).
Al-Karaghouli, A. & Kazmerski, L. L. Energy consumption and water production cost of conventional and renewable-energy-powered desalination processes. Renew. Sust. Energ. Rev. 24, 343–356 (2013).
Cath, T. Y., Childress, A. E. & Elimelech, M. Forward osmosis: Principles, applications, and recent developments. J. Membr. Sci. 281, 70–87 (2006).
Mi, B. X. & Elimelech, M. Organic fouling of forward osmosis membranes: Fouling reversibility and cleaning without chemical reagents. J. Membr. Sci. 348, 337–345 (2010).
Bamaga, O. A., Yokochi, A., Zabara, B. & Babaqi, A. S. Hybrid FO/RO desalination system: Preliminary assessment of osmotic energy recovery and designs of new FO membrane module configurations. Desalination 268, 163–169 (2011).
Shaffer, D. L., Yip, N. Y., Gilron, J. & Elimelech, M. Seawater desalination for agriculture by integrated forward and reverse osmosis: Improved product water quality for potentially less energy. J. Membr. Sci. 415, 1–8 (2012).
Altaee, A., Mabrouk, A., Bourouni, K. & Palenzuela, P. Forward osmosis pretreatment of seawater to thermal desalination: High temperature FO-MSF/MED hybrid system. Desalination 339, 18–25 (2014).
Darwish, M., Hassan, A., Mabrouk, A. N., Abdulrahim, H. & Sharif, A. Viability of integrating forward osmosis (FO) as pretreatment for existing MSF desalting unit. Desalin. Water. Treat. 57, 14336–14346 (2016).
Post, J. W. et al. Salinity-gradient power: Evaluation of pressure-retarded osmosis and reverse electrodialysis. J. Membr. Sci. 288, 218–230 (2007).
Li, D., Yan, Y. S. & Wang, H. T. Recent advances in polymer and polymer composite membranes for reverse and forward osmosis processes. Prog. Polym. Sci. 61, 104–155 (2016).
Zhang, S. et al. Well-constructed cellulose acetate membranes for forward osmosis: Minimized internal concentration polarization with an ultra-thin selective layer. J. Membr. Sci. 360, 522–535 (2010).
Duarte, A. P., Cidade, M. T. & Bordado, J. C. Cellulose acetate reverse osmosis membranes: Optimization of the composition. J. Appl. Polym. Sci. 100, 4052–4058 (2006).
Duarte, A. P., Bordado, J. C. & Cidade, M. T. Cellulose acetate reverse osmosis membranes: Optimization of preparation parameters. J. Appl. Polym. Sci. 103, 134–139 (2007).
Su, J. C. et al. Effects of annealing on the microstructure and performance of cellulose acetate membranes for pressure-retarded osmosis processes. J. Membr. Sci. 364, 344–353 (2010).
Shen, L., Xiong, S. & Wang, Y. Graphene oxide incorporated thin-film composite membranes for forward osmosis applications. Chem. Eng. Sci. 143, 194–205 (2016).
Kim, H. J. et al. High-Performance Reverse Osmosis CNT/Polyamide Nanocomposite Membrane by Controlled Interfacial Interactions. ACS Appl. Mater. Interfaces 6, 2819–2829 (2014).
Zirehpour, A., Rahimpour, A., Khoshhal, S., Firouzjaei, M. D. & Ghoreyshi, A. A. The impact of MOF feasibility to improve the desalination performance and antifouling properties of FO membranes. RSC Adv. 6, 70174–70185 (2016).
Emadzadeh, D. et al. Synthesis, modification and optimization of titanate nanotubes-polyamide thin film nanocomposite (TFN) membrane for forward osmosis (FO) application. Chem. Eng. J. 281, 243–251 (2015).
Niksefat, N., Jahanshahi, M. & Rahimpour, A. The effect of SiO2 nanoparticles on morphology and performance of thin film composite membranes for forward osmosis application. Desalination 343, 140–146 (2014).
Jeazet, H. B. T., Staudt, C. & Janiak, C. Metal-organic frameworks in mixed-matrix membranes for gas separation. Dalton Trans. 41, 14003–14027 (2012).
Ren, H. Q., Jin, J. Y., Hu, J. & Liu, H. L. Affinity between Metal-Organic Frameworks and Polyimides in Asymmetric Mixed Matrix Membranes for Gas Separations. Ind. Eng. Chem. Res. 51, 10156–10164 (2012).
Ma, D. X. et al. A dual functional MOF as a luminescent sensor for quantitatively detecting the concentration of nitrobenzene and temperature. Chem. Commun. 49, 8964–8966 (2013).
Japip, S., Wang, H., Xiao, Y. C. & Chung, T. S. Highly permeable zeolitic imidazolate framework (ZIF)-71 nano-particles enhanced polyimide membranes for gas separation. J. Membr. Sci. 467, 162–174 (2014).
Cacho-Bailo, F. et al. High selectivity ZIF-93 hollow fiber membranes for gas separation. Chem. Commun. 51, 11283–11285 (2015).
Furukawa, H., Cordova, K. E., O'Keeffe, M. & Yaghi, O. M. The Chemistry and Applications of Metal-Organic Frameworks. Science 341, 974–986 (2013).
Gupta, K. M., Zhang, K. & Jiang, J. W. Water Desalination through Zeolitic Imidazolate Framework Membranes: Significant Role of Functional Groups. Langmuir 31, 13230–13237 (2015).
Duan, J. T. et al. High-performance polyamide thin-film-nanocomposite reverse osmosis membranes containing hydrophobic zeolitic imidazolate framework-8. J. Membr. Sci. 476, 303–310 (2015).
Liu, X. L., Demir, N. K., Wu, Z. T. & Li, K. Highly Water-Stable Zirconium Metal Organic Framework UiO-66 Membranes Supported on Alumina Hollow Fibers for Desalination. J. Am. Chem. Soc. 137, 6999–7002 (2015).
Ma, D. C., Peh, S. B., Han, G. & Chen, S. B. Thin-Film Nanocomposite (TFN) Membranes Incorporated with Super-Hydrophilic Metal- Organic Framework (MOF) UiO-66: Toward Enhancement of Water Flux and Salt Rejection. ACS Appl. Mater. Interfaces 9, 7523–7534 (2017).
Navarro, M. et al. Thin-Film Nanocomposite Membrane with the Minimum Amount of MOF by the Langmuir-Schaefer Technique for Nanofiltration. ACS Appl. Mater. Interfaces 10, 1278–1287 (2018).
Kusgens, P. et al. Characterization of metal-organic frameworks by water adsorption. Microporous Mesoporous Mater. 120, 325–330 (2009).
Nguyen, N. T. T. et al. Selective Capture of Carbon Dioxide under Humid Conditions by Hydrophobic Chabazite-Type Zeolitic Imidazolate Frameworks. Angew. Chem. Int. Edit. 53, 10645–10648 (2014).
Ge, D. & Lee, H. K. Water stability of zeolite imidazolate framework 8 and application to porous membrane-protected micro-solid-phase extraction of polycyclic aromatic hydrocarbons from environmental water samples. J. Chromatogr. A 1218, 8490–8495 (2011).
Huang, A. S., Dou, W. & Caro, J. Steam-Stable Zeolitic Imidazolate Framework ZIF-90 Membrane with Hydrogen Selectivity through Covalent Functionalization. J. Am. Chem. Soc. 132, 15562–15564 (2010).
Corry, B. Designing carbon nanotube membranes for efficient water desalination. J. Phys. Chem. B 112, 1427–1434 (2008).
Peter, C. & Hummer, G. Ion transport through membrane-spanning nanopores studied by molecular dynamics simulations and continuum electrostatics calculations. Biophys. J. 89, 2222–2234 (2005).
Yuan, J. W. et al. Novel ZIF-300 Mixed-Matrix Membranes for Efficient CO2 Capture. ACS Appl. Mater. Interfaces 9, 38575–38583 (2017).
Das, R., Ali, M. E., Abd Hamid, S. B., Ramakrishna, S. & Chowdhury, Z. Z. Carbon nanotube membranes for water purification: A bright future in water desalination. Desalination 336, 97–109 (2014).
Shih, Y. H. et al. Trypsin-Immobilized Metal-Organic Framework as a Biocatalyst In Proteomics Analysis. Chempluschem 77, 982–986 (2012).
Sivertsen, E., Holt, T., Thelin, W. & Brekke, G. Modelling mass transport in hollow fibre membranes used for pressure retarded osmosis. J. Membr. Sci. 417, 69–79 (2012).
Yin, J., Kim, E. S., Yang, J. & Deng, B. L. Fabrication of a novel thin-film nanocomposite (TFN) membrane containing MCM-41 silica nanoparticles (NPs) for water purification. J. Membr. Sci. 423, 238–246 (2012).
Tiraferri, A., Yip, N. Y., Phillip, W. A., Schiffman, J. D. & Elimelech, M. Relating performance of thin-film composite forward osmosis membranes to support layer formation and structure. J. Membr. Sci. 367, 340–352 (2011).
Emadzadeh, D., Lau, W. J., Matsuura, T., Rahbari-Sisakht, M. & Ismail, A. F. A novel thin film composite forward osmosis membrane prepared from PSf-TiO2 nanocomposite substrate for water desalination. Chem. Eng. J. 237, 70–80 (2014).
Liu, X. et al. Synthesis and characterization of novel antibacterial silver nanocomposite nanofiltration and forward osmosis membranes based on layer-by-layer assembly. Water Res. 47, 3081–3092 (2013).
Amini, M., Jahanshahi, M. & Rahimpour, A. Synthesis of novel thin film nanocomposite (TFN) forward osmosis membranes using functionalized multi-walled carbon nanotubes. J. Membr. Sci. 435, 233–241 (2013).
Xiao, P. P. et al. A sacrificial-layer approach to fabricate polysulfone support for forward osmosis thin-film composite membranes with reduced internal concentration polarisation. J. Membr. Sci. 481, 106–114 (2015).
Tiraferri, A., Yip, N. Y., Straub, A. P., Castrillon, S. R. V. & Elimelech, M. A. Method for the simultaneous determination of transport and structural parameters of forward osmosis membranes. J. Membr. Sci. 444, 523–538 (2013).
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1C1B2011750) and (NRF-2018R1C1B6004358).
School of Mechanical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
Teayeop Kim, Moon-ki Choi & Kyunghoon Kim
Department of Chemistry, Yonsei University, Seoul, 03722, Republic of Korea
Hyun S. Ahn
Department of Chemical Engineering, Pohang University of Science and Technology, Pohang, 790-784, Republic of Korea
Junsuk Rho
Department of Materials Science & Engineering, Kangwon National University, Chuncheon, 24341, Republic of Korea
Hyung Mo Jeong
K.K. and H.M.J. conceived of the project and designed the experiment. T.K., M.C. and H.M.J. performed the materials synthesis and carried out the experiment. T.K., M.C., H.S.A., J.R., H.M.J. and K.K. contributed to the materials characterization and analysis of data. T.K., H.M.J. and K.K. wrote the manuscript. All authors reviewed and confirmed the manuscript.
Correspondence to Junsuk Rho or Hyung Mo Jeong or Kyunghoon Kim.
Kim, T., Choi, Mk., Ahn, H.S. et al. Fabrication and characterization of zeolitic imidazolate framework-embedded cellulose acetate membranes for osmotically driven membrane process. Sci Rep 9, 5779 (2019). https://doi.org/10.1038/s41598-019-42235-5
Genetics Selection Evolution
Genome-wide association study and biological pathway analysis of the Eimeria maxima response in broilers
Edin Hamzić1,2,3,
Bart Buitenhuis3,
Frédéric Hérault4,
Rachel Hawken5,
Mitchel S. Abrahamsen5,
Bertrand Servin6,
Jean-Michel Elsen6,
Marie-Hélène Pinard-van der Laan1,2 &
Bertrand Bed'Hom1,2
Genetics Selection Evolution volume 47, Article number: 91 (2015)
Coccidiosis is the most common and costly disease in the poultry industry and is caused by protozoans of the Eimeria genus. The current control of coccidiosis, based on the use of anticoccidial drugs and vaccination, faces serious obstacles such as drug resistance and the high costs for the development of efficient vaccines, respectively. Therefore, the current control programs must be expanded with complementary approaches such as the use of genetics to improve the host response to Eimeria infections. Recently, we have performed a large-scale challenge study on Cobb500 broilers using E. maxima for which we investigated variability among animals in response to the challenge. As a follow-up to this challenge study, we performed a genome-wide association study (GWAS) to identify genomic regions underlying variability of the measured traits in the response to Eimeria maxima in broilers. Furthermore, we conducted a post-GWAS functional analysis to increase our biological understanding of the underlying response to Eimeria maxima challenge.
In total, we identified 22 single nucleotide polymorphisms (SNPs) with q value <0.1 distributed across five chromosomes. The highly significant SNPs were associated with body weight gain (three SNPs on GGA5, one SNP on GGA1 and one SNP on GGA3), plasma coloration measured as optical density at wavelengths in the range 465–510 nm (10 SNPs and all on GGA10) and the percentage of β2-globulin in blood plasma (15 SNPs on GGA1 and one SNP on GGA2). Biological pathways related to metabolic processes, cell proliferation, and primary innate immune processes were among the most frequent significantly enriched biological pathways. Furthermore, the network-based analysis produced two networks of high confidence, with one centered on large tumor suppressor kinase 1 (LATS1) and 2 (LATS2) and the second involving the myosin heavy chain 6 (MYH6).
We identified several strong candidate genes and genomic regions associated with traits measured in response to Eimeria maxima in broilers. Furthermore, the post-GWAS functional analysis indicates that biological pathways and networks involved in tissue proliferation and repair along with the primary innate immune response may play the most important role during the early stage of Eimeria maxima infection in broilers.
Coccidiosis is an animal parasitic disease caused by protozoans belonging to the Coccidia subclass. In the chicken, seven species of the genus Eimeria are infectious and cause coccidiosis: E. brunetti, E. necatrix, E. tenella, E. acervulina, E. maxima, E. mitis, and E. praecox. E. maxima is the most immunogenic of the seven species [1] and mostly infects the lining of the jejunum, causing mucoid enteritis [2]. Chicken coccidiosis is one of the most common and costly diseases currently affecting the poultry industry, with worldwide costs caused by production losses as well as by prevention and treatment actions that are estimated to exceed USD 3 billion per year [3, 4]. Current coccidiosis management is based on the use of anticoccidial drugs and vaccination [5]. The first anticoccidial drugs, sulfonamides, began to be used in the early 1940s, and then over the years, several different classes of drugs were developed and extensively used for the control of coccidiosis in broiler production [6]. However, the future use of anticoccidial drugs has caused concern due to the tendency of Eimeria species to rapidly develop resistance to drugs [7] as well as public dissatisfaction regarding the presence of chemical residues in food. In addition, the development of efficient multiple-species live vaccines is primarily limited by their high economic costs [5].
Due to these issues, the current control programs must be expanded using complementary approaches, including the application of genetics to improve the host response to Eimeria infection. Genetic approaches have been shown to have a small effect at each generation but a cumulative effect over time, and they are a long-term, cost-effective and environmentally friendly way of keeping livestock animals healthy [8]. The first studies indicating that genetic diversity may account for differences in susceptibility to coccidiosis were published in the 1940s and 1950s [9, 10]. Considerable variation in coccidiosis susceptibility has been observed between different chicken breeds [11]. The first successful divergent selection for resistance and susceptibility to acute cecal coccidiosis was performed by Johnson and Edgar [12]. Likewise, marked differences in the response to Eimeria infection were observed between inbred and outbred lines [13, 14]. The availability of genome-wide dense markers has enabled the identification of several quantitative trait loci (QTL) regions associated with resistance to Eimeria tenella and Eimeria maxima in experimental populations [15–17]. In addition, several genes located in previously detected, highly significant QTL regions [16] were also found to be differentially expressed in a follow-up transcriptome study [18].
Recently, we conducted a large-scale challenge study with Cobb500 broilers using E. maxima as the infective agent, and high variability was observed in the measured traits among challenged animals [19]. The measured traits included a wide range of physiological, immunological and disease resistance parameters. This study is a follow-up on the aforementioned large-scale challenge study in which we assessed the effects of the E. maxima challenge on the measured traits and also evaluated the level of variability of the measured traits. Taking advantage of the size and structure of this large-scale challenge study, we performed a genome-wide association study (GWAS) to identify genomic regions underlying the broiler response to E. maxima. In addition, based on the results of the GWAS, a post-GWAS functional analysis was performed to further understand the biology of the underlying response to the E. maxima challenge. The functional analysis comprised the following two independent approaches: a biological pathway analysis based on the publicly available Kyoto encyclopedia of genes and genomes (KEGG) pathways and a network-based analysis that was performed using the ingenuity pathway analysis (IPA) software.
All procedures were conducted under License No. A176661 from the Veterinary Services, Charente Maritime, France and in accordance with guidelines for the Care and Use of Animals in Agricultural Research and Teaching (French Agricultural Agency and Scientific Research Agency) (http://www.gouvernement.fr/en/culture-education-and-research).
Experimental population and phenotyping
In this study, we used phenotype data collected during a large-scale Eimeria maxima challenge study on Cobb500 broilers [19]. The challenge study was performed using 2024 Cobb500 broilers randomly distributed in 44 (challenge) and two (control) litter pens (3 m × 1 m) each containing 44 birds. For the GWAS, we used only data collected from the challenged animals, with control animals excluded from the analysis. Traits were measured at two levels: "global phenotyping", which was performed on all animals and "detailed phenotyping", which was performed on a subset of 176 animals. The experimental layout of the challenge is presented in Fig. 1.
Experimental layout of the large-scale challenge study. In total, 2024 1-day-old broilers were used in the experiment with 88 control animals and 1936 challenged animals. The challenge was performed on day 16 of the experiment by inoculating 50,000 Eimeria maxima oocysts. Traits were measured at two levels: "global phenotyping" and "detailed phenotyping". Global phenotyping was performed on all animals and included body weight (BW), plasma coloration (PC), body temperature (BT) and hematocrit (HEMA) levels. Detailed phenotyping was performed on the subset of 176 animals and included lesion score (LS), oocyst count (OC), plasma protein profiles (PPP), and blood composition (BC). Body weight gain (BWG) was calculated using the following formula (BW at day 22-BW at day 15)/BW at day 15. PC is optical density of blood plasma measured for 44 wavelengths (every 5 nm, 380–600 nm) on days 15 and 22. BT was measured at day 23, and HEMA was measured from blood samples obtained on days 16 and 23. BC, PPP, OC and LS were assessed using samples obtained on day 23. Please refer to Hamzic et al. [19] for a detailed description of the trait measurements
BWG, HEMA, BT and PC were measured on all animals. BWG was calculated as BWG = (BW at day 22-BW at day 15)/BW at day 15. We used relative BWG with respect to day 15 to focus solely on the BWG in response to the challenge. HEMA levels and PC were measured from the blood samples that were collected on days 16 and 23.
The subset of 176 animals was formed by selecting, from each pen of challenged animals, two birds among those with the lowest and two birds among those with the highest BWG, based on the within-pen ranking. This subset of animals with extreme BWG values was used for further detailed phenotyping, which included LS (duodenum and jejunum), OC, BC and PPP. PPP was assessed using capillary electrophoresis, which separates the protein components by size and electrical charge into the following fractions: prealbumins, albumins, α-1 globulins, α-2 globulins, α-3 globulins, β-1 globulins, β-2 globulins and γ globulins. BC included blood cell count (BCC) and red blood indices (RBI). A detailed description of the methodologies used for measuring all traits was reported by Hamzic et al. [19].
Genotype data
DNA was extracted from blood samples obtained from 1972 animals at day 16 of the experiment. Genotyping was performed using a 580 K Affymetrix® Axiom® HD genotyping array (Affymetrix, Santa Clara, USA) [20] at a commercial laboratory (GeneSeek, Lincoln, USA). Quality control of the genotype data was performed using PLINK 1.9 [21–23] and was based on the sample call rate (>98 %), SNP call rate (>98 %), minor allele frequency (>2 %) and removal of SNPs with extreme F-statistic values. A total of 138,568 SNPs out of 580,961 were removed during quality control (See Additional file 1: Table S1). SNP and sample call-rate thresholds were set at 98 %, and all SNPs and individuals with a call rate below this threshold were excluded from the analysis. We did not impute the residual missing genotypes. The distribution of F-statistics for the SNPs that remained after these steps was assessed, and all SNPs outside the adjusted interquartile range were excluded from the analysis. The adjusted interquartile range for the skewed distribution of F-statistics was calculated as suggested by Hubert and Vandervieren [24] (see the sketch below). Sex was assessed with PLINK 1.9 by analyzing SNPs on the Z and W chromosomes. Detected females were included in the analysis, with sex considered as a covariate (See Additional file 1: Table S1). Finally, after quality control, we obtained a dataset that included 443,587 SNPs distributed along 28 autosomal chromosomes, two linkage groups (LGE22C19W28_E50C23 and LGE64) and chromosome Z. The mean, median and standard deviation of base-pair distances between neighboring SNPs across chromosomes are in Table S2 (See Additional file 2: Table S2).
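A sketch of the adjusted interquartile-range filter of Hubert and Vandervieren [24] is shown below; the F-statistic values are simulated, and the medcouple implementation in the Python statsmodels package is used as a stand-in for whatever implementation was actually employed:

```python
# Minimal sketch: adjusted boxplot fences for a skewed distribution (Hubert & Vandervieren),
# applied to simulated per-SNP F-statistics. SNPs outside the fences would be excluded.
import numpy as np
from statsmodels.stats.stattools import medcouple

rng = np.random.default_rng(1)
f_stats = rng.gamma(shape=2.0, scale=0.05, size=2000)   # simulated, right-skewed values

q1, q3 = np.percentile(f_stats, [25, 75])
iqr = q3 - q1
mc = float(medcouple(f_stats))                          # robust measure of skewness

if mc >= 0:
    lower = q1 - 1.5 * np.exp(-4.0 * mc) * iqr
    upper = q3 + 1.5 * np.exp(3.0 * mc) * iqr
else:
    lower = q1 - 1.5 * np.exp(-3.0 * mc) * iqr
    upper = q3 + 1.5 * np.exp(4.0 * mc) * iqr

keep = (f_stats >= lower) & (f_stats <= upper)
print(f"fences: [{lower:.3f}, {upper:.3f}]; kept {keep.sum()} of {keep.size} SNPs")
```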
Genome-wide association analysis
The genome-wide association analyses were performed using the Genome-wide Efficient Mixed Model Association (GEMMA) algorithm [25] for the univariate linear mixed model (LMM). In this study, we used Cobb500 broilers, which are the final product of a four-way crossbreeding scheme, implying the presence of a strong population structure. The linear mixed model approach was used because this method has been proven to successfully account for population structure in association mapping studies [26]. In the context of the LMM, the correction for population structure is performed by creating a genomic relationship matrix (K) that models the structure present in the analyzed population by estimating the contribution of genetic relatedness to the phenotypic variance. K represents the pairwise relationships between individuals, and its structure is influenced by population structure, family structure and cryptic relatedness.
The model used for the analysis is presented in expression (1):
$$\mathbf{y} = \mathbf{W}\boldsymbol{\alpha} + \mathbf{x}\beta + \mathbf{u} + \boldsymbol{\epsilon},$$
where y is an n-vector of observations (or trait measurements) for n individuals, W is an n × c matrix of covariates which contains information about pen and sex including a column of 1s for the general mean; α is a c-vector of the corresponding coefficients, including the intercept, x is an n-vector of marker genotypes, β is the effect size of the marker, u is an n-vector of polygenic effects, and ε is an n-vector of residual effects.
The polygenic effects \(\mathbf{u}\) follow a multivariate normal distribution, \(\mathbf{u} \sim \mathrm{MVN}_n(0, \lambda \tau^{-1} \mathbf{K})\), where λ is the ratio between the genetic variance (more precisely, the variance explained by the SNPs) and the residual variance, τ−1 is the residual variance, and K is a known n × n genomic relationship matrix calculated from the n × p genotype matrix X. Expression (2) was used to calculate K, where \(x_i\) is the ith column of X containing the genotypes of the ith SNP, \(\overline{x_i}\) is its sample mean, and \(1_n\) is an n × 1 vector of 1s. The residual effects \(\boldsymbol{\epsilon}\) form a vector of length n following the multivariate normal distribution \(\boldsymbol{\epsilon} \sim \mathrm{MVN}_n(0, \tau^{-1} \mathbf{I}_n)\), where \(\mathbf{I}_n\) is the n × n identity matrix.
$$\mathbf{K} = \frac{1}{p}\sum_{i = 1}^{p} \left( x_{i} - 1_{n} \overline{x_{i}} \right)\left( x_{i} - 1_{n} \overline{x_{i}} \right)^{T}.$$
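Expression (2) is the centered cross-product of the genotype matrix. A minimal sketch on a simulated 0/1/2 genotype matrix is given below; GEMMA computes K internally, so this only illustrates the formula:

```python
# Minimal sketch: genomic relationship matrix K from expression (2), using a simulated
# n x p genotype matrix X coded as allele counts (0/1/2).
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 1000                               # toy sizes: 100 individuals, 1000 SNPs
X = rng.integers(0, 3, size=(n, p)).astype(float)

X_centered = X - X.mean(axis=0)                # x_i - 1_n * mean(x_i), column by column
K = (X_centered @ X_centered.T) / p            # (1/p) * sum_i (x_i - mean)(x_i - mean)^T

print(K.shape, np.allclose(K, K.T))            # n x n and symmetric
```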
To correct for multiple hypothesis testing, the false-discovery rate (FDR) was calculated for each SNP from the distribution of p-values: SNPs with an FDR less than 0.1 were considered significant [27]. The results are shown in Manhattan plots constructed by the qqman R package [28].
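The q values used here correspond to an FDR adjustment of the per-SNP p values; since the exact estimator of [27] is not detailed in this section, the sketch below simply applies the common Benjamini-Hochberg procedure, via statsmodels, to simulated p values:

```python
# Minimal sketch: FDR adjustment of simulated GWAS p-values and a count of SNPs
# passing q < 0.1 (Benjamini-Hochberg used as one common FDR procedure).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
p_values = np.concatenate([rng.uniform(size=10000),        # null SNPs
                           rng.uniform(0, 1e-7, size=5)])  # a few strongly associated SNPs

reject, q_values, _, _ = multipletests(p_values, alpha=0.1, method="fdr_bh")
print(f"{reject.sum()} SNPs with q < 0.1; smallest q = {q_values.min():.2e}")
```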
Biological pathway analysis
In general, biological pathway analysis tests the association between a curated set of genes (a biological pathway) and a trait of interest. This approach tests for a cumulative effect across many genes, which enables the detection of effects at the biological pathway level. The biological pathway analysis was conducted as described in the following paragraphs and was divided into two main steps: (1) assigning SNPs to their corresponding annotated genes and assigning these genes to their corresponding biological pathways (KEGG), and (2) statistical testing of each biological pathway (a set of genes with their assigned list of SNPs) based on the test statistics obtained in the genome-wide association analysis.
Assigning SNPs to genes and biological pathways
The SNPs used in the GWAS were mapped to the ICGSC Gallus_gallus-4.0 chicken genome assembly (GCA_000002315.2) using the NCBI2R R package [29]. A list of annotated genes corresponding to the SNPs used in the genome-wide analysis was retrieved. For this study, the publicly available biological KEGG pathways were downloaded using the Bioconductor KEGGREST package [30], and the curated genes were assigned to specific pathways. The biological KEGG pathways represent a collection of biological pathway maps that integrate many units, including genes, proteins, RNAs, chemical compounds, and chemical reactions, as well as disease genes and drug targets, which are stored as individual entries in other KEGG databases. The chicken genome KEGG PATHWAY contains 162 pathways associated with 4342 genes. The list of genes with assigned SNPs from the GWAS and the list of genes with assigned biological pathways (KEGG) were combined, ultimately resulting in a list of 52,204 SNPs assigned to 162 biological KEGG pathways (See Additional file 3: Table S3).
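This assignment step is essentially a two-table join between a SNP-to-gene map and a gene-to-pathway map. The study used the NCBI2R and KEGGREST R packages; the sketch below only illustrates the join, on made-up SNP, gene and pathway labels:

```python
# Minimal sketch on toy tables: joining a SNP-to-gene map with a gene-to-KEGG-pathway map
# to obtain the set of SNPs belonging to each pathway. All identifiers are hypothetical.
import pandas as pd

snp_to_gene = pd.DataFrame({
    "snp":  ["AX-001", "AX-002", "AX-003", "AX-004", "AX-005"],
    "gene": ["geneA", "geneA", "geneB", "geneC", "geneD"],   # geneD has no pathway annotation
})
gene_to_pathway = pd.DataFrame({
    "gene":    ["geneA", "geneB", "geneC"],
    "pathway": ["pathway1", "pathway1", "pathway2"],
})

snp_to_pathway = snp_to_gene.merge(gene_to_pathway, on="gene")   # unannotated genes drop out
pathway_sets = snp_to_pathway.groupby("pathway")["snp"].apply(list)
print(pathway_sets)
```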
Statistical analysis for biological pathway analysis
As described in the previous step, the SNPs that were mapped to genes were further assigned to the corresponding biological pathways. For each biological pathway characterized by a set of SNPs, an appropriate summary statistic was constructed as described in detail by Jensen et al. [31]. The summary statistic was based on the negative log-transformed p values from the association of individual SNPs with the traits. By summing these negative log-transformed p values, we mimicked a genetic model that captures variants with small to moderate effects [32, 33].
The observed summary statistic for a particular set of SNPs was compared with an empirical distribution of summary statistics for random samples of SNP sets of the same size, using a permutation approach. Because the distribution of summary statistics is affected by the correlation structure of closely linked SNPs, as a consequence of linkage disequilibrium, the following procedure was used for statistical testing.
The vector of observed SNPs with corresponding test statistics was ordered according to the physical position on the genome. SNPs were then mapped to genes and consequently to a biological pathway as described in the first step. The elements in this vector were numbered 1,2,…,N, and the permutation was performed in two steps. The first step included randomly picking an element (ej) from this vector. This jth test statistic was the first element in the permuted vector, and the remaining elements were ordered ej+1, ej+2, …, eN, e1, e2, …, ej-1 according to their original numbering. Therefore, all elements from the original vector were then shifted to a new position starting with ej; however, the gene position was kept fixed with respect to the original one. The second step involved the computation of summary statistics for each set of SNPs based on the original position of the set of SNPs in the original vector of test statistics. The connections between SNPs and genes were broken while keeping the correlation structure among the test statistics. Steps 1 and 2 were repeated 1000 times, and from this empirical distribution of summary test statistics for each set of SNPs, a p value was obtained. This empirical p value corresponds to a one-sided test of the proportion of randomly sampled summary statistics that were larger than the observed summary statistic with the arbitrary significance level set to 0.01.
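A compact sketch of this circular permutation on simulated data is given below. The summary statistic is taken as the sum of −log10 p values over a pathway's SNPs, which is one reading of the "negative log-transformed p values" described above:

```python
# Minimal sketch: circular-permutation test for one pathway. The genome-ordered vector of
# test statistics is rotated so that LD (local correlation) is preserved, while the
# SNP-to-pathway assignment keeps its original positions. All data are simulated.
import numpy as np

rng = np.random.default_rng(11)
n_snps = 20000
neg_log_p = -np.log10(rng.uniform(size=n_snps))                  # genome-ordered -log10(p)
pathway_positions = rng.choice(n_snps, size=150, replace=False)  # SNP positions of one pathway

observed = neg_log_p[pathway_positions].sum()                    # observed summary statistic

n_perm = 1000
null_stats = np.empty(n_perm)
for k in range(n_perm):
    shift = rng.integers(1, n_snps)          # step 1: pick a random starting element e_j
    rotated = np.roll(neg_log_p, -shift)     # step 2: shift the statistics, positions stay fixed
    null_stats[k] = rotated[pathway_positions].sum()

p_empirical = (null_stats >= observed).mean()                    # one-sided empirical p value
print(f"observed = {observed:.1f}, empirical p = {p_empirical:.3f}")
```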
Network-based analysis
The network modeling was performed using the ingenuity pathway analysis (IPA) tool as a complementary approach to the KEGG biological pathway analysis. The ingenuity pathways database is the manually curated database of previously published relationships on human and mouse biology [34]. The gene input list included genes that contained SNPs with p values below the inferred genome-wide threshold (P < 10−4) for all traits on which GWAS was performed (See Additional file 4: Table S4). In this manner, we restricted the list of putative candidate genes and exploited their documented interactions in biological pathways related to the study. The lists of SNPs (P < 10−4) were assigned to their respective genes using the biomaRt R package [35], producing the gene list for each trait used in the GWAS. The obtained Human Genome Gene Nomenclature Committee (HGNC) identifiers were mapped onto networks that are available in the Ingenuity Pathway repository. In the case of PC, we merged all gene lists obtained for individual wavelengths into one collective gene list. Furthermore, we also created a global list of genes by combining all gene lists together. In summary, we performed IPA using 27 gene lists of which 25 gene lists corresponded to each individual trait, a gene list for PC obtained by combining gene lists for all wavelengths and the global list of genes.
In this study, we analyzed the data collected during the large-scale challenge study on Cobb500 broilers with E. maxima as the infective agent [19]. The experimental scheme of the large-scale challenge study is illustrated in Fig. 1. The traits were measured at two levels: "global phenotyping" performed on all 1936 challenged animals and "detailed phenotyping" performed on a subset of 176 animals (Fig. 1). To form the subset of 176 animals, two animals among those with the lowest and two animals among those with the highest body weight gain (BWG) were selected from each pen containing challenged animals.
The traits measured on all challenged animals included BWG, plasma coloration (PC) measured as optical density (OD) of blood plasma in the 380–600 nm range, body temperature (BT) and hematocrit level (HEMA). In addition to these traits, animals from the subset of 176 were phenotyped for lesion score (LS), oocyst count (OC), blood composition (BC) and plasma protein profiles (PPP) (See Additional file 5: Table S5). BC included two sets of measurements: blood cell count (BCC) and red blood indices (RBI). BCC included erythrocyte, leukocyte, lymphocyte, heterophil, and thrombocyte counts as well as the percentage of lymphocytes and heterophils of the total number of leukocytes. RBI included hemoglobin content, mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH) and mean corpuscular hemoglobin concentration (MCHC). PPP were assessed using protein capillary zone electrophoresis, and the profiles included the following fractions: prealbumin, albumin, α1-globulin, α2-globulin, α3-globulin, β1-globulin, β2-globulin, and γ-globulin. Detailed results from the large-scale challenge study were reported by Hamzic et al. [19]. Descriptive statistics of the traits measured on the challenged animals are in Table S5 (See Additional file 5: Table S5).
Genome-wide association study (GWAS)
In total, 22 SNPs were significantly associated (q value <0.1) with the measured traits and were distributed over five chicken chromosomes (See Additional file 6: Table S6). The most significant SNPs were associated with BWG, PC for wavelengths from 465 to 510 nm and the percentage of β2-globulin in blood plasma of one of the PPP fractions.
The GWAS identified five SNPs that were significantly associated with BWG (Fig. 2), and the quantile–quantile (Q–Q) plot for BWG showed a large deviation from the distribution under the null hypothesis, indicating that strong associations were observed (See Additional file 7: Figure S1). The five observed SNPs are located on GGA1, 3 and 5. These SNPs explain 7.5 % of the total variance for BWG between days 15 and 23 of the challenge study. The significantly associated genomic region on GGA5 was located in the upstream region of the THBS1 gene, which encodes the multi-domain matrix glycoprotein termed thrombospondin-1. Similarly, the significantly associated genomic regions on GGA1 and GGA3 are in the vicinity of MGAT4C and KCNK3, respectively.
Manhattan plot for body weight gain. Manhattan plot of genome-wide −log10 (p values) for body weight gain. P values were adjusted using the false-discovery rate (FDR) at a significance level of q value <0.1. SNPs labeled in green have q value <0.1
For PC, the AX-75604378 SNP located on GGA10 was significantly associated with PC values measured for wavelengths ranging from 465 to 510 nm (Fig. 3). Figure 3 shows the Manhattan plot and Q–Q plot for PC measured at 485 nm. The Q–Q plot shows a strong deviation from the distribution under the null hypothesis, which indicates the presence of a strong association between the SNP and PC values (See Additional file 8: Figure S2). The associated SNP explains between 2.1 and 2.3 % of the total variance depending on the measured wavelength (See Additional file 6: Table S6). The AX-75604378 SNP is a non-synonymous polymorphism present in the MAN2C1 gene.
Manhattan plot for plasma coloration (485 nm). Manhattan plot of genome-wide −log10 (p values) for plasma coloration (485 nm). P values were adjusted using the false-discovery rate (FDR) at a significance level of q value <0.1. SNPs labeled in green have q value <0.1
Among the traits measured in the subset of 176 animals, the genomic region located between 52.31 and 52.63 Mb on GGA1 and the SNP AX-76165289 that mapped to GGA2 were significantly associated with the percentage of β2-globulin in the blood plasma (Fig. 4). The genomic region on GGA1 contains 16 SNPs that were associated with the percentage of β2-globulin in blood plasma. Individually, the SNPs on GGA1 explain approximately 13.4 % of the total variance, and SNP AX-76165289 explains 14.2 % of the total variance. The Q–Q plot for β2-globulin also shows that strong associations were observed (See Additional file 9: Figure S3). SNP AX-76165289 is located in the FHOD3 gene on GGA2, while five SNPs on GGA1 (between 52.31 and 52.63 Mb) are located in the LARGE gene (See Additional file 6: Table S6).
Manhattan plot for the percentage of β-globulin. Manhattan plot of genome-wide −log10 (p values) for the percentage of β-globulin. P values were adjusted using the false-discovery rate (FDR) at a significance level of q value <0.1. SNPs labeled in green have q value <0.1
To conduct the biological pathway analysis, we used all SNPs that were included in the GWAS. These SNPs were mapped to their corresponding genes using the latest genome assembly, and this list of SNPs and their corresponding genes was combined with the publicly available KEGG biological pathway database. In total, 52,204 SNPs included in the GWAS were assigned to 4342 genes belonging to 162 biological pathways in the chicken KEGG PATHWAY repository (see "Methods"). The list of KEGG pathways that were significantly (P < 0.05) enriched with genes in genomic regions associated with the measured traits is in Table S7 (See Additional file 10: Table S7). The distributions of the most frequent significant biological pathways differed considerably depending on whether all measured traits were considered or all measured traits except the PC measurements (See Additional file 11: Figure S4). Several biological pathways were characteristic of the genomic regions associated with the PC measurements (See Additional file 11: Figure S4); because PC was measured at multiple wavelengths, these pathways appeared more frequently, which was not the case when the results were summarized without the PC measurements (See Additional file 11: Figure S4). For example, the phenylalanine, tyrosine and tryptophan biosynthesis pathway was one of the most frequent pathways when only the PC measurement results were considered and was not common for the other measured traits [(See Additional file 11: Figure S4) and Fig. 5]. Therefore, we present the results for PC (Fig. 5) and for all other measured traits (Fig. 6) separately.
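The pathway catalogue and the gene-to-pathway links needed for such an analysis can be retrieved with the KEGGREST package cited in the methods. The sketch below shows one way to build the gene sets for chicken (KEGG organism code "gga"); all downstream object names are hypothetical.

```r
library(KEGGREST)

pathways  <- keggList("pathway", "gga")   # named vector: pathway id -> description
gene2path <- keggLink("pathway", "gga")   # named vector: gga gene id -> pathway id

## one character vector of gene identifiers per pathway
gene_sets <- split(names(gene2path), unname(gene2path))
length(gene_sets)   # on the order of the ~160 pathways referred to in the text
```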
Distribution of the biological pathways that were significantly enriched with genes in genomic regions associated with plasma coloration (PC). Distribution of the biological pathways that were significantly (P < 0.05) enriched with genes in genomic regions associated with plasma coloration measured at all 45 wavelengths. Coloration of blood plasma was measured as the level of absorbance at 45 wavelengths in the range from 380 to 600 nm. This figure illustrates significant KEGG pathways for PC measurements in the range from 380 to 600 nm
Distribution of the biological pathways that were significantly enriched with genes in genomic regions associated with all measured traits except plasma coloration (PC). Distribution of the biological pathways that were significantly (P < 0.05) enriched with genes in genomic regions associated with all measured traits except plasma coloration (PC). This figure summarizes significant biological pathways that occurred more than three times for the measured traits
For PC measured from 380 to 600 nm, we detected 20 significant biological pathways (See Additional file 10: Table S7). The most frequent significant biological pathways for PC across all measured wavelengths were: phenylalanine, tyrosine and tryptophan biosynthesis, endocytosis, purine metabolism, glycosphingolipid biosynthesis—lacto and neolacto series, ether lipid metabolism, caffeine metabolism, spliceosome, and the p53 signaling pathway (Fig. 5). Each of the mentioned biological pathways occurred more than 10 times as significant when considering all PC measurements (See Additional file 10: Table S7).
In total, seven biological pathways were significantly associated with BWG (See Additional file 10: Table S7). Glycosaminoglycan biosynthesis—heparan sulfate/heparin, glycerolipid metabolism, primary bile acid biosynthesis, melanogenesis, proteasome and regulation of autophagy were the most frequent significantly associated pathways across all measured traits except PC (Fig. 6). In total, 13 and 12 biological pathways were significantly associated with BT and HEMA, respectively (See Additional file 10: Table S7). For BT, the strongest associations were observed with metabolic pathways (P = 0.002) and the ErbB signaling pathway (P = 0.003). For HEMA, the strongest associations were observed with vascular smooth muscle contraction (P < 0.001) and pantothenate and CoA biosynthesis (P = 0.004) (See Additional file 10: Table S7). For duodenal and jejunal LS, five and four biological pathways were significant, respectively (See Additional file 10: Table S7). Among these, vascular smooth muscle contraction and porphyrin and chlorophyll metabolism were significantly enriched for both duodenal and jejunal LS (See Additional file 10: Table S7).
Considering the long list of measured traits for BC and PPP, we summarize the most interesting results. Regarding the percentage of lymphocytes and heterophils, we observed 15 and 13 significant biological pathways, respectively, with 13 pathways being common for both traits (Fig. 6). Furthermore, the percentages of lymphocytes and heterophils were associated with the largest number of significant biological pathways (Fig. 6). Regarding the number of thrombocytes and erythrocytes, 13 and 10 significant biological pathways were observed, respectively (See Additional file 10: Table S7), and these two traits had four significant pathways in common (Fig. 6). Regarding PPP, both the percentages of α1-globulin and α2-globulin were associated with 10 significant biological pathways (See Additional file 10: Table S7). Percentages of prealbumin and albumin shared the following significant pathways: amino sugar and nucleotide sugar metabolism, glycosaminoglycan biosynthesis—chondroitin sulfate/dermatan sulfate and glycosylphosphatidylinositol (GPI)-anchor biosynthesis (Fig. 6).
The three most frequent significant pathways when considering all measured traits, including PC, were phenylalanine, tyrosine and tryptophan biosynthesis, endocytosis and purine metabolism (See Additional file 11: Figure S4). The most frequent significant pathways when considering all traits except PC included purine metabolism, glycosaminoglycan biosynthesis—heparan sulfate/heparin, the pentose phosphate pathway, the peroxisome pathway and vascular smooth muscle contraction (Fig. 6; see Additional file 11: Figure S4).
The network-based analysis was performed using the ingenuity pathway analysis (IPA) tool. The gene lists used as input files for the IPA tool were obtained by extracting the genes that contained SNPs reaching the inferred genome-wide significance level (P < 10−4) for each trait. The IPA tool produced 76 putative networks from the 27 gene lists. The majority of networks were related to general molecular and cellular processes such as cell-to-cell signaling, nucleic acid metabolism and replication, and cell cycle, while several networks involved more specific categories such as gastrointestinal diseases and organismal injury and abnormalities. In the context of the large-scale challenge study, the most informative networks were the network of interacting molecules grouped around large tumor suppressor kinases 1 (LATS1) and 2 (LATS2), with an IPA score of 41 (Fig. 7), and a second network centered on myosin heavy chain 6 (MYH6), with an IPA score of 51 (Fig. 8).
Network of interactions between GWAS candidate genes using ingenuity pathway analysis (IPA). The network shows molecular interactions between the products of the candidate genes enriched for significantly associated SNPs (P < 10−4) with large tumor suppressor kinases 1 (LATS1) and 2 (LATS2) as the most interesting candidates. Relationships were determined using information contained in the IPA repository. The blue label indicates the genes that were enriched for significantly associated SNPs. The network was obtained using the global gene list determined by combining gene lists for all traits
Network of interactions between GWAS candidate genes using ingenuity pathway analysis (IPA). The network shows molecular interactions between the products of the candidate genes enriched for significantly associated SNPs (P < 10−4) with myosin-6 (MYH6) as the most interesting candidate. Relationships were determined using information contained in the IPA repository. The blue label indicates the genes that were enriched for significantly associated SNPs. The network was obtained using the global gene list determined by combining gene lists for all traits
Identifying genomic regions that underlie the response to Eimeria maxima in broilers allows us to understand the associated molecular mechanisms and provides candidate genes and genomic regions that can be used in breeding for improved resistance to coccidiosis. The host response to Eimeria is a complex trait controlled by a wide range of biological processes, which are in turn controlled by many genes with a small effect and a small number of genes with a moderate or large effect. Several QTL regions that are significantly associated with traits measured in response to Eimeria challenge were detected in F2 crosses obtained from experimental lines with different degrees of susceptibility to coccidiosis [16, 17, 36]. Similar approaches have been used to identify genomic regions associated with innate and adaptive immunity in laying hens [37], as well as with survival rate, with the aim of improving general robustness, especially in laying hens [38]. In addition, information regarding the genomic regions that are strongly associated with desirable traits can be incorporated in commercial poultry breeding programs. Regarding broilers, haplotypes associated with desirable traits identified in the final product of the four-way crossbreeding scheme can be traced back in pedigreed populations, for which selection would be performed within the pure lines. This approach may potentially improve general innate immunity as well as resistance to specific pathogens such as Eimeria species.
However, a more detailed understanding of the genetic mechanisms that control the response to Eimeria infection, as in the case of all other complex traits, requires a sufficiently large sample size, dense SNP coverage that can exploit the linkage disequilibrium and informative phenotypes [39]. Therefore, we performed the large-scale challenge study on 1936 commercial Cobb500 animals, which were genotyped using the 580K Affymetrix® Axiom® high-density genotyping array, providing considerable statistical power for GWAS for traits measured on all 1936 animals. Taking advantage of the size and structure of the challenge experiment [19], we conducted a GWAS to obtain more information on the underlying genetic determinism of the response to E. maxima in broilers. In addition, a post-GWAS functional analysis was performed to further understand the biology of the response to E. maxima in broilers.
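The association analyses cited here rely on mixed-model methods (see the PLINK and multivariate mixed-model references in the bibliography). As a generic illustration, and not necessarily the authors' exact specification, a single-SNP mixed model of the type implied takes the form

\[ \mathbf{y} = W\boldsymbol{\alpha} + \mathbf{x}\beta + \mathbf{u} + \boldsymbol{\varepsilon}, \qquad \mathbf{u} \sim N\!\left(\mathbf{0},\, \sigma^{2}_{g}K\right), \quad \boldsymbol{\varepsilon} \sim N\!\left(\mathbf{0},\, \sigma^{2}_{e}I\right), \]

where \(\mathbf{y}\) is the vector of phenotypes, \(W\boldsymbol{\alpha}\) the fixed covariates, \(\mathbf{x}\) the vector of genotypes at the tested SNP with effect \(\beta\), \(\mathbf{u}\) a polygenic effect with genomic relationship matrix \(K\), and \(\boldsymbol{\varepsilon}\) the residual; the null hypothesis \(\beta = 0\) is tested for each SNP.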
Previous QTL studies have reported that several genomic regions are associated with BWG in response to coccidiosis [15, 20]. Comparing our results with these previous studies, we observed only one highly significant SNP that overlaps with a QTL on GGA3 (between 263 and 282 cM; 98.1 and 107.0 Mb) detected by Pinard-van der Laan et al. [16] and Bacciu et al. [17]. However, these differences were not completely unexpected, considering that the previously conducted challenge study was performed with animals of a different age that originated from very different experimental lines.
The MGAT4C gene is close to the significantly associated genomic region on GGA1. This gene encodes a glycosyltransferase that is involved in the transfer of N-acetylglucosamine (GlcNAc) to the core mannose residues of N-linked glycans, also known as N-linked glycosylation. N-linked glycosylation has been shown to be essential to HIV-1 pathogenesis [40]. Furthermore, there is a wide range of well-described disorders that affect primarily N-glycan assembly, with several including gastrointestinal disorders [41]. In addition, a study on humans showed a strong association between the apolipoprotein B level and SNP variants in the MGAT4C gene [42].
The KCNK3 gene is the closest gene to the GGA3 genomic region, which was significantly associated with BWG. The KCNK3 gene encodes a member of the potassium channel superfamily, which has been associated with pulmonary hypertension in humans [43]. Broilers are known to suffer from cardiovascular disorders [44, 45], which may be related to the KCNK3 gene. Furthermore, this gene may be associated with the general robustness of broilers and their ability to cope with stress induced by the E. maxima challenge.
The THBS1 gene is located upstream of the significantly associated region on GGA5 [between 28.95 and 29.11 Mb, (See Additional file 6: Table S6)]. In addition, we also analyzed putative GENSCAN gene predictions that are supported by a few spliced ESTs (See Additional file 12: Figure S5); however, no similarity was observed with known proteins, which indicates that THBS1 is the better candidate. Human THBS1 is involved in the regulation of angiogenesis and tumorigenesis in healthy tissues and in cell adhesion [46, 47]. Furthermore, in the study by Heams et al. [18], THBS1 was among the top 10 of 1473 significantly differentially expressed genes in the caecum between Fayoumi (resistant) and Leghorn (susceptible) animals infected with E. tenella. Moreover, a porcine transcriptome study showed that THBS1 is strongly repressed upon in vitro stimulation of porcine peripheral blood mononuclear cells (PBMC) with tetradecanoyl phorbol acetate (TPA)/ionomycin [48]. Finally, a recent study showed that human THBS1 plays an important role in the innate immune response during respiratory bacterial infection [49], which may be of interest regarding Eimeria infection. Based on these observations, THBS1 is a good candidate gene for further functional studies.
Regarding PC, our GWAS results show little overlap with previous QTL mapping studies [16]. We observed the strongest signal with the SNP in the MAN2C1 gene (See Additional file 6: Table S6) in association with PC measured in the range from 465 to 510 nm. The associated SNP is described as a non-synonymous polymorphism (http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=314637018). The MAN2C1 gene encodes the α-mannosidase class 2C enzyme, which is one of the key enzymes involved in N-glycan degradation [50]. Recent studies have indicated that MAN2C1 expression is crucial for maintaining efficient protein N-glycosylation [51] as well as cell–cell adhesion [52]. Glycans are important molecules in numerous essential biological processes, including cell adhesion, molecular trafficking and clearance, receptor activation, signal transduction, and endocytosis [53]. In turn, changes in PC reflect the status of intestinal absorption and changes in the production of protein carriers and their antioxidant effect in response to Eimeria infection [54], and therefore reflect several of the processes that MAN2C1 may impact. Furthermore, several glyco-related pathways were significantly enriched in the biological pathway analysis of the PC measurements (Fig. 5).
Regarding the percentage of β-globulins, we identified FHOD3 and LARGE as potential candidate genes (See Additional file 6: Table S6). FHOD3 encodes a protein that is a member of a formin subfamily and is involved in the regulation of cell actin dynamics [55]. However, how FHOD3 may be involved in the regulation of plasma β-globulin levels is difficult to discern because the current knowledge regarding FHOD3 is scarce and restricted to human and mouse functional studies [56, 57]. The LARGE gene encodes a glycosyltransferase enzyme that is involved in alpha-dystroglycan glycosylation and is capable of synthesizing glycoprotein and glycosphingolipid sugar chains [58]. The exact function of LARGE is not fully known; however, mutations in the human LARGE gene have been reported to cause congenital muscular dystrophy type 1D (MDC1D) [59].
Taking all measured traits into account, we identified 22 highly associated SNPs; however, GWAS was performed on two sets of traits: the first set was measured on all 2024 animals and the second on the subset of 176 animals, so the statistical power to detect potential candidate genes differed due to the different sample sizes. For BWG and PC, which were measured on all animals, two functionally well-supported candidate genes (THBS1 and MAN2C1) were detected. However, we did not identify any candidate regions for BT and HEMA, potentially because these traits were not as strongly affected in challenged animals as PC and BWG [19]. Furthermore, an infection caused by E. maxima is often characterized by intestinal malabsorption rather than by the severe bloody diarrhoea that is typical of E. tenella. This further indicates that HEMA is not very informative when measuring the response to E. maxima infection [19]. Similarly, BT is difficult to interpret with respect to Eimeria infection because this trait may be influenced by other factors, as discussed by Hamzic et al. [19]. In addition, we did not identify many candidate genes or genomic regions in the GWAS performed on the traits measured in the 176 animals, which may be primarily due to the sample size, about 10 times smaller than for the traits measured on all animals, and to the complexity of the genetic architecture that controls these traits. This absence of identifiable candidate genes may also be partially due to the precision of the measured traits and to the rather stringent significance thresholds used in GWAS. Therefore, we performed a biological pathway analysis, which increased the statistical power to detect significant associations. This increase in statistical power is possible because we decreased the number of statistical tests performed by compressing individual SNPs into biological pathways [60].
The final aim is to transfer the acquired knowledge to the poultry breeding industry. The identified candidate genes and genomic regions can be used in breeding for improved resistance to coccidiosis. The primary obstacle in achieving this goal is relating the knowledge regarding the candidate genes identified in the final product (Cobb500) with the grand-parent pure lines where the actual selection is performed. This task becomes rather complex, considering that the poultry populations and the crossing design are not in the public domain, and due to the proprietary nature of information regarding the grandparent lines. Future studies will identify the best approach to trace the identified regions to the grandparent lines.
Biological pathway analyses based on GWAS results can extend the knowledge obtained from GWAS by identifying the cumulative effect of gene sets [61]. Furthermore, analysing biological pathways increases the power to detect statistically significant associations because fewer statistical tests are performed when individual SNPs are assigned to their respective biological pathways; the number of tests decreases from over 400,000 (number of SNPs) to approximately 160 (number of pathways) by using a priori biological information. For this purpose, we assigned the genes containing the SNPs used in GWAS to the biological KEGG pathways, so that each KEGG pathway was treated as a set of genes for further analysis. However, biological pathway analysis should still be viewed primarily as an exploratory technique because the current statistical methodologies used for gene set/pathway analysis need further development [61]. Moreover, the available biological pathway databases necessary for this kind of analysis are not completely annotated and do not contain all the genes present in the chicken genome. In this study, we used a statistical modeling methodology that assesses the cumulative effect of sets of SNPs on the measured traits, as presented by Jensen et al. [31] and Buitenhuis et al. [62]. For this purpose, we succeeded in assigning 52,204 SNPs from the GWAS to 4342 annotated genes in the chicken genome.
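As a schematic illustration of a permutation-based gene-set test in this spirit, the sketch below counts, for each pathway, how many of its SNPs fall below the suggestive threshold and compares this to random SNP sets of the same size; the authors' actual model follows Jensen et al. [31] and Buitenhuis et al. [62], so this is not their implementation. The objects `snp_pvalues` (named vector of GWAS p-values) and `pathway_snps` (list mapping each KEGG pathway to the SNP ids assigned to its genes) are hypothetical.

```r
set.seed(1)

count_hits <- function(snp_ids) sum(snp_pvalues[snp_ids] < 1e-4)

pathway_p <- sapply(pathway_snps, function(snp_ids) {
  observed <- count_hits(snp_ids)
  ## simple label permutation; it ignores LD and gene size, unlike the cited methods
  permuted <- replicate(1000, count_hits(sample(names(snp_pvalues), length(snp_ids))))
  mean(permuted >= observed)       # one-sided p-value after 1000 permutations
})

names(pathway_p)[pathway_p < 0.05]  # pathways called significant at P < 0.05
```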
For PC, the top four most frequently affected pathways across all wavelengths include phenylalanine, tyrosine and tryptophan biosynthesis, endocytosis, purine metabolism, and glycosphingolipid biosynthesis—lacto and neolacto series (Fig. 5).
Phenylalanine, tyrosine and tryptophan biosynthesis is the most commonly affected pathway when only PC is considered (Fig. 5) and (See Additional file 11: Figure S4). Phenylalanine, tyrosine and tryptophan have important roles in the regulation of the immune response [63]. Phenylalanine is indirectly involved in the regulation of nitric oxide (NO) synthesis [64], and NO is known to have multiple roles related to the immune response such as signaling properties, regulating cytokine production and killing pathogens [65]. Tyrosine is used as a precursor for the production of dopamine, catecholamines and melanin. Dopamine is a neurotransmitter known to be involved in the regulation of immune response, and melanin has antioxidant properties [63]. In addition, interferon gamma (IFN-γ) suppresses the growth of Toxoplasma gondii through the intracellular depletion of tryptophan [66]. Deprivation of tryptophan produces a deleterious effect on Toxoplasma gondii replication. Toxoplasma gondii belongs to the same order (Eucoccidiorida) of intracellular single-cell parasites as E. maxima. Furthermore, Laurent et al. [67] reported a strong increase in IFN-γ mRNA expression in chickens infected with Eimeria spp. Based on this finding and the results from the biological pathway analysis, tryptophan depletion may also be involved in the innate immune response during E. maxima infection.
The second most commonly affected pathway when considering PC is the purine metabolism pathway (Fig. 5) and (See Additional file 11: Figure S4). This pathway regulates nucleotide metabolism and is important for successful cell division. In addition, we observed that the purine metabolism pathway is significantly associated with erythrocyte number, mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC) and mean corpuscular volume (MCV) (Fig. 6). This association may be explained by an increased demand for cell division of blood cell progenitors and for regeneration of the intestinal epithelium, because E. maxima infection is characterized by severe lesions of the intestinal lining in broilers [19].
Endocytosis has been shown to play an important role in both innate and adaptive immune responses [68], which may explain why it is among the most commonly affected pathways when PC measurements are considered (Fig. 5). Finally, glycosphingolipid biosynthesis—lacto and neolacto series, like other glyco-related pathways (Fig. 5), is involved in the production of glycoconjugate receptors, which are used by microbes to enter the host cell and are of critical importance in the early stage of the innate immune response [69, 70]. Moreover, similar routes of host cell invasion have previously been described for Eimeria and Toxoplasma [71, 72].
We also summarized the frequency of the most common significant biological pathways excluding PC (Fig. 6). In this case, glycosaminoglycan biosynthesis—heparan sulfate/heparin, the pentose phosphate pathway and the peroxisome pathway are the most frequent significantly enriched biological pathways. The pentose phosphate pathway is a metabolic pathway that produces nicotinamide adenine dinucleotide phosphate (NADPH) and pentoses, which are essential for reductive biosynthesis and nucleic acid synthesis, respectively. The pentose phosphate pathway seems to play a role in the production of plasma protein components, heterophils and lymphocytes, which may be explained as a response to the increased production of these components during the primary response to the challenge (Fig. 6) [73]. The peroxisome pathway controls the metabolism of reactive oxygen species, which are known to be toxic to bacteria and several parasites and play a significant role in resilience and immunity to infectious diseases [73]. Moreover, the peroxisome pathway was significantly associated with blood components such as the number of erythrocytes and the percentages of heterophils and lymphocytes (Fig. 6).
These findings demonstrate that the response to Eimeria infection is characterized by a strong effect on essential metabolic pathways as well as on innate immune response-related pathways. Among the essential metabolic pathways, the most frequently affected are phenylalanine, tyrosine and tryptophan biosynthesis, purine metabolism and the pentose phosphate pathway. In addition, the most frequent innate immune response-related pathways are the glyco-related pathways, the peroxisome pathway and the endocytosis pathway.
The network-based analysis was performed as a complementary approach to the biological pathway analysis, to build gene networks associated with the response to the E. maxima challenge using literature-documented relationships that are available through the Ingenuity Pathway repository. The network-based analysis was performed independently from the biological pathway analysis and was based on a list of genes enriched for SNPs that were associated (P < 10−4) with the traits measured during the E. maxima challenge (See Additional file 4: Table S4). The network-based approach implemented in the IPA tool assumes that the input genes interact with each other, and these interactions are reconstructed based on relationships reported in the literature [34]. The network-based analysis aims at exploring the cumulative effect of sets of genes that individually explain a moderate part of the variation for a measured trait and that cannot be identified by GWAS when a strict significance threshold is applied.
Figure 7 illustrates the network formed by several of the molecules grouped around LATS1 and LATS2 with multiple direct connections with other molecules enriched with significantly associated SNPs. LATS1 and LATS2 are known to be involved in the regulation of intestinal epithelium renewal [74], which may be explained by intensified tissue repair upon E. maxima challenge. In addition, phenylalanine, tyrosine and tryptophan biosynthesis, purine metabolism and the pentose phosphate pathway are the most frequent significant pathways associated with all measured traits in the KEGG biological pathway analysis. All three of these KEGG pathways are associated with increased DNA replication, cell metabolism and protein degradation, which are essential during the tissue repair process.
The second network has MYH6 as a key molecule, connected through several direct relationships to other genes that are enriched for significantly associated SNPs (Fig. 8). The MYH6 gene encodes the alpha heavy chain subunit of cardiac myosin. In mice, inactivation of a specific mutant MYH6 transcript suppresses hypertrophic cardiomyopathy [75]. In addition, we also identified KCNK3, which is associated with pulmonary hypertension, as one of the candidate genes; heart failure and ascites are well documented in broiler chickens [44, 45], and these problems are largely attributed to the intensive selection applied in poultry breeding during the last 60 years [76]. Therefore, these findings may help to identify animals that are able to maintain normal cardiovascular function and thus have an advantage when facing Eimeria infection.
Based on the post-GWAS functional analysis, the broiler response to E. maxima is centered on tissue repair and recovery, general robustness, maintenance of tissue integrity and restoration of intestinal homeostasis after the challenge. These processes, which may give a comparative advantage in the broiler's ability to cope with the challenge, can be described as resilience to acute Eimeria infection. In contrast to previously conducted studies [16, 17], which reported associations with genes involved in the immune response, we primarily observed associations with genes, biological pathways and gene networks that are involved in tissue repair and recovery and in the maintenance of tissue integrity. However, these previous studies [16, 17] were conducted on experimental layer populations challenged with E. tenella at 28 days of age. Moreover, broilers are able to establish complete immunity 16 days after being challenged with E. maxima [77]. Therefore, one might expect this challenge study to identify genetic variants associated with immune response processes, which did not occur; this may be due to the infection dose (50,000 oocysts), which was optimized to produce severe clinical signs as reported by Hamzic et al. [19]. We argue that the more resilient animals are able to maintain their biological homeostasis and manage the consequences of the infection, which, in the context of this challenge, exceeded the importance of building an adequate immune response.
We identified 22 SNPs significantly associated with four different traits at q value <0.1. Two candidate genes, MAN2C1 and FHOD3, were significantly associated with PC measured in the range from 465 to 510 nm and the percentage of β2-globulin in blood plasma, respectively. Moreover, we identified three genomic regions on GGA1 (MGAT4C), GGA3 (KCNK3) and GGA5 (THBS1) that are significantly associated with body weight gain and the percentage of β2-globulin.
The post-GWAS functional analysis, which combined two independent approaches (the biological pathway analysis and network-based analysis), indicated that the genes and biological pathways involved in tissue repair, general robustness as well as the primary immune response may play an important role during the primary stage of E. maxima infection in broilers.
Studies that focus on the transfer of the acquired knowledge to poultry breeding considering the specificities of the broiler breeding scheme are currently under way. Finally, a follow-up transcriptome study is ongoing, which aims at integrating results from the GWAS study and further investigation of the genetic mechanisms that control the response to Eimeria infection in broilers.
No new SNPs were discovered in the preparation of this manuscript. The SNPs used in this manuscript are from the chicken 580 K Affymetrix® Axiom® HD genotyping array: http://media.affymetrix.com/support/technical/datasheets/axiom_chicken_array_plate_datasheet.pdf. SNP names and location can be found at http://www.affymetrix.com/catalog/prod670010/AFFY/Axiom%26%23174%3B+Genome%26%2345%3BWide+Chicken+Genotyping+Array#1_3.
BWG: body weight gain
HEMA: hematocrit level
LS: lesion score
OC: oocyst count
PC: plasma coloration
QTL: quantitative trait loci
PPP: plasma protein profiles
BC: blood composition
IFN-γ: interferon gamma
EST: expressed sequence tags
PBMC: peripheral blood mononuclear cell
IPA: Ingenuity Pathway Analysis
HGNC: Human Genome Gene Nomenclature Committee
TPA: tetradecanoyl phorbol acetate
MDC1D: congenital muscular dystrophy type 1D
MCH: mean corpuscular hemoglobin
MCV: mean corpuscular volume
MCHC: mean corpuscular hemoglobin concentration
BT: body temperature
BW: body weight
Smith AL, Hesketh P, Archer A, Shirley MW. Antigenic diversity in Eimeria maxima and the influence of host genetics and immunization schedule on cross-protective immunity. Infect Immun. 2002;70:2472–9.
Conway DP, McKenzie ME. Poultry coccidiosis: diagnostic and testing procedures. Ames: Blackwell Publishing Professional; 2007.
Dalloul RA, Lillehoj HS. Poultry coccidiosis: recent advancements in control measures and vaccine development. Expert Rev Vaccines. 2006;5:143–63.
Williams RB. A compartmentalised model for the estimation of the cost of coccidiosis to the world's chicken production industry. Int J Parasitol. 1999;29:1209–29.
Blake DP, Tomley FM. Securing poultry production from the ever-present Eimeria challenge. Trends Parasitol. 2014;30:12–9.
Jeffers TK. Anticoccidials: past. Feedstuffs. 2011;11:16.
Jeffers TK. Eimeria acervulina and E. maxima: incidence and anticoccidial drug resistance of isolants in major broiler-producing areas. Avian Dis. 1974;18:331–42.
Lamont SJ. Impact of genetics on disease resistance. Poult Sci. 1998;77:1111–8.
Rosenberg MM. A study of the inheritance of resistance to Eimeria tenella in the domestic fowl. Poult Sci. 1941;20:472.
Edgar SA. Control of avian coccidiosis through breeding or immunization. Poult Sci. 1951;30:911.
Long PL. The effect of breed of chickens on resistance to Eimeria infections. Br Poult Sci. 1968;9:71–8.
Johnson LW, Edgar SA. Responses to prolonged selection for resistance and susceptibility to acute cecal coccidiosis in the Auburn strain single comb white Leghorn. Poult Sci. 1982;61:2344–55.
Bumstead N, Millard B. Genetics of resistance to coccidiosis: response of inbred chicken lines to infection by Eimeria tenella and Eimeria maxima. Br Poult Sci. 1987;28:705–15.
Pinard-van der Laan MH, Monvoisin JL, Pery P, Hamet N, Thomas M. Comparison of outbred lines of chickens for resistance to experimental infection with coccidiosis (Eimeria tenella). Poult Sci. 1998;77:185–91.
Zhu JJ, Lillehoj HS, Allen PC, Van Tassell CP, Sonstegard TS, Cheng HH, et al. Mapping quantitative trait loci associated with resistance to coccidiosis and growth phenotypic measurement of disease susceptibility. Poult Sci. 2003;82:9–16.
Pinard-van der Laan MH, Bed'Hom B, Coville JL, Pitel F, Feve K, Leroux S, et al. Microsatellite mapping of QTLs affecting resistance to coccidiosis (Eimeria tenella) in a Fayoumi × White Leghorn cross. BMC Genomics. 2009;10:31.
Bacciu N, Bed'Hom B, Filangi O, Romé H, Gourichon D, Répérant JM, et al. QTL detection for coccidiosis (Eimeria tenella) resistance in a Fayoumi × Leghorn F2 cross, using a medium-density SNP panel. Genet Sel Evol. 2014;46:14.
Heams T, Bed'Hom B, Rebours E, Jaffrezic F, Pinard-van der Laan M-H. Insights into gene expression profiling of natural resistance to coccidiosis in contrasting chicken lines. BMC Proc. 2011;5:S26.
Hamzic E, Bed'Hom B, Juin H, Hawken R, Abrahamsen MS, Elsen JM, et al. Large-scale investigation of the parameters in response to Eimeria maxima challenge in broilers. J Anim Sci. 2015;93:1830–40.
Kranis A, Gheyas AA, Boschiero C, Turner F, Yu L, Smith S, et al. Development of a high density 600 K SNP genotyping array for chicken. BMC Genomics. 2013;14:59.
Chang CC, Chow CC, Tellier LC, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4:7.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81:559–75.
PLINK Version 1.90b3. 2015. https://www.cog-genomics.org/plink2. Accessed 23 April 2015.
Hubert M, Vandervieren E. An adjusted boxplot for skewed distributions. Comput Stat Data Anal. 2008;52:5186–201.
Zhou X, Stephens M. Efficient multivariate linear mixed model algorithms for genome-wide association studies. Nat Methods. 2014;11:407–9.
Yu J, Pressoir G, Briggs WH, Vroh Bi I, Yamasaki M, Doebley JF, et al. A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nat Genet. 2006;38:203–8.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995;57:289–300.
Turner SD. qqman: an R package for visualizing GWAS results using Q-Q and Manhattan plots. bioRxiv. 2014. doi:10.1101/005165.
Melville S. NCBI2R-An R package to navigate and annotate genes and SNPs. R package version 1.4.7. 2015. http://cran.rproject.org/src/contrib/Archive/NCBI2R/. Accessed 23 April 2015.
Tenenbaum D. KEGGREST: Client-side REST access to KEGG. R package version 1.8.0. 2014. http://bioconductor.org/packages/release/bioc/html/KEGGREST.html. Accessed 23 April 2015.
Rohde PD, McKinnon-Edwards S, Sarup PM, Sorensen P. Gene based association approach identify genes across stress traits in fruit flies. In 10th world congress on genetics applied to livestock production: 17–22 August 2014; Vancouver. https://asas.org/docs/default-source/wcgalp-posters/673_paper_9223_manuscript_493_0.pdf.
Newton MA, Quintana FA, den Boon JA, Sengupta S, Ahlquist P. Random-set methods identify distinct aspects of the enrichment signal in gene-set analysis. Ann Appl Stat. 2007;1:85–106.
Jiang Z, Gentleman R. Extensions to gene set enrichment. Bioinformatics. 2007;23:306–13.
Krämer A, Green J, Pollard J Jr, Tugendreich S. Causal analysis approaches in ingenuity pathway analysis. Bioinformatics. 2014;30:523–30.
Durinck S, Spellman PT, Birney E, Huber W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat Protoc. 2009;4:1184–91.
Kim ES, Hong YH, Min W, Lillehoj HS. Fine-mapping of coccidia-resistant quantitative trait loci in chickens. Poult Sci. 2006;85:2028–30.
Biscarini F, Bovenhuis H, van Arendonk JAM, Parmentier HK, Jungerius AP, van der Poel JJ. Across-line SNP association study of innate and adaptive immune response in laying hens. Anim Genet. 2010;41:26–38.
Ellen ED, Visscher J, van Arendonk JAM, Bijma P. Survival of laying hens: genetic parameters for direct and associative effects in three purebred layer lines. Poult Sci. 2008;87:233–9.
Robinson MR, Wray NR, Visscher PM. Explaining additional genetic variation in complex traits. Trends Genet. 2014;30:124–32.
Montefiori DC, Robinson WE, Mitchell WM. Role of protein N-glycosylation in pathogenesis of human immunodeficiency virus type 1. Proc Natl Acad Sci USA. 1988;85:9248–52.
Leroy JG. Congenital disorders of N-glycosylation including diseases associated with O- as well as N-glycosylation defects. Pediatr Res. 2006;60:643–56.
Kathiresan S, Manning AK, Demissie S, D'Agostino RB, Surti A, Guiducci C, et al. A genome-wide association study for blood lipid phenotypes in the Framingham Heart Study. BMC Med Genet. 2007;8:S17.
Girerd B, Perros F, Antigny F, Humbert M, Montani D. KCNK3: new gene target for pulmonary hypertension? Expert Rev Respir Med. 2014;8:385–7.
Baghbanzadeh A, Decuypere E. Ascites syndrome in broilers: physiological and nutritional perspectives. Avian Pathol. 2008;37:117–26.
Olkowski AA. Pathophysiology of heart failure in broiler chickens: structural, biochemical, and molecular characteristics. Poult Sci. 2007;86:999–1005.
Ernens I, Bousquenaud M, Lenoir B, Devaux Y, Wagner DR. Adenosine stimulates angiogenesis by up-regulating production of thrombospondin-1 by macrophages. J Leukoc Biol. 2015;97:9–18.
Leclair P, Lim CJ. CD47-independent effects mediated by the TSP-derived 4N1K peptide. PLoS One. 2014;9:e98358.
Gao Y, Flori L, Lecardonnel J, Esquerré D, Hu ZL, Teillaud A, et al. Transcriptome analysis of porcine PBMCs after in vitro stimulation by LPS or PMA/ionomycin using an expression array targeting the pig immune response. BMC Genomics. 2010;11:292.
Zhao Y, Olonisakin TF, Xiong Z, Hulver M, Sayeed S, Yu MT, et al. Thrombospondin-1 restrains neutrophil granule serine protease function and regulates the innate immune response during Klebsiella pneumoniae infection. Mucosal Immunol. 2015;8:896–905.
Kuokkanen E, Smith W, Mäkinen M, Tuominen H, Puhka M, Jokitalo E, et al. Characterization and subcellular localization of human neutral class II alpha-mannosidase [corrected]. Glycobiology. 2007;17:1084–93.
Bernon C, Carré Y, Kuokkanen E, Slomianny MC, Mir AM, Krzewinski F, et al. Overexpression of Man2C1 leads to protein underglycosylation and upregulation of endoplasmic reticulum-associated degradation pathway. Glycobiology. 2011;21:363–75.
Qu L, Ju JY, Chen SL, Shi Y, Xiang ZG, Zhou YQ, et al. Inhibition of the alpha-mannosidase Man2c1 gene expression enhances adhesion of Jurkat cells. Cell Res. 2006;16:622–31.
Ohtsubo K, Marth JD. Glycosylation in cellular mechanisms of health and disease. Cell. 2006;126:855–67.
Yvore P, Mancassola R, Naciri M, Bessay M. Serum coloration as a criterion of the severity of experimental coccidiosis in the chicken. Vet Res. 1993;24:286–90.
Taniguchi K, Takeya R, Suetsugu S, Kan-O M, Narusawa M, Shiose A, et al. Mammalian formin fhod3 regulates actin assembly and sarcomere organization in striated muscles. J Biol Chem. 2009;284:29873–81.
Wooten EC, Hebl VB, Wolf MJ, Greytak SR, Orr NM, Draper I, et al. Formin homology 2 domain containing 3 variants associated with hypertrophic cardiomyopathy. Circ Cardiovasc Genet. 2013;6:10–8.
Kan-O M, Takeya R, Abe T, Kitajima N, Nishida M, Tominaga R, et al. Mammalian formin Fhod3 plays an essential role in cardiogenesis by organizing myofibrillogenesis. Biol Open. 2012;1:889–96.
Peyrard M, Seroussi E, Sandberg-Nordqvist AC, Xie YG, Han FY, Fransson I, et al. The human LARGE gene from 22q12.3-q13.1 is a new, distinct member of the glycosyltransferase gene family. Proc Natl Acad Sci USA. 1999;96:598–603.
Longman C, Brockington M, Torelli S, Jimenez-Mallebrera C, Kennedy C, Khalil N, et al. Mutations in the human LARGE gene cause MDC1D, a novel form of congenital muscular dystrophy with severe mental retardation and abnormal glycosylation of alpha-dystroglycan. Hum Mol Genet. 2003;12:2853–61.
Fridley BL, Biernacka JM. Gene set analysis of SNP data: benefits, challenges, and future directions. Eur J Hum Genet. 2011;19:837–43.
Mooney MA, Nigg JT, McWeeney SK, Wilmot B. Functional and genomic context in pathway analysis of GWAS data. Trends Genet. 2014;30:390–400.
Buitenhuis B, Janss LLG, Poulsen NA, Larsen LB, Larsen MK, Sørensen P. Genome-wide association and biological pathway analysis for milk-fat composition in Danish Holstein and Danish Jersey cattle. BMC Genomics. 2014;15:1112.
Li P, Yin YL, Li D, Kim SW, Wu G. Amino acids and immune function. Br J Nutr. 2007;98:237–52.
Shi W, Meininger CJ, Haynes TE, Hatakeyama K, Wu G. Regulation of tetrahydrobiopterin synthesis and bioavailability in endothelial cells. Cell Biochem Biophys. 2004;41:415–34.
Bogdan C. Nitric oxide synthase in innate and adaptive immunity: an update. Trends Immunol. 2015;36:161–78.
Pfefferkorn ER. Interferon gamma blocks the growth of Toxoplasma gondii in human fibroblasts by inducing the host cells to degrade tryptophan. Proc Natl Acad Sci USA. 1984;81:908–12.
Laurent F, Mancassola R, Lacroix S, Menezes R, Naciri M. Analysis of chicken mucosal immune response to Eimeria tenella and Eimeria maxima infection by quantitative reverse transcription-PCR. Infect Immun. 2001;69:2527–34.
Gleeson PA. The role of endosomes in innate and adaptive immunity. Semin Cell Dev Biol. 2014;31:64–72.
Svensson M, Platt FM, Svanborg C. Glycolipid receptor depletion as an approach to specific antimicrobial therapy. FEMS Microbiol Lett. 2006;258:1–8.
Lingwood CA. Glycosphingolipid functions. Cold Spring Harb Perspect Biol. 2011;3:a004788.
Lai L, Bumstead J, Liu Y, Garnett J, Campanero-Rhodes MA, Blake DP, et al. The role of sialyl glycan recognition in host tissue tropism of the avian parasite Eimeria tenella. PLoS Pathog. 2011;7:e1002296.
Friedrich N, Santos JM, Liu Y, Palma AS, Leon E, Saouros S, et al. Members of a novel protein family containing microneme adhesive repeat domains act as sialic acid-binding lectins during host cell invasion by apicomplexan parasites. J Biol Chem. 2010;285:2064–76.
Allen PC, Fetterer RH. Recent advances in biology and immunobiology of Eimeria species and in diagnosis and control of infection with these coccidian parasites of poultry. Clin Microbiol Rev. 2002;15:58–65.
Imajo M, Ebisuya M, Nishida E. Dual role of YAP and TAZ in renewal of the intestinal epithelium. Nat Cell Biol. 2015;17:7–19.
Jiang J, Wakimoto H, Seidman JG, Seidman CE. Allele-specific silencing of mutant Myh6 transcripts in mice suppresses hypertrophic cardiomyopathy. Science. 2013;342:111–4.
Havenstein G, Ferket P, Qureshi M. Growth, livability, and feed conversion of 1957 versus 2001 broilers when fed representative 1957 and 2001 broiler diets. Poult Sci. 2003;82:1500–8.
Stiff MI, Bafundo KW. Development of immunity in broilers continuously exposed to Eimeria sp. Avian Dis. 1993;37:295–301.
EH performed the data analysis and wrote the manuscript. BB supervised the biological pathway and GWA analyses and helped draft the manuscript. FH was involved in the early stages of data analysis and helped draft the manuscript. BS and JME helped with the GWAS methodology selection. MHP designed the study. BBH participated in the study design, supervised the analysis with emphasis on the functional section and helped draft the manuscript. All authors read and approved the final manuscript.
EH benefited from a joint grant from the European Commission and Cobb-Vantress Inc. within the framework of the Erasmus-Mundus joint doctorate "EGS-ABG". BB was supported by the "Development of genetic selection technology for polyvalent resistance to infectious diseases" (POLY-REID) (Grant Number 10-093534) project granted by the Danish Council for Strategic Research, the Danish Poultry Council, The Hatchery Hellevad, and Cobb-Vantress Inc.
UMR1313 Animal Genetics and Integrative Biology Unit, AgroParisTech, 16 rue Claude Bernard, 75005, Paris, France
Edin Hamzić, Marie-Hélène Pinard - van der Laan & Bertrand Bed'Hom
UMR1313 Animal Genetics and Integrative Biology Unit, INRA, Domaine de Vilvert, 78350, Jouy-en-Josas, France
Department of Molecular Biology and Genetics, Center for Quantitative Genetics and Genomics, Aarhus University, Blichers Allé 20, P.O. Box 50, 8830, Tjele, Denmark
Edin Hamzić & Bart Buitenhuis
UMR1348 Physiology, Environment and Genetics for the Animal and Livestock Systems Unit, INRA, Domaine de la Prise, 35590, Saint Gilles, France
Frédéric Hérault
Cobb-Vantress Inc., Siloam Springs, AR, 72761, USA
Rachel Hawken & Mitchel S. Abrahamsen
UMR1388 Genetics, Physiology and Breeding Systems, INRA, 24 chemin de Borde-Rouge, 31326, Castanet-Tolosan, France
Bertrand Servin & Jean-Michel Elsen
Correspondence to Bertrand Bed'Hom.
Additional file 1: Table S1. Title: Quality control of genotype data. Description: The genotype data were obtained from 1972 animals that were alive at day 16 of the challenge study. Genotype data were produced using the 580 K Affymetrix® Axiom® HD genotyping array. The call rate threshold was set at 98 %, and all SNPs and samples with call rates below 98 % were excluded. The minor allele frequency (MAF) threshold was set at 1 %, and SNPs with a MAF below 1 % were excluded. Heterozygote excess was assessed based on the distribution of F-statistics adjusted for skewness using the adjbox function from the R package robustbase. Control animals were also excluded from the analysis. A total of 138,568 SNPs out of 580,961 were removed during quality control.
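As a sketch of the heterozygote-excess screen described above, per-sample inbreeding coefficients (for example, the F column of a PLINK --het report) can be inspected with the skewness-adjusted boxplot of the robustbase package named in the description; the data frame `het` and its column `F` are hypothetical placeholders.

```r
library(robustbase)

adjbox(het$F, main = "Per-sample F statistic")    # adjusted boxplot for a skewed distribution

outlier_values <- adjboxStats(het$F)$out           # values flagged beyond the adjusted fences
flagged_samples <- het[het$F %in% outlier_values, ]
```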
Additional file 2: Table S2. Title: Descriptive statistics for the distances between SNPs across chromosomes. Description: The table contains information on the number of SNPs per chromosome and the mean, median and standard deviation (SD) for the average distances between neighboring SNPs. Results in the table refer to genotype data after quality control.
Additional file 3: Table S3. Title: List of annotated SNPs used for Biological Pathway Analysis. Description: The table contains SNPs used in the genome-wide association study with p-values < 10−4 with the corresponding information on the genes in which they are located. Abbreviations used for column names: RS ID: Reference SNP ID number, SYMBOL: HGNC symbol for gene, NCBI GENE ID: NCBI gene ID, CHR: chromosome, CHR POS: chromosome position, GENE REGION: gene region of the corresponding gene, SS ID: submitted SNP ID, AFFYMETRIX: Affymetrix SNP ID, ALLELE: SNP alleles. The remaining columns are p-values for each trait with column names corresponding to trait abbreviations used in the manuscript.
Additional file 4: Table S4. Title: List of annotated SNPs and their corresponding genes used for Ingenuity Pathway Analysis. Description: The table contains SNPs with the corresponding information on the genes in which they are located and together with p-values obtained in the genome-wide association study.
Additional file 5: Table S5. Title: Descriptive statistics of the measured traits. Description: Descriptive statistics of the traits measured on the challenged animals during the large-scale challenge study. Plasma coloration was measured as the absorbance (optical density) of blood plasma at different wavelengths. Oocyst count is expressed as the raw number of oocysts per gram of feces. Erythrocyte count is expressed in millions per mm3 (M∙mm−3). White blood cell and thrombocyte counts are expressed as absolute numbers per mm3 (count∙mm−3). Lymphocytes and heterophils are also expressed as percentages of the total number of white blood cells. MCV, MCH and MCHC correspond to mean corpuscular volume, mean corpuscular hemoglobin and mean corpuscular hemoglobin concentration, respectively.
Additional file 6: Table S6. Title: Significant SNPs associated with traits measured on all challenged animals. Description: Significant SNPs associated with traits measured on all challenged animals. A genome-wide association analysis was performed using Cobb500 broilers for all the measured traits during the large-scale study. SNPs were considered significant at q-value < 0.10 using False Discovery Rate adjustment.
Additional file 7: Figure S1. Title: Quantile–quantile plot for body weight gain. Description: Quantile–quantile plot for the body weight gain test statistics.
Additional file 8: Figure S2. Title: Quantile–quantile plot for plasma coloration (485 nm). Description: Quantile–quantile plot for plasma coloration (485 nm) test statistics.
Additional file 9: Figure S3. Title: Quantile–quantile plot for the percentage of β2-globulin. Description: Quantile–quantile plot for β2-globulin test statistics.
Additional file 10: Table S7. Title: List of biological pathways significantly enriched with genes in genomic regions associated with measured traits. Description: Results for the biological pathway significantly (P < 0.05) enriched with genes in genomic regions associated with measured traits during the large-scale challenge study. The p-values provided in the table represent values for the one-sided test for overlapping SNPs after 1000 permutations.
Additional file 11: Figure S4. Title: Distribution of the most frequent significant biological pathways. Description: Distribution of the most frequent biological pathways being significantly enriched with genes in genomic regions associated with measured traits during the large-scale challenge study. The distribution is considerably different when all measured traits are considered together and when PC measurements are excluded.
Additional file 12: Figure S5. Title: Illustration of the genomic region between 28.95 and 29.11 Mb on GGA5. Description: The genomic region between 28.95 and 29.11 Mb contains three SNPs that were significantly associated with body weight gain (BWG) and also contains several chicken spliced mRNA and EST, which may indicate the presence of a non-annotated chicken gene. Illustration was obtained from the UCSC Genome Browser (ICGSC Gallus_gallus-4.0/galGal4).
Hamzić, E., Buitenhuis, B., Hérault, F. et al. Genome-wide association study and biological pathway analysis of the Eimeria maxima response in broilers. Genet Sel Evol 47, 91 (2015). https://doi.org/10.1186/s12711-015-0170-0
Keywords: Biological Pathway, Coccidiosis, Tryptophan Biosynthesis
2.4: Graph Linear Inequalities in Two Variables
[ "article:topic", "showtoc:no", "transcluded:yes", "source[1]-math-17386" ]
https://math.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fmath.libretexts.org%2FCourses%2FMission_College%2FMat_C_Intermediate_Algebra_(Carr)%2F02%253A_Graphs_and_Functions%2F2.04%253A_Graph_Linear_Inequalities_in_Two_Variables
Mission College
Mat C Intermediate Algebra (Carr)
2: Graphs and Functions
Verify Solutions to an Inequality in Two Variables
Recognize the Relation Between the Solutions of an Inequality and its Graph
Graph Linear Inequalities in Two Variables
Solve Applications using Linear Inequalities in Two Variables
Before you get started, take this readiness quiz.
Graph \(x>2\) on a number line.
Solve: \(4x+3>23\).
Translate: \(8<x>3\).
Previously, we learned to solve inequalities with only one variable. We will now learn about inequalities containing two variables. In particular, we will look at linear inequalities in two variables, which are very similar to linear equations in two variables.
Linear inequalities in two variables have many applications. If you ran a business, for example, you would want your revenue to be greater than your costs—so that your business made a profit.
A linear inequality is an inequality that can be written in one of the following forms:
\( \begin{array} {l} { }& {Ax+By>C} &{Ax+By\geq C} &{Ax+By<C} &{Ax+By\leq C} \\ \end{array} \)
where \(A\) and \(B\) are not both zero.
Recall that an inequality with one variable had many solutions. For example, the solution to the inequality \(x>3\) is any number greater than 3. We showed this on the number line by shading in the number line to the right of 3, and putting an open parenthesis at 3. See Figure.
Figure \(\PageIndex{1}\)
Similarly, linear inequalities in two variables have many solutions. Any ordered pair \((x,y)\) that makes an inequality true when we substitute in the values is a solution to a linear inequality.
SOLUTION TO A LINEAR INEQUALITY
An ordered pair \((x,y)\) is a solution to a linear inequality if the inequality is true when we substitute the values of x and y.
Example \(\PageIndex{1}\)
Determine whether each ordered pair is a solution to the inequality \(y>x+4\):
ⓐ \((0,0)\) ⓑ \((1,6)\) ⓒ \((2,6)\) ⓓ \((−5,−15)\) ⓔ \((−8,12)\)
ⓐ
\((0,0)\)
Substitute \(x=0\) and \(y=0\): is \(0>0+4\)? No, \(0>4\) is false.
So, \((0,0)\) is not a solution to \(y>x+4\).
ⓑ
\((1,6)\)
Substitute \(x=1\) and \(y=6\): is \(6>1+4\)? Yes, \(6>5\) is true.
So, \((1,6)\) is a solution to \(y>x+4\).
ⓒ
\((2,6)\)
Substitute \(x=2\) and \(y=6\): is \(6>2+4\)? No, \(6>6\) is false.
So, \((2,6)\) is not a solution to \(y>x+4\).
ⓓ
\((−5,−15)\)
Substitute \(x=−5\) and \(y=−15\): is \(−15>−5+4\)? No, \(−15>−1\) is false.
So, \((−5,−15)\) is not a solution to \(y>x+4\).
ⓔ
\((−8,12)\)
Substitute \(x=−8\) and \(y=12\): is \(12>−8+4\)? Yes, \(12>−4\) is true.
So, \((−8,12)\) is a solution to \(y>x+4\).
Determine whether each ordered pair is a solution to the inequality \(y>x−3\):
ⓐ \((0,0)\) ⓑ \((4,9)\) ⓒ \((−2,1)\) ⓓ \((−5,−3)\) ⓔ \((5,1)\)
ⓐ yes ⓑ yes ⓒ yes ⓓ yes ⓔ no
Determine whether each ordered pair is a solution to the inequality \(y<x+1\):
ⓐ \((0,0)\) ⓑ \((8,6)\) ⓒ \((−2,−1)\) ⓓ \((3,4)\) ⓔ \((−1,−4)\)
ⓐ yes ⓑ yes ⓒ no ⓓ no ⓔ yes
Now, we will look at how the solutions of an inequality relate to its graph.
Let's think about the number line shown previously again. The point \(x=3\) separated that number line into two parts. On one side of 3 are all the numbers less than 3. On the other side of 3 all the numbers are greater than 3. See Figure.
Figure \(\PageIndex{2}\):The solution to \(x>3\) is the shaded part of the number line to the right of \(x=3\).
Similarly, the line \(y=x+4\) separates the plane into two regions. On one side of the line are points with \(y<x+4\). On the other side of the line are the points with \(y>x+4\). We call the line \(y=x+4\) a boundary line.
BOUNDARY LINE
The line with equation \(Ax+By=C\) is the boundary line that separates the region where \(Ax+By>C\) from the region where \(Ax+By<C\).
For an inequality in one variable, the endpoint is shown with a parenthesis or a bracket depending on whether or not \(a\) is included in the solution:
Similarly, for an inequality in two variables, the boundary line is shown with a solid or dashed line to show whether or not the line is included in the solution.
\[ \begin{array} {ll} {Ax+By<C} &{Ax+By\leq C} \\ {Ax+By>C} &{Ax+By\geq C} \\ {\text{Boundary line is }Ax+By=C.} &{\text{Boundary line is }Ax+By=C.} \\ {\text{Boundary line is not included in solution.}} &{\text{Boundary line is included in solution.}} \\ {\textbf{Boundary line is dashed.}} &{\textbf{Boundary line is solid.}} \\ \nonumber \end{array} \]
Now, let's take a look at what we found in Example. We'll start by graphing the line \(y=x+4\), and then we'll plot the five points we tested, as shown in the graph. See Figure.
In Example we found that some of the points were solutions to the inequality \(y>x+4\) and some were not.
Which of the points we plotted are solutions to the inequality \(y>x+4\)?
The points \((1,6)\) and \((−8,12)\) are solutions to the inequality \(y>x+4\). Notice that they are both on the same side of the boundary line \(y=x+4\).
The two points \((0,0)\) and \((−5,−15)\) are on the other side of the boundary line \(y=x+4\), and they are not solutions to the inequality \(y>x+4\). For those two points, \(y<x+4\).
What about the point \((2,6)\)? Because \(6=2+4\), the point is a solution to the equation \(y=x+4\), but not a solution to the inequality \(y>x+4\). So the point \((2,6)\) is on the boundary line.
Let's take another point above the boundary line and test whether or not it is a solution to the inequality \(y>x+4\). The point \((0,10)\) clearly looks to be above the boundary line, doesn't it? Is it a solution to the inequality?
\[\begin{array} {lll} {y} &{>} &{x+4} \\ {10} &{\overset{?}{>}} &{0+4} \\ {10} &{>} &{4} \\ \nonumber \end{array}\]
So, \((0,10)\) is a solution to \(y>x+4\).
Any point you choose above the boundary line is a solution to the inequality \(y>x+4\). All points above the boundary line are solutions.
Similarly, all points below the boundary line, the side with \((0,0)\) and \((−5,−15)\), are not solutions to \(y>x+4\), as shown in Figure.
The graph of the inequality \(y>x+4\) is shown in below.
The line \(y=x+4\) divides the plane into two regions. The shaded side shows the solutions to the inequality \(y>x+4\).
The points on the boundary line, those where \(y=x+4\), are not solutions to the inequality \(y>x+4\), so the line itself is not part of the solution. We show that by making the line dashed, not solid.
The boundary line shown in this graph is \(y=2x−1\). Write the inequality shown by the graph.
The line \(y=2x−1\) is the boundary line. On one side of the line are the points with \(y>2x−1\) and on the other side of the line are the points with \(y<2x−1\).
Let's test the point \((0,0)\) and see which inequality describes its position relative to the boundary line.
At \((0,0)\), which inequality is true: \(y>2x−1\) or \(y<2x−1\)?
\[\begin{array} {ll} {y>2x−1} &{y<2x−1} \\ {0\overset{?}{>}2·0−1} &{0\overset{?}{<}2·0−1} \\ {0>−1\text{ True}} &{0<−1\text{ False}} \\ \nonumber \end{array}\]
Since \(y>2x−1\) is true, the side of the line with \((0,0)\) is the solution. The shaded region shows the solution of the inequality \(y>2x−1\).
Since the boundary line is graphed with a solid line, the inequality includes the equal sign.
The graph shows the inequality \(y\geq 2x−1\).
We could use any point as a test point, provided it is not on the line. Why did we choose \((0,0)\)? Because it's the easiest to evaluate. You may want to pick a point on the other side of the boundary line and check that \(y<2x−1\).
Write the inequality shown by the graph with the boundary line \(y=−2x+3\).
\(y\geq −2x+3\)
Write the inequality shown by the graph with the boundary line \(y=\frac{1}{2}x−4\).
\(y\leq \frac{1}{2}x−4\)
The boundary line shown in this graph is \(2x+3y=6\). Write the inequality shown by the graph.
The line \(2x+3y=6\) is the boundary line. On one side of the line are the points with \(2x+3y>6\) and on the other side of the line are the points with \(2x+3y<6\).
Let's test the point \((0,0)\) and see which inequality describes its side of the boundary line.
At \((0,0)\), which inequality is true: \(2x+3y>6\) or \(2x+3y<6\)?
\[\begin{array} {ll} {2x+3y>6} &{2x+3y<6} \\ {2(0)+3(0)\overset{?}{>}6} &{2(0)+3(0)\overset{?}{<}6} \\ {0>6\text{ False}} &{0<6\text{ True}} \\ \nonumber \end{array}\]
So the side with \((0,0)\) is the side where \(2x+3y<6\).
(You may want to pick a point on the other side of the boundary line and check that \(2x+3y>6\).)
Since the boundary line is graphed as a dashed line, the inequality does not include an equal sign.
The shaded region shows the solution to the inequality \(2x+3y<6\).
Write the inequality shown by the shaded region in the graph with the boundary line \(x−4y=8\).
\(x−4y\leq 8\)
Write the inequality shown by the shaded region in the graph with the boundary line \(3x−y=6\).
\(3x−y\geq 6\)
Now that we know what the graph of a linear inequality looks like and how it relates to a boundary equation, we can use this knowledge to graph a given linear inequality.
Example \(\PageIndex{10}\): How to Graph a Linear Inequality in Two Variables
Graph the linear inequality \(y\geq \frac{3}{4}x−2\).
Example \(\PageIndex{11}\)
Graph the linear inequality \(y>\frac{5}{2}x−4\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(y>\frac{5}{2}x−4\).
Graph the linear inequality \(y<\frac{2}{3}x−5\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(y<\frac{2}{3}x−5\).
The steps we take to graph a linear inequality are summarized here.
GRAPH A LINEAR INEQUALITY IN TWO VARIABLES.
Identify and graph the boundary line.
If the inequality is \(\leq\) or \(\geq\), the boundary line is solid.
If the inequality is \(<\) or \(>\), the boundary line is dashed.
Test a point that is not on the boundary line. Is it a solution of the inequality?
Shade in one side of the boundary line.
If the test point is a solution, shade in the side that includes the point.
If the test point is not a solution, shade in the opposite side.
Graph the linear inequality \(x−2y<5\).
First, we graph the boundary line \(x−2y=5\). The inequality is \(<\) so we draw a dashed line.
Then, we test a point. We'll use \((0,0)\) again because it is easy to evaluate and it is not on the boundary line.
Is \((0,0)\) a solution of \(x−2y<5\)? Substituting gives \(0−2(0)=0\), and \(0<5\) is true.
The point \((0,0)\) is a solution of \(x−2y<5\), so we shade in that side of the boundary line.
All points in the shaded region, but not those on the boundary line, represent the solutions to \(x−2y<5\).
Graph the linear inequality: \(2x−3y<6\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(2x−3y<6\).
Graph the linear inequality: \(2x−y>3\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(2x−y>3\).
What if the boundary line goes through the origin? Then, we won't be able to use \((0,0)\) as a test point. No problem—we'll just choose some other point that is not on the boundary line.
Graph the linear inequality: \(y\leq −4x\).
First, we graph the boundary line \(y=−4x\). It is in slope–intercept form, with \(m=−4\) and \(b=0\). The inequality is \(\leq\) so we draw a solid line.
Now we need a test point. We can see that the point \((1,0)\) is not on the boundary line.
Is \((1,0)\) a solution of \(y\leq −4x\)? Substituting gives \(−4(1)=−4\), and \(0\leq −4\) is false.
The point \((1,0)\) is not a solution to \(y\leq −4x\), so we shade in the opposite side of the boundary line.
All points in the shaded region and on the boundary line represent the solutions to \(y\leq −4x\).
Graph the linear inequality: \(y>−3x\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(y>−3x\).
Graph the linear inequality: \(y\geq −2x\).
All points in the shaded region and on the boundary line, represent the solutions to \(y\geq −2x\).
Some linear inequalities have only one variable. They may have an x but no y, or a y but no x. In these cases, the boundary line will be either a vertical or a horizontal line.
Recall that:
\[\begin{array} {ll} {x=a} &{\text{vertical line}} \\ {y=b} &{\text{horizontal line}} \\ \nonumber \end{array}\]
Graph the linear inequality: \(y>3\).
First, we graph the boundary line \(y=3\). It is a horizontal line. The inequality is \(>\) so we draw a dashed line.
We test the point \((0,0)\).
\[y>3\nonumber\]\[0\not>3\nonumber\]
So, \((0,0)\) is not a solution to \(y>3\).
So we shade the side that does not include \((0,0)\) as shown in this graph.
All points in the shaded region, but not those on the boundary line, represent the solutions to \(y>3\).
Graph the linear inequality: \(y<5\).
All points in the shaded region, but not those on the boundary line, represent the solutions to \(y<5\).
Graph the linear inequality: \(y\leq −1\).
All points in the shaded region and on the boundary line represent the solutions to \(y\leq −1\).
Many fields use linear inequalities to model a problem. While our examples may be about simple situations, they give us an opportunity to build our skills and to get a feel for how they might be used.
Hilaria works two part time jobs in order to earn enough money to meet her obligations of at least $240 a week. Her job in food service pays $10 an hour and her tutoring job on campus pays $15 an hour. How many hours does Hilaria need to work at each job to earn at least $240?
ⓐ Let x be the number of hours she works at the job in food service and let y be the number of hours she works tutoring. Write an inequality that would model this situation.
ⓑ Graph the inequality.
ⓒ Find three ordered pairs \((x,y)\) that would be solutions to the inequality. Then, explain what that means for Hilaria.
ⓐ We let x be the number of hours she works at the job in food service and let y be the number of hours she works tutoring.
She earns $10 per hour at the job in food service and $15 an hour tutoring. At each job, the number of hours multiplied by the hourly wage gives the amount earned at that job.
ⓑ To graph the inequality, we put it in slope–intercept form.
\[\begin{align} {10x+15y} &\geq 240 \\ 15y &\geq -10x+240 \\ y &\geq {−\frac{2}{3}x+16} \\ \nonumber \end{align}\]
ⓒ From the graph, we see that the ordered pairs \((15,10)\), \((0,16)\), \((24,0)\) represent three of infinitely many solutions. Check the values in the inequality.
For Hilaria, it means that to earn at least $240, she can work 15 hours at the food service job and 10 hours tutoring, earn all her money tutoring for 16 hours, or earn all her money while working 24 hours at the job in food service.
Hugh works two part time jobs. One at a grocery store that pays $10 an hour and the other is babysitting for $13 an hour. Between the two jobs, Hugh wants to earn at least $260 a week. How many hours does Hugh need to work at each job to earn at least $260?
ⓐ Let x be the number of hours he works at the grocery store and let y be the number of hours he works babysitting. Write an inequality that would model this situation.
ⓒ Find three ordered pairs (x, y) that would be solutions to the inequality. Then, explain what that means for Hugh.
ⓐ \(10x+13y\geq 260\)
ⓒ Answers will vary.
Veronica works two part time jobs in order to earn enough money to meet her obligations of at least $280 a week. Her job at the day spa pays $10 an hour and her administrative assistant job on campus pays $17.50 an hour. How many hours does Veronica need to work at each job to earn at least $280?
ⓐ Let x be the number of hours she works at the day spa and let y be the number of hours she works as administrative assistant. Write an inequality that would model this situation.
ⓒ Find three ordered pairs (x, y) that would be solutions to the inequality. Then, explain what that means for Veronica
ⓐ \(10x+17.5y\geq 280\)
ⓒ Answers will vary.
Access this online resource for additional instruction and practice with graphing linear inequalities in two variables.
Graphing Linear Inequalities in Two Variables
How to graph a linear inequality in two variables.
If the inequality is \(\leq\) or \(\geq\), the boundary line is solid.
If the inequality is \(<\) or \(>\), the boundary line is dashed.
In the following exercises, determine whether each ordered pair is a solution to the given inequality.
ⓐ \((0,1)\)
ⓑ \((−4,−1)\)
ⓒ \((4,2)\)
ⓓ \((3,0)\)
ⓔ \((−2,−3)\)
ⓐ yes ⓑ yes ⓒ no ⓓ no ⓔ no
ⓑ \((2,1)\)
ⓒ \((−1,−5)\)
ⓓ \((−6,−3)\)
ⓔ \((1,0)\)
Determine whether each ordered pair is a solution to the inequality \(y<3x+2\):
ⓒ \((−2,0)\)
ⓔ \((−1,4)\)
ⓐ no ⓑ no ⓒ yes ⓓ yes ⓔ no
Determine whether each ordered pair is a solution to the inequality \(y<−2x+5\):
ⓐ \((−3,0)\)
ⓔ \((5,−4)\)
Determine whether each ordered pair is a solution to the inequality \(3x−4y>4\):
ⓑ \((−2,6)\)
ⓓ \((10,−5)\)
ⓐ yes ⓑ no ⓒ no ⓓ no ⓔ no
Determine whether each ordered pair is a solution to the inequality \(2x+3y>2\):
ⓑ \((4,−3)\)
ⓓ \((−8,12)\)
In the following exercises, write the inequality shown by the shaded region.
Write the inequality shown by the graph with the boundary line \(y=3x−4\).
\(y\leq 3x−4\)
Write the inequality shown by the graph with the boundary line \(y=−\frac{1}{2}x+1\).
\(y\leq −\frac{1}{2}x+1\)
Write the inequality shown by the graph with the boundary line \(y=-\frac{1}{3}x−2\).
Write the inequality shown by the shaded region in the graph with the boundary line \(x+y=5\).
\(x+y\geq 5\)
\(3x−y\leq 6\)
In the following exercises, graph each linear inequality.
Graph the linear inequality: \(y>\frac{2}{3}x−1\).
Graph the linear inequality: \(y<\frac{3}{5}x+2\).
Graph the linear inequality: \(y\leq −\frac{1}{2}x+4\).
Graph the linear inequality: \(y\geq −\frac{1}{3}x−2\).
Graph the linear inequality: \(x−y\leq 3\).
Graph the linear inequality: \(x−y\geq −2\).
Graph the linear inequality: \(4x+y>−4\).
Graph the linear inequality: \(x+5y<−5\).
Graph the linear inequality: \(3x+2y\geq −6\).
Graph the linear inequality: \(y>4x\).
Graph the linear inequality: \(y<−10\).
Graph the linear inequality: \(y\geq 2\).
Graph the linear inequality: \(x\leq 5\).
Graph the linear inequality: \(x\geq 0\).
Graph the linear inequality: \(x−y<4\).
Graph the linear inequality: \(x−y<−3\).
Graph the linear inequality: \(y\geq \frac{3}{2}x\).
Graph the linear inequality: \(y\leq \frac{5}{4}x\).
Graph the linear inequality: \(y>−2x+1\).
Graph the linear inequality: \(y<−3x−4\).
Graph the linear inequality: \(2x+y\geq −4\).
Graph the linear inequality: \(x+2y\leq −2\).
Graph the linear inequality: \(2x−5y>10\).
Harrison works two part time jobs. One at a gas station that pays $11 an hour and the other is IT troubleshooting for $16.50 an hour. Between the two jobs, Harrison wants to earn at least $330 a week. How many hours does Harrison need to work at each job to earn at least $330?
ⓐ Let x be the number of hours he works at the gas station and let y be the number of hours he works troubleshooting. Write an inequality that would model this situation.
ⓒ Find three ordered pairs \((x,y)\) that would be solutions to the inequality. Then, explain what that means for Harrison.
Elena needs to earn at least $450 a week during her summer break to pay for college. She works two jobs. One as a swimming instructor that pays $9 an hour and the other as an intern in a genetics lab for $22.50 per hour. How many hours does Elena need to work at each job to earn at least $450 per week?
ⓐ Let x be the number of hours she works teaching swimming and let y be the number of hours she works as an intern. Write an inequality that would model this situation.
ⓒ Find three ordered pairs \((x,y)\) that would be solutions to the inequality. Then, explain what that means for Elena.
The doctor tells Laura she needs to exercise enough to burn 500 calories each day. She prefers to either run or bike and burns 15 calories per minute while running and 10 calories a minute while biking.
ⓐ If x is the number of minutes that Laura runs and y is the number of minutes she bikes, find the inequality that models the situation.
ⓒ List three solutions to the inequality. What options do the solutions provide Laura?
Armando's workouts consist of kickboxing and swimming. While kickboxing, he burns 10 calories per minute and he burns 7 calories a minute while swimming. He wants to burn 600 calories each day.
ⓐ If x is the number of minutes that Armando will kickbox and y is the number of minutes he will swim, find the inequality that will help Armando create a workout for today.
ⓒ List three solutions to the inequality. What options do the solutions provide Armando?
Lester thinks that the solution of any inequality with a \(>\) sign is the region above the line and the solution of any inequality with a \(<\) sign is the region below the line. Is Lester correct? Explain why or why not.
Explain why, in some graphs of linear inequalities, the boundary line is solid but in other graphs it is dashed.
ⓑ On a scale of 1–10, how would you rate your mastery of this section in light of your responses on the checklist? How can you improve this?
A linear inequality is an inequality that can be written in one of the following forms: \(Ax+By>C\), \(Ax+By\geq C\), \(Ax+By<C\), or \(Ax+By\leq C\), where A and B are not both zero.
Chapter Test - Derivatives(Nel)
Calculus and Vectors Nelson
The graphs of a function and its derivative are shown below. Label the graphs \(f\) and \(f'\), and write a short paragraph stating the criteria you used to make your selection.
Use the definition of the derivative to find \(\frac{d}{dx}(x-x^2)\).
Determine \(\frac{dy}{dx}\) for each of the following functions:
\(y=\frac{1}{3}x^3-3x^{-5}+4\pi\)
\(y=6(2x-9)^5\)
\(y=\frac{2}{\sqrt{x}}+\frac{x}{\sqrt{3}}+6\sqrt[3]{x}\)
\(y=\left(\frac{x^2+6}{3x+4}\right)^5\)
\(y=x^2\sqrt[3]{6x^2-7}\) (Simplify your answer.)
\(y=\frac{4x^5-5x^4+6x-2}{x^4}\)
Determine the slope of the tangent to the graph of \(y=(x^2+3x-2)(7-3x)\) at \((1,8)\).
Determine \(\frac{dy}{dx}\) at \(x=-2\) for \(y=3u^2+2u\) and \(u=\sqrt{x^2+5}\).
Determine the equation of the tangent to \(y=(3x^{-2}-2x^3)^5\) at \((1,1)\).
The amount of pollution in a certain lake is \(P(t)=(t^{\frac{1}{4}}+3)^3\), where \(t\) is measured in years and \(P\) is measured in parts per million (ppm). At what rate is the amount of pollution changing after 16 years?
At what point on the curve \(y=x^4\) does the normal have a slope of 16?
Determine the points on the curve \(y=x^3-x^2-x+1\) where the tangent is horizontal.
For what values of \(a\) and \(b\) will the parabola \(y=x^2+ax+b\) be tangent to the curve \(y=x^3\) at point \((1,1)\)?
Genotype distribution-based inference of collective effects in genome-wide association studies: insights to age-related macular degeneration disease mechanism
Hyung Jun Woo1,
Chenggang Yu1,
Kamal Kumar1,
Bert Gold2 &
Jaques Reifman1
The Erratum to this article has been published in BMC Genomics 2016 17:752
Genome-wide association studies provide important insights to the genetic component of disease risks. However, an existing challenge is how to incorporate collective effects of interactions beyond the level of independent single nucleotide polymorphism (SNP) tests. While methods considering each SNP pair separately have provided insights, a large portion of expected heritability may reside in higher-order interaction effects.
We describe an inference approach (discrete discriminant analysis; DDA) designed to probe collective interactions while treating both genotypes and phenotypes as random variables. The genotype distributions in case and control groups are modeled separately based on empirical allele frequency and covariance data, whose differences yield disease risk parameters. We compared pairwise tests and collective inference methods, the latter based both on DDA and logistic regression. Analyses using simulated data demonstrated that significantly higher sensitivity and specificity can be achieved with collective inference in comparison to pairwise tests, and with DDA in comparison to logistic regression. Using age-related macular degeneration (AMD) data, we demonstrated two possible applications of DDA. In the first application, a genome-wide SNP set is reduced into a small number (∼100) of variants via filtering and SNP pairs with significant interactions are identified. We found that interactions between SNPs with highest AMD association were epigenetically active in the liver, adipocytes, and mesenchymal stem cells. In the other application, multiple groups of SNPs were formed from the genome-wide data and their relative strengths of association were compared using cross-validation. This analysis allowed us to discover novel collections of loci for which interactions between SNPs play significant roles in their disease association. In particular, we considered pathway-based groups of SNPs containing up to ∼10,000 variants in each group. In addition to pathways related to complement activation, our collective inference pointed to pathway groups involved in phospholipid synthesis, oxidative stress, and apoptosis, consistent with the AMD pathogenesis mechanism where the dysfunction of retinal pigment epithelium cells plays central roles.
The simultaneous inference of collective interaction effects within a set of SNPs has the potential to reveal novel aspects of disease association.
A key focus of modern genetic research is the relationship between genomic variations and phenotypes, including susceptibilities to common diseases [1]. Recent advances in genome-wide association studies (GWAS) have greatly enhanced our understanding of such genotype-phenotype relationships [2–9]. In many cases, however, a large portion of the expected heritability information remains to be discovered. It has recently been shown that meta-analyses involving increasingly large sample sizes can yield many additional loci of statistical significance [10, 11]. Another potential source of such 'missing heritability' is the contribution of rare variants not detected by population-based genotyping platforms. Recent studies based on exome and whole-genome sequencing data combined with statistical tests including burden tests [12], C-alpha test [13], and sequence kernel association test [14] are beginning to address such possibilities. It is also expected, however, that the limitation of independent single nucleotide polymorphism (SNP) analyses, where each locus is considered separately to evaluate its association with disease using trend tests or logistic regression models [15], and possible effects of epistasis also contribute to the limited degree of biological effects uncovered so far.
Many studies have addressed the issue of incorporating such inter-variant interactions, or epistasis, in GWAS [16, 17]. Main approaches include machine-learning techniques [18–23], entropy-based methods [24], principal component analyses [25–27], and the genome-wide interaction analysis considering all distinct pairs of SNPs [28–31]. One useful strategy, in particular, is to extend parametric models to many SNPs that have been suitably selected, and inferring interaction effects under a multivariate statistical setting. Previous works within this framework include those based on lasso-penalized logistic regression [32, 33]. Under the setting of inference on many interacting SNPs, the dimensionality of the underlying model is of the order of \(m^2\), where \(m\) is the number of SNPs that are considered simultaneously, with \(m=1\) and 2 corresponding to the independent-SNP and pairwise tests, respectively. To prevent model overfitting, high-dimensional inference with limited sample sizes requires regularization, whose values can be determined by cross-validation. Ayers and Cordell performed a comprehensive study of the performance of different penalizer choices on noninteracting SNP inferences [34].
This class of methods within the context of GWAS so far exclusively used logistic (or linear) regression analyses for case-control (or quantitative phenotype) data, which parallels their similarly widespread adoption in the general statistical learning literature. One may note, however, that the actual training data sets in GWAS are collected from case and control populations with distinct genotype distributions. The likelihood of the data to be maximized for inference is given by the joint probability of both genotypes (predictor variables) and phenotypes (response variables). In (logistic) regression, this joint probability is replaced by the probability of phenotypes conditional to genotypes, and the marginal probability of genotypes is assumed to be uniform.
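To make the distinction explicit, the two likelihood choices can be written schematically (our notation, not taken from the paper's equations: \(x\) for genotypes, \(y\) for phenotypes):
\[
P(x,y) \;=\; P(y\mid x)\,P(x) \;=\; P(x\mid y)\,P(y).
\]
Logistic regression maximizes \(\prod_i P(y_i\mid x_i)\) and effectively treats the genotype marginal \(P(x)\) as uniform, whereas a discriminant-analysis-type approach fits \(P(x\mid y)\) separately for cases and controls and recovers \(P(y\mid x)\) through Bayes' rule.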
In statistical learning, discriminant analysis is another widely used option for classifying continuous random variables in addition to logistic regression [35–37]. This class of inference methods offers alternative approaches that fully model the joint distribution of predictor and response variables (Section 4.4.5 in Hastie et al. [37]) at the expense of assuming specific predictor distributions (usually multivariate normal distributions). It has been estimated that, for continuous variables, the accuracy of logistic regression models can be lower by ∼30 % than that of discriminant analyses for a given sample size [35, 37].
Genotype distributions within populations from which GWAS samples are collected are also far from uniform, and it is of interest to examine the utility of discriminant analysis-type approaches to disease association inference under high-dimensional settings, which is the main focus of this paper. The standard discriminant analysis, however, is applicable only for continuous variable predictors. A related approach, the discriminant analysis of principal components by Jombart et al. [38], applies discriminant analysis to principal components (continuous variables) of allele frequencies for unsupervised learning of population structures. We report here, as a major innovation, an adaptation of discriminant analysis to the case of discrete genotype data (discrete discriminant analysis; DDA).
Our inference includes the causal effects of both marginal single-SNP terms and their interactions. These effects are estimated simultaneously, rather than separately as in independent-SNP and pairwise analyses. We refer to such combined effects of single-SNP and interaction contributions as the collective effects of disease association. This level of description is analogous to that of the logistic regression inference performed by Wu et al. [33] in terms of the nature of SNP effects included in the modeling. Association studies have two distinct but related goals: inference and prediction. In inference (also known as feature selection), one aims to identify a subset of SNPs that are deemed to be causal, while in prediction, the goal is to apply the trained model and predict the disease status of unknown samples. Independent-SNP analyses widely performed in GWAS, either based on trend tests or logistic regression models with marginal SNP effects only, are mainly geared toward inference. In contrast, the penalized logistic regression including collective effects [33] is more suited to prediction, because the disease risk parameters are optimized directly via maximum likelihood without reference to population structures.
Our method offers a comprehensive approach achieving both inference and prediction by training models to genotype distributions of case and control groups separately under penalizers. The regularization using cross-validation optimizes prediction capability, while for inference, we derived effective p-values of the overall single-SNP (we use this terminology to refer to the contribution each locus makes by itself to the overall association, usually in the presence of interactions) and interaction effects using likelihood ratio tests. To our knowledge, the performance comparison of interaction effect detection between pairwise tests and (logistic regression) collective inference has not been reported yet. Our results based on simulated data indicate that collective inference provides far higher sensitivity for interactions than pairwise tests. Compared to penalized logistic regression, DDA yielded further advantages in sensitivity and specificity.
Our current collective inference implementation allows for the maximum likelihood inference of systems containing up to \(\sim 10^4\) SNPs. However, evaluating interaction p-values of SNP pairs by permutation resampling increases the computational cost by orders of magnitude, limiting the number of SNPs that can be considered in practice. In addition, the requirement for pre-selection of variants based, for example, on independent-SNP p-values, limits the possibility for discovering novel loci whose effects are significant only when interactions are taken into account. To deal more directly with genome-wide data in an unbiased fashion, we describe a second mode of DDA application where \(\sim 10^6\) SNPs are grouped into (\(\sim 10^3\) or more) subsets based on phenotype-independent criteria (e.g., biological pathways), the collective inference is applied to each subset, and their relative importance in disease association is evaluated based on cross-validation prediction score. This protocol significantly expands the power of SNP-based pathway analysis beyond existing enrichment-based methodologies [39] by allowing for the incorporation of collective interaction effects within each pathway.
By applying our algorithm to the disease data of age-related macular degeneration (AMD) [40, 41], we demonstrate that the enhanced ability to account for interaction effects can translate into novel biological findings. AMD is a progressive degenerative disease affecting individuals in old age, characterized by the accumulation of deposits (drusen) in the retina or choroidal neovascularization, which can lead to vision loss. Genome-wide studies of AMD constitute one of the earliest and most successful applications of GWAS [2, 3, 40–43], where strong associations were detected and later validated at variants including those near complement pathway genes CFH, C2, and C3, in addition to the ARMS2/HTRA1 loci. However, direct molecular mechanisms tying these associated loci into disease pathogenesis remain unclear. Using AMD case-control data, we first analyzed detailed interaction patterns within SNPs selected based on independent-SNP association strengths. These interactions were enriched in loci epigenetically active in tissues including adipocytes, mesenchymal stem cells, and the liver. We then applied DDA to pathway-based groups formed from genome-wide data and found high association with pathways involved in phospholipid synthesis, cellular stress response, apoptosis, and complement activation.
Our algorithm (DDA) extends the discriminant analysis to discrete genotype data. Its overall steps are summarized in Fig. 1 and described in Methods (see Additional file 1: Text S1 for more in-depth details).
Discrete discriminant analysis algorithm. Empirical characteristics (allele frequency and correlation) of case (y=1) and control (y=0) data are used to fit their genotype distributions with parameters \(h_{i}^{(y)}\) and \(J_{ij}^{(y)}\), each roughly determining the position and width of the distribution. Disease risk parameters are given by their differences, whereas the likelihood ratio (LR) statistic q is obtained from the difference between the sum of two contributions and the corresponding pooled value
Independent SNPs
When interactions between the loci are turned off, DDA can be solved analytically (see Additional file 1: Text S1), whereas logistic regression is always numerical. We first compared this special case of DDA and logistic regression without interaction and found the odds ratio and power to be identical for all conditions for binary models (Additional file 2: Figure S1), which implies that the effect of marginal genotype distributions ignored in logistic regression is negligible for a single non-interacting locus. However, since DDA can yield p-values of each locus without numerical optimization, it leads to considerable computational speed-up even when interactions are not included.
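A quick way to see this equivalence for a single binary (dominant-coded) locus — a sketch based on our reading of the setup, not reproduced from Additional file 1 — is as follows. If \(f^{(1)}\) and \(f^{(0)}\) denote the risk-genotype frequencies in cases and controls, fitting a Bernoulli distribution to each group gives natural parameters \(h^{(y)}=\log[f^{(y)}/(1-f^{(y)})]\), and the disease risk parameter is their difference
\[
\beta \;=\; h^{(1)}-h^{(0)} \;=\; \log\frac{f^{(1)}\,(1-f^{(0)})}{f^{(0)}\,(1-f^{(1)})},
\]
which is exactly the sample log odds ratio returned by single-locus logistic regression.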
We compared pairwise tests, logistic regression, and DDA in similarly well-controlled but high-dimensional conditions in which collective effects can play important roles. In the following, unless otherwise specified, logistic regression refers to the collective inference including both marginal and interaction terms and a penalizer (see Methods). We used simulated data that faithfully reflected prescribed genotype distributions of given sample sizes. A genotype distribution for a binary model with m loci has m single-SNP and m(m−1)/2 interaction parameters. We specified these parameters randomly from normal distributions, generated genotype samples of size n based on these distributions, performed pairwise marginal inference, logistic regression, and DDA, and compared inferred parameters with the true values (Additional file 3: Figure S2 shows examples for the dominant and genotypic models). Our simulated data include linkage disequilibrium (LD): if one approximates the genotype distribution as a continuous-variable normal distribution, the correlation within a single group (case or control) would be proportional to the matrix inverse of interaction parameters specified, and the overall \(r^2\) would correspond to the sample size-weighted average over case and control groups.
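The sampling step can be sketched in code. This is an illustrative reconstruction rather than the authors' implementation: we assume the binary (dominant-coded) genotype distribution takes the exponential-family form \(P(x)\propto\exp(\sum_i h_i x_i+\sum_{i<j}J_{ij}x_ix_j)\) with \(x_i\in\{0,1\}\), consistent with the parameter counts quoted above; the parameter means and spreads follow the values listed in the Fig. 2 caption, and all function and variable names are ours.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def sample_genotypes(h, J, n, rng):
    """Draw n binary genotype vectors from P(x) proportional to
    exp(h.x + sum_{i<j} J_ij x_i x_j), by exhaustively enumerating
    all 2^m states (feasible only for small m)."""
    m = len(h)
    states = np.array(list(itertools.product([0, 1], repeat=m)))  # (2^m, m)
    energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    prob = np.exp(energy - energy.max())
    prob /= prob.sum()
    idx = rng.choice(len(states), size=n, p=prob)
    return states[idx]

m = 10
h, J = {}, {}
for y in (0, 1):  # y=0: controls, y=1: cases
    # single-SNP parameters: mean -1 (controls) or -0.3 (cases), s.d. 0.2
    h[y] = rng.normal(-1.0 + 0.7 * y, 0.2, size=m)
    # interaction parameters: mean 0 (controls) or 0.1 (cases), s.d. 0.2
    upper = np.triu(rng.normal(0.1 * y, 0.2, size=(m, m)), k=1)
    J[y] = upper + upper.T  # symmetric, zero diagonal

controls = sample_genotypes(h[0], J[0], n=1000, rng=rng)
cases = sample_genotypes(h[1], J[1], n=1000, rng=rng)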
For a given sample, we first determined the optimal penalizer (λ) value by cross-validation. With increasing λ, the mean square error and the area under the curve (AUC) of the receiver operating characteristics generally showed a minimum and maximum, respectively, at \(\lambda^*\) (Additional file 4: Figure S3). The value of \(\lambda^*\) decreased as the sample size n increased. This trend implies that for small n, an aggressive regularization is needed (large \(\lambda^*\)) to minimize overfitting, while for larger n, more interaction terms are inferred with sufficient significance, leading to smaller \(\lambda^*\).
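Schematically, the regularization search is a grid search over λ under k-fold cross-validation. The sketch below assumes hypothetical fit_dda and predict_risk routines standing in for the actual DDA implementation, which is not shown in the text.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

def select_penalizer(X, y, lambdas, fit_dda, predict_risk, n_folds=5, seed=0):
    """Pick the penalizer value that maximizes the cross-validated AUC."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    mean_auc = []
    for lam in lambdas:
        fold_auc = []
        for train, test in kf.split(X):
            model = fit_dda(X[train], y[train], lam)    # hypothetical trainer
            scores = predict_risk(model, X[test])       # hypothetical risk scorer
            fold_auc.append(roc_auc_score(y[test], scores))
        mean_auc.append(np.mean(fold_auc))
    best = int(np.argmax(mean_auc))
    return lambdas[best], mean_auc[best]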
Accuracy of inference. We compared results of pairwise marginal tests using PLINK [44], logistic regression, and three versions of DDA [exact enumeration (EE), pseudo-likelihood (PL), and mean field (MF); see Methods] in two different simulation settings. In the first case (Fig. 2 a–b), we used m=10 SNPs with parameters chosen such that all sites had relatively large and strong single-SNP and interaction effects. We used the dominant model in these simulations in order to facilitate sampling, which requires exhaustive enumeration of all genotypes (\(2^m\) and \(3^m\) for binary and genotypic models, respectively). Pairwise tests derive odds ratios and p-values for each SNP pair separately, and the logarithm of the interaction odds ratio corresponds to the interaction parameter. The mean square error of pairwise inference decreased slightly from sample size \(n=10^2\) to \(10^3\) but showed little improvement for larger sample sizes. Outcomes from logistic regression and DDA exhibited AUC values (maximized with respect to λ for each sample) increasing with n for \(n\leq 10^3\). The AUCs from logistic regression were slightly lower than DDA for \(n\leq 10^3\) and comparable in larger sample sizes. The mean square error of logistic regression and DDA steadily decreased (approximately linearly in log-log scale) over all n ranges examined. Error levels of DDA from three methods were similar to one another. When compared to logistic regression, the accuracy of DDA was comparable at larger n and better at smaller n. However, the logistic regression results showed much larger variances (with respect to different realizations of samples) for small n than DDA.
Inference accuracy, sensitivity, and specificity of pairwise and collective inference on simulated data. a–b The mean square error and AUC versus sample sizes using pairwise test (PW), logistic regression (LR), and the three methods of DDA (MF, PL, and EE). Simulated genotypes were generated for 10 SNPs with parameters \({\bar h}_{y}=(-1,-0.3)\), \({\bar J}=(0,0.1)\), \(\sigma_h=\sigma_J=0.2\) (see Methods). c–d Analogous results for 20 SNPs with \({\bar h}_{y}=(-1,-1+\Delta h)\), \({\bar J}=(0,\Delta J)\), and \(\sigma_h=\sigma_J=0.2\). We set \(\Delta h=0.7\), \(\Delta J=0.5\) for the first 4 SNPs and their interactions and \(\Delta h=\Delta J=0\) otherwise. e–f Sensitivity and specificity of disease-associated interaction pairs. Simulated data were generated with parameters \({\bar h}=(-1,-1)\), \({\bar J}=(0.01,0.01)\), \(\sigma_h=0.1\), \(\sigma_J=0.05\) for m=10 SNPs, except the interaction between the first two SNPs, for which we set \({\bar J}=(0.01,0.11)\). Interaction p-values for all pairs were calculated either by PW or by regularization to determine \(\lambda^*\) followed by the construction of null distribution under \(\lambda^*\) (Additional file 5: Figure S4) for LR, PL, and EE. The distribution of p-values for the true causal interaction pair and those of non-causal pairs (geometric mean) are shown in e and f, respectively. The dominant model was used in all cases
In the second setting (Fig. 2 c–d), we enlarged the system to m=20 SNPs (EE omitted due to computational costs), and set the parameters such that only 4 SNPs contributed to disease association. The AUC values were smaller in comparison to the first setting for smaller n, which reflects a weaker overall strength of disease association, but otherwise showed similar trends. The accuracy of pairwise tests, logistic regression, and DDA also exhibited trends similar to simulations in Fig. 2 a: both logistic regression and DDA significantly outperformed pairwise tests, while DDA consistently showed slightly better accuracy than logistic regression. The variances in mean square error were smaller than in the first setting, which suggests that these variances correlate with the number of causal SNPs. For \(n=10^2\), logistic regression results had a variance much larger than DDA.
These simulations demonstrate that when both marginal single-SNP and interaction effects are included, the accuracy of the collective inference approach is much higher than that of pairwise tests. The DDA generally provides a further edge for smaller sample sizes in comparison to logistic regression. The comparison of two different simulation settings in Fig. 2 a–b and c–d demonstrates that this trend is not significantly altered with increases in the number of SNPs and the fraction of causal SNPs among them. The accuracy (inferred model parameters versus true values) remained at similar levels when the underlying model was changed from dominant to genotypic models (Additional file 3: Figure S2).
Statistical tests. We then examined the performance of collective inference methods in evaluating the statistical significance of individual interactions. In GWAS, the significance of SNPs and their interactions are tested either by contingency table or likelihood ratio tests [15]. The presence of the penalizer λ complicates this approach in collective inference. In their study of lasso-penalized logistic regression collective inference, Wu et al. [45] adopted the approach of first selecting a set of significant SNPs of a certain size using regularization, and then calculating p-values of interactions with the penalizer turned off. A disadvantage of this approach is that the information on the relative importance of each interaction reflected in the penalized model is lost when λ is set to zero.
The (analytic) likelihood ratio tests rely on the asymptotic distribution of the likelihood ratio statistic \(q\) (\(q_i\) and \(q_{ij}\) for a site \(i\) and pair \(i,j\)): as n→∞, the distribution of q under the null hypothesis approaches the \(\chi^2\)-distribution with degrees of freedom (d.f.) given by the change in the number of parameters between the null and alternative hypotheses [46]. In practice, however, with a finite n, the deviation from this asymptotic limit can be significant. We found the null distribution to show increasingly large deviations from the asymptotic limit as λ increased. We therefore based our statistical tests in the presence of a non-vanishing penalizer on empirical null distributions of \(q_{ij}\) constructed by permutation resampling (Additional file 5: Figure S4).
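A minimal sketch of that resampling scheme, with a hypothetical interaction_q(X, y, lam) routine returning the likelihood-ratio statistic of a given SNP pair under penalizer lam:

import numpy as np

def permutation_pvalue(X, y, lam, interaction_q, n_perm=1000, seed=0):
    """Empirical p-value of an interaction statistic q_ij: permute the
    case/control labels, refit, and compare to the observed value."""
    rng = np.random.default_rng(seed)
    q_obs = interaction_q(X, y, lam)
    q_null = np.empty(n_perm)
    for b in range(n_perm):
        y_perm = rng.permutation(y)          # break the genotype-phenotype link
        q_null[b] = interaction_q(X, y_perm, lam)
    # add-one correction keeps the estimate away from exactly zero
    return (1 + np.sum(q_null >= q_obs)) / (1 + n_perm)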
We then sought to evaluate the sensitivity of causal interaction identification within different inference methods using simulations. We created simulated data of m=10 SNPs, this time with random parameters with mean values that were identical for both case and control groups, except a single SNP pair for which the case group had stronger interactions than the control (Fig. 2 e–f). For collective inference (logistic regression and DDA), we first performed cross-validation for each sample to determine optimal \(\lambda^*\), and then constructed the empirical null distribution under this \(\lambda^*\) (Additional file 5: Figure S4) to calculate p-values of the causal and non-causal interaction pairs (Fig. 2 e and f, respectively). We selected the simulation parameters such that the SNPs were fairly strongly coupled via LD in each of case and control groups, but these interaction effects were expected to cancel out except for the single causal pair. The pairwise test results remained insensitive to this causal interaction for all sample sizes. The logistic regression, DDA PL, and EE methods detected this interaction fairly robustly for \(n\geq 10^3\). In all cases, DDA had higher sensitivity than logistic regression. The p-values for the non-causal interactions mostly followed the expected null distribution qualitatively. However, the distributions from logistic regression were significantly broader (higher false positive rates; lower specificity) than DDA for all sample sizes.
In applications to actual disease data, where one aims to identify statistically significant pairs of interactions based on p-values, the enhanced sensitivity and specificity of detection shown in Fig. 2 e are of more interest than the parameter prediction accuracy in Fig. 2 a, c. Our results suggest that the sensitivity of detecting disease-associated interactions among mostly non-causal SNP pairs from noisy data is significantly higher with collective inference than with pairwise tests. The DDA inference furthermore allowed for consistently higher sensitivity and specificity than logistic regression.
Independent-SNP. We first analyzed AMD data under the independent-SNP assumption and compared the logistic regression and DDA results. Analytic expressions are available for the odds ratio and p-values for DDA [Eqs. (S24), (S27) and (S28) in Additional file 1: Text S1]. Genome-wide p-values derived from DDA (Additional file 6: Figure S5) were consistent with published results [41]. The p-values from independent-SNP logistic regression using PLINK and those from DDA for three main associated genomic regions (CFH, C2/CFB, and ARMS2 gene groups in chromosomes 1, 6, and 10, respectively) were the same for most loci except those with strongest associations, for which p-values from DDA were slightly smaller (Additional file 7: Figure S6). Differences in p-values were larger with the genotypic model than with the dominant model (Additional file 1: Table S1). Thus, when interactions are not included, DDA gives nearly the same results as the logistic regression inference. This feature allows one to directly interrogate how collective interactions modify association.
Collective inference for 20 SNPs. We then examined the performance of DDA on AMD data under the first mode of application, where detailed interaction patterns within a relatively small set of pre-selected SNPs are inferred. We selected m=20 AMD SNPs using the variable selection program GWASelect [47] (see Additional file 1: Table S1), which covered most regions previously identified as strongly associated with AMD risks (Additional file 6: Figure S5 and Additional file 7: Figure S6). The independent-SNP p-values of this set are shown in Fig. 3 c. For the majority of loci (18 out of 20), the risk allele was the major allele, and odds ratios were smaller than 1. As stated above, under this condition of no interaction, the p-values from logistic regression (from PLINK) and those from DDA (analytic) were nearly the same.
Collective inference applied to pre-selected m=20 AMD SNPs. a–b Regularization via cross-validation. Dominant (DOM) and genotypic (GEN) models were used with logistic regression (LR), DDA PL (a), and MF (b). Independent-SNP limit is reached with λ→∞ and ε→0. Because of the pre-selection of SNPs using phenotype information, the prediction score (pseudo-AUC; pAUC) derived from 5-fold cross-validation over-estimates the true AUC. The maxima in pAUC correspond to optimal regularization. c–d Single-SNP and interaction p-values of the optimized (genotypic) model under PL (λ=0.01). The p-values from independent-SNP and pairwise tests are also shown for comparison in c and d, respectively. See Additional file 1: Table S1 for the independent-SNP results and SNP list
We applied collective inference (including interactions) to this 20-SNP set using logistic regression and DDA. We first performed cross-validation to determine the optimal penalizer \(\lambda^*\) (Fig. 3 a–b). It should be noted that because the pre-selection of SNPs in this case used phenotype information of the entire sample, the cross-validation prediction score is not an unbiased estimate of the true AUC and is generally higher in value [37]. In our application, this procedure merely allows for the identification of optimal regularization levels for collective inference. We denote the prediction score derived after such pre-selection using sample phenotypes as pseudo-AUC (pAUC) in order to distinguish it from the true estimate of AUC. Unbiased estimates of AUC, if desired, can be obtained, for example, by performing independent-SNP p-value-based filtering based only on training sets of each cross-validation sub-division [37] (see below) or by using selection criteria unrelated to sample phenotypes (e.g., pathways).
As observed with the simulated data, when DDA was used, the pAUC values with varying regularization levels showed a maximum (Fig. 3 a–b), which corresponds to the optimal degree of interaction effects to be included in genotype distributions. For PL (Fig. 3 a), the maxima were located at \(\lambda^*=0.05\) (pAUC=0.751) and \(\lambda^*=0.02\) (pAUC=0.765) for the dominant and genotypic models, respectively. The slightly higher pAUC suggests that for this data set, the genotypic model provides a better fit. For DDA, we verified that in the large-\(\lambda\) limit, the inference outcome approaches the independent-SNP result. The difference between this limit and the maximum pAUC is a measure of the relative importance of interactions in disease association.
We also applied logistic regression-based collective inference to the same data set. Cross-validation yielded similar differences between the dominant and genotypic models (Fig. 3 a). However, pAUC did not exhibit a pronounced maximum, instead approaching a large-\(\lambda\) limit nearly monotonically. This limit was slightly higher than the corresponding DDA maximum, which is consistent with the expectation that logistic regression can yield better prediction performance because it maximizes the prediction score [Eq. (8) in Methods]. On the other hand, the absence of a pronounced maximum in pAUC as a function of λ indicates a loss in sensitivity in logistic regression for the detection of interaction effects. This conclusion can be rationalized in terms of the algorithmic difference between logistic regression and DDA: in DDA, case and control group genotype distributions are fit separately in terms of their respective single-SNP and interaction parameters, whereas logistic regression optimizes the prediction score with respect to the net differences in those parameters. With more flexibility to account for differential population structures, DDA has higher sensitivity to detect interaction effects.
Figure 3 b shows the analogous model optimization under the DDA MF method, where regularization parameter values ε=0 and ε=1 correspond to the independent-SNP and full interaction limits, respectively. The maximal pAUC values from MF were similar but slightly lower in comparison to PL. On the other hand, MF is more computationally efficient than PL and allows for larger SNP sets.
We used the optimal penalizer value to determine the parameters and p-values for this 20-SNP data set under the genotypic model using DDA PL. The p-values, representing the statistical significance of the individual terms included in the model, consist of two classes: single-SNP and interactions. The single-SNP p-values represent the significance of marginal single-site effects (associated with parameters \(h_{i}^{(y)}\) or \(\beta_i\)). They are analogous to the independent-SNP p-values of each SNP, but having been inferred in the presence of interactions, they also indirectly reflect interaction effects. Strictly speaking, the presence of penalizer λ also affects the distribution of the likelihood ratio statistics \(q_i\) and it is desirable to estimate their p-values using permutation resampling. However, since we did not impose penalty to single-SNP terms directly [Eq. (4) in Methods], we expect this effect to be moderate. In practice, these p-values tend to be much smaller than 1 for SNPs selected based on independent-SNP properties, and they are difficult to estimate using resampling. We chose to use the asymptotic \(\chi^2\)-distribution to estimate these single-SNP p-values under collective inference. These are therefore expected to be upper-limits (i.e., actual p-values are expected to be smaller) based on the observation that the penalizer tends to suppress null distributions to the lower \(q\)-region.
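For reference, the asymptotic upper-bound p-value is a one-line computation from the \(\chi^2\) survival function; this is a sketch, and the degrees of freedom per locus are our assumption about the parameter counts of the two genetic models:

from scipy.stats import chi2

def single_snp_pvalue(q_i, df=1):
    """Asymptotic likelihood-ratio p-value for a single-SNP statistic q_i.
    df=1 assumes one parameter per locus (dominant/binary coding);
    df=2 would correspond to the genotypic model."""
    return chi2.sf(q_i, df)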
Figure 3 c shows the collective inference single-SNP p-values of the m=20 AMD data from DDA. They largely retained the relative strengths of significance in independent-SNP results, while in absolute magnitudes the \(-\log_{10}p\) values were mostly reduced. This feature indicates that in comparison to the independent-SNP model where single-SNP parameters also contain average effects of interactions, when collectively inferred, these terms make reduced contributions to association because they represent single-site effects only. We also performed analogous calculations using logistic regression, adopting the lowest value of penalizer λ=0.1 at which pAUC reached the limiting value in Fig. 3 a. The single-site p-values showed larger deviations from the independent-SNP results (Fig. 3 c), with values for many sites becoming insignificant.
We then performed resamplings of this data set to obtain interaction p-values (Fig. 3 d), which indicated strong interactions within the CFH gene group in chromosome 1, C2 in chromosome 6, and ARMS2/HTRA1 group in chromosome 10. In contrast, pairwise test p-values detected strong interactions only within the last group of loci, between rs6585827/rs2280141 and rs2014307/rs2248799 (\(p\sim 10^{-9}\)). These short-range interactions suggested by DDA tended to be correlated with LD: because the net disease association is related to the difference in SNP correlation patterns between case and control groups (Fig. 1), we interpret these short-range interactions as the consequence of differential LD-patterns in case and control individuals. The absence of such signals in pairwise test p-values suggests that such differences get averaged out when represented only by marginal SNP-pair distributions.
We sought to further test whether such increased sensitivity toward interactions was achieved with adequate control for false discovery rates. The selection of m=20 SNPs in Fig. 3 c–d comprises SNPs with highest association. For comparison, we made a random selection of m=20 SNPs from the genome-wide data and performed DDA as well as pairwise test. The quantile-quantile plot (Fig. 4 a) showed that p-values for interactions between these random SNPs were distributed close to the null distribution. In particular, DDA and pairwise outcomes were similar, except for one pair for which DDA predicted \(p\sim 10^{-3}\). In contrast, the distribution of interaction p-values for the highly associated m=20 SNPs (Fig. 3) from DDA deviated significantly from the null (Fig. 4 b), whereas the pairwise test outcome remained similar to random SNPs except for ∼10 SNP pairs. These results suggest that DDA achieves increased sensitivity for interactions while adequately controlling for false positive rates.
Quantile-quantile plot of interaction p-values. a Distributions for interactions among m=20 SNPs randomly selected from genome-wide data. b Distributions for interactions among 20 SNPs with high association (Fig. 3 d) and a larger set (m=96; Fig. 6). See Additional file 8: Figure S7 for pairwise (PW) results for m=96
Large-scale collective inference. The analysis described above used a fixed number of pre-selected SNPs to perform cross-validation and inferences. We next enlarged the size of SNP sets by controlling it using an independent-SNP p-value cutoff \(p_c\); with the cutoff specified, in each cross-validation run, the training set was used to obtain independent-SNP p-values, filter SNPs, and perform inferences, and the test set was used for prediction. The prediction score derived under this protocol is an unbiased estimate of the true AUC. The AUC values (Fig. 5 a) showed qualitative trends similar to Fig. 3 a; the AUC maximum relative to the non-interacting limit was more pronounced, while its height showed a moderate decrease with increasing SNP numbers: inclusion of less-significant SNPs diluted the overall effects. We chose a SNP-set size of m=96 (\(p_c=10^{-5}\) without cross-validation) for interaction pattern analysis. The interaction p-value computation for m SNPs entails a multiplication of the single-inference computing time by m(m−1)/2 (the number of pairs) times the necessary random resampling size (\(\sim 10^3\) or more) for p-value estimation, thus limiting model sizes that can be considered to \(m\sim 100\).
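A sketch of this unbiased protocol, with hypothetical independent_pvalues, fit_dda, and predict_risk routines standing in for steps the text describes only verbally:

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def unbiased_auc(X, y, p_cut, lam, independent_pvalues, fit_dda, predict_risk,
                 n_folds=5, seed=0):
    """Cross-validated AUC with SNP filtering repeated inside each training fold,
    so that the held-out fold never influences the selection."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    aucs = []
    for train, test in skf.split(X, y):
        pvals = independent_pvalues(X[train], y[train])   # per-SNP p-values
        keep = np.where(pvals < p_cut)[0]                 # filter on training data only
        model = fit_dda(X[train][:, keep], y[train], lam)
        scores = predict_risk(model, X[test][:, keep])
        aucs.append(roc_auc_score(y[test], scores))
    return float(np.mean(aucs))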
Collective inference with SNP selection based on independent-SNP p-values. a AUC with varying penalizer λ under PL inference, where the independent-SNP p-value cutoff \(p_c\) indicated was used to filter SNPs from the full genome-wide set in each cross-validation run. The mean SNP number \({\bar m}\) is the average over 5 runs. b AUC optimized over regularization (MF) with varying model sizes controlled by \(p_c\). SNP selections were made from the full genome-wide data (\(r^2<1.0\)) and subsets generated by pruning based on LD thresholds indicated. Note that the maximum AUC position shifts to lower \({\bar m}\) with increasing degree of pruning (fewer SNPs with LD needed to account for association) and that an optimal level of pruning (\(r^2<0.5\)) exists for highest performance. Vertical lines are 95 % C.I.
The resulting single-site and interaction p-values are shown in Fig. 6, where the bottom panel compares the independent-SNP/collective single-site p-values. As in Fig. 3 d for the smaller SNP set, the collective single-site significance of strongly associated SNPs was generally reduced in strength compared to the non-interacting case. However, rs932275 in chromosome 10 had a comparable p-value (strongest within the collective inference results) and many SNPs originally with weaker associations in the non-interacting case became stronger under the collective setting. The interaction landscape shown on the top panel of Fig. 6 retained the qualitative trend of the results from the smaller data set in Fig. 3 d, but with much more detail; we confirmed the presence of local interactions within the CFH, C2, and ARMS2 gene groups. In addition, we observed numerous 'long-range' interactions that were absent in the m=20 results: rs2284664 in CFH interacts with rs511294 and rs544167 in C2, and there were additional distributed interactions between the CFH loci and the ARMS2 gene group. The distribution of interaction p-values was similar to that for m=20 in the quantile-quantile plot (Fig. 4 b). The pairwise test p-value landscape for the same data (Additional file 8: Figure S7) was also qualitatively similar to the m=20 case (Fig. 3 d, bottom).
Interaction and single-site p-values for m=96 AMD SNPs. The bars (bottom) and the heat map (top) show the single-SNP and interaction p-values, respectively. Hollow and solid bars represent the independent-SNP and collective inference p-values respectively. DDA PL was used for collective inference
The overall picture of disease-associated epistatic interactions from our small- and larger-scale collective inferences in Figs. 3 d and 6 provides a plausible explanation of the recent observation by Hemani et al. [31], who detected many epistatic SNP pairs leading to differential gene expression within the human genome by exhaustive searches using pairwise tests. Wood et al. [48] then observed that many of these effects could be explained by single untyped third SNPs in LD with the interacting pairs. Here, we observed from both simulations and AMD SNP analyses that pairwise tests (Fig. 3 d, bottom and Additional file 8: Figure S7) detect only a subset of statistically significant interactions, and that a portion of the interaction patterns uncovered by collective inference parallels that of the underlying LD (Fig. 6 and Additional file 9: Figure S8): SNPs with strong overall correlations often also have differential LD between case and control groups. It is thus understandable that interacting pairs of SNPs identified in marginal pairwise tests turn out to be in LD with other hidden variants. Our results in Fig. 6, however, demonstrate a strong presence of interactions beyond both the population LD (Additional file 9: Figure S8) and the reach of pairwise tests (Additional file 8: Figure S7).
Disease-associated epigenomes
Most of the disease-associated loci from GWAS reside in non-coding regions, presumably exerting their effects through modifications of gene regulatory action [49]. The overlap of epigenetic signatures with disease-associated SNPs and their interactions can provide important biological insights into the underlying disease mechanism. We sought to identify tissue- and cell type-specific interaction patterns associated with AMD phenotypes using the SNP interaction map derived above (Fig. 6). We used the recently published NIH Roadmap Epigenomics Consortium data [50] to first calculate the enrichment p-values of the transcribed/enhancer states among the selected 96 AMD SNPs within each of the 111 reference epigenomes (Fig. 7). We combined the actively transcribed and enhancer states of the 15 chromatin state annotations of the reference epigenomes to define the 'active' state. For each AMD-associated SNP, we identified the group of all known SNPs that were strongly correlated (high LD) with it, obtained the distribution of epigenetic states over these SNPs within a given epigenome, and tested the over-representation of the active state against the background distribution. This enlarged search over the set of all known SNPs in LD with each locus included in the inference is crucial for addressing the incomplete coverage of the genotype data.
Enrichment p-values of active epigenetic states among AMD-associated SNPs. The set of 96 SNPs in Fig. 6 was used. The reference epigenome labels are as defined in Fig. 2 of Ref. [50]. ES, embryonic stem cell; ES-deriv., ES cell-derived; HSC, hematopoietic stem cell; iPS, induced pluripotent stem cell; MSC, mesenchymal stem cell; Neurosph., neurosphere
The most prominent feature in Fig. 7 is the strong enrichment of active epigenetic states among AMD SNPs within the liver tissue (E066), followed by ovary (E097). Two additional epigenomes, embryonic stem cell-derived neuronal progenitor cultured cells (E009) and bone marrow-derived mesenchymal stem cells (MSCs; E026), were also notable, but their enrichment p-values on the single-SNP level were comparable to other tissues.
We then hypothesized that the statistically significant interactions between SNPs identified in Fig. 6 would provide additional information on the cell-type specificity of epigenetically active states and their interactions. We selected the SNP pairs with interaction p<10^-3 from Fig. 6 and, assuming that each group of LD-correlated SNPs came from a specific cell type (111×112/2 possible pairs, including self-interactions), tested the enrichment of active state pairs. The derived p-values then reflect the statistical significance of the epigenetic modifications that could enable disease-associated interactions between two cell types.
The resulting landscape shown in Fig. 8 exhibited strong interactions involving the liver tissue, consistent with the single-SNP result in Fig. 7. However, clear patterns not seen on the single-SNP level also emerged: bone marrow-derived MSCs (E026) and adipose nuclei (E063) also featured prominently in the interaction landscape, and the bulk of interactions involving the liver tissue was accounted for by those with MSCs, adipocytes, and muscle tissues. Embryonic stem cell H1-derived MSCs (E006) showed interactions that were weaker but similarly distributed compared to their bone marrow-derived counterparts. The ovary followed patterns similar to the liver but was less pronounced than in Fig. 7. In addition, lung (E096) and placenta (E091) showed some interactions with adipocytes and MSCs. All of these tissues strongly interacted with themselves: interacting SNPs in these tissues are highly likely to be epigenetically active.
Enrichment p-values of active epigenetic state pairs among AMD-associated SNP interactions. The SNP pairs with interaction p-value <10^-3 in Fig. 6 were tested for enrichment within each pair of reference epigenomes
SNP selection from genome-wide data
Collective inference without interaction p-value computation can be applied to SNP sets of sizes up to m ~ 10^4. The prediction AUC as the main outcome for each SNP selection then allows for the comparison of the relative strengths of disease association of different SNP groups. In such applications, the performance of DDA could depend on the (phenotype-independent) processing applied to genome-wide data in selecting SNP sets for analysis. We evaluated this second mode of DDA application and assessed how its performance varied depending on the degree of LD within SNP sets. We generated different subsets of genome-wide SNPs by pruning, removing variants whose LD with neighboring SNPs exceeded a threshold (Fig. 5 b). The AUC obtained with SNPs selected from the full genome-wide data peaked around a mean number of SNPs \(\bar {m}\sim 50\), as also suggested by Fig. 5 a. With LD-based pruning, the position of the maximum shifted to values as low as \(\bar {m}\sim 10\), which suggests that about 10 SNPs in linkage equilibrium account for the bulk of the association. The height of the AUC first increased when the data were pruned with r^2<0.5 compared to the full set and then decreased with r^2<0.3, indicating that there is an optimal level of pruning beyond which key causal SNPs begin to be removed. Overall, the relatively small model-size range where collective inference performance is maximized in Fig. 5 b suggests that AMD is only weakly polygenic, with dominant contributions from a few loci. Analyses of the type demonstrated in Fig. 5 b thus allow one to assess the polygenicity of the phenotype under consideration and choose suitable strategies for SNP selection.
Pathway-based SNP selection
An obvious criterion for grouping genome-wide SNPs into subsets for collective inference-based evaluation is proximity to gene sets belonging to known biological pathways. From the full AMD genome-wide data, we generated 1,732 SNP sets corresponding to 1,732 Reactome pathways [51], each containing from 20 to ~10^4 SNPs. We then applied DDA MF inference and derived optimized AUC values for each pathway (Fig. 9 a). The majority of the pathways had association levels [median AUC: 0.514±0.021 (95 % C.I.)] close to the null value of 0.5. The differences in collective inference AUC relative to independent-SNP results ranged from 0 to ~0.06. Reflecting the dominance of the complement-related genetic loci, Regulation of complement cascade (AUC: 0.688±0.018, m=448) and Complement cascade (AUC: 0.684±0.018, m=869) showed top association levels clearly separated from the rest. These AUC values were similar to the levels observed in p_c-based sampling in Fig. 5 b adjusted to their corresponding SNP numbers. We used a selection of pathways with low AUC values to sample their null distributions and connect AUC (as the statistic for each pathway) to p-values representing the overall association of each SNP set (Fig. 9 b). The −log_10 p values increased monotonically from 0 as AUC increased from 0.5, and became highly linear for AUC>0.52 (r^2=0.94). We used this linear regression formula to translate AUC into p-values (see the sketch below). Applying a Bonferroni correction for the 1,732 pathways to the nominal discovery rate indicated a threshold of AUC>0.567, which led to the 13 pathways above the threshold listed in Table 1.
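The AUC-to-p-value translation can be illustrated with a minimal sketch. This is our own reading of the calibration step, not the GeDI code (which is written in C++); the permutation-based null and the AUC>0.52 cutoff for the linear fit come from the text, while all function and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def auc_to_pvalue_map(null_aucs, observed_aucs, auc_min=0.52):
    """Build a linear map from AUC to -log10 p, calibrated on pathways whose
    permutation-null AUC distributions are available.
    null_aucs[pw]:     array of AUCs obtained after phenotype reshuffling
    observed_aucs[pw]: the pathway's actual cross-validated AUC
    Returns a callable that translates any AUC into an approximate p-value."""
    xs, ys = [], []
    for pw, auc in observed_aucs.items():
        null = np.asarray(null_aucs[pw])
        p = (1 + np.sum(null >= auc)) / (len(null) + 1)  # empirical p-value
        if auc > auc_min:                                # linear regime only
            xs.append([auc])
            ys.append(-np.log10(p))
    fit = LinearRegression().fit(np.array(xs), np.array(ys))
    return lambda auc: 10 ** (-fit.predict([[auc]])[0])
```

With such a map in hand, each pathway's optimized AUC can be converted to an approximate p-value and compared against the Bonferroni-corrected threshold.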
AMD association of pathways under collective inference. a AUC score versus pathway size (number of SNPs in each pathway). Symbols show collective and independent-SNP inference AUCs under 5-fold cross validation. Vertical lines are 95 % C.I. The horizontal line represents the Bonferroni-corrected nominal discovery threshold based on the p-value estimates. b Regression of AUC versus pathway p-values. The latter were obtained for a selection of pathways via phenotype-label reshuffling using AUC as the statistic. Dotted line is the linear fit for AUC>0.52. c–e Pathways with association strength AUC>0.55, grouped according to the top hierarchical classes they belong to. We excluded pathways in the Disease class. Dendrograms below the bars show their hierarchical relationships. Abl, Abl tyrosine kinase; activ., activation; assoc., association/associated; biosynth., biosynthesis; C3, complement component 3; C5, complement component 5; CCT, chaperonin-containing T-complex polypeptide 1; cell., cellular; ChREBP, carbohydrate response element-binding protein; ECM, extracellular matrix; EHMT2, euchromatic histone-lysine-methyltransferase 2; elong., elongation; ER, endoplasmic reticulum; ERCC6, excision repair cross-complementation group 6; expr., expression; form., formation; HSF, heat shock factor; IFN, interferon; indep., independent; Lys, lysine; MDA5, melanoma differentiation-associated gene 5; med., mediated; metab., metabolism; MYD88, myeloid differentiation primary response 88; NFKB, nuclear factor- κ B; PA, phosphatidic acid; PKMT, protein lysine methyltransferase; pol, polymerase; proc., processing; reg., regulate/regulation/regulated; RIG-I, retinoic acid-inducible gene-I; RIP, receptor-interaction protein; Robo, roundabout; SASP, senescence-associated secretory phenotype; sig., signaling; SMAC, second mitochondrial activator of caspases; synth., synthesis; sys., system; thru, through; TP53, tumor protein p53; transc., transcription/transcriptional; TriC, T-complex polypeptide 1 ring complex; ZBP1, Z-DNA-binding protein-1
Table 1 Pathways highly associated with AMD in collective inference
AMD disease mechanism
We sought to gain insights into molecular-level disease mechanisms of AMD by examining the pathways in Table 1, along with additional pathways near the threshold, and grouping them into hierarchical classes (Fig. 9 c–e). There are two primary types of AMD, the 'dry' and 'wet' forms [52]. Dry AMD occurs more commonly in earlier stages, where retinal pigment epithelium (RPE) cells supporting photoreceptors in the retina undergo degeneration, often accompanied by the accumulation of drusen in the area between the RPE and Bruch's membrane, which separates the retina from the choroid. Wet AMD is characterized by invasive choroidal neovascularization of the retina. In both forms, cellular stress factors exacerbated by aging are the primary causes leading to RPE dysfunction. The normal functioning of photoreceptors, which are bombarded by light and highly susceptible to oxidative damage, relies on continual recycling of their spent outer segments via phagocytosis by RPE cells [53]. Peroxidation products of phospholipids, the key ingredients of photoreceptors, often end up as major components of drusen and serve as damage-associated molecular patterns activating innate immune receptors, including toll-like receptors (TLRs) as well as complement factor H (CFH) [54]. The latter has been shown to bind malondialdehyde (MDA) derived from docosahexaenoic acid [55]. In addition, phosphatidylserine is the main 'eat-me' signal toward phagocytes when displayed on the extracellular membrane of dying cells [56]. Consistent with these aspects of AMD pathogenesis, we found associations with Phospholipid metabolism pathways (Fig. 9 e), and in particular Synthesis of phosphatidic acid, which suggests that disease risk is affected by genetic variants modifying the ability to supply these phospholipids.
Phospholipid synthesis requires the supply of fatty acids, synthesized in the liver. The causal link to this process of lipogenesis is suggested in Fig. 9 e by the pathway Carbohydrate response element-binding protein (ChREBP) activates metabolic gene expression. ChREBP is a key transcription factor in hepatocytes, responding to glucose uptake and activating genes involved in lipogenesis [57, 58]. Fatty acids thus synthesized are transported into the bloodstream in the form of very low density lipoproteins and stored as triacylglycerols in adipocytes [57]. The suggested AMD risk association of the fatty acid supply from the liver and adipocytes for phospholipid synthesis provides an explanation of our earlier finding in Fig. 8 that SNP interactions associated with AMD are epigenetically active in the liver and adipocytes.
Oxidative stress is often accompanied by disruptions to protein folding, which can lead to protein aggregation and autophagy when refolding by chaperones proves inadequate [59]. We found association in Protein folding pathways (Fig. 9 e), and in particular Chaperonin-mediated protein folding, which primarily targets actins and tubulins, the major ingredients of cytoskeletal networks [60]. This observation suggests that RPE stress from protein misfolding affects AMD risk via its effect on phagocytic function, which relies heavily on actin filament and microtubule remodeling dynamics [56]. Also closely related is the Regulation of heat shock factor (HSF) 1-mediated heat shock response in Fig. 9 e, which describes the transcriptional activation of heat shock protein (chaperone) expression under stress. The latter pathway belongs to the Cellular responses to stress group, in which we also found association with Senescence-associated secretory phenotype (SASP). Senescence is one of the possible fates of cells under stress, where normal cellular growth is arrested in preparation for clearance by phagocytes [61]. SASP refers to a complex suite of inflammatory cytokines, chemokines, and growth factors facilitating the process, and we infer that senescence in RPE cells under oxidative stress plays a part in AMD.
Apoptosis, or controlled cell death [62], is another major stressed-cell response, and was also represented in our results (Fig. 9 e). A large body of direct evidence points to apoptosis as one of the main routes of RPE degeneration in AMD [63]. Induction of apoptosis upon stress is dictated by the action of the master regulator p53, and it was recently shown that aging increases the activity of p53 in RPE cells and the likelihood of apoptotic cell death [64]. Consistent with this evidence, we found association with pathways in the Transcriptional regulation by TP53 group (Fig. 9 d). In particular, Regulation of TP53 activity through methylation was among the top pathways in our association analysis (Table 1), suggesting that p53 modification by methylation and the closely related histone modifications [Protein lysine methyltransferases (PKMTs) methylate histone lysine in Fig. 9 e] play important roles in RPE apoptosis regulation. In the intrinsic apoptotic pathway induced by oxidative stress, cytochrome c is released from mitochondria into the cytosol, binding and activating caspases, the main proteases central to apoptotic action. We found association in pathways involving 'inhibitor of apoptosis' (IAP) and its negative regulator 'second mitochondrial activator of caspases' (SMAC) [65], which suggests that disruption of regulatory mechanisms preventing apoptosis in RPE cells may play a role in AMD.
RPE degeneration and drusen formation can lead to inflammation, the main innate immune response involving a wide range of pattern-recognition receptors (PRRs) and complement activation [66]. Most known PRRs were represented in our results (Fig. 9 c), including TLRs, advanced glycosylation endproduct receptors, RIG-I-like receptors, and cytosolic DNA sensors [66]. Complement factors constitute the soluble counterparts of PRRs, and Regulation of complement cascade showed the highest association due to the contribution of CFH, along with Activation of C3 and C5. CFH normally protects self tissues from complement-induced destruction by binding to a range of surface signals including glycoproteins and C-reactive protein. In addition to the binding of CFH to MDA noted above, it was also reported that CFH inhibits lipoprotein binding to Bruch's membrane [67]. The breach of Bruch's membrane and the intrusion of blood vessels into the retina are the hallmarks of wet AMD [52], consistent with our finding of high association in Degradation of extracellular matrix and Common pathway of fibrin clot formation in Fig. 9 e.
In this paper, we first described and tested discriminant analysis-based algorithms that infer collective disease-association effects in intermediate-sized SNP sets. Using simulated and actual disease data, we provided evidence suggesting that collective inference methods outperform pairwise tests and logistic regression in incorporating interaction effects into disease association.
We demonstrated two different modes of applying DDA in the analysis of actual disease data: one in which detailed interaction patterns within a relatively small set of SNPs are inferred, and the other in which genome-wide SNP data are grouped into different subsets of SNPs and collective inference is used to compute the degree of disease association of each subset. In particular, our AMD results in Fig. 9, based on pathway-based SNP selection, show that the latter protocol allows us to identify pathways encompassing a large fraction of the disease mechanisms previously established by non-genetic means. Based on the current study, we propose the following approach for dealing with novel GWAS case-control data using DDA: first, characterize the degree of polygenicity of the data set with independent-SNP and collective inferences employing p_c-based SNP selection, and optimize the density of SNPs using LD-based pruning (Fig. 5). Second, classify SNPs into pathway-based groups, score them using collective inference, and seek insights into the underlying disease mechanisms by analyzing the results within the pathway hierarchy.
Genotype distribution of case-control groups
Our algorithm is best understood in comparison to classical continuous-variable discriminant analysis. Table 2 outlines the similarities and differences of the classical (continuous-variable) and discrete (our adaptation) versions of discriminant analysis. In the continuous-variable case, one aims to classify data into two known groups, case and control (denoted by y=1 and y=0, respectively), based on a predictor a, a vector of dimension m. Classification (and inference) is performed by maximizing the likelihood of model parameters given the training data of known class identities, i.e., the joint probability
$$ \text{Pr}(\mathbf{a},y)=\text{Pr}(\mathbf{a}|y)p_{y}, \qquad (1) $$
Table 2 Comparison of continuous-variable/discrete discriminant analyses
where p_y is the marginal probability of group membership. One then finds the class-specific mean vectors μ_y and covariance matrices Σ_y. These quantities define the predictor distribution within each class, which is assumed to be multivariate normal: a ∼ N(μ_y, Σ_y), or
$$ \text{Pr}(\mathbf{a}|y) \propto \exp\left({\mu_{y}^{t}} \Sigma_{y}^{-1} \mathbf{a}-\mathbf{a}^{t}\Sigma_{y}^{-1}\mathbf{a}/2\right), \qquad (2) $$
where the superscript t denotes transpose. In this formulation, the maximum likelihood condition for Eq. (1) is equivalent to maximizing the discriminant function δ_y(a), given by the exponent of Eq. (2) plus ln p_y, which is used to classify an arbitrary data point a as case if δ_1(a)>δ_0(a) and as control otherwise [37]. It is also useful to note that if a is a scalar, this framework reduces to a t-test for the null hypothesis μ_0=μ_1.
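Written out for the Gaussian model, the standard quadratic-discriminant function reads as below; the third and fourth terms come from the class-dependent normalization hidden in the proportionality of Eq. (2), and terms common to both classes may be dropped when comparing δ_1 and δ_0:

$$ \delta_{y}(\mathbf{a}) = {\mu_{y}^{t}}\Sigma_{y}^{-1}\mathbf{a} - \frac{1}{2}\mathbf{a}^{t}\Sigma_{y}^{-1}\mathbf{a} - \frac{1}{2}{\mu_{y}^{t}}\Sigma_{y}^{-1}\mu_{y} - \frac{1}{2}\ln|\Sigma_{y}| + \ln p_{y}. $$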
For our application, the predictor a is the collection of genotypes, which is discrete. The description here applies to the binary model (dominant or recessive), such that a_i = 0, 1 represent aa and Aa/AA for SNP i in the dominant model, and aa/Aa and AA in the recessive model (see Additional file 1: Text S1 for the genotypic model). Figure 1 illustrates the general spirit of the DDA algorithm. Training data of known phenotypes can be used to obtain allele frequency vectors and covariance matrices with elements \({\hat f}_{i}^{(y)}\) and \({\hat f}_{ij}^{(y)}\), respectively, where i,j=1,⋯,m are SNP indices. Note that these quantities are the exact counterparts of the continuous-variable mean μ_y and covariance Σ_y. We model the genotype distribution within each class in a form analogous to Eq. (2) [68]:
$$ \text{Pr}(\mathbf{a}|y)\propto\exp\left(\sum_{i} h_{i}^{(y)}a_{i}+\sum_{i<j} J_{ij}^{(y)}a_{i} a_{j}\right), \qquad (3) $$
where \(h_{i}^{(y)}\) and \(J_{ij}^{(y)}\) are model parameters of the distribution that we refer to as single-SNP and interaction parameters, respectively. Comparing Eqs. (2) and (3), one can observe that these parameters \(\psi _{y}\equiv \{h_{i}^{(y)},J_{ij}^{(y)}\}\), each multiplying predictor a in linear and quadratic fashion, respectively, are expected to be related to frequencies \({\hat f}_{i}^{(y)}\) and \({\hat f}_{ij}^{(y)}\). In contrast to the continuous case, however, the exact form of this relationship is unknown due to the discrete nature of a, except for the special case of independent SNPs (see Section S1.5 in Additional file 1: Text S1; we refer to the special case of no interaction as the independent-SNP case).
The inference of this relationship is the major computational component of DDA, and is based on maximizing the log-likelihood L_y per individual,
$$ L_{y}/n_{y}=\frac{1}{n_{y}}\sum_{k\in y}\ln \text{Pr}\left(\mathbf{a}^{k}|y\right)-\frac{\lambda}{2}\sum_{i<j} \left(J_{ij}^{(y)}\right)^{2}, \qquad (4) $$
where the first summation is over the genotype data of the n_y individuals in group y, and λ denotes a regularization parameter (penalizer) that controls the contribution of the SNP interactions relative to the independent-SNP case. The independent-SNP limit is reached as λ→∞, where the optimal values of \(J_{ij}^{(y)}\) all become zero due to the high penalty. We opted for an l_2-penalizer rather than l_1; the latter generally exerts stronger effects [69], but l_2 is analytic and facilitates non-linear optimization. In Additional file 1: Text S1, we show implementations of three possible ways to perform this inference, of varying computational cost and reliability: exact enumeration (EE), mean field (MF) [68], and pseudo-likelihood (PL) [70, 71] methods. EE is essentially exact, but requires enumeration of all possible genotypes and can only be used for m ~ 25 or less. We used this option to assess the reliability of the other two methods. Both MF and PL are approximate and can be used for m ~ 10^3 or larger. The MF option involves matrix inversion and requires a different regularization: instead of λ, we used ε ∈ [0,1], where ε=0 corresponds to the independent-SNP limit with no interaction. The PL method uses λ>0 and has the advantage that it can be easily parallelized. We implemented parallel computation of PL using the message passing interface protocol.
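The pseudo-likelihood option can be illustrated with a minimal numerical sketch. GeDI itself is a C++ implementation; the Python snippet below is our own simplified stand-in for binary-coded genotypes, maximizing the l_2-penalized pseudo-likelihood of the genotype model of Eq. (3) within one phenotype group by plain gradient ascent. All names are ours, and the fixed-step optimizer is only a placeholder for a proper nonlinear optimizer.

```python
import numpy as np

def fit_pseudolikelihood(A, lam=0.1, lr=0.05, n_iter=2000):
    """Fit single-SNP fields h_i and couplings J_ij of the exponential
    genotype model by l2-penalized pseudo-likelihood.
    A: (n individuals x m SNPs) binary-coded genotype matrix of one
    phenotype group. Illustrative sketch only."""
    n, m = A.shape
    h = np.zeros(m)
    J = np.zeros((m, m))                 # symmetric couplings, zero diagonal
    for _ in range(n_iter):
        field = h + A @ J                # conditional field of each SNP
        p = 1.0 / (1.0 + np.exp(-field)) # Pr(a_i = 1 | remaining SNPs)
        resid = A - p
        grad_h = resid.mean(axis=0)
        grad_J = (A.T @ resid + resid.T @ A) / n - lam * J
        np.fill_diagonal(grad_J, 0.0)
        h += lr * grad_h
        J += lr * grad_J
        J = 0.5 * (J + J.T)              # keep the couplings symmetric
    return h, J
```

In the actual analysis this inference is run separately on the case, control, and pooled samples, and GeDI parallelizes the per-SNP conditional likelihoods over MPI; none of that is reproduced here.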
Once genotype distributions of case, control, and pooled (whole sample) groups have been inferred, Bayes' theorem allows one to obtain disease risk:
$$ {}\text{Pr}(y\,=\,1|\mathbf{a})\,=\, \frac{\text{Pr}\left(\mathbf{a}|y=1\right)p_{1}}{\sum_{y^{\prime}}\text{Pr}\left(\mathbf{a}|y^{\prime}\right)p_{y^{\prime}}} \,=\,\frac{1}{1+e^{-\alpha-\sum_{i}\beta_{i} a_{i}-\sum_{i<j} \gamma_{ij} a_{i} a_{j}}}. \qquad (5) $$
One can show that (Additional file 1: Text S1)
$$ \begin{array}{rcl} \beta_{i}&=& h_{i}^{(1)}-h_{i}^{(0)}, \\ \gamma_{ij}&=& J_{ij}^{(1)}-J_{ij}^{(0)}. \end{array} \qquad (6) $$
In other words, the single-SNP and interaction disease-risk parameters θ = {β_i, γ_ij} are given by the differences in genotype distribution parameters between case and control groups. In addition, the overall likelihood ratio statistic is given by the sum of the L_y minus the pooled value (see Fig. 1). The parameter α is related to the disease prevalence p_1 = 1 − p_0 (see Additional file 1: Text S1).
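Given the fitted case and control parameters, the disease risk of Eq. (5) follows directly. A minimal sketch (names are ours; α is assumed to be supplied, since it absorbs the prevalence and normalization terms discussed in Additional file 1: Text S1):

```python
import numpy as np

def disease_risk(a, h1, J1, h0, J0, alpha):
    """Pr(y = 1 | a) from case (h1, J1) and control (h0, J0) genotype-model
    parameters, using beta = h1 - h0 and gamma = J1 - J0 as in Eq. (6).
    J1 and J0 are symmetric with zero diagonal, so the sum over pairs i < j
    equals one half of the full quadratic form."""
    beta = h1 - h0
    gamma = J1 - J0
    score = alpha + beta @ a + 0.5 * a @ gamma @ a
    return 1.0 / (1.0 + np.exp(-score))
```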
We used cross-validation to determine the penalizer λ in Eq. (4). We first formed five training and test sets (of 4:1 size ratio) from the data and used each training set to select variants with independent-SNP p-values below a cutoff. We calculated the AUC for different λ values and selected the optimal choice. Even when the actual AUC values obtained are not high enough for reasonable risk prediction, this procedure still allows us to identify optimal ranges of the model size (the role of interactions).
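A sketch of this cross-validation loop, including the re-selection of SNPs inside every training fold so that the test AUC remains unbiased, might look as follows. The `fit` and `predict` arguments stand for the DDA inference and the risk computation (e.g., the two sketches above); the +1 smoothing of the contingency table and all names are our own choices, not part of GeDI.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2_contingency

def single_snp_pvalues(A, y):
    """Independent-SNP association p-values from 2x2 (binary-coded) tables."""
    pvals = []
    for a in A.T:
        table = np.array([[np.sum((a == g) & (y == c)) for g in (0, 1)]
                          for c in (0, 1)]) + 1     # +1 avoids empty cells
        pvals.append(chi2_contingency(table)[1])
    return np.array(pvals)

def cv_auc(A, y, fit, predict, lambdas, p_cut=1e-5, n_splits=5, seed=0):
    """Mean test AUC for each penalizer value, with SNP filtering re-done
    inside every training fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = {lam: [] for lam in lambdas}
    for tr, te in skf.split(A, y):
        keep = single_snp_pvalues(A[tr], y[tr]) < p_cut
        for lam in lambdas:
            model = fit(A[tr][:, keep], y[tr], lam)
            score = predict(model, A[te][:, keep])
            aucs[lam].append(roc_auc_score(y[te], score))
    return {lam: np.mean(v) for lam, v in aucs.items()}
```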
We used the disease prevalence p_1 = n_1/n for cross-validation because the training and test sets have the same sampling biases. In predicting risks with a prospective sample, however, this probability would have to be adjusted to known population phenotype frequencies. We implemented a software feature such that the disease prevalence, which affects the disease-risk parameter α, can be re-specified when a parameter set inferred from case-control data is applied to an independent test set.
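One natural form of this adjustment follows from Eq. (5): α enters the risk additively together with the log prior odds, so replacing the sample case fraction p_1 = n_1/n by an assumed population prevalence π only shifts the intercept,

$$ \alpha \;\rightarrow\; \alpha - \ln\frac{p_{1}}{1-p_{1}} + \ln\frac{\pi}{1-\pi}, $$

with all β_i and γ_ij left unchanged. Whether GeDI applies exactly this correction is not spelled out here, so the formula should be read as a hedged sketch of the standard case-control intercept adjustment.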
In contrast to the DDA outlined above, logistic regression uses
$$ {}\text{Pr}(\mathbf{a},\!y)\,=\,\text{Pr}(y|\mathbf{a})\text{Pr}(\mathbf{a})\!\simeq\! \text{Pr}(y|\mathbf{a}) \!\equiv\! \frac{1}{1\,+\,e^{-\alpha-\sum_{i}\!\beta_{i} a_{i}-\!\sum_{i<j} \gamma_{ij} a_{i} a_{j}}} \qquad (7) $$
instead of Eq. (1), assuming that the marginal genotype distribution is uniform. The parameters α, β_i, and γ_ij are then directly determined by maximizing the likelihood of Pr(y|a) only:
$$ L/n=\frac{1}{n}\sum_{k}\ln \text{Pr}\left(y^{k}|\mathbf{a}^{k}\right)-\frac{\lambda}{2}\sum_{i<j} \gamma_{ij}^{2}, \qquad (8) $$
where n = n_0 + n_1, with respect to α, β_i, and γ_ij. In general, these disease-risk parameter values from logistic regression differ from those obtained via the genotype distribution parameters ψ_y in DDA with Eq. (6); the latter contain the effects of the nonuniform marginal genotype distribution Pr(a) of the sample, while logistic regression does not. For comparison, we also implemented this multivariate logistic regression with an l_2-penalizer. Logistic regression can yield a higher prediction AUC than DDA because maximizing Eq. (8) optimizes prediction directly. However, the quantity maximized in DDA, given by Eq. (4) (or in fact the sum L_0 + L_1; see Additional file 1: Text S1), rather than the prediction score, is the true likelihood.
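For reference, an l_2-penalized logistic regression with all pairwise interaction terms can be assembled from standard tools. This generic sketch is not the authors' implementation, and unlike Eq. (8) the off-the-shelf penalty below shrinks the main-effect terms β_i as well as the interactions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

def logistic_with_interactions(A, y, lam=1.0):
    """l2-penalized logistic regression on binary-coded genotypes plus all
    pairwise products a_i * a_j, mirroring the quadratic exponent of the
    logistic model above. sklearn's C is the inverse penalty strength."""
    expand = PolynomialFeatures(degree=2, interaction_only=True,
                                include_bias=False)
    X = expand.fit_transform(A)          # columns: a_i, then a_i * a_j (i<j)
    model = LogisticRegression(penalty="l2", C=1.0 / lam, max_iter=5000)
    model.fit(X, y)
    return model, expand
```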
Significance tests
We performed likelihood ratio tests to assess the statistical significance of the overall collective inference and of individual loci/interactions. The p-values derived are conditional on the number of loci m and the penalizer value λ (determined from cross-validation). The statistic was obtained by adding the log-likelihood values of the case and control groups and subtracting that of the pooled group (see Additional file 1: Text S1). We tested the significance of the single-locus contribution to disease association from site i by calculating the likelihoods \(L_{y}[\!h_{i}^{(y)}=h_{i}]\), where the single-SNP parameters of site i were prescribed as their pooled values (restricted model), and evaluating the differences against the likelihood of the full model (all parameters optimized without restriction). Analogous tests were performed for SNP pair i,j with \(L_{y}[\!J_{ij}^{(y)}=J_{ij}]\). The restricted model corresponds to the null hypothesis that the parameters belonging to a particular locus or interaction in the case and control groups are the same as those in the pooled group. For interaction statistics, we constructed the empirical null distribution of the statistic under a given λ by permutation resampling: the phenotype labels of a given sample were randomly reshuffled, with the penalizer λ fixed, to obtain realizations of the likelihood ratio statistic. This sampling was repeated multiple times (up to ~10^4) to construct empirical cumulative distribution functions of the statistic for each site or SNP pair, from which the p-values were estimated. For the single-locus contribution statistics, we calculated p-values using the asymptotic χ^2-distribution.
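The permutation scheme can be summarized in a short sketch; `statistic` is a placeholder for the likelihood-ratio computation at fixed λ (per SNP pair for interactions), and the add-one correction and the degrees of freedom shown are our own illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def permutation_pvalue(A, y, statistic, n_perm=10000, seed=0):
    """Empirical p-value of an observed likelihood-ratio statistic obtained
    by reshuffling phenotype labels (the genotypes are left untouched)."""
    rng = np.random.default_rng(seed)
    observed = statistic(A, y)
    null = np.array([statistic(A, rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

def asymptotic_pvalue(lr_statistic, df):
    """Single-locus p-value from the asymptotic chi-square distribution;
    df depends on the genotype coding and on which parameters are restricted."""
    return chi2.sf(lr_statistic, df)
```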
In testing the inference algorithms using simulated data, samples of case-control genotypes containing m=10 or 20 loci and n = 2n_0 individuals were generated under randomly assigned parameters {ψ_0, ψ_1}. The model parameters were chosen with \(h_{i}^{(y)}\sim N({\bar h}_{y},{\sigma _{h}^{2}})\) and \(J_{ij}^{(y)}\sim N({\bar J}_{y},{\sigma _{J}^{2}})\). To generate simulated data from these distributions, we evaluated summations over all (2^m for binary models) possible genotypes to calculate their probability distribution using Eq. (3). For a given sample, cross-validation was first performed with λ values ranging from 10^-4 to 10^2 to determine the optimal value λ* that maximizes the AUC. The parameters θ were then derived using the full sample and λ*. For DDA, the single-SNP and interaction parameters for the case and control groups (ψ_0 and ψ_1, respectively) were obtained and used to derive θ. In contrast, in logistic regression, θ was obtained directly. The mean square error was calculated for the inferred θ relative to the true values of all distinct single-SNP and interaction parameters. We performed the different inferences using a common set of data for each realization of parameters. The mean square error was then averaged over 100 realizations of parameters. We also compared pairwise test results using the PLINK [44] epistasis module. We used PLINK version 1.9 with logistic regression and either dominant or genotypic coding options.
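Generating the simulated genotypes by exact enumeration can be sketched as below; h and J are assumed to have been drawn from the normal distributions stated above, and the function names are ours.

```python
import itertools
import numpy as np

def simulate_genotypes(h, J, n, seed=0):
    """Draw n binary-coded genotype vectors from the exponential model of
    Eq. (3) by enumerating all 2^m configurations (feasible only for the
    small m = 10 or 20 used in the simulations). J must be symmetric with
    zero diagonal, so 0.5 * a J a equals the sum over pairs i < j."""
    rng = np.random.default_rng(seed)
    m = len(h)
    configs = np.array(list(itertools.product([0, 1], repeat=m)))
    energy = configs @ h + 0.5 * np.einsum('ki,ij,kj->k', configs, J, configs)
    prob = np.exp(energy - energy.max())     # subtract max for stability
    prob /= prob.sum()
    idx = rng.choice(len(configs), size=n, p=prob)
    return configs[idx]
```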
Age-related macular degeneration data
We obtained AMD case-control genotype data from the National Eye Institute Study of AMD Database (dbGaP accession number phs000684.v1.p1), which consisted of 2,159 case and 1,150 control individuals. Autosomal SNPs were filtered with the criteria of MAF >0.01, Hardy-Weinberg equilibrium p-value >10^-6, and genotyping rate >0.05 [41] to yield 324,713 SNPs in total. Independent-SNP DDA analyses were performed using Eqs. (S24) and (S27) in Additional file 1: Text S1 and compared with logistic regression results from PLINK [44] in addition to our numerical logistic regression implementation. For each SNP, the minor allele was identified from the allele frequencies over the pooled sample.
Unless otherwise stated, inferences on the AMD data used the genotypic model. In all cases, λ (or ε for MF) was first determined from cross-validation (the optimal value λ* with the maximum AUC) and later used consistently for parameter estimation and p-value calculation. Interaction p-values were obtained for a given SNP selection and λ* by resampling.
To generate SNP sets with reduced LD for p_c-based selection (Fig. 5 b), we used the pairwise LD-based pruning option of PLINK with a window size of 50 kb, a shift of 5 SNPs, and r^2 thresholds of 0.5 and 0.3. The two threshold choices yielded SNP sets with m=180,354 and 117,976, respectively.
In performing the epigenetic state enrichment analysis, for each SNP considered we used the 1000 Genomes reference haplotypes [72] of European individuals to build the list of correlated SNPs (LD r^2>0.5). We then used the NIH Roadmap reference epigenome chromatin state annotations [50] to construct the distribution of epigenetic states within each group of LD-correlated SNPs. We used the hidden Markov model-based 15-state annotations of the 111 reference epigenomes, selecting 8 states [active transcription start site (TSS), flanking active TSS, transcription at gene 5' and 3', strong transcription, weak transcription, zinc finger-associated, genic enhancers, and enhancers] to define the 'active' state, which contained the transcribed, promoter, and enhancer regions. For each SNP location, we calculated the fraction of LD-correlated locations in active states within each cell type. This fraction was summed over the list of associated SNPs (m=96 in Fig. 7) to give the effective number of active states observed, which was compared with the background active state frequency estimated over the whole genome for each epigenome. The over-representation p-values were calculated by the binomial test.
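A minimal sketch of the single-epigenome enrichment test; rounding the summed active-state fractions to an integer count, and the function and argument names, are our simplifications.

```python
from scipy.stats import binomtest

def active_state_enrichment(active_fractions, background_freq):
    """Over-representation p-value of active chromatin states among the LD
    groups of associated SNPs within one reference epigenome.
    active_fractions[i]: fraction of positions in LD with SNP i that fall in
    an active state; background_freq: genome-wide active fraction for the
    same epigenome."""
    n_snps = len(active_fractions)
    observed = round(sum(active_fractions))
    return binomtest(observed, n=n_snps, p=background_freq,
                     alternative="greater").pvalue
```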
Analogous calculations were performed for the SNP interaction enrichment analysis. We first selected the statistically significant SNP pairs with p<10^-3 from Fig. 6 (310 in total). We then considered each unique combination of two epigenomes (including self-interactions) and, for each SNP pair selected, obtained the fraction of active state-active state pairs, with the two groups of LD-correlated SNPs assumed to belong to the two epigenomes. This fraction was summed over the list of SNP pairs and compared with the expected background pair number (the product of the background active state frequencies of the two tissues). Over-representation p-values were calculated using the binomial test.
Pathway-based SNP sets were generated for human pathways in the Reactome database [51]. We compiled the list of all genes, assigned SNPs in the AMD genome-wide set within 50 kb of the coding region to each gene, and collected the SNPs corresponding to the gene set belonging to each pathway. Only pathways with 20 or more SNPs were considered (1,732 in total). For most pathways, with m < 6×10^3, DDA independent-SNP and collective (ε-optimized MF) inferences were applied to each SNP set without further filtering to derive 5-fold cross-validation AUCs. For larger pathways, p_c-based filtering was incorporated into cross-validation to reduce the model sizes.
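Assembling the pathway-based SNP sets amounts to simple interval bookkeeping; the sketch below uses a naive double loop and illustrative data layouts (an interval index would be used in practice for genome-wide data).

```python
def pathway_snp_sets(snp_pos, gene_coords, pathways, window=50_000):
    """Group genome-wide SNPs into pathway-based sets.
    snp_pos:     {snp_id: (chrom, pos)}
    gene_coords: {gene: (chrom, start, end)} coding-region coordinates
    pathways:    {pathway: set of gene names}
    A SNP is assigned to a gene if it lies within `window` bp of the coding
    region; pathway sets are the union over the pathway's genes."""
    gene_snps = {g: set() for g in gene_coords}
    for snp, (chrom, pos) in snp_pos.items():
        for gene, (g_chrom, start, end) in gene_coords.items():
            if chrom == g_chrom and start - window <= pos <= end + window:
                gene_snps[gene].add(snp)
    return {pw: set().union(*(gene_snps.get(g, set()) for g in genes))
            for pw, genes in pathways.items()}
```

Only the resulting sets with 20 or more SNPs would then be passed to the collective inference.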
Abbreviations
AUC: area under the curve
CFH: complement factor H
ChREBP: carbohydrate response element-binding protein
DDA: discrete discriminant analysis
EE: exact enumeration
GeDI: genotype distribution-based inference
MDA: malondialdehyde
MF: mean field
MSC: mesenchymal stem cell
pAUC: pseudo-AUC
PL: pseudo-likelihood
PRR: pattern-recognition receptor
RPE: retinal pigment epithelium
SASP: senescence-associated secretory phenotype
TLR: toll-like receptor
TSS: transcription start site
Kim YA, Wuchty S, Przytycka TM. Identifying causal genes and dysregulated pathways in complex diseases. PLoS Comput Biol. 2011; 7:e1001095.
Haines JL, Hauser MA, Schmidt S, Scott WK, Olson LM, Gallins P, et al. Complement factor H variant increases the risk of age-related macular degeneration. Science. 2005; 308:419–21.
Edwards AO, Ritter R, Abel KJ, Manning A, Panhuysen C, Farrer LA. Complement factor H polymorphism and age-related macular degeneration. Science. 2005; 308:421–4.
Wang WY, Barratt BJ, Clayton DG, Todd JA. Genome-wide association studies: theoretical and practical concerns. Nat Rev Genet. 2005; 6:109–18.
The Wellcome Trust Case Control Consortium. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007; 447:661–78.
Gold B, Kirchhoff T, Stefanov S, Lautenberger J, Viale A, Garber J, et al.Genome-wide association study provides evidence for a breast cancer risk locus at 6q22.33. Proc Natl Acad Sci USA. 2008; 105:4340–5.
Bergeron-Sawitzke J, Gold B, Olsh A, Schlotterbeck S, Lemon K, Visvanathan K, et al. Multilocus analysis of age-related macular degeneration. Eur J Hum Genet. 2009; 17:1190–9.
Manolio TA. Bringing genome-wide association findings into clinical use. Nat Rev Genet. 2013; 14:549–58.
Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2014; 42:D1001–1006.
Ripke S, Neale BM, Corvin A, Walters JT, Farh KH, Holmans PA, et al. Biological insights from 108 schizophrenia-associated genetic loci. Nature. 2014; 511:421–7.
Locke AE, Kahali B, Berndt SI, Justice AE, Pers TH, Day FR, et al. Genetic studies of body mass index yield new insights for obesity biology. Nature. 2015; 518:197–206.
Morris AP, Zeggini E. An evaluation of statistical approaches to rare variant analysis in genetic association studies. Genet Epidemiol. 2010; 34:188–93.
Neale BM, Rivas MA, Voight BF, Altshuler D, Devlin B, Orho-Melander M, et al. Testing for an unusual distribution of rare variants. PLoS Genet. 2011; 7:e1001322.
Wu MC, Lee S, Cai T, Li Y, Boehnke M, Lin X. Rare-variant association testing for sequencing data with the sequence kernel association test. Am J Hum Genet. 2011; 89:82–93.
Clarke GM, Anderson CA, Pettersson FH, Cardon LR, Morris AP, Zondervan KT. Basic statistical analysis in genetic case-control studies. Nat Protoc. 2011; 6:121–133.
Cordell HJ. Detecting gene-gene interactions that underlie human diseases. Nat Rev Genet. 2009; 10:392–404.
Wei WH, Hemani G, Haley CS. Detecting epistasis in human complex traits. Nat Rev Genet. 2014; 15:722–33.
Yu K, Xu J, Rao DC, Province M. Using tree-based recursive partitioning methods to group haplotypes for increased power in association studies. Ann Hum Genet. 2005; 69:577–89.
Ritchie MD, Hahn LW, Roodi N, Bailey LR, Dupont WD, Parl FF, et al. Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer. Am J Hum Genet. 2001; 69:138–47.
Moore JH, Gilbert JC, Tsai CT, Chiang FT, Holden T, Barney N, et al. A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility. J Theor Biol. 2006; 241:252–61.
Roshan U, Chikkagoudar S, Wei Z, Wang K, Hakonarson H. Ranking causal variants and associated regions in genome-wide association studies by the support vector machine and random forest. Nucleic Acids Res. 2011; 39:e62.
Pan Q, Hu T, Moore JH. Epistasis, complexity, and multifactor dimensionality reduction. Methods Mol Biol. 2013; 1019:465–77.
Zhang Q, Long Q, Ott J. AprioriGWAS, a new pattern mining strategy for detecting genetic variants associated with disease through interaction effects. PLoS Comput Biol. 2014; 10:e1003627.
Fan R, Zhong M, Wang S, Zhang Y, Andrew A, Karagas M, et al. Entropy-based information gain approaches to detect and to characterize gene-gene and gene-environment interactions/correlations of complex diseases. Genet Epidemiol. 2011; 35:706–21.
Gauderman WJ, Murcray C, Gilliland F, Conti DV. Testing association between disease and multiple SNPs in a candidate gene. Genet Epidemiol. 2007; 31:383–95.
Gao Q, He Y, Yuan Z, Zhao J, Zhang B, Xue F. Gene- or region-based association study via kernel principal component analysis. BMC Genet. 2011; 12:75.
Cai M, Dai H, Qiu Y, Zhao Y, Zhang R, Chu M, et al. SNP set association analysis for genome-wide association studies. PLoS ONE. 2013; 8:e62495.
Herold C, Steffens M, Brockschmidt FF, Baur MP, Becker T. INTERSNP: genome-wide interaction analysis guided by a priori information. Bioinformatics. 2009; 25:3275–81.
Wu X, Dong H, Luo L, Zhu Y, Peng G, Reveille JD, et al. A novel statistic for genome-wide interaction analysis. PLoS Genet. 2010; 6:e1001131.
Ueki M, Cordell HJ. Improved statistics for genome-wide interaction analysis. PLoS Genet. 2012; 8:e1002625.
Hemani G, Shakhbazov K, Westra HJ, Esko T, Henders AK, McRae AF, et al. Detection and replication of epistasis influencing transcription in humans. Nature. 2014; 508:249–53.
Park MY, Hastie T. Penalized logistic regression for detecting gene interactions. Biostatistics. 2008; 9:30–50.
Wu TT, Chen YF, Hastie T, Sobel E, Lange K. Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics. 2009; 25:714–21.
Ayers KL, Cordell HJ. SNP selection in genome-wide and candidate gene studies via penalized logistic regression. Genet Epidemiol. 2010; 34:879–91.
Efron B. The efficiency of logistic regression compared to normal discriminant analysis. J Am Stat Assoc. 1975; 70:892–8.
Press SJ, Wilson S. Choosing between logistic regression and discriminant analysis. J Am Statist Assoc. 1978; 73:699–705.
Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediction, 2nd ed. New York: Springer; 2011. http://statweb.stanford.edu/~tibs/ElemStatLearn.
Jombart T, Devillard S, Balloux F. Discriminant analysis of principal components: a new method for the analysis of genetically structured populations. BMC Genet. 2010; 11:94.
Wang K, Li M, Hakonarson H. Analysing biological pathways in genome-wide association studies. Nat Rev Genet. 2010; 11:843–54.
Swaroop A, Chew EY, Rickman CB, Abecasis GR. Unraveling a multifactorial late-onset disease: from genetic susceptibility to disease mechanisms for age-related macular degeneration. Annu Rev Genomics Hum Genet. 2009; 10:19–43.
Chen W, Stambolian D, Edwards AO, Branham KE, Othman M, Jakobsdottir J, et al. Genetic variants near TIMP3 and high-density lipoprotein-associated loci influence susceptibility to age-related macular degeneration. Proc Natl Acad Sci USA. 2010; 107:7401–6.
Gold B, Merriam JE, Zernant J, Hancox LS, Taiber AJ, Gehrs K, et al. Variation in factor B (BF) and complement component 2 (C2) genes is associated with age-related macular degeneration. Nat Genet. 2006; 38:458–62.
Fritsche LG, Chen W, Schu M, Yaspan BL, Yu Y, Thorleifsson G, et al. Seven new loci associated with age-related macular degeneration. Nat Genet. 2013; 45:433–9.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007; 81:559–75.
Wilks SS. The large-sample distribution of the likelihood ratio for testing composite hypotheses. Ann Math Stat. 1938; 9:60–2.
He Q, Lin DY. A variable selection method for genome-wide association studies. Bioinformatics. 2011; 27:1–8.
Wood AR, Tuke MA, Nalls MA, Hernandez DG, Bandinelli S, Singleton AB, et al. Another explanation for apparent epistasis. Nature. 2014; 514:E3–5.
Ward LD, Kellis M. Interpreting noncoding genetic variation in complex traits and human disease. Nat Biotechnol. 2012; 30:1095–106.
Kundaje A, Meuleman W, Ernst J, Bilenky M, Yen A, Heravi-Moussavi A, et al. Integrative analysis of 111 reference human epigenomes. Nature. 2015; 518:317–30.
Fabregat A, Sidiropoulos K, Garapati P, Gillespie M, Hausmann K, Haw R, et al. The Reactome pathway Knowledgebase. Nucleic Acids Res. 2016; 44:D481–487. http://www.reactome.org.
Ambati J, Fowler BJ. Mechanisms of age-related macular degeneration. Neuron. 2012; 75:26–39.
Sun M, Finnemann SC, Febbraio M, Shan L, Annangudi SP, Podrez EA, et al. Light-induced oxidation of photoreceptor outer segment phospholipids generates ligands for CD36-mediated phagocytosis by retinal pigment epithelium: a potential mechanism for modulating outer segment phagocytosis under oxidant stress conditions. J Biol Chem. 2006; 281:4222–30.
Weismann D, Binder CJ. The innate immune response to products of phospholipid peroxidation. Biochim Biophys Acta. 2012; 1818:2465–75.
Weismann D, Hartvigsen K, Lauer N, Bennett KL, Scholl HP, Charbel Issa P, et al. Complement factor H binds malondialdehyde epitopes and protects from oxidative stress. Nature. 2011; 478:76–81.
Flannagan RS, Jaumouille V, Grinstein S. The cell biology of phagocytosis. Annu Rev Pathol. 2012; 7:61–98.
Postic C, Dentin R, Denechaud PD, Girard J. ChREBP, a transcriptional regulator of glucose and lipid metabolism. Annu Rev Nutr. 2007; 27:179–92.
Wang Y, Viscarra J, Kim SJ, Sul HS. Transcriptional regulation of hepatic lipogenesis. Nat Rev Mol Cell Biol. 2015; 16:678–89.
Ferrington DA, Sinha D, Kaarniranta K. Defects in retinal pigment epithelial cell proteolysis and the pathology associated with age-related macular degeneration. Prog Retin Eye Res. 2016; 51:69–89.
Leroux MR, Hartl FU. Protein folding: versatility of the cytosolic chaperonin TRiC/CCT. Curr Biol. 2000; 10:R260–264.
Munoz-Espin D, Serrano M. Cellular senescence: from physiology to pathology. Nat Rev Mol Cell Biol. 2014; 15:482–96.
Taylor RC, Cullen SP, Martin SJ. Apoptosis: controlled demolition at the cellular level. Nat Rev Mol Cell Biol. 2008; 9:231–41.
Dunaief JL, Dentchev T, Ying GS, Milam AH. The role of apoptosis in age-related macular degeneration. Arch Ophthalmol. 2002; 120:1435–42.
Bhattacharya S, Chaum E, Johnson DA, Johnson LR. Age-related susceptibility to apoptosis in human retinal pigment epithelial cells is triggered by disruption of p53-Mdm2 association. Invest Ophthalmol Vis Sci. 2012; 53:8350–66.
Salvesen GS, Duckett CS. IAP proteins: blocking the road to death's door. Nat Rev Mol Cell Biol. 2002; 3:401–10.
Kauppinen A, Paterno JJ, Blasiak J, Salminen A, Kaarniranta K. Inflammation and its role in age-related macular degeneration. Cell Mol Life Sci. 2016; 73:1765–86.
Toomey CB, Kelly U, Saban DR, Bowes Rickman C. Regulation of age-related macular degeneration-like pathology by complement factor H. Proc Natl Acad Sci USA. 2015; 112:E3040–3049.
Morcos F, Pagnani A, Lunt B, Bertolino A, Marks DS, Sander C, et al. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proc Natl Acad Sci USA. 2011; 108:E1293–1301.
Fan J, Lv J. A selective overview of variable selection in high dimensional feature space. Stat Sin. 2010; 20:101–48.
Besag J. Statistical analysis of non-lattice data. Statistician. 1975; 24:179–95.
Aurell E, Ekeberg M. Inverse Ising inference using all the data. Phys Rev Lett. 2012; 108:090201.
Abecasis GR, Auton A, Brooks LD, DePristo MA, Durbin RM, Handsaker RE, et al. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012; 491:56–65.
The GWAS data used were obtained from the National Eye Institute Study of the Age-Related Macular Degeneration (NEI-AMD) Database found at http://www.ncbi.nlm.nih.gov/gap through dbGaP accession number phs000684.v1.p1. We thank NEI-AMD participants and the NEI-AMD Research Group for their valuable contribution to this research. HJW thanks Marianne Spevak and Joy Hoffman. Parts of the computation were performed on the high-performance computing resources at the U.S. Air Force Research Laboratory, U.S. Army Research Laboratory, and U.S. Army Engineer Research and Development Center.
This work was supported by the U.S. Army Medical Research and Materiel Command (Ft. Detrick, Maryland). The opinions and assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the U.S. Army or of the U.S. Department of Defense. This paper has been approved for public release with unlimited distribution.
The source code and documentation of the software (GeDI; genotype distribution-based inference) implementing the algorithm are freely available: the archived version at http://dx.doi.org/10.5281/zenodo.32630 and most recent version at http://github.com/BHSAI/GeDI; programming language: C++; license: GNU GPL.
HJW and JR conceived the study. HJW derived formulas/implemented the algorithm, wrote the software, collected data, and performed the analyses. CY, KK, BG, and JR participated in the data collection and analyses. HJW and JR wrote the manuscript. CY and KK tested the software. All authors read and approved the manuscript.
Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland, USA
Hyung Jun Woo, Chenggang Yu, Kamal Kumar & Jaques Reifman
Laboratory of Genomic Diversity, National Cancer Institute, Frederick, Maryland, USA
Bert Gold
Correspondence to Jaques Reifman.
An erratum to this article can be found at http://dx.doi.org/10.1186/s12864-016-3095-2.
Additional file 1
Supplementary material. Text S1. Mathematical formulation of inference algorithms. Table S1. Independent-SNP inference comparison of logistic regression (from PLINK) and DDA (GeDI). (PDF 283 kb)
Figure S1. Inference properties of a single independent SNP. Log odds ratio (OR) and power (level of significance 0.05) inferred from case-control data of size n = 2n_0 = 2n_1 using DDA (analytic) and logistic regression (numerical) are shown for the dominant model. Equation (S29) of Text S1 was used. The minor allele frequencies ϕ_y for the control and case groups were set such that \(f^{(y)}=2\phi _{y}(1-\phi _{y})+{\phi _{y}^{2}}=\phi _{y}(2-\phi _{y})\). We used ϕ_y=(0.1,0.25), such that f^(y)=(0.19,0.4375) and h^(y)=(−1.45,−0.25) for the control and case groups, respectively, and β=1.1987. (PDF 6.99 kb)
Figure S2. Examples of true versus inferred parameters. a Dominant model with m=10 SNPs and inference on a sample of size n=103. b Genotypic model with m=10 SNPs and inference on a sample of size n=105. In all cases, the penalizer value was determined by cross-validation. (PDF 18.1 kb)
Figure S3. Determination of penalizer λ via cross-validation. The data set is one realization of simulations shown in Fig. 2 b and the inference is with the exact enumeration (EE) method. The minima in mean square error (a) and the maxima in AUC (b) shift to lower λ as sample size n increases. (PDF 6.17 kb)
Figure S4. Distributions of interaction likelihood ratio statistics under the null hypothesis. Empirical cumulative distribution functions (CDF) in terms of the interaction statistics q were obtained by resampling. Simulation conditions were as described in Fig. 2 e and inferences used EE. (PDF 7.39 kb)
Figure S5. Whole-genome p-value profile of AMD data. Independent-SNP DDA with genotypic model was used. (PDF 25.7 kb)
Figure S6. Regional views of AMD data. Independent-SNP DDA results (light blue) are compared to logistic regression from PLINK (yellow). Genotypic model was used. (PDF 10.9 kb)
Figure S7. Marginal pairwise interaction p-values. The PLINK epistasis module was applied to the m=96 AMD SNPs. SNP pairs with the strongest p-values, near HTRA1, have p ~ 10^-9. The genotypic model was used. The bottom panel shows the independent-SNP p-values. (PDF 54.6 kb)
Figure S8. Linkage disequilibrium r^2 within the m=96 AMD SNPs from PLINK. (PDF 48.3 kb)
Woo, H., Yu, C., Kumar, K. et al. Genotype distribution-based inference of collective effects in genome-wide association studies: insights to age-related macular degeneration disease mechanism. BMC Genomics 17, 695 (2016). https://doi.org/10.1186/s12864-016-2871-3
Genome-wide association
Epistasis
Accepted Papers (with abstracts)
The following contributions have been accepted for presentation at ICALP 2011. They are listed in no particular order.
There is also a list without abstracts.
Leslie Ann Goldberg and Mark Jerrum. A polynomial-time algorithm for estimating the partition function of the ferromagnetic Ising model on a regular matroid
Abstract: We investigate the computational difficulty of approximating the partition function of the ferromagnetic Ising model on a regular matroid. Jerrum and Sinclair have shown that there is a fully polynomial randomised approximation scheme (FPRAS) for the class of graphic matroids. On the other hand, the authors have previously shown, subject to a complexity-theoretic assumption, that there is no FPRAS for the class of binary matroids, which is a proper superset of the class of graphic matroids. In order to map out the region where approximation is feasible, we focus on the class of regular matroids, an important class of matroids which properly includes the class of graphic matroids, and is properly included in the class of binary matroids. Using Seymour's decomposition theorem, we give an FPRAS for the class of regular matroids.
T-H. Hubert Chan and Li Ning. Fast Convergence for Consensus in Dynamic Networks
Abstract: We study the convergence time required to achieve consensus in dynamic networks. In each time step, a node's value is updated to some weighted average of its neighbors' and its old values. We study the case when the underlying network is dynamic, and investigate different averaging models. Both our analysis and experiments show that dynamic networks exhibit fast convergence behavior, even under very mild connectivity assumptions.
Leah Epstein, Csanad Imreh, Asaf Levin and Judit Nagy-Gyorgy. On variants of file caching
Abstract: In the file caching problem, the input is a sequence of requests for files out of a slow memory. A file has two attributes, a retrieval cost and an integer size. It is required to maintain a cache of size k, bringing each file, which is not present in the cache at the time of request, from the slow memory into the cache. This situation is called a miss and incurs a cost equal to the retrieval cost of the file. Well-known special cases include paging (all costs and sizes are equal to 1), the cost model also known as weighted paging (all sizes are equal to 1), the fault model (all costs are equal to 1) and the bit model (the cost of a file is equal to its size). We study two online variants of the problem, caching with bypassing and caching with rejection. If bypassing is allowed, a miss for a file still results in an access to this file in the slow memory, but its subsequent insertion into the cache is optional. In the model with rejection, together with each request for a file, the algorithm is informed with a rejection penalty of the request. When a file which is not present in the cache is requested, the algorithm must either bring the file into the cache, paying the retrieval cost of the file, or reject the file, paying the rejection penalty of the request. The goal function is the sum of all amounts paid by the algorithm, that is, the sum of total rejection penalty and the total retrieval cost. This problem generalizes both caching and caching with bypassing. We study the relation between these two variants and design deterministic and randomized algorithms for both problems. The competitive ratios of the randomized algorithms match the best known results for caching. These are O(\log k)-competitive algorithms for all important special cases, and an O(\log^2 k)-competitive algorithm for caching. This improves the best known result with bypassing for paging, the bit model and the cost model, and it is the first algorithm of competitive ratio o(k) for the other cases. In the deterministic case, it is known that a (k+1)-competitive algorithm for caching with bypassing exists, and this is best possible. In contrast, we present a lower bound of 2k+1 on the competitive ratio of any deterministic algorithm for the variant with rejection, which holds already for paging. Exploiting the relation between the two problems, we design a (2k+2)-competitive algorithm for caching with rejection. We design a different (2k+1)-competitive algorithm for the last problem, which is applicable for paging, the bit model and the cost model.
Andrei Bulatov and Dániel Marx. Constraint satisfaction parameterized by solution size
Abstract: In the constraint satisfaction problem (CSP) corresponding to a constraint language (i.e., a set of relations) $\Gamma$, the goal is to find an assignment of values to variables so that a given set of constraints specified by relations from $\Gamma$ is satisfied. In this paper we study the fixed-parameter tractability of constraint satisfaction problems parameterized by the size of the solution in the following sense: one of the possible values, say 0, is ``free,'' and the number of variables allowed to take other, ``expensive,'' values is restricted. A size constraint requires that exactly $k$ variables take nonzero values. We also study a more refined version of this restriction: a global cardinality constraint prescribes how many variables have to be assigned each particular value. We study the parameterized complexity of these types of CSPs where the parameter is the required number $k$ of nonzero variables. As special cases, we can obtain natural and well-studied parameterized problems such as Independent set, Vertex Cover, $d$-Hitting Set, Biclique, etc. In the case of constraint languages closed under substitution of constants, we give a complete characterization of the fixed-parameter tractable cases of CSPs with size constraints, and we show that all the remaining problems are W[1]-hard. For CSPs with cardinality constraints, we obtain a similar classification, but for some of the problems we are only able to show that they are Biclique-hard. The exact parameterized complexity of the Biclique problem is a notorious open problem, although it is believed to be W[1]-hard.
Fabian Kuhn and Monaldo Mastrolilli. Vertex Cover in Graphs with Locally Few Colors
Abstract: In 1986 Erd\H{o}s et al. defined the local chromatic number of a graph as the minimum number of colors that must appear within distance 1 of a vertex. For any fixed $\Delta\geq 2$, they presented graphs with arbitrarily large chromatic number that can be colored so that: (i) no vertex neighborhood contains more than $\Delta$ different colors (\emph{bounded local colorability}), and (ii) adjacent vertices from two color classes form an induced subgraph that is complete and bipartite (\emph{local completeness}). We investigate the weighted vertex cover problem in graphs when a locally bounded coloring is given as input. This naturally generalizes the vertex cover problem in bounded degree graphs to a class of graphs with arbitrarily large chromatic number. Assuming the Unique Games Conjecture, we provide a tight characterization. More precisely, we prove that it is UG-hard to improve the approximation ratio of $2-2/(\Delta+1)$ if only the bounded local colorability condition, but not the local completeness condition, holds for the given coloring. A matching upper bound is also provided. Conversely, when both properties (i) and (ii) hold, we present a randomized approximation algorithm with performance ratio $2-\Omega\left(\frac{\ln \ln \Delta}{\ln\Delta}\right)$. This matches (up to the constant factor in the lower order term) known inapproximability results for the special case of bounded degree graphs. Moreover, we show that when both properties (i) and (ii) hold, the obtained result finds a natural application in a classical scheduling problem, namely the precedence constrained single machine scheduling problem to minimize the weighted sum of completion times. In a series of recent papers it was established that this scheduling problem is a special case of the minimum weighted vertex cover problem in graphs $G$ of incomparable pairs defined in the dimension theory of partial orders. We show that $G$ satisfies properties (i) and (ii), where $\Delta-1$ is the maximum number of predecessors (or successors) of each job.
Sourav Chakraborty, David García-Soriano and Arie Matsliah. Efficient Sample Extractors for Juntas with Applications
Abstract: We develop a query-efficient sample extractor for juntas, that is, a probabilistic algorithm that can simulate random samples from the core of a $k$-junta $f:\{0,1\}^n \to \{0,1\}$ given oracle access to a function $f':\{0,1\}^n \to \{0,1\}$ that is only close to $f$. After a preprocessing step, which takes $\widetilde{O}(k)$ queries, generating each sample to the core of $f$ takes only one query to $f'$. We then plug our sample extractor into the ``testing by implicit learning'' framework of Diakonikolas et al. \cite{til}, improving the query complexity of testers for various Boolean function classes. In particular, for some of the classes considered in \cite{til}, such as $s$-term DNF formulas, size-$s$ decision trees, size-$s$ Boolean formulas, $s$-sparse polynomials over $\mathbb{F}_2$, and size-$s$ branching programs, the query complexity is reduced from $\widetilde{O}(s^4/\epsilon^2)$ to $\widetilde{O}(s/\epsilon^2)$. This shows that using the new sample extractor, testing by implicit learning can lead to testers having better query complexity than those tailored to a specific problem, such as the tester of Parnas et al. \cite{prs} for the class of monotone $s$-term DNF formulas. In terms of techniques, we extend the tools used in \cite{fiso} for testing function isomorphism to juntas. Specifically, while the original analysis in \cite{fiso} allowed query-efficient noisy sampling from the core of any $k$-junta $f$, the one presented here allows similar sampling from the core of the {\em closest} $k$-junta to $f$, even if $f$ is not a $k$-junta but just close to being one. One of the observations leading to this extension is that the junta tester of Blais \cite{blais_juntas}, on which the aforementioned sampling is based, enjoys a certain weak form of tolerance.
Amin Karbasi, Stratis Ioannidis and Laurent Massoulie. Content Search Through Comparisons
Abstract: We study the problem of navigating through a database of similar objects using comparisons under heterogeneous demand, a problem strongly related to small-world network design. We show that, under heterogeneous demand, the small-world network design problem is NP-hard. Given the above negative result, we propose a novel mechanism for small-world network design and provide an upper bound on its performance under heterogeneous demand. The above mechanism has a natural equivalent in the context of content search through comparisons, again under heterogeneous demand; we use this to establish both upper and lower bounds on content-search through comparisons.
Christos Kapoutsis. Nondeterminism is Essential in Small 2FAs with Few Reversals
Abstract: On every n-long input, every two-way finite automaton (2FA) can reverse its head O(n) times before halting. A "2FA with few reversals" is an automaton where this number is only o(n). For every h, we exhibit a language that requires Ω(2^h) states on every deterministic 2FA with few reversals, but only h states on a nondeterministic 2FA with few reversals.
Christine Cheng, Eric Mcdermid and Ichiro Suzuki. Center stable matchings and centers of cover graphs of distributive lattices
Abstract: Let I be an instance of the stable marriage (SM) problem. In the late 1990s, Teo and Sethuraman discovered the existence of median stable matchings, which are stable matchings that match all participants to their (lower/upper) median stable partner. About a decade later, Cheng showed that not only are they locally-fair, but they are also globally-fair in the following sense: when G(I) is the cover graph of the distributive lattice of stable matchings, these stable matchings are also medians of G(I) -- i.e., their average distance to the other stable matchings is as small as possible. Unfortunately, finding a median stable matching of I is #P-hard. Inspired by the fairness properties of the median stable matchings, we study the center stable matchings which are the centers of G(I) -- i.e., the stable matchings whose maximum distance to any stable matching is as small as possible. Here are our two main results. First, we show that a center stable matching of I can be computed in O(|I|^{2.5}) time. Thus, center stable matchings are the first type of globally-fair stable matchings we know of that can be computed efficiently. Second, we show that in spite of the first result, there are similarities between the set of median stable matchings and the set of center stable matchings of I. The former induces a hypercube in G(I) while the latter is the union of hypercubes of a fixed dimension in G(I). Furthermore, center stable matchings have a property that approximates the one found by Teo and Sethuraman for median stable matchings. Finally, we note that our results extend to other variants of SM whose solutions form a distributive lattice and whose rotation posets can be constructed efficiently.
Ashley Montanaro, Aram Harrow and Anthony Short. Limitations on quantum dimensionality reduction
Abstract: The Johnson-Lindenstrauss Lemma is a classic result which implies that any set of n real vectors can be compressed to O(log n) dimensions while only distorting pairwise Euclidean distances by a constant factor. Here we consider potential extensions of this result to the compression of quantum states. We show that, by contrast with the classical case, there does not exist any distribution over quantum channels that significantly reduces the dimension of quantum states while preserving the 2-norm distance with high probability. We discuss two tasks for which the 2-norm distance is indeed the correct figure of merit. In the case of the trace norm, we show that the dimension of low-rank mixed states can be reduced by up to a square root, but that essentially no dimensionality reduction is possible for highly mixed states.
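For contrast with the quantum no-go result, the classical lemma is easy to demonstrate experimentally with a Gaussian random projection. In the sketch below the target dimension formula and the constant 8 are illustrative choices, not the lemma's optimal constants.

```python
import numpy as np

# Classical Johnson-Lindenstrauss sketch: project n vectors from R^D down to
# d = O(log n / eps^2) dimensions with a scaled Gaussian map and compare distances.
rng = np.random.default_rng(0)
n, D, eps = 50, 1000, 0.5
d = int(np.ceil(8 * np.log(n) / eps**2))      # the constant 8 is illustrative
X = rng.normal(size=(n, D))
P = rng.normal(size=(D, d)) / np.sqrt(d)      # random projection matrix
Y = X @ P

ratios = []
for i in range(n):
    for j in range(i + 1, n):
        ratios.append(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))
print(min(ratios), max(ratios))   # typically within roughly [1 - eps, 1 + eps]
```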
Bogdan Chlebus, Dariusz Kowalski, Andrzej Pelc and Mariusz Rokicki. Efficient Distributed Communication in Ad-hoc Radio Networks
Abstract: We present new distributed deterministic solutions to two communication problems in $n$-node ad-hoc radio networks: rumor gathering and multi-broadcast. In these problems, some or all nodes of the network initially contain input data called rumors, which have to be learned by other nodes. In rumor gathering, there are $k$ rumors initially distributed arbitrarily among the nodes, and the goal is to collect all the rumors at one node. Multi-broadcast is related to two fundamental communication problems: gossiping and routing. In gossiping, every node is initialized with a rumor and the goal is for all nodes to learn all rumors. In routing, $k$ rumors distributed arbitrarily among the nodes must be delivered each to its designated destination. The problem of multi-broadcast, considered in this work, is a generalization of gossiping and a strengthening of routing, and is defined as follows: some $k$ rumors are distributed arbitrarily among the nodes and the goal is for all the nodes to learn every rumor. Our rumor gathering algorithm works in $O((k+n)\log n)$ time and our multi-broadcast algorithm works in $O(k\log^3 n+n\log^4 n)$~time, for any $n$-node networks and $k$ rumors (with arbitrary $k$), which is a substantial improvement over the best previously known deterministic solutions to these problems. As a consequence, we \emph{exponentially} decrease the gap between upper and lower bounds on the deterministic time complexity of four communication problems: rumor gathering, multi-broadcast, gossiping and routing, in the important case when every node has initially at most one rumor (this is the scenario for gossiping and for the usual formulation of routing). Indeed, for $k=O(n)$, our results simultaneously decrease the complexity gaps for these four problems from polynomial to polylogarithmic in the size of the graph. Moreover, our {\em deterministic} gathering algorithm applied for $k=O(n)$ rumors, improves over the best previously known {\em randomized} algorithm of time $O(k\log n+ n\log^2 n)$.
Jakob Nordstrom and Alexander Razborov. On Minimal Unsatisfiability and Time-Space Trade-offs for k-DNF Resolution
Abstract: A well-known theorem by Tarsi states that a minimally unsatisfiable CNF formula with m clauses can have at most m-1 variables, and this bound is exact. In the context of proving lower bounds on proof space in k-DNF resolution, [Ben-Sasson and Nordstrom 2009] extended the concept of minimal unsatisfiability to sets of k-DNF formulas and proved that a minimally unsatisfiable k-DNF set with m formulas can have at most O((mk)^(k+1)) variables. This result is far from tight, however, since they could only present explicit constructions of minimally unsatisfiable sets with Omega(mk^2) variables. In the current paper, we revisit this combinatorial problem and significantly improve the lower bound to Omega(m)^k, which almost matches the upper bound above. Furthermore, using similar ideas we show that the analysis of the technique in [Ben-Sasson and Nordstrom 2009] for proving time-space separations and trade-offs for k-DNF resolution is almost tight. This means that although it is possible, or even plausible, that stronger results than in [Ben-Sasson and Nordström 2009] should hold, a fundamentally different approach would be needed to obtain such results.
Shengyu Zhang. On the power of lower bound methods for one-way quantum communication complexity
Abstract: This paper studies the three known lower bound methods for one-way quantum communication complexity, namely the Partition Tree method by Nayak, the Trace Distance method by Aaronson, and the two-way quantum communication complexity. It is shown that all three methods can be exponentially weak for some total Boolean functions. In particular, for a large class of functions generated from Erdos-Renyi random graphs G(N,p) with p in some range of 1/poly(N), the two-way quantum communication complexity gives a linear lower bound while the other two methods give only constant lower bounds. This rules out the possibility of using any of these known quantum lower bound methods to prove the fundamental conjecture that the classical and quantum one-way communication complexities are polynomially related. En route, we also discover that the power of Nayak's method is exactly equal to the \emph{extended equivalence query complexity} in learning theory.
Markus Jalsenius and Raphael Clifford. Lower Bounds for Online Integer Multiplication and Convolution in the Cell-Probe Model
Abstract: We show time lower bounds for both online integer multiplication and convolution in the cell-probe model with word size w. For the multiplication problem, one pair of digits, each from one of two n-digit numbers that are to be multiplied, is given as input at step i. The online algorithm outputs a single new digit from the product of the numbers before step i+1. We give a lower bound of Omega((d/w) log n) time on average per output digit for this problem, where 2^d is the maximum value of a digit. In the convolution problem, we are given a fixed vector V of length n and we consider a stream in which numbers arrive one at a time. We output the inner product of V and the vector that consists of the last n numbers of the stream. We show an Omega((d/w) log n) lower bound for the time required per new number in the stream. All the bounds presented hold under randomisation and amortisation. Multiplication and convolution are central problems in the study of algorithms which also have a wide range of practical applications.
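The convolution problem has a simple operational reading, sketched below as a naive baseline that spends O(n) work per arriving number. The alignment of V with the window and the zero-padding of the early steps are conventions chosen here for illustration; no claim is made about the cell-probe bounds.

```python
from collections import deque

# Naive online convolution: fixed vector V of length n; after each arrival output
# the inner product of V with the last n stream values (earlier slots treated as 0).
def online_convolution(V, stream):
    n = len(V)
    window = deque([0] * n, maxlen=n)
    for x in stream:
        window.append(x)                          # keeps only the last n values
        yield sum(v * w for v, w in zip(V, window))

print(list(online_convolution([1, 2, 3], [5, 1, 4, 2])))   # [15, 13, 19, 15]
```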
Shiri Chechik. Fault-Tolerant Compact Routing Schemes for General Graphs
Abstract: This paper considers compact fault-tolerant routing schemes for weighted general graphs, namely, routing schemes that avoid a set of failed (or {\em forbidden}) edges. We present a compact routing scheme capable of handling multiple edge failures. Assume a source node $s$ contains a message $M$ designated to a destination target $t$ and assume a set $F$ of edges crashes (unknown to $s$). Our scheme routes the message to $t$ (provided that $s$ and $t$ are still connected in $G \setminus F$) over a path whose length is proportional to the distance between $s$ and $t$ in $G \setminus F$, the number of failures in $F$ and some poly-log factor. The routing table required at a node $v$ is of size proportional to the degree of $v$ in $G$ and some poly-log factor. This improves on the previously known fault-tolerant compact routing scheme for general graphs, which was capable of overcoming at most 2 edge failures.
Chien-Chung Huang and Telikepalli Kavitha. Popular Matchings in the Stable Marriage Problem
Abstract: The input is a bipartite graph G = (A U B, E) where each vertex u in A U B ranks its neighbors in a strict order of preference. This is the same as an instance of the stable marriage problem with incomplete lists. A matching M is said to be popular if there is no matching M' such that more vertices are better off in M' than in M. Any stable matching of G is popular; however, a stable matching is a minimum cardinality popular matching. We consider here the problem of computing a maximum cardinality popular matching. It has very recently been shown that when preference lists have ties, the problem of determining whether a given instance admits a popular matching is NP-complete. When preference lists are strict, popular matchings always exist; however, the complexity of computing a maximum cardinality popular matching was unknown. In this paper we give a simple characterization of popular matchings when preference lists are strict and a sufficient condition for a maximum cardinality popular matching. We then give an O(mn) algorithm for computing a maximum cardinality popular matching, where m is the number of edges and n is the number of vertices in G.
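The popularity condition is a pairwise vote between matchings, which the small helper below makes explicit. The function and data names are ours, and the tiny instance only illustrates why a (smaller) stable matching is not defeated by a larger matching.

```python
# Vote-based comparison behind popularity (helper names are ours, not the paper's).
# prefs[u] is u's strict preference list; a matching maps each matched vertex to
# its partner; being matched to any listed neighbor beats being unmatched.
def votes_for(M2, M, prefs):
    def rank(u, v):
        return prefs[u].index(v) if v in prefs[u] else len(prefs[u])
    return sum(1 for u in prefs if rank(u, M2.get(u)) < rank(u, M.get(u)))

prefs = {'a1': ['b1', 'b2'], 'a2': ['b1'], 'b1': ['a1', 'a2'], 'b2': ['a1']}
M  = {'a1': 'b1', 'b1': 'a1'}                              # the unique stable matching
M2 = {'a1': 'b2', 'b2': 'a1', 'a2': 'b1', 'b1': 'a2'}      # larger matching
print(votes_for(M2, M, prefs), votes_for(M, M2, prefs))    # 2 2: M2 does not defeat M
```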
Jiawei Qian and David Williamson. An O(log n)-competitive algorithm for online constrained forest problems
Abstract: In the generalized Steiner tree problem, we find a minimum-cost set of edges to connect a given set of source-sink pairs. In the online version of this problem, the source-sink pairs arrive over time. Agrawal, Klein, and Ravi give a 2-approximation algorithm for the offline problem; Berman and Coulston give an O(log n)-competitive algorithm for the online problem. Goemans and Williamson subsequently generalized the offline algorithm of Agrawal et al. to handle a large class of problems they called constrained forest problems, and other problems, such as the prize-collecting Steiner tree problem. In this paper, we show how to combine the ideas of Goemans and Williamson and those of Berman and Coulston to give an O(log n)-competitive algorithm for online constrained forest problems, including an online version of the prize-collecting Steiner tree problem.
Hung Ngo, Ely Porat and Atri Rudra. Efficiently Decodable Error-Correcting List Disjunct Matrices and Applications
Abstract: A $(d,\ell)$-list disjunct matrix is a non-adaptive group testing primitive that, given a set of $n$ items with at most $d$ ``defectives," outputs a super-set of the defectives containing at most $\ell-1$ non-defective items. The primitive has found many applications as stand alone objects and as building blocks in the construction of other combinatorial objects. This paper studies error-tolerant list disjunct matrices which can correct up to $e_0$ false positives and $e_1$ false negatives in sub-linear time. We then use list-disjunct matrices to prove new results in three different applications. Our major contributions are as follows. First, we prove several (almost)-matching lower and upper bounds for the optimal number of tests, including the fact that $\Theta(d\log(n/d) + e_0+de_1)$ is necessary and sufficient when $\ell=\Theta(d)$. Similar results are also derived for the disjunct matrix case (i.e. $\ell=1$). Second, we present two methods that convert error-tolerant list disjunct matrices in a \textit{black-box} manner into error-tolerant list disjunct matrices that are also {\em efficiently decodable}. The methods help us derive a family of (strongly) explicit constructions of list-disjunct matrices which are either optimal or near optimal, and which are also efficiently decodable. Third, we show how to use error-correcting efficiently decodable list-disjunct matrices in three different applications: (i) explicit constructions of $d$-disjunct matrices with $t = O(d^2\log n+rd)$ tests which are decodable in $\mathrm{poly}(t)$ time, where $r$ is the maximum number of test errors. This result is optimal for $r = \Omega(d\log n)$, and even for $r=0$ this result improves upon known results; (ii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable monotone encodings; and (iii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable multiple user tracing families.
Silvia Crafa and Francesco Ranzato. Probabilistic Bisimulation and Simulation Algorithms by Abstract Interpretation
Abstract: We show how probabilistic bisimulation equivalence and simulation preorder, namely the main behavioural relations on probabilistic nondeterministic processes, can be characterized by abstract interpretation. Indeed, both bisimulation and simulation can be obtained as domain completions of partitions and preorders, viewed as abstractions, w.r.t. a pair of concrete functions that encode a probabilistic LTS. As a consequence, this approach provides a general framework for designing algorithms for computing probabilistic bisimulation and simulation. Notably, (i) we show that the standard probabilistic bisimulation algorithm by Baier et al. can be viewed as an instance of such a framework and (ii) we design a new efficient probabilistic simulation algorithm that improves the state of the art.
Daniel Lokshtanov and Dániel Marx. Clustering with Local Restrictions
Abstract: We study a family of graph clustering problems where each cluster has to satisfy a certain local requirement. Formally, let $\mu$ be a function on the subsets of vertices of a graph $G$. In the $(\mu,p,q)$-Partition problem, the task is to find a partition of the vertices where each cluster $C$ satisfies the requirements that (1) at most $q$ edges leave $C$ and (2) $\mu(C)\le p$. Our first result shows that if $\mu$ is an {\em arbitrary} polynomial-time computable monotone function, then $(\mu,p,q)$-Partition can be solved in time $n^{O(q)}$, i.e., it is polynomial-time solvable {\em for every fixed $q$}. We study in detail three concrete functions $\mu$ (the number of nonedges in the cluster, the maximum degree of nonedges in the cluster, the number of vertices in the cluster), which correspond to natural clustering problems. For these functions, we show that $(\mu,p,q)$-Partition can be solved in time $2^{O(p)}\cdot n^{O(1)}$ and in randomized time $2^{O(q)}\cdot n^{O(1)}$, i.e., the problem is fixed-parameter tractable when parameterized by $p$ or by $q$.
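Conditions (1) and (2) are easy to check for a candidate partition, which the sketch below does for an arbitrary $\mu$, instantiated with one of the three concrete functions above (the number of nonedges inside a cluster). The graph, thresholds, and function names are toy values chosen for illustration.

```python
from itertools import combinations

# Feasibility check for a candidate (mu, p, q)-partition: every cluster C must
# have at most q edges leaving it and must satisfy mu(C) <= p.
def is_feasible(clusters, edges, mu, p, q):
    for C in clusters:
        leaving = sum(1 for u, v in edges if (u in C) != (v in C))
        if leaving > q or mu(C) > p:
            return False
    return True

def nonedges(C, edges):        # one of the concrete mu's: nonedges inside the cluster
    E = {frozenset(e) for e in edges}
    return sum(1 for u, v in combinations(C, 2) if frozenset((u, v)) not in E)

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
clusters = [{1, 2}, {3, 4}]
print(is_feasible(clusters, edges, lambda C: nonedges(C, edges), p=0, q=2))  # True
```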
Amin Coja-Oghlan and Angelica Pachon-Pinzon. The decimation process in random k-SAT
Abstract: Let F be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-rigorous statistical mechanics ideas have inspired a message passing algorithm called Belief Propagation Guided Decimation for finding satisfying assignments of F. This algorithm can be viewed as an attempt at implementing a certain thought experiment that we call the Decimation Process. In this paper we identify a variety of phase transitions in the decimation process and link these phase transitions to the performance of the algorithm.
Hans L. Bodlaender, Bart M. P. Jansen and Stefan Kratsch. Preprocessing for Treewidth: A Combinatorial Analysis through Kernelization
Abstract: Using the framework of kernelization we study whether efficient preprocessing schemes for the Treewidth problem can give provable bounds on the size of the processed instances. Assuming the AND-distillation conjecture to hold, the standard parameterization of Treewidth does not have a kernel of polynomial size and thus instances (G, k) of the decision problem of Treewidth cannot be efficiently reduced to equivalent instances of size polynomial in k. In this paper, we consider different parameterizations of Treewidth. We show that Treewidth has a kernel with O(l^3) vertices, with l the size of a vertex cover, and a kernel with O(l^4) vertices, with l the size of a feedback vertex set. This implies that given an instance (G, k) of treewidth we can efficiently reduce its size to O((l*)^4) vertices, where l* is the size of a minimum feedback vertex set in G. In contrast, we show that Treewidth parameterized by the vertex-deletion distance to a co-cluster graph and Weighted Treewidth parameterized by the size of a vertex cover do not have polynomial kernels unless NP \subseteq coNP/poly. Treewidth parameterized by the deletion distance to a cluster graph has no polynomial kernel unless the AND-distillation conjecture does not hold.
Stefan Kiefer, Andrzej Murawski, Joel Ouaknine, James Worrell and Lijun Zhang. On Stabilization in Herman's Algorithm
Abstract: Herman's algorithm is a synchronous randomized protocol for achieving self-stabilization in a token ring consisting of N processes. The interaction of tokens makes the dynamics of the protocol very difficult to analyze. In this paper we study the expected time to stabilization in terms of the initial configuration. It is straightforward that the algorithm achieves stabilization almost surely from any initial configuration, and it is known that the worst-case expected time to stabilization (with respect to the initial configuration) is Theta(N^2). Our first contribution is to give an upper bound of 0.64 N^2 on the expected stabilization time, improving on previous upper bounds and reducing the gap with the best existing lower bound. We also introduce an asynchronous version of the protocol, showing a similar O(N^2) convergence bound in this case. Assuming that errors arise from the corruption of some number k of bits, where k is fixed independently of the size of the ring, we show that the expected time to stabilization is O(N). This reveals a hitherto unknown and highly desirable property of Herman's algorithm: it recovers quickly from bounded errors. We also show that if the initial configuration arises by resetting each bit independently and uniformly at random, then stabilization is significantly faster than in the worst case.
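A minimal simulation makes the stabilization time easy to experiment with. The sketch uses the standard bit-array formulation of Herman's protocol (a process holds a token when its bit equals its predecessor's bit) and assumes an odd ring size so that the token count is odd and eventually reaches one; it is only an illustration of the protocol, not of the paper's analysis.

```python
import random

# Herman's self-stabilization protocol on a ring of N processes (N odd).
# Process i holds a token iff x[i] == x[i-1]; synchronously, token holders flip a
# fair coin while every other process copies its predecessor's bit.
# Stabilization means exactly one token remains.
def herman_steps(x):
    N = len(x)

    def tokens(state):
        return sum(1 for i in range(N) if state[i] == state[i - 1])

    steps = 0
    while tokens(x) > 1:
        x = [random.randint(0, 1) if x[i] == x[i - 1] else x[i - 1]
             for i in range(N)]
        steps += 1
    return steps

print(herman_steps([random.randint(0, 1) for _ in range(11)]))   # steps to stabilize
```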
Bundit Laekhanukit. An improved approximation algorithm for the minimum-cost subset k-connectivity
Abstract: The minimum-cost subset $k$-connected subgraph problem is a cornerstone problem in the area of network design with vertex connectivity requirements. In this problem, we are given a graph $G=(V,E)$ with costs on edges and a set of terminals $T$. The goal is to find a minimum cost subgraph such that every pair of terminals are connected by $k$ openly (vertex) disjoint paths. In this paper, we present an approximation algorithm for the subset $k$-connected subgraph problem which improves on the previous best approximation guarantee of $O(k^2\log{k})$ by Nutov (FOCS 2009). Our approximation guarantee, $\alpha(|T|)$, depends upon the number of terminals: \[ \alpha(|T|) \ \ =\ \ \begin{cases} O(|T|^2) &\mbox{if } |T| < 2k\\ O(k \log^3 k) & \mbox{if } 2k\le |T| < k^2\\ O(k \log^2 k) & \mbox{if } |T| \ge k^2 \end{cases} \] So, when the number of terminals is {\em large enough}, the approximation guarantee improves dramatically. Moreover, we show that, given an approximation algorithm for $|T|=k$, we can obtain almost the same approximation guarantee for any instances with $|T|>k$. This suggests that the hardest instances of the problem are when $|T|\approx k$.
Michael Goodrich and Michael Mitzenmacher. Privacy-Preserving Access of Outsourced Data via Oblivious RAM Simulation
Abstract: Suppose a client, Alice, has outsourced her data to an external storage provider, Bob, because he has capacity for her massive data set, of size $n$, whereas her private storage is much smaller---say, of size $O(n^{1/r})$, for some constant $r>1$. Alice trusts Bob to maintain her data, but she would like to keep its contents private. She can encrypt her data, of course, but she also wishes to keep her access patterns hidden from Bob as well. In this paper, we describe a scheme for this \emph{oblivious RAM simulation} problem with a small logarithmic or polylogarithmic amortized increase in her access times, while keeping the storage Alice purchases from Bob to be of size $O(n)$. To achieve this, our algorithmic contributions include a parallel MapReduce cuckoo-hashing algorithm and an external-memory data-oblivious sorting algorithm.
Malte Beecken, Johannes Mittmann and Nitin Saxena. Algebraic Independence and Blackbox Identity Testing
Abstract: Algebraic independence is an advanced notion in commutative algebra that generalizes independence of linear polynomials to higher degree. Polynomials $\{f_1, ..., f_m\} \subset \mathbb{F}[x_1, ..., x_n]$ are called algebraically independent if there is no non-zero polynomial $F$ such that $F(f_1, ..., f_m) = 0$. The transcendence degree, $\mathrm{trdeg}\{f_1, ..., f_m\}$, is the maximal number $r$ of algebraically independent polynomials in the set. In this paper we design efficient blackbox linear maps $\phi$ that reduce the number of variables from $n$ to $r$ but maintain $\mathrm{trdeg}\{\phi(f_i)\}_i = r$, assuming the $f_i$'s are sparse and $r$ is small. We apply these fundamental maps to solve several cases of blackbox identity testing: (1) Given a polynomial-degree circuit $C$ and sparse polynomials $f_1, ..., f_m$ with trdeg $r$, we can test the blackbox $D := C(f_1, ..., f_m)$ for zeroness in $\mathrm{poly}(\mathrm{size}(D))^r$ time. (2) Define an $\mathrm{spsp}_\delta(k,s,n)$ circuit $C$ to be of the form $\sum_{i=1}^k \prod_{j=1}^s f_{i,j}$, where the $f_{i,j}$ are sparse $n$-variate polynomials of degree at most $\delta$. For $k = 2$ we give a $\mathrm{poly}(\delta sn)^{\delta^2}$ time blackbox identity test. (3) For a general depth-4 circuit we define a notion of rank. Assuming there is a rank bound $R$ for minimal simple $\mathrm{spsp}_\delta(k,s,n)$ identities, we give a $\mathrm{poly}(\delta snR)^{Rk\delta^2}$ time blackbox identity test for $\mathrm{spsp}_\delta(k,s,n)$ circuits. This partially generalizes the state of the art from depth-3 to depth-4 circuits. The notion of trdeg works best over fields of large or zero characteristic, but we also give versions of our results for arbitrary fields.
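A small worked example (ours, not from the paper) illustrates the definitions: the polynomials $f_1 = x_1$, $f_2 = x_2$, $f_3 = x_1 x_2$ are algebraically dependent because the nonzero polynomial \[ F(y_1, y_2, y_3) = y_1 y_2 - y_3 \quad\text{satisfies}\quad F(f_1, f_2, f_3) = x_1 x_2 - x_1 x_2 = 0, \] while any two of the three are algebraically independent, so $\mathrm{trdeg}\{f_1, f_2, f_3\} = 2$. A variable-reducing map $\phi$ of the kind constructed in the paper would therefore send these polynomials into a ring with only $r = 2$ variables while preserving this transcendence degree.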
Naonori Kakimura and Kazuhisa Makino. Robust Independence Systems
Abstract: An independence system \cal{F} is one of the most fundamental combinatorial concepts, which includes a variety of objects in graphs and hypergraphs such as matchings, stable sets, and matroids. We discuss the robustness for independence systems, which is a natural generalization of the greedy property of matroids. For a real number \alpha> 0, a set X in \cal{F} is said to be \alpha-robust if for any k, it includes an \alpha-approximation of the maximum k-independent set, where a set Y in \cal{F} is called k-independent if the size |Y| is at most k. In this paper, we show that every independence system has a 1/\sqrt{\mu(\cal{F})}-robust independent set, where \mu(\cal{F}) denotes the exchangeability of \cal{F}. Our result contains a classical result for matroids and the ones of Hassin and Rubinstein[SIAM J. Disc. Math. 2002] for matchings and Fujita, Kobayashi, and Makino[ESA 2010] for matroid 2-intersections, and provides better bounds for the robustness for many independence systems such as b-matchings, hypergraph matchings, matroid p-intersections, and unions of vertex disjoint paths. Furthermore, we provide bounds of the robustness for nonlinear weight functions such as submodular and convex quadratic functions. We also extend our results to independence systems in the integral lattice with separable concave weight functions.
Kohei Suenaga and Ichiro Hasuo. Programming with Infinitesimals: A While-Language for Hybrid System Modeling
Abstract: We add, to the common combination of a WHILE-language and a Hoare-style program logic, a constant dt that represents an infinitesimal (i.e. infinitely small) value. The outcome is a framework for modeling and verification of hybrid systems: hybrid systems exhibit both continuous and discrete dynamics and getting them right is a pressing challenge. We rigorously define the semantics of programs in the language of nonstandard analysis, on the basis of which the program logic is shown to be sound and relatively complete.
Tomas Brazdil, Stefan Kiefer, Antonin Kucera and Ivana Hutarova Varekova. Runtime Analysis of Probabilistic Programs with Unbounded Recursion
Abstract: We study the runtime in probabilistic programs with unbounded recursion. As underlying formal model for such programs we use probabilistic pushdown automata (pPDA) which exactly correspond to recursive Markov chains. We show that every pPDA can be transformed into a stateless pPDA (called ``pBPA'') whose runtime and further properties are closely related to those of the original pPDA. This result substantially simplifies the analysis of runtime and other pPDA properties. We prove that for every pPDA the probability of performing a long run decreases exponentially in the length of the run, if and only if the expected runtime in the pPDA is finite. If the expectation is infinite, then the probability decreases ``polynomially''. We show that these bounds are asymptotically tight. Our tail bounds on the runtime are generic, i.e., applicable to any probabilistic program with unbounded recursion. An intuitive interpretation is that in pPDA the runtime is exponentially unlikely to deviate from its expected value.
Marek Cygan, Marcin Pilipczuk, Michal Pilipczuk and Jakub Wojtaszczyk. Subset Feedback Vertex Set is Fixed Parameter Tractable
Abstract: The classical FEEDBACK VERTEX SET problem asks, for a given undirected graph G and an integer k, to find a set of at most k vertices that hits all the cycles in the graph G. FEEDBACK VERTEX SET has attracted a large amount of research in the parameterized setting, and subsequent kernelization and fixed-parameter algorithms have been a rich source of ideas in the field. In this paper we consider a more general and difficult version of the problem, named SUBSET FEEDBACK VERTEX SET (SUBSET-FVS in short) where an instance comes additionally with a set S of vertices, and we ask for a set of at most k vertices that hits all simple cycles passing through S. Because of its applications in circuit testing and genetic linkage analysis SUBSET-FVS was studied from the approximation algorithms perspective by Even et al. [SICOMP'00, SIDMA'00]. The question whether the SUBSET-FVS problem is fixed parameter tractable was posed independently by K. Kawarabayashi and S. Saurabh in 2009. We answer this question affirmatively. We begin by showing that this problem is fixed-parameter tractable when parametrized by |S|. Next we present an algorithm which reduces the size of S to O(k^3) in 2^O(k log k)n^O(1) time using kernelization techniques such as the 2-Expansion Lemma, Menger's theorem and Gallai's theorem. These two facts allow us to give a 2^O(k log k)n^O(1) algorithm solving the SUBSET FEEDBACK VERTEX SET problem, proving that it is indeed fixed parameter tractable.
Simone Bova, Hubie Chen and Matt Valeriote. Generic Expression Hardness Results for Primitive Positive Formula Comparison
Abstract: We study the expression complexity of two basic problems involving the comparison of primitive positive formulas: equivalence and containment. We give two generic hardness results for the studied problems, and discuss evidence that they are optimal and yield, for each of the problems, a complexity trichotomy.
Martin Hoefer. Local Matching Dynamics in Social Networks
Abstract: We study stable marriage and roommates problems in graphs with locality constraints. Each player is a node in a social network and has an incentive to match with other players. The value of a match is specified by an edge weight. Players explore possible matches only based on their current neighborhood. We study convergence of natural better response dynamics that converge to locally stable matchings -- matchings that allow no incentive to deviate with respect to their imposed information structure in the social network. For every starting state we construct in polynomial time a sequence of polynomially many better response moves to a locally stable matching. However, for a large class of oblivious dynamics including random and concurrent better response the convergence time turns out to be exponential. Perhaps surprisingly, convergence time becomes polynomial if we allow the players to have a small amount of random memory, even for many-to-many matchings and more general notions of neighborhood.
Eric Allender and Fengming Wang. On the power of algebraic branching programs of width two
Abstract: We show that there are families of polynomials having small depth-two arithmetic circuits that cannot be expressed by algebraic branching programs of width two. This clarifies the complexity of the problem of computing the product of a sequence of two-by-two matrices, which arises in several settings.
Benoit Libert and Moti Yung. Adaptively Secure Non-Interactive Threshold Cryptosystems
Abstract: Threshold cryptography aims at enhancing the availability and security of encryption and signature schemes by splitting private keys into several (say $n$) shares (typically, each of size comparable to the original secret key). In these schemes, a quorum ($t \leq n$) of servers needs to act upon a message to produce the result (decrypted value or signature), while corrupting less than $t$ servers maintains the security of the scheme. For about two decades starting from the mid 80's, extensive study was dedicated to this subject, which created a number of notable results. So far, most practical threshold signatures, where servers act non-interactively, were analyzed in the limited static corruption model (where the adversary chooses which servers will be corrupted at the system's initialization stage). Existing threshold encryption schemes that withstand the strongest combination of adaptive corruptions (allowing the adversary to corrupt servers at any time based on his complete view), and chosen-ciphertext attacks (CCA) all require interaction (in the non-idealized model) and attempts to remedy this problem resulted only in relaxed schemes. To date (and for about 10 years), it was an open problem whether there are non-interactive threshold schemes providing the highest security (namely, CCA-secure encryption and CMA-secure signature) with scalable shares (i.e., shares are as short as the original key) and adaptive security. This paper answers this question affirmatively by presenting such efficient encryption and signature schemes within a unified algebraic framework.
Shi Li. A 1.488-approximation algorithm for the uncapacitated facility location problem
Abstract: We present a 1.488 approximation algorithm for the metric uncapacitated facility location (UFL) problem. The previous best algorithm was due to Byrka \cite{Byr07}. By linearly combining two algorithms $A1(\gamma_f)$ for $\gamma_f = 1.6774$ and the (1.11,1.78)-approximation algorithm $A2$ proposed by Jain, Mahdian and Saberi \cite{JMS02}, Byrka gave a 1.5 approximation algorithm for the UFL problem. We show that if $\gamma_f$ is randomly selected from some distribution, the approximation ratio can be improved to 1.488.
Nicole Megow, Kurt Mehlhorn and Pascal Schweitzer. Online Graph Exploration: New Results on Old and New Algorithms
Abstract: We study the problem of exploring an unknown undirected connected graph. Beginning at some start vertex, a searcher must visit each node of the graph by traversing edges. Upon visiting a vertex for the first time, the searcher learns all incident edges and their respective traversal costs. The goal is to find a tour of minimum total cost. Kalyanasundaram and Pruhs proposed a sophisticated generalization of Depth First Search that is 16-competitive on planar graphs. While the algorithm is feasible on arbitrary graphs, the question of whether it has a constant competitive ratio in general has remained open. Our main result is an involved lower bound construction that answers this question negatively. On the positive side, we prove that the algorithm has a constant competitive ratio on any class of graphs with bounded genus. Furthermore, we provide a constant-competitive algorithm for general graphs with a bounded number of distinct weights.
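The problem statement suggests an obvious greedy baseline: repeatedly walk, inside the already-revealed part of the graph, to the cheapest reachable unvisited vertex. The sketch below implements that baseline only to make the online model concrete; it is not the Kalyanasundaram-Pruhs algorithm analyzed in the paper and no competitiveness is claimed for it.

```python
import heapq

# Greedy online exploration baseline. graph[v] = {u: cost} is revealed when v is
# first visited; the searcher pays every traversed edge and routes only through
# visited vertices, whose incident edges are known.
def greedy_explore(graph, start):
    visited, current, total = {start}, start, 0
    while len(visited) < len(graph):
        dist, heap = {current: 0}, [(0, current)]
        while heap:                                  # Dijkstra over revealed edges
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")) or v not in visited:
                continue                             # expand only visited vertices
            for u, c in graph[v].items():
                if d + c < dist.get(u, float("inf")):
                    dist[u] = d + c
                    heapq.heappush(heap, (d + c, u))
        target = min((v for v in dist if v not in visited), key=dist.get)
        total += dist[target]                        # walk there and reveal its edges
        visited.add(target)
        current = target
    return total

g = {1: {2: 1, 3: 4}, 2: {1: 1, 3: 1}, 3: {1: 4, 2: 1}}
print(greedy_explore(g, 1))                          # tour 1 -> 2 -> 3 costs 2
```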
Ken-Ichi Kawarabayashi, Philip Klein and Christian Sommer. Linear-Space Approximate Distance Oracles for Planar, Bounded-Genus, and Minor-Free Graphs
Abstract: A $(1+\epsilon)$-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup) and, subsequently, minor-excluded graphs (Abraham and Gavoille). However, these require $\Omega(\epsilon^{-1} n \lg n)$ space for $n$-node graphs. We argue that a very low space requirement is essential. Since modern computer architectures involve hierarchical memory (caches, primary memory, secondary memory), a high memory requirement in effect may greatly increase the actual running time. Moreover, we would like data structures that can be deployed on small mobile devices, such as handhelds, which have relatively small primary memory. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only $O(n)$ space. The big $O$ hides only a fixed constant, independent of $\epsilon$ and independent of genus or size of an excluded minor. Our constructions have proportionately higher query times. For planar graphs, the query time is $O(\epsilon^{-2} \lg^2 n)$. The preprocessing times for our distance oracle are also better than those for the previously known constructions. For planar graphs, the preprocessing time is $O(n \lg^2 n)$. For bounded-genus graphs, there was previously no distance-oracle construction known other than the one implied by the minor-excluded construction, for which the constant is enormous and the preprocessing time is a high-degree polynomial. In our result, the query time is $O(\epsilon^{-2}(\lg n + g)^2)$ and the preprocessing time is $O(n(\lg n)(g^3+\lg n))$. For all these linear-space results, we can in fact ensure, for any $\delta>0$, that the space required is only $1+\delta$ times the space required just to represent the graph itself.
Velumailum Mohanaraj and Martin Dyer. Pairwise-interaction Games
Abstract: We study the complexity of computing Nash equilibria in games where players arranged as the vertices of a graph play a symmetric 2-player game against their neighbours. We call this a pairwise-interaction game. We analyse this game for n players with a fixed number of actions and show that (1) a mixed Nash equilibrium can be computed in constant time for any game, (2) a pure Nash equilibrium can be computed through Nash dynamics in polynomial time for games with symmetrisable payoff matrix, (3) determining whether a pure Nash equilibrium exists for zero-sum games is NP-complete, and (4) counting pure Nash equilibria is #P-complete even for 2-strategy games. In proving (3), we define a new defective graph colouring problem called Nash colouring, which is of independent interest, and prove that its decision version is NP-complete. Finally, we show that pairwise-interaction games form a proper subclass of the usual graphical games.
Tim Nonner. Clique Clustering yields a PTAS for max-Coloring Interval Graphs
Abstract: We are given an interval graph $G = (V,E)$ where each interval $I \in V$ has a weight $w_I \in \mathbb{R}^+$. The goal is to color the intervals $V$ with color classes $C_1, C_2, \ldots, C_k$ such that $\sum_{i=1}^k \max_{I \in C_i} w_I$ is minimized. This problem, called max-coloring interval graphs, contains the classical problem of coloring interval graphs as a special case, and it arises in many practical scenarios such as memory management. Pemmaraju, Raman, and Varadarajan showed that max-coloring interval graphs is NP-hard (SODA'04) and presented a 2-approximation algorithm. Closing a gap which has been open for years, we settle the approximation complexity of this problem by giving a polynomial-time approximation scheme (PTAS), that is, we show that there is a $(1+\epsilon)$-approximation algorithm for any $\epsilon > 0$. Besides using standard preprocessing techniques such as geometric rounding and shifting, our main building block is a general technique for trading the overlap structure of an interval graph for accuracy, which we call clique clustering.
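The objective is easy to evaluate for a candidate coloring, which the sketch below does: it checks that each color class is a set of pairwise non-overlapping intervals (a proper coloring of the interval graph) and then sums the per-class maximum weights. Interval representation and names are illustrative choices of ours.

```python
# Evaluate the max-coloring objective for a candidate coloring of intervals.
# Each interval is (start, end, weight); classes is a list of color classes.
def overlap(a, b):
    return a[0] < b[1] and b[0] < a[1]

def max_coloring_cost(classes):
    for C in classes:
        for i in range(len(C)):
            for j in range(i + 1, len(C)):
                if overlap(C[i], C[j]):
                    raise ValueError("not a proper coloring")
    return sum(max(w for _, _, w in C) for C in classes)

I1, I2, I3 = (0, 3, 5.0), (2, 5, 2.0), (4, 7, 1.0)
print(max_coloring_cost([[I1, I3], [I2]]))   # 5.0 + 2.0 = 7.0
```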
Markus Lohrey and Christian Mathissen. Isomorphism of regular trees and words
Abstract: The computational complexity of the isomorphism problem for regular trees, regular linear orders, and regular words is analyzed. A tree is regular if it is isomorphic to the prefix order on a regular language. In case regular languages are represented by NFAs (DFAs), the isomorphism problem for regular trees turns out to be EXPTIME-complete (resp. P-complete). In case the input automata are acyclic NFAs (acyclic DFAs), the corresponding trees are (succinctly represented) finite trees, and the isomorphism problem turns out to be PSPACE-complete (resp. P-complete). A linear order is regular if it is isomorphic to the lexicographic order on a regular language. A polynomial time algorithm for the isomorphism problem for regular linear orders (and even regular words, which generalize the latter) given by DFAs is presented. This solves an open problem by Esik and Bloom.
Chien-Chung Huang. Collusion in Atomic Splittable Routing Games
Abstract: We investigate how collusion affects the social cost in atomic splittable routing games. Suppose that players form coalitions and each coalition behaves as if it were a single player controlling all the flows of its participants. It may be tempting to conjecture that the social cost would be lower after collusion, since there would be more coordination among the players. We construct examples to show that this conjecture is not true. Even in very simple single-source-single-destination networks, the social cost of the post-collusion equilibrium can be higher than that of the pre-collusion equilibrium. This counter-intuitive phenomenon of collusion prompts us to ask the question: \emph{under what conditions would the social cost of the post-collusion equilibrium be bounded by the social cost of the pre-collusion equilibrium?} We show that if (i) the network is ``well-designed'' (satisfying a natural condition), and (ii) the delay functions are affine, then collusion is always beneficial for the social cost in the Nash equilibria. On the other hand, if either of the above conditions is unsatisfied, collusion can worsen the social cost. Our main technique is a novel flow-augmenting algorithm to build Nash equilibria. Our positive result for collusion is obtained by applying this algorithm simultaneously to two different flow value profiles of players and observing the difference in the derivatives of their social costs. Moreover, for a non-trivial subclass of selfish routing games, this algorithm finds the \emph{exact} Nash equilibrium in polynomial time.
Hu Ding and Jinhui Xu. Solving the Chromatic Cone Clustering Problem via Minimum Spanning Sphere
Abstract: In this paper, we study the following {\em Chromatic Cone Clustering (CCC)} problem: Given $n$ point-sets, each containing $k$ points in the first quadrant of the $d$-dimensional space $R^{d}$, find $k$ cones apexed at the origin such that each cone contains at least one distinct point (i.e., different from the points chosen for other cones) from every point-set and the total size of the $k$ cones is minimized, where the size of a cone is the angle from any boundary ray to its center line. CCC is motivated by an important biological problem and finds applications in several other areas. For the CCC problem, we consider in this paper the cases in which $k$ is either $2$ or some other constant. Our approaches for solving the CCC problem rely on solutions to the problem of computing a {\em Minimum Spanning Sphere (MinSS)} for point-sets. For the MinSS problem, we present two $(1+\epsilon)$-approximation algorithms based on core-sets and $\epsilon$-nets, respectively. With these algorithms, we then show that the CCC problem admits $(1+\epsilon)$-approximation solutions for both cases. Our results use several interesting geometric techniques in high dimensional space, and are the first solutions to these problems.
Markus Chimani and Petr Hlineny. A Tighter Insertion-based Approximation of the Crossing Number
Abstract: Let $G$ be a planar graph and $F$ a set of additional edges not yet in $G$. The {\em multiple edge insertion} problem (MEI) asks for a drawing of $G+F$ with the minimum number of pairwise edge crossings, such that the subdrawing of $G$ is plane. As computing an exact solution to MEI is NP-hard for general $F$, we present the first approximation algorithm for MEI which achieves an additive approximation factor (depending only on the size of $F$ and the maximum degree of $G$). Our algorithm also appears to be the first directly implementable one in this realm, aside from single edge insertion. It is also known that an (even approximate) solution to the MEI problem would approximate the crossing number of the \emph{$F$-almost-planar graph} $G+F$, while computing the crossing number of $G+F$ exactly is NP-hard already when $|F|=1$. Hence our algorithm induces new, improved approximation bounds for the crossing number problem of $F$-almost-planar graphs, achieving constant-factor approximation for the large class of such graphs of bounded degrees and bounded size of $F$.
Martin Delacourt. Rice's theorem for mu-limit sets of cellular automata
Abstract: Cellular automata are a parallel and synchronous computing model, made of infinitely many finite automata updating according to the same local rule. Rice's theorem states that any nontrivial property over computable functions is undecidable. It has been adapted by Kari to limit sets of cellular automata, that is the set of configurations that can be reached arbitrarily late. This paper proves a new Rice theorem for $\mu$-limit sets, which are sets of configurations often reached arbitrarily late.
Hans-Joachim Boeckenhauer, Dennis Komm, Rastislav Kralovic and Richard Kralovic. On the Advice Complexity of the k-Server Problem
Abstract: Competitive analysis is the established tool for measuring the output quality of algorithms that work in an online environment. Recently, the model of advice complexity has been introduced as an alternative measurement which allows for a more fine-grained analysis of the hardness of online problems. In this model, one tries to measure the amount of information an online algorithm is lacking about the future parts of the input. This concept was investigated for a number of well-known online problems including the k-server problem. In this paper, we first extend the analysis of the k-server problem by giving both a lower bound on the advice needed to obtain an optimal solution, and upper bounds on algorithms for the general k-server problem on metric graphs and the special case of dealing with the Euclidean plane. In the general case, we improve the previously known results by an exponential factor, in the Euclidean case we design an algorithm which achieves a constant competitive ratio for a very small (i.e., constant) number of advice bits per request. Furthermore, we investigate the relation between advice complexity and randomized online computations by showing how lower bounds on the advice complexity can be used for proving lower bounds for the competitive ratio of randomized online algorithms.
Stefan Mengel. Characterizing Arithmetic Circuit Classes by Constraint Satisfaction Problems
Abstract: We explore the expressivity of constraint satisfaction problems (CSPs) in the arithmetic circuit model. While CSPs are known to yield VNP-complete polynomials in the general case, we show that for different restrictions of the structure of the CSPs we get characterizations of different arithmetic circuit classes. In particular we give the first natural non-circuit characterization of VP, the class of polynomial families efficiently computable by arithmetic circuits.
Magnús Halldórsson and Pradipta Mitra. Nearly Optimal Bounds for Distributed Wireless Scheduling in the SINR Model
Abstract: We study the wireless scheduling problem in the physically realistic SINR model. More specifically, we are given a set of $n$ links, each a sender-receiver pair. We would like to schedule the links using the minimum number of slots, given the SINR model of interference among simultaneously transmitting links. In the basic problem, all senders transmit with the same uniform power. In this work, we provide a distributed $O(\log n)$-approximation for the scheduling problem, matching the best ratio known for centralized algorithms. This is based on an algorithm of Kesselheim and V\"ocking, whose analysis we improve by a logarithmic factor. We show this to be best possible for any such distributed algorithm. Our analysis also extends to linear power assignments, as well as to more general assignments, modulo assumptions about message acknowledgement mechanisms.
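For readers unfamiliar with the model, the standard SINR feasibility condition (stated here in its common textbook form; the paper's exact variant and constants may differ) says that a link with sender $w$ and receiver $v$ at distance $d_{vv}$ succeeds in a slot with simultaneous transmitter set $S$ iff \[ \frac{P_v / d_{vv}^{\alpha}}{N + \sum_{w \in S \setminus \{v\}} P_w / d_{wv}^{\alpha}} \ \ge\ \beta, \] where $P_v$ is the transmission power (uniform in the basic problem), $d_{wv}$ is the distance from sender $w$ to receiver $v$, $\alpha$ is the path-loss exponent, $N$ is ambient noise, and $\beta$ is the required threshold.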
Carsten Moldenhauer. Primal-Dual Approximation Algorithms for Node-Weighted Steiner Forest on Planar Graphs
Abstract: Node-Weighted Steiner Forest is the following problem: Given an undirected graph, a set of pairs of terminal vertices, a weight function on the vertices, find a minimum weight set of vertices that includes and connects each pair of terminals. We consider the restriction to planar graphs where the problem remains NP-complete. Demaine et al. [DHK09] showed that the generic primal-dual algorithm of Goemans and Williamson [GW97] is a 6-approximation on planar graphs. We present (1) a different analysis to prove an approximation factor of 3, (2) show that this bound is tight for the generic algorithm, and (3) show how the algorithm can be improved to yield a 9/4-approximation algorithm. We give a simple proof for the first result using contraction techniques and present an example for the lower bound. Then, we establish a connection to the feedback problems studied by Goemans and Williamson [GW98]. We show how our constructions can be combined with their proof techniques yielding the third result and an alternative, more involved, way of deriving the first result. The third result induces an upper bound on the integrality gap of 9/4. Our analysis implies that improving this bound for Node-Weighted Steiner Forest via the primal-dual algorithm is essentially as difficult as improving the integrality gap for the feedback problems in [GW98].
David Hopkins, Andrzej Murawski and Luke Ong. A Fragment of ML Decidable by Visibly Pushdown Automata
Abstract: The simply-typed, call-by-value language, RML, may be viewed as a canonical restriction of Standard ML to ground-type references, augmented by a "bad variable" construct in the sense of Reynolds. By a short type, we mean a type of order at most 2 and arity at most 1. We consider the fragment of (finitary) RML, RML_OStr, consisting of terms-in-context such that (i) the term has a short type, and (ii) every argument type of the type of each free variable is short. RML_OStr is surprisingly expressive; it includes several instances of (in)equivalence in the literature that are challenging to prove using methods based on (state-based) logical relations. We show that it is decidable whether a given pair of RML_OStr terms-in-context is observationally equivalent. Using the fully abstract game semantics of RML, our algorithm reduces the problem to the language equivalence of visibly pushdown automata. When restricted to terms in canonical form, the problem is EXPTIME-complete.
Chandra Chekuri and Alina Ene. Submodular Cost Allocation Problem and Applications
Abstract: We study the Submodular Cost Allocation problem (MCSA). In this problem we are given a finite ground set $V$ and $k$ non-negative submodular set functions $f_1,\ldots,f_k$ on $V$. The objective is to partition $V$ into $k$ (possibly empty) sets $C_1, \cdots, C_k$ such that the sum $\sum_{i = 1}^k f_i(C_i)$ is minimized. Several well-studied problems such as the non-metric facility location problem, multiway cut in graphs and hypergraphs, and uniform metric labeling and its generalizations can be shown to be special cases of MCSA. In this paper we consider a convex-programming relaxation obtained via the Lov\'{a}sz extension for submodular functions. This allows us to understand several previous relaxations and rounding procedures in a unified fashion and also to develop new formulations and approximation algorithms for several problems. In particular, we give a $1.5$-approximation for the hypergraph multiway partition problem. We also give a $\min\{2(1-1/k), H_\Delta\}$-approximation for the hypergraph multiway cut problem when $\Delta$ is the maximum hyperedge degree. Both problems generalize the multiway cut problem in graphs, and the hypergraph multiway cut problem is approximation-equivalent to the node-weighted multiway cut problem in graphs.
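The relaxation rests on the Lov\'{a}sz extension of a submodular function, which the abstract invokes without spelling out; in one standard form (our reading, not a quotation from the paper) it is \[ \hat{f}(x) \;=\; \mathbb{E}_{\theta \sim U[0,1]}\big[\, f(\{ v \in V : x_v \ge \theta \}) \,\big], \qquad x \in [0,1]^V , \] which agrees with $f$ on 0/1 vectors and is convex precisely when $f$ is submodular. A natural convex-programming relaxation of the allocation problem then minimizes $\sum_{i=1}^k \hat{f}_i(x^i)$ over fractional assignments $x^1,\ldots,x^k \ge 0$ with $\sum_i x^i_v = 1$ for every $v \in V$.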
Kook Jin Ahn and Sudipto Guha. Linear Programming in the Semi-streaming Model with Application to the Maximum Matching Problem
Abstract: In this paper, we study linear programming based approaches to the maximum matching problem in the semi-streaming model. The semi-streaming model has gained attention as a model for processing massive graphs as the importance of such graphs has increased. This is a model where edges arrive in an adversarial order as a stream and we are allowed space proportional to the number of vertices in the graph. In recent years, there have been several new results in this semi-streaming model. However, broad techniques such as linear programming have not been adapted to this model. We present several techniques to adapt and optimize linear programming based approaches in the semi-streaming model, with an application to the maximum matching problem. As a consequence, we (almost) improve all previous results on this problem, and also prove new results on interesting variants.
Yuval Filmus, Toniann Pitassi and Rahul Santhanam. Exponential Lower Bounds for AC0-Frege Imply Superpolynomial Frege Lower Bounds
Abstract: We give a general transformation which turns polynomial-size Frege proofs into subexponential-size AC0-Frege proofs. This indicates that proving exponential lower bounds for AC0-Frege is hard, since it is a longstanding open problem to prove super-polynomial lower bounds for Frege. Our construction is optimal for tree-like proofs. As a consequence of our main result, we are able to shed some light on the question of weak automatizability for bounded-depth Frege systems. First, we present a simpler proof of the results of Bonet et al. showing that under cryptographic assumptions, bounded-depth Frege proofs are not weakly automatizable. Second, we show that because our proof is more general, under the right cryptographic assumptions it could resolve the weak automatizability question for lower-depth Frege systems.
Roberto Cominetti, Jose Correa and Omar Larre. Existence and uniqueness of equilibria for flows over time
Abstract: Network flows that vary over time arise naturally when modeling rapidly evolving systems such as the Internet. In this paper, we continue the study of equilibria for flows over time in the deterministic queuing model proposed by Koch and Skutella. We give a constructive proof for the existence and uniqueness of equilibria for the case of a piecewise constant inflow rate, through a detailed analysis of the static flows obtained as derivatives of a dynamic equilibrium. We also provide a nonconstructive existence proof of equilibria when the inflow rate is a general function.
Maurice Jansen and Rahul Santhanam. Permanent Does Not Have Succinct Polynomial Size Arithmetic Circuits of Constant Depth
Abstract: We show that over fields of characteristic zero there does not exist a polynomial $p(n)$ and a constant-free succinct arithmetic circuit family $\{\Phi_n\}$, where $\Phi_n$ has size at most $p(n)$ and depth $O(1)$, such that $\Phi_n$ computes the $n\times n$ permanent. A circuit family $\{\Phi_n\}$ is succinct if there exists a {\em nonuniform} Boolean circuit family $\{C_n\}$ with $O(\log n)$ many inputs and size $n^{o(1)}$ such that $C_n$ can correctly answer direct connection language queries about $\Phi_n$; succinctness is a relaxation of uniformity. To obtain this result we develop a novel technique that further strengthens the connection between black-box derandomization of polynomial identity testing and lower bounds for arithmetic circuits. From this we obtain the lower bound by explicitly constructing a hitting set against arithmetic circuits in the polynomial hierarchy. To the best of our knowledge, this is the first lower bound proof which takes advantage of (partial) uniformity without using diagonalization.
Vassilis Zikas and Martin Hirt. Player-Centric Byzantine Agreement
Abstract: Byzantine Agreement (BA) is one of the most important primitives in the area of secure distributed computation. It allows a set of players to agree on a value, even when some of the players are faulty. BA comes in two flavors: {\em Consensus} and {\em Broadcast}. Both primitives require that all non-faulty players agree on an output-value $y$ (consistency). However, in Consensus every player has an input, and it is required that if all players pre-agree on some $x$ then $y=x$, whereas in Broadcast only one player, the {\em sender}, has input and it is required that $y$ equals the sender's input, unless he is faulty (validity). Most of the existing feasibility results for BA are of an {\em all-or-nothing} fashion: in Broadcast they address the question whether or not there exists a protocol which allows {\em any} player to broadcast his input. Similarly, in Consensus the question is whether or not consensus can be reached which respects pre-agreement on the inputs of {\em all} correct players. In this work, we take a different approach motivated by the above observation, namely that Consensus and Broadcast are only two extreme sub-cases of a more general consistency primitive. In particular, we are interested in the question whether or not the validity condition of BA primitives can be satisfied {\em with respect to a specific sub-set of the complete player set \PS}. In this direction, we introduce the natural notion of {\em player-centric BA}, which is a class of BA primitives, denoted as $\pcba=\{\pba{\CC}\}_{\CC\subseteq\PS}$, parametrized by subsets $\CC$ of the player set. For each primitive $\pba{\CC}\in\pcba$ the validity is defined on the input(s) of the players in $\CC$. Note that Broadcast (with sender $p$) and Consensus are special cases of \pcba primitives for $\CC=\{p\}$ and $\CC=\PS$, respectively. We study feasibility of $\pcba$ for arbitrary sets $\CC\subseteq\PS$ in a setting where a general (aka non-threshold) mixed (active/passive) adversary is considered. Such an adversary is characterized by a so-called adversary structure, which is an enumeration of admissible adversary classes specifying the sets of players that can be actively and passively corrupted. We give a complete characterization of adversaries tolerable for $\pcba$ for all three security levels, i.e., perfect, statistical, and computational security. Note that with the exception of perfect security, exact bounds for this model are not even known for the traditional notions of BA. Our results expose an asymmetry of Broadcast which has, so far, been neglected in the literature: there exist non-trivial adversaries which can be tolerated for Broadcast with some sender $p_i\in\PS$ but {\em not} for every $p_j\in\PS$ being the sender. Finally, we show how to extend the definition of \pcba to a setting where the adversary can actively, passively, and fail corrupt players simultaneously. For this setting, we give exact feasibility bounds for computationally secure \pba{\PS} (aka Consensus), assuming a public-key infrastructure (PKI). This last result answers an open problem from ASIACRYPT~2008 that concerns feasibility of computationally secure multi-party computation in this model.
Konstantin Makarychev and Maxim Sviridenko. Maximizing a Polynomial Subject to Assignment Constraints
Abstract: We study the q-adic assignment problem. We first give an $O(n^{(q-1)/2})$-approximation algorithm for the Koopmans-Beckman version of the problem improving upon the result of Barvinok. Then, we introduce a new family of instances satisfying "tensor triangle inequalities" and give a constant factor approximation algorithm for them. We show that many classical optimization problems can be modeled by q-adic assignment problems from this family. Finally, we give several integrality gap examples for the natural LP relaxations of the problem.
Danny Hermelin, Matthias Mnich, Erik Jan Van Leeuwen and Gerhard J. Woeginger. Domination when the Stars Are Out
Abstract: We algorithmize the recent structural characterization for claw-free graphs by Chudnovsky and Seymour. Building on this result, we show that Dominating Set on claw-free graphs is (i) fixed-parameter tractable and (ii) even possesses a polynomial kernel. To complement these results, we establish that Dominating Set is not fixed-parameter tractable on the slightly larger class of graphs that exclude K_{1,4} as an induced subgraph. Our results provide a dichotomy for Dominating Set in K_{1,L}-free graphs and show that the problem is fixed-parameter tractable if and only if L <= 3. Finally, we show that our algorithmization can also be used to show that the related Connected Dominating Set problem is fixed-parameter tractable on claw-free graphs.
Ittai Abraham, Daniel Delling, Amos Fiat, Andrew Goldberg and Renato Werneck. VC-Dimension and Shortest Path Algorithms
Abstract: We explore the relationship between learning theory and graph algorithm design. In particular, we show that set systems induced by sets of vertices on shortest paths have VC-dimension at most two. This allows us to use a result from learning theory to improve time bounds on query algorithms for the point-to-point shortest path problem in networks of low highway dimension. We also refine the definitions of highway dimension and related concepts, making them more general and potentially more relevant to practice. In particular, we define highway dimension in terms of set systems induced by shortest paths, and give cardinality-based and average case definitions.
Isolde Adler, Stavros Kolliopoulos, Philipp Klaus Krause, Daniel Lokshtanov, Saket Saurabh and Dimitrios Thilikos. Tight Bounds for Linkages in Planar Graphs
Abstract: The {\sc Disjoint-Paths Problem} asks, given a graph $G$ and a set of pairs of terminals $(s_{1},t_{1}),\ldots,(s_{k},t_{k})$, whether there is a collection of $k$ vertex-disjoint paths linking $s_{i}$ and $t_{i}$, for $i=1,\ldots,k$. In their $f(k)\cdot n^{3}$ algorithm for this problem, Robertson and Seymour introduced the {\sl irrelevant vertex technique} according to which in every instance of treewidth greater than $g(k)$ there is an ``irrelevant'' vertex whose removal creates an equivalent instance of the problem. This fact is based on the celebrated {\sl Unique Linkage Theorem}, whose -- very technical -- proof gives a function $g(k)$ that is responsible for an immense parametric dependence in the running time of the algorithm. In this paper we prove this result for planar graphs achieving $g(k)=2^{O(k)}$. Our bound is radically better than the bounds known for general graphs. Moreover, our proof is new and self-contained, and it strongly exploits the combinatorial properties of planar graphs. We also prove that our result is optimal, in the sense that the bound on $g(k)$ cannot become better than exponential. Our results suggest that any algorithm for the {\sc Disjoint-Paths Problem} that runs in time better than $2^{2^{o(k)}}\cdot n^{O(1)}$ should probably require drastically different ideas from those in the irrelevant vertex technique.
Olivier Carton, Thomas Colcombet and Gabriele Puppis. Regular Languages of Words Over Countable Linear Orderings
Abstract: We develop an algebraic model suitable for recognizing languages of words indexed by countable linear orderings. We prove that this notion of recognizability is effectively equivalent to definability in monadic second-order (MSO) logic. This reproves in particular the decidability of MSO logic over the rationals with order. Our proof also implies the first known collapse result for MSO logic over countable linear orderings.
Vince Barany, Balder Ten Cate and Luc Segoufin. Guarded negation
Abstract: We consider restrictions of first-order logic and of fixpoint logic in which all occurrences of negation are required to be guarded by an atomic predicate. In terms of expressive power, the logics in question, called GNFO and GNFP, extend the guarded fragment of first-order logic and guarded least fixpoint logic, respectively. They also extend the recently introduced unary negation fragments of first-order logic and of least fixpoint logic. We show that the satisfiability problem for GNFO and for GNFP is 2ExpTime-complete, both on arbitrary structures and on finite structures. We also study the complexity of the associated model checking problems. Finally, we show that GNFO and GNFP are not only computationally but also model-theoretically well behaved: we show that GNFO and GNFP have the tree-like model property and that GNFO has the finite model property, and we characterize the expressive power of GNFO in terms of invariance for an appropriate notion of bisimulation.
Heng Guo, Pinyan Lu and Leslie Valiant. The Complexity of Symmetric Boolean Parity Holant Problems
Abstract: For certain subclasses of NP, $\oplus$P or \#P characterized by local constraints, it is known that if there exist any problems that are not polynomial time computable within that subclass, then those problems are NP-, $\oplus$P- or \#P-complete. Such dichotomy results have been proved for characterizations such as Constraint Satisfaction Problems, and directed and undirected Graph Homomorphism Problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant Problems. These additionally allow for the expression of matching problems, which have had pivotal roles in complexity theory. As our main result we prove the dichotomy theorem that, for the class $\oplus$P, every set of boolean symmetric Holant signatures of any arities that is not polynomial time computable is $\oplus$P-complete. The result exploits some special properties of the class $\oplus$P and characterizes four distinct tractable subclasses within $\oplus$P. It leaves open the corresponding questions for NP, $\#$P and $\#_k$P for $k\neq 2$.
Sanjeev Arora and Rong Ge. New Algorithms for Learning in Presence of Errors
Abstract: We give new algorithms for a variety of randomly-generated instances of computational problems using a linearization technique that reduces them to solving a system of linear equations. These algorithms are derived in the context of learning with structured noise, a notion introduced in this paper. This notion is best illustrated with the learning parities with noise (LPN) problem --- well-studied in learning theory and cryptography. In the standard version, we have access to an oracle that, each time we press a button, returns a random vector $\vec{a} \in \GF(2)^n$ together with a bit $b \in \GF(2)$ that was computed as $\vec{a}\cdot \vec{u} +\eta$, where $\vec{u}\in \GF(2)^n$ is a {\em secret} vector, and $\eta \in \GF(2)$ is a noise bit that is $1$ with some probability $p$. Say $p=1/3$. The goal is to recover $\vec{u}$. This task is conjectured to be intractable. In the structured noise setting we introduce a slight (?) variation of the model: upon pressing a button, we receive (say) $10$ random vectors $\vec{a_1}, \vec{a_2}, \ldots, \vec{a_{10}} \in \GF(2)^n$, and corresponding bits $b_1, b_2, \ldots, b_{10}$, of which at most $3$ are noisy. The oracle may arbitrarily decide which of the $10$ bits to make noisy. We exhibit a polynomial-time algorithm to recover the secret vector $\vec{u}$ given such an oracle. We think this structured noise model may be of independent interest in machine learning. We discuss generalizations of our result, including learning with more general noise patterns. We also give the first nontrivial algorithms for two problems, which we show fit in our structured noise framework. We give a slightly subexponential algorithm for the well-known learning with errors (LWE) problem over $\GF(q)$ introduced by Regev for cryptographic uses. Our algorithm works for the case when the Gaussian noise is small, which was an open problem. Our result also clarifies why existing hardness results fail at this particular noise rate. We also give polynomial-time algorithms for learning the MAJORITY OF PARITIES function of Applebaum et al. for certain parameter values. This function is a special case of Goldreich's pseudorandom generator.
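The structured-noise oracle described above is easy to simulate; the sketch below (illustrative code with made-up parameter names, and with the adversarial choice of noisy positions simplified to a random one) shows what a single query returns.

```python
# Toy simulation of the structured-noise LPN oracle (the paper's oracle chooses the
# noisy positions adversarially; here they are chosen at random for simplicity).
import random

def make_structured_lpn_oracle(u, m=10, max_noisy=3):
    n = len(u)
    def query():
        noisy = set(random.sample(range(m), random.randint(0, max_noisy)))
        vectors, labels = [], []
        for j in range(m):
            a = [random.randint(0, 1) for _ in range(n)]
            b = sum(ai * ui for ai, ui in zip(a, u)) % 2    # a . u over GF(2)
            if j in noisy:
                b ^= 1                                      # flipped (noisy) label
            vectors.append(a)
            labels.append(b)
        return vectors, labels
    return query

secret = [1, 0, 1, 1, 0]
oracle = make_structured_lpn_oracle(secret)
A, b = oracle()
print(len(A), b)   # 10 vectors over GF(2)^5 and their labels, at most 3 of them noisy
```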
Arash Farzan and Shahin Kamali. Compact Navigation and Distance Oracles for Graphs with Small Treewidth
Abstract: Given an unlabeled, unweighted, and undirected graph with $n$ vertices and small (but not necessarily constant) treewidth $k$, we consider the problem of preprocessing the graph to build space-efficient encodings (oracles) to perform various queries efficiently. We assume the word RAM model where the size of a word is $\omegah{\log n}$ bits. The first oracle we present is the navigation oracle, which facilitates primitive navigation operations of adjacency, neighborhood, and degree queries. By way of an enumeration argument, which is of independent interest, we show that the space requirement of the oracle is optimal to within lower order terms for all treewidths. The oracle supports all the mentioned queries in constant worst-case time. The second oracle we present is an exact distance oracle, which facilitates distance queries between any pair of vertices (i.e., an all-pair shortest-path oracle). The space requirement of the oracle is also optimal to within lower order terms. Moreover, distance queries are answered in $\oh{k^2\log^3 k}$ time. In particular, for the class of graphs of our main interest, graphs of bounded treewidth (where $k$ is constant), distances are reported in constant worst-case time.
Ashwinkumar Badanidiyuru Varadaraja. Buyback Problem - Approximate matroid intersection with cancellation costs
Abstract: In the buyback problem, an algorithm observes a sequence of bids and must decide whether to accept each bid at the moment it arrives, subject to some constraints on the set of accepted bids. Decisions to reject bids are irrevocable, whereas decisions to accept bids may be canceled at a cost that is a fixed fraction of the bid value. Prior to our work, deterministic and randomized algorithms were known when the constraint is a matroid constraint. We extend this and give a deterministic algorithm for the case when the constraint is an intersection of k matroid constraints. We further prove a matching lower bound on the competitive ratio for this problem and extend our results to arbitrary downward closed set systems. This problem has applications to banner advertisement, semi-streaming, routing, load balancing and other problems where preemption or cancellation of previous allocations is allowed.
Anand S, Naveen Garg and Nicole Megow. Meeting deadlines: How much speed suffices?
Abstract: We consider the online problem of scheduling real-time jobs with hard deadlines on~$m$ parallel machines. Each job has a processing time and a deadline, and the objective is to schedule jobs so that they complete before the deadline. It is known that even when the instance is feasible it may not be possible to meet all deadlines when jobs arrive online over time. We therefore consider the setting when the algorithm has available machines with speed~$s>1$. We present a new online algorithm that finds a feasible schedule on machines of speed~$e/(e-1)$ for any instance that is feasible on unit speed machines. This improves on the previously best known result which requires a speed of~$2-2/(m+1)$. Our algorithm only uses the relative order of job deadlines and is oblivious of the actual deadline values. It was shown earlier that the minimum speed required for such algorithms is~$e/(e-1)$, and thus, our analysis is tight. We also show that our new algorithm outperforms two other well-known algorithms by giving the first lower bounds on their minimum speed requirement.
Matthew Anderson, Dieter Van Melkebeek, Nicole Schweikardt and Luc Segoufin. Locality of queries definable in invariant first-order logic with arbitrary built-in predicates
Abstract: We consider first-order formulas over relational structures which, in addition to the symbols in the relational schema, may use a binary relation interpreted as a linear order over the elements of the structures, and arbitrary numerical predicates (including addition and multiplication) interpreted over that linear order. We require that for each fixed interpretation of the initial relational schema, the validity of the formula is independent of the particular interpretation of the linear order and its associated numerical predicates. We refer to such formulas as Arb-invariant first-order. Our main result shows a Gaifman locality theorem: two tuples of a structure with n elements, having the same neighborhood up to distance (\log n)^{\omega(1)}, cannot be distinguished by Arb-invariant first-order formulas. When restricting attention to word structures, we can achieve the same quantitative strength for Hanf locality: Arb-invariant first-order formulas cannot distinguish between two words having the same neighborhoods up to distance (\log n)^{\omega(1)}. In both cases we show that our bounds are tight. Our proof exploits the close connection between Arb-invariant first-order formulas and the complexity class AC^0, and hinges on the tight lower bounds for parity on constant-depth circuits.
Daniel Reichman and Uriel Feige. Recoverable Values for Independent Sets
Abstract: The notion of {\em recoverable value} was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has {\em recoverable value} $\rho$, if for every graph $G$ it recovers an independent set of size at least $\max_I \sum_{v\in I} \min[1,\rho/(d(v) + 1)]$, where $d(v)$ is the degree of vertex $v$, and $I$ ranges over all independent sets in $G$. Hence, in a sense, from every vertex $v$ in the maximum independent set the algorithm recovers a value of at least $\rho/(d(v) + 1)$ towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure $\rho \ge 1$. We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least $\rho \ge 15/7$. In addition, we show that approximating MIS in graphs with a given $k$-coloring within a ratio larger than $2/k$ is unique games hard. This rules out a natural approach for obtaining $\rho \ge 2$.
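For intuition about the benchmark, the sketch below (illustrative code, not the paper's 15/7 algorithm) runs the minimum-degree greedy algorithm, whose output has size at least sum_v 1/(d(v)+1) by the Caro-Wei bound and hence achieves rho >= 1 in the sense above, and compares it with the rho = 1 benchmark value of a candidate independent set.

```python
# Illustrative sketch: minimum-degree greedy for MIS versus the recoverable-value benchmark.

def greedy_min_degree_mis(adj):
    """adj: dict vertex -> set of neighbours of an undirected graph."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # pick a minimum-degree vertex
        independent.add(v)
        to_remove = adj[v] | {v}
        for u in to_remove:
            adj.pop(u, None)
        for w in adj:
            adj[w] -= to_remove
    return independent

def benchmark(adj, I, rho=1.0):
    """sum over v in I of min(1, rho/(d(v)+1)) for a candidate independent set I."""
    return sum(min(1.0, rho / (len(adj[v]) + 1)) for v in I)

# 5-cycle: every degree is 2, so the rho = 1 benchmark of any independent set I is |I|/3.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
S = greedy_min_degree_mis(C5)
print(len(S), benchmark(C5, S))   # 2 and 0.666..., consistent with rho >= 1
```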
Danny Hermelin, Avivit Levy, Oren Weimann and Raphael Yuster. Distance Oracles for Vertex-Labeled Graphs
Abstract: Given a graph G = (V,E) with non-negative edge-lengths whose vertices are assigned a label from L = {\lambda_1,...,\lambda_\ell}, we construct a compact distance oracle that answers queries of the form: "What is d(v,\lambda)?", where v is a vertex in the graph, \lambda a vertex-label, and d(v,\lambda) is the distance (length of the shortest path) between v and the closest vertex labeled \lambda in G. We give the first formalization of this natural problem and provide a hierarchy of approximate distance oracles that require subquadratic space and return a distance of constant stretch. We also extend our solution to dynamic oracles that can handle label changes in sublinear time.
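A useful point of reference for the queries above is the naive baseline sketched below (illustrative code, not the paper's oracle): attaching all vertices labeled \lambda as zero-distance sources and running Dijkstra yields d(v,\lambda) for every v, but storing such answers for all labels costs on the order of n times \ell space, which is exactly what the subquadratic oracles avoid.

```python
# Naive baseline for vertex-to-label distances (illustrative, not the compact oracle).
import heapq

def distances_to_label(adj, labels, lam):
    """adj: dict v -> list of (u, length); labels: dict v -> label; returns d(., lam)."""
    dist = {v: float('inf') for v in adj}
    heap = []
    for v in adj:
        if labels[v] == lam:          # every lambda-labeled vertex starts at distance 0
            dist[v] = 0.0
            heap.append((0.0, v))
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for u, w in adj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist

adj = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 1.0)], 2: [(1, 1.0)]}
labels = {0: 'a', 1: 'b', 2: 'a'}
print(distances_to_label(adj, labels, 'a'))   # {0: 0.0, 1: 1.0, 2: 0.0}
```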
Piotr Berman, Arnab Bhattacharyya, Elena Grigorescu, Sofya Raskhodnikova, David Woodruff and Grigory Yaroslavtsev. Linear Programming and Combinatorial Bounds on Steiner Transitive-Closure Spanners
Abstract: Given a directed graph G = (V,E) and an integer k >= 1, a k-transitive-closure-spanner (k-TC-spanner) of G is a directed graph H = (V, E_H) that has (1) the same transitive closure as G and (2) diameter at most k. In some applications, the shortcut paths added to the graph in order to obtain small diameter can use Steiner vertices, that is, vertices not in the original graph G. The resulting spanner is called a Steiner transitive-closure spanner (Steiner TC-spanner). Motivated by applications to property reconstruction and access control hierarchies, we concentrate on Steiner TC-spanners of directed acyclic graphs or, equivalently, partially ordered sets. In these applications, the goal is to find a sparsest Steiner k-TC-spanner of a poset G for a given k and G. The focus of this paper is the relationship between the dimension of a poset and the size of its sparsest Steiner TC-spanner. The dimension of a poset G is the smallest d such that G can be embedded into a d-dimensional directed hypergrid via an order-preserving embedding. We present a nearly tight lower bound on the size of Steiner 2-TC-spanners of d-dimensional directed hypergrids. It implies better lower bounds on the complexity of local reconstructors of monotone functions and functions with low Lipschitz constant. The proof of the lower bound constructs a dual solution to a linear programming relaxation of the Steiner 2-TC-spanner problem. We also show that one can efficiently construct a Steiner 2-TC-spanner, of size matching the lower bound, for any low-dimensional poset. Finally, we present a lower bound on the size of Steiner k-TC-spanners of d-dimensional posets that shows that the best-known construction, due to De Santis et al., cannot be improved significantly.
Laurent Bulteau, Guillaume Fertin and Irena Rusu. Sorting by Transpositions is Difficult
Abstract: In comparative genomics, a transposition is an operation that exchanges two consecutive sequences of genes in a genome. The transposition distance, that is, the minimum number of transpositions needed to transform a genome into another, is, according to numerous studies, a relevant evolutionary distance. The problem of computing this distance when genomes are represented by permutations, called the Sorting by Transpositions problem (SBT), was introduced by Bafna and Pevzner in 1995. It has naturally been the focus of a number of studies, but the computational complexity of this problem has remained undetermined for 15 years. In this paper, we answer this long-standing open question by proving that the Sorting by Transpositions problem is NP-hard. As a corollary of our result, we also prove that the following problem is NP-hard: given a permutation \pi, is it possible to sort \pi using d_b(\pi)/3 transpositions, where d_b(\pi) is the number of breakpoints of \pi?
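The two notions used in the corollary are simple to state in code; the sketch below (illustrative, not from the paper) applies a transposition, which exchanges two adjacent blocks of a permutation, and counts breakpoints with the usual framing by 0 and n+1. Since one transposition removes at most three breakpoints, d_b(\pi)/3 is the natural lower bound appearing above.

```python
# Illustrative sketch of transpositions and breakpoints (not from the paper).

def transpose(pi, i, j, k):
    """Exchange the adjacent blocks pi[i:j] and pi[j:k] (0-based indices, i < j < k)."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

def breakpoints(pi):
    """Number of positions where consecutive entries are not consecutive integers."""
    ext = [0] + list(pi) + [len(pi) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if b != a + 1)

pi = [3, 4, 1, 2]
print(breakpoints(pi))                        # 3 breakpoints: 0|3, 4|1 and 2|5
sorted_pi = transpose(pi, 0, 2, 4)            # swap the blocks [3, 4] and [1, 2]
print(sorted_pi, breakpoints(sorted_pi))      # [1, 2, 3, 4] 0
```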
Shin-Ya Katsumata. Relating Computational Effects by TT-Lifting
Abstract: We consider the problem of establishing a relationship between two monadic semantics of the lambda_c-calculus extended with algebraic operations. We show that two monadic semantics of any program are related if the relation includes value relations and is closed under the algebraic operations in the lambda_c-calculus.
Jim Laird, Giulio Manzonetto and Guy Mccusker. Constructing differential categories and deconstructing categories of games
Abstract: We present an abstract construction for building differential categories useful to model resource sensitive calculi, and we apply it to categories of games. In one instance, we recover a category previously used to give a fully abstract model of a nondeterministic imperative language. The construction exposes the differential structure already present in this model. A second instance corresponds to a new Cartesian differential category of games. We give a model of a Resource PCF in this category and show that it enjoys the finite definability property. Comparison with a relational semantics reveals that the latter also possesses this property and is fully abstract.
Radu Mardare, Luca Cardelli and Kim Larsen. Modular Markovian Logic
Abstract: We introduce Modular Markovian Logic (MML) for compositional continuous-time and continuous-space Markov processes. MML combines operators specific to stochastic logics with operators that reflect the modular structure of the semantics, similar to those used by spatial and separation logics. We present a complete Hilbert-style axiomatization for MML, prove the small model property and analyze the relation between the stochastic bisimulation and the logical equivalence relation induced by MML on models.
Moran Feldman, Seffi Naor and Roy Schwartz. Nonmonotone Submodular Maximization via a Structural Continuous Greedy Algorithm
Abstract: Consider a situation in which one has a suboptimal solution $S$ to a maximization problem which only constitutes a weak approximation to the problem. Suppose that even though the value of $S$ is small compared to an optimal solution $OPT$ to the problem, $S$ happens to be structurally similar to $OPT$. A natural question to ask in this scenario is whether there is a way of improving the value of $S$ based solely on this information. In this paper we introduce the \emph{Structural Continuous Greedy Algorithm} which answers this question affirmatively in the context of the \textsc{Nonmonotone Submodular Maximization Problem}. Using this algorithm we are able to improve on the best approximation factor known for this problem. In the \textsc{Nonmonotone Submodular Maximization Problem} we are given a non-negative submodular function $f$, and the objective is to find a subset maximizing $f$. This is considered one of the basic submodular optimization problems, generalizing many well known problems such as the Max Cut problem in undirected graphs. The current best approximation factor for this problem is $0.41$ given by Gharan and Vondr\'{a}k. On the other hand, Feige et al. showed that no algorithm can give $0.5 + \ee$ approximation for it. Our method yields an improved $0.42$-approximation for the problem.
Robert Elsaesser and Tobias Tscheuschner. Settling the complexity of local max-cut (almost) completely
Abstract: We consider the problem of finding a local optimum for the max-cut problem with FLIP-neighborhood, in which exactly one node may change the partition. Schäffer and Yannakakis (SICOMP, 1991) showed PLS-completeness of this problem on graphs with unbounded degree. On the other hand, Poljak (SICOMP, 1995) showed that in cubic graphs every FLIP local search takes O(n^2) steps, where n is the size of the graph. Due to the huge gap between degree three and unbounded degree, Ackermann, Röglin, and Vöcking (JACM, 2008) asked for the smallest d such that on graphs with maximum degree d the local max-cut problem with FLIP-neighborhood is PLS-complete. In this paper, we prove that the computation of a local optimum on graphs with maximum degree five is PLS-complete. Thus, we solve the problem posed by Ackermann et al. almost completely, by showing that d is either 4 or 5 (unless PLS is in P). On the other hand, we also prove that on graphs with degree O(log n) every FLIP local search has polynomial smoothed complexity with high probability. Roughly speaking, for any instance in which the edge weights are perturbed by (Gaussian) random noise with variance σ^2, every FLIP local search terminates in time polynomial in n and σ, with probability 1−n^{−Ω(1)}. Putting both results together, we may conclude that although local max-cut is likely to be hard on graphs with bounded degree, it can be solved in polynomial time for slightly perturbed instances with high probability.
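For reference, FLIP local search itself is a few lines; the sketch below (illustrative code, not the PLS-reduction of the paper) repeatedly moves a single node across the cut while this strictly increases the cut weight, which is exactly the dynamics whose number of steps the abstract discusses.

```python
# Illustrative sketch of FLIP local search for max-cut (not from the paper).

def flip_local_search(weights, side):
    """weights: dict {(u, v): w} with u < v; side: dict node -> 0/1 initial partition."""
    def gain(v):
        g = 0.0
        for (a, b), w in weights.items():
            if v in (a, b):
                other = b if a == v else a
                g += w if side[other] == side[v] else -w   # change in cut weight if v flips
        return g
    improved = True
    while improved:
        improved = False
        for v in side:
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    return side

w = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 1.0}
print(flip_local_search(w, {0: 0, 1: 0, 2: 0}))   # {0: 1, 1: 0, 2: 1}: a FLIP-local optimum of weight 5
```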
Georg Zetzsche. On the capabilities of grammars, automata, and transducers controlled by monoids
Abstract: During the last decades, classical models in language theory have been extended by control mechanisms defined by monoids. We study which monoids cause the extensions of context-free grammars, finite automata, or finite state transducers to exceed the capacity of the original model. Furthermore, we investigate when, in the extended automata model, the nondeterministic variant differs from the deterministic one in capacity. We show that all these conditions are in fact equivalent and present an algebraic characterization. In particular, the open question of whether every language generated by a valence grammar over a finite monoid is context-free is provided with a positive answer.
Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye and Amélie Stainer. Emptiness and Universality Problems in Timed Automata with Positive Frequency
Abstract: The languages of infinite timed words accepted by timed automata are traditionally defined using Büchi-like conditions. These acceptance conditions focus on the set of locations visited infinitely often along a run, but completely ignore quantitative timing aspects. In this paper we propose a natural quantitative semantics for timed automata based on the so-called frequency, which measures the proportion of time spent in the accepting states. We study various properties of timed languages accepted with positive frequency, and in particular the emptiness and universality problems.
Magnus Bordewich and Ross J. Kang. Rapid mixing of subset Glauber dynamics on graphs of bounded tree-width
Abstract: Motivated by the `subgraphs world' view of the ferromagnetic Ising model, we develop a general approach to studying mixing times of Glauber dynamics based on subset expansion expressions for a class of graph polynomials. With a canonical paths argument, we demonstrate that the chains defined within this framework mix rapidly upon graphs of bounded tree-width. This extends known results on rapid mixing for the Tutte polynomial, the adjacency-rank ($R_2$-)polynomial and the interlace polynomial.
Lei Huang and Toniann Pitassi. Automatizability and Simple Stochastic Games
Abstract: The complexity of simple stochastic games (SSGs) has been open since they were defined by Condon in 1992. Such a game is played by two players, Min and Max, on a graph consisting of max nodes, min nodes, and average nodes. The goal of Max is to reach the 1-sink, while the goal of Min is to avoid the 1-sink. When on a max (min) node, Max (Min) chooses the outedge, and when on an average node, they take each edge with equal probability. The complexity problem is to determine, given a graph, whether or not Max has a strategy that is guaranteed to reach the 1-sink with probability at least 1/2. Despite intensive effort, the complexity of this problem is still unresolved. In this paper, we establish a new connection between the complexity of SSGs and the complexity of an important problem in proof complexity--the proof search problem for low depth Frege systems. We prove that if depth-3 Frege systems are weakly automatizable, then SSGs are solvable in polynomial-time. Moreover we identify a natural combinatorial principle, which is a version of the well-known Graph Ordering Principle (GOP), that we call the integer-valued GOP (IGOP). This principle states that for any graph $G$ with nonnegative integer weights associated with each node, there exists a locally maximal vertex (a vertex whose weight is at least as large as its neighbors). We prove that if depth-2 Frege plus IGOP is weakly automatizable, then SSG is in P.
Holger Hermanns, David N. Jansen, Flemming Nielson and Lijun Zhang. Automata-based CSL model checking
Abstract: For continuous-time Markov chains, the model-checking problem with respect to continuous-time stochastic logic (CSL) has been introduced and shown to be decidable by Aziz, Sanwal, Singhal and Brayton in 1996. The presented decision procedure, however, has exponential complexity. In this paper, we propose an effective approximation algorithm for full CSL. The key to our method is the notion of stratified CTMCs with respect to the CSL property to be checked. We present a measure-preservation theorem allowing us to reduce the problem to a transient analysis on stratified CTMCs. The corresponding probability can then be approximated in polynomial time (using uniformization). This makes the present work the centerpiece of a broadly applicable full CSL model checker. Recently, the decision algorithm by Aziz et al. was shown to work only for stratified CTMCs. As an additional contribution, our measure-preservation theorem can be used to ensure the decidability for general CTMCs.
Stefan Canzar, Khaled Elbassioni, Gunnar W. Klau and Julián Mestre. On Tree-Constrained Matchings and Generalizations
Abstract: We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that none of the matched nodes is an ancestor of another matched node in either of the trees. This generalization of the classical bipartite matching problem appears, for example, in the computational analysis of live cell video data. We show that the problem is $\mathcal{APX}$-hard and thus, unless $\mathcal{P} = \mathcal{NP}$, disprove a previous claim that it is solvable in polynomial time. Furthermore, we give a $2$-approximation algorithm based on a combination of the local ratio technique and a careful use of the structure of basic feasible solutions of a natural LP-relaxation, which we also show to have an integrality gap of $2-o(1)$. In the second part of the paper, we consider a natural generalization of the problem, where trees are replaced by partially ordered sets (posets). We show that the local ratio technique gives a $2k\rho$-approximation for the $k$-dimensional matching generalization of the problem, in which the maximum number of incomparable elements below (or above) any given element in each poset is bounded by $\rho$. We finally give an almost matching integrality gap example, and an inapproximability result showing that the dependence on $\rho$ is most likely unavoidable.
Johannes Dams, Martin Hoefer and Thomas Kesselheim. Convergence Time of Power Control Dynamics
Abstract: We study two (classes of) distributed algorithms for power control in a general model of wireless networks. There are n wireless communication requests or links that experience interference and noise. To be successful a link must satisfy an SINR constraint. The goal is to find a set of powers such that all links are successful simultaneously. A classic algorithm for this problem is the fixed-point iteration due to Foschini and Miljanic [1992], for which we prove the first bounds on worst-case running times -- after roughly O(n log n) rounds all SINR constraints are nearly satisfied. When we try to satisfy each constraint exactly, however, convergence time is infinite. For this case, we design a novel framework for power control using regret learning algorithms and iterative discretization. While the exact convergence times must rely on a variety of parameters, we show that roughly a polynomial number of rounds suffices to make every link successful during at least a constant fraction of all previous rounds.
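The Foschini-Miljanic iteration analysed above has a very compact description: in every round each link rescales its power so that, against the previous round's interference, it would exactly meet its SINR target. The sketch below (illustrative code with invented gains and targets) shows the update.

```python
# Illustrative sketch of the Foschini-Miljanic fixed-point iteration (not from the paper).

def foschini_miljanic(G, noise, beta, rounds=50):
    """G[i][j]: gain from transmitter j at receiver i; noise[i] and SINR target beta[i] per link."""
    n = len(G)
    p = [1.0] * n
    for _ in range(rounds):
        interference = [noise[i] + sum(G[i][j] * p[j] for j in range(n) if j != i)
                        for i in range(n)]
        p = [beta[i] * interference[i] / G[i][i] for i in range(n)]   # meet the target exactly
    return p

G = [[1.0, 0.1], [0.2, 1.0]]
p = foschini_miljanic(G, noise=[0.1, 0.1], beta=[2.0, 2.0])
sinr = [G[i][i] * p[i] / (0.1 + sum(G[i][j] * p[j] for j in range(2) if j != i)) for i in range(2)]
print(p, sinr)   # when a feasible power vector exists, the SINRs approach the targets
```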
Michael Benedikt, Gabriele Puppis and Cristian Riveros. The cost of traveling between languages
Abstract: We show how to calculate the number of edits per character needed to convert a string in one regular language to a string in another language. Our algorithm makes use of a local determinization procedure applicable to a subclass of distance automata. We then show how to calculate the same property when the editing needs to be done in streaming fashion, by a finite state transducer, using a reduction to mean-payoff games. Finally, we determine when the optimal editor can be approximated by a streaming editor -- the construction here makes use of a combination of games and distance automata.
Alexey Gotsman and Hongseok Yang. Liveness-preserving atomicity abstraction
Abstract: Modern concurrent algorithms are usually encapsulated in libraries, and complex algorithms are often constructed using libraries of simpler ones. We present the first theorem that allows harnessing this structure to give compositional liveness proofs to concurrent algorithms and their clients. We show that, while proving a liveness property of a client using a concurrent library, we can soundly replace the library by another one related to the original library by a generalisation of a well-known notion of linearizability. We apply this result to show formally that lock-freedom, an often-used liveness property of non-blocking algorithms, is compositional for linearizable libraries, and provide an example illustrating our proof technique.
Olaf Beyersdorff, Nicola Galesi, Massimo Lauria and Alexander Razborov. Parameterized Bounded-Depth Frege is Not Optimal
Abstract: A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [DMS07]. In that framework the parameterized version of any proof system is not fpt-bounded for some technical reasons, but we remark that this question becomes much more interesting if we restrict ourselves to those parameterized contradictions $(F,k)$ in which $F$ itself is a contradiction. We call such parameterized contradictions {\em strong}, and with one important exception (vertex cover) all interesting contradictions we are aware of are strong. It follows from the gap complexity theorem of [DMS07] that tree-like Parameterized Resolution is not fpt-bounded w.r.t. strong parameterized contradictions. The main result of this paper significantly improves upon this by showing that even the parameterized version of bounded-depth Frege is not fpt-bounded w.r.t. strong contradictions. More precisely, we prove that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in [DMS07]. In the opposite direction, we interpret a well-known FPT algorithm for vertex cover as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution that in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNF's.
Stephane Durocher, Meng He, Ian Munro, Patrick Nicholson and Matthew Skala. Range Majority in Constant Time and Linear Space
Abstract: Given an array $A$ of size $n$, we consider the problem of answering range majority queries: given a query range $[i..j]$ where $1 \le i \le j \le n$, return the majority element of the subarray $A[i..j]$ if it exists. We describe a linear space data structure that answers range majority queries in constant time. We further generalize this problem by defining range $\alpha$-majority queries: given a query range $[i..j]$, return all the elements in the subarray $A[i..j]$ with frequency greater than $\alpha(j-i+1)$. We prove an upper bound on the number of $\alpha$-majorities that can exist in a subarray, assuming that query ranges are restricted to be larger than a given threshold. Using this upper bound, we generalize our range majority data structure to answer range $\alpha$-majority queries in $O(\frac{1}{\alpha})$ time using $O(n \lg(1/\alpha + 1))$ space, for any fixed $\alpha \in (0, 1)$. This result is interesting since other similar range query problems based on frequency have nearly logarithmic lower bounds on query time when restricted to linear space.
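To see what the constant-time structure buys, here is the obvious per-query baseline (illustrative code, not the data structure of the paper): a Boyer-Moore voting scan answers one range majority query in O(j-i) time with no preprocessing.

```python
# Naive range majority query via Boyer-Moore voting (illustrative baseline).

def range_majority(A, i, j):
    """Majority element of A[i..j] (1-based, inclusive), or None if there is none."""
    sub = A[i - 1:j]
    candidate, count = None, 0
    for x in sub:                                  # voting pass
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate if sub.count(candidate) * 2 > len(sub) else None   # verification pass

A = [2, 2, 7, 2, 5, 2, 2, 9]
print(range_majority(A, 1, 7))   # 2 (appears 5 times in a range of length 7)
print(range_majority(A, 3, 5))   # None (7, 2, 5 has no majority)
```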
Thomas Brihaye, Laurent Doyen, Gilles Geeraerts, Joël Ouaknine, Jean-François Raskin and James Worrell. On Reachability for Hybrid Automata over Bounded Time
Abstract: This paper investigates the time-bounded version of the reachability problem for hybrid automata. This problem asks whether a given hybrid automaton can reach a given target location within B time units, where B is a given bound. We prove that, unlike the classical reachability problem, the time-bounded version is decidable on rectangular hybrid automata if only nonnegative rates are allowed. This class is of practical interest as it subsumes for example the class of stopwatch automata. We also show that the problem becomes undecidable if either diagonal constraints or both negative and positive rates are allowed.
Endre Boros, Khaled Elbassioni, Mahmoud Fouz, Vladimir Gurvich, Kazuhisa Makino and Bodo Manthey. Stochastic Mean Payoff Games: Smoothed Analysis and Approximation Schemes
Abstract: We consider two-player zero-sum stochastic mean payoff games with perfect information modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question if a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game's value does not depend on the initial position) with a constant number of random nodes. In this paper, we show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Scott Aaronson and Andrew Drucker. Advice Coins for Classical and Quantum Computation
Abstract: We study the power of classical and quantum algorithms equipped with nonuniform advice, in the form of a coin whose bias encodes useful information. This question takes on particular importance in the quantum case, due to a surprising result that we prove: a quantum finite automaton with just two states can be sensitive to arbitrarily small changes in a coin's bias. This contrasts with classical probabilistic finite automata, whose sensitivity to changes in a coin's bias is bounded by a classic 1970 result of Hellman and Cover. Despite this finding, we are able to bound the power of advice coins for space-bounded classical and quantum computation. We define the classes BPPSPACE/coin and BQPSPACE/coin, of languages decidable by classical and quantum polynomial-space machines with advice coins. Our main theorem is that both classes coincide with PSPACE/poly. Proving this result turns out to require substantial machinery. We use an algorithm due to Neff for finding roots of polynomials in NC; a result from algebraic geometry that lower-bounds the separation of a polynomial's roots; and a result on fixed-points of superoperators due to Aaronson and Watrous, originally proved in the context of quantum computing with closed timelike curves.
Sze-Hang Chan, Tak-Wah Lam, Lap-Kei Lee, Chi-Man Liu and Hing-Fung Ting. Sleep Management on Multiple Machines for Energy and Flow Time
Abstract: In large data centers, determining the right number of working machines is often non-trivial, especially when the workload is unpredictable. Using too many machines would waste energy, while using too few would affect the performance. Motivated by such concern, we extend the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), how jobs are dispatched among the machines, and how each machine schedules its job. To understand the tradeoff between flow time and energy, we consider schedules whose objective is to minimize the sum of flow time and energy. We study this new scheduling problem in two settings: the first one assumes machines running at a fixed speed, and the second one assumes the dynamic speed scaling model (i.e., each machine can scale its speed dynamically to further optimize its energy). The latter implies that the online scheduler also has to adjust the speed for each machine. For each speed model, we give an $O(1)$-competitive algorithm. Like the previous work on flow time and energy, the analysis of our algorithms is also based on potential functions. What is new is that the sleep management problem would allow the online and offline algorithms to use different subsets of machines at different times, and this pushes us to derive a more general potential analysis that can consider different match-up of machines between the two algorithms.
Sylvain Schmitz and Philippe Schnoebelen. Multiply-Recursive Upper Bounds with Higman's Lemma
Abstract: We develop a new analysis for the length of controlled bad sequences in well-quasi-orderings based on Higman's Lemma. This leads to tight multiply-recursive upper bounds that readily apply to several verification algorithms for well-structured systems.
Subhash Khot and Per Austrin. A Simple Deterministic Reduction for the Gap Minimum Distance of Code Problem
Abstract: We present a simple deterministic gap-preserving reduction from SAT to the Minimum Distance of Code Problem over GF(2). We also show how to extend the reduction to work over any finite field. Previously a randomized reduction was known due to Dumer, Micciancio, and Sudan [8], which was recently derandomized by Cheng and Wan [6, 7]. These reductions rely on highly non-trivial coding theoretic constructions whereas our reduction is elementary. As an additional feature, our reduction gives a constant factor hardness even for asymptotically good codes, i.e., having constant rate and relative distance. Previously it was not known how to achieve deterministic reductions for such codes.
Lance Fortnow and Rahul Santhanam. Robust Simulations and Significant Separations
Abstract: We define and study a new notion of ``robust simulations'' between complexity classes which is intermediate between the traditional notions of infinitely-often and almost-everywhere, as well as a corresponding notion of ``significant separations''. A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. The new notion of simulation is a cleaner and more natural notion of simulation than the infinitely-often notion. We show that various implications in complexity theory such as the collapse of PH if NP = P and the Karp-Lipton theorem have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed polynomial circuit lower bounds, time-space tradeoffs, and the recent theorem of Williams, can be strengthened to significant separations, though in each case, an almost everywhere separation is unknown. Proving our results requires several new ideas, including a completely different proof of the hierarchy theorem for non-deterministic polynomial time than the ones previously known.
Ryan O'Donnell, John Wright and Yuan Zhou. The Fourier Entropy-Influence Conjecture for certain classes of Boolean functions
Abstract: In 1996, Friedgut and Kalai made the "Fourier Entropy-Influence Conjecture": For every Boolean function f : {-1,1}^n -> {-1,1} it holds that H[\hat{f}^2] <= C I[f], where H[\hat{f}^2] is the spectral entropy of f, I[f] is the total influence of f, and C is a universal constant. In this work we verify the conjecture for symmetric functions. More generally, we verify it for functions with symmetry group S_{n_1} x ... x S_{n_d} where d is constant. We also verify the conjecture for functions computable by read-once decision trees.
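Both sides of the conjectured inequality can be checked by brute force for small functions; the sketch below (illustrative code, not from the paper) computes the spectral entropy and the total influence from the Fourier expansion and evaluates them on the 3-bit majority, a symmetric function covered by the result.

```python
# Brute-force spectral entropy and total influence of a small Boolean function (illustrative).
from itertools import product
from math import log2, prod

def fourier_coefficients(f, n):
    """hat f(S) = E_x[ f(x) * prod_{i in S} x_i ] over uniform x in {-1,1}^n; S as a 0/1 vector."""
    points = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * prod(xi for xi, si in zip(x, S) if si) for x in points) / len(points)
            for S in product([0, 1], repeat=n)}

def entropy_and_influence(f, n):
    coeffs = fourier_coefficients(f, n)
    H = sum(c * c * log2(1 / (c * c)) for c in coeffs.values() if c != 0)   # spectral entropy H[hat f^2]
    I = sum(sum(S) * c * c for S, c in coeffs.items())                      # total influence I[f]
    return H, I

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(entropy_and_influence(maj3, 3))   # (2.0, 1.5), so H <= C*I holds here for any C >= 4/3
```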
Frederic Magniez, Ashwin Nayak, Miklos Santha and David Xiao. Improved bounds for the randomized decision tree complexity of recursive majority
Abstract: We consider the randomized decision tree complexity of the recursive 3-majority function. For evaluating a height-h formula, we prove a lower bound for the δ-two-sided-error randomized decision tree complexity of (1−2δ)(5/2)^h, improving the lower bound of (1−2δ)(7/3)^h given by Jayram et al. (STOC '03). We also state a conjecture which would further improve the lower bound to (1 − 2δ)2.54355^h. Second, we improve the upper bound by giving a new zero-error randomized decision tree algorithm that has complexity at most (1.007) · 2.64946^h, improving on the previous best known algorithm, which achieved (1.004) · 2.65622^h. Our lower bound follows from a better analysis of the base case of the recursion of Jayram et al. Our algorithm uses a novel "interleaving" of two recursive algorithms.
Lorenzo Clemente. Buechi Automata can have Smaller Quotients
Abstract: We study novel simulation-like preorders for quotienting nondeterministic Buechi automata. We define fixed-word delayed simulation, a new preorder coarser than delayed simulation. We argue that fixed-word simulation is the coarsest forward simulation-like preorder which can be used for quotienting Buechi automata, thus improving our understanding of the limits of quotienting. Also, we show that computing fixed-word simulation is PSPACE-complete. On the practical side, we introduce proxy simulations, which are novel polynomial-time computable preorders sound for quotienting. In particular, delayed proxy simulations induce quotients that can be smaller by an arbitrarily large factor than those induced by direct backward simulation. We derive proxy simulations as the product of a theory of refinement transformers: a refinement transformer maps preorders non-decreasingly, preserving certain properties. We study under which general conditions refinement transformers are sound for quotienting.
Matthew Hennessy and Yuxin Deng. On the Semantics of Markov Automata
Abstract: Markov automata describe systems in terms of events which may be nondeterministic, may occur probabilistically, or may be subject to time delays. We define a novel notion of weak bisimulation for such systems and prove that this provides both a sound and complete proof methodology for a natural extensional behavioural equivalence between such systems, a generalisation of reduction barbed congruence, the well-known touchstone equivalence for a large variety of process description languages.
André Chailloux, Iordanis Kerenidis and Bill Rosgen. Quantum Commitments from Complexity Assumptions
Abstract: Bit commitment schemes are at the basis of modern cryptography. Since information-theoretic security is impossible both in the classical and the quantum regime, we need to look at computationally secure commitment schemes. In this paper, we study worst-case complexity assumptions that imply quantum bit-commitment schemes. First, we show that QSZK \not\subseteq QMA implies a computationally hiding and statistically binding auxiliary-input quantum commitment scheme. We then extend our result to show that the much weaker assumption QIP \not\subseteq QMA (which is weaker than PSPACE \not\subseteq PP) implies the existence of auxiliary-input commitment schemes with quantum advice. Finally, to strengthen the plausibility of the first assumption, we find a quantum oracle relative to which honest-verifier QSZK is not contained in QCMA, the class of languages that can be verified using a classical proof in quantum polynomial time.
Sebastian Faust, Krzysztof Pietrzak and Daniele Venturi. Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience
Abstract: Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g. smart cards), where an adversary introduces faults with the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit as long as the tampering fails with some probability $\delta>0$. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack. Formally, we show that a $q$-query tampering attack against the transformed circuit can be ``simulated'' with only black-box access to the original circuit and $\log(q)$ bits of additional auxiliary information. Thus, if the implemented cryptographic scheme is secure against $\log(q)$ bits of leakage, then our implementation is tamper-proof in the above sense. Surprisingly, allowing for this small amount of information leakage --- and not insisting on perfect simulability like in the work of Ishai et al. --- allows for much more efficient compilers, which moreover do not require randomness during evaluation. Similar to earlier work our compiler requires small, stateless and computation-independent tamper-proof gadgets. Thus, our result can be interpreted as reducing the problem of shielding arbitrary complex computation to protecting simple components.
Sylvain Salvati and Igor Walukiewicz. Krivine machines and higher-order schemes
Abstract: We propose a new approach to analysing higher-order recursive schemes. Many results in the literature use automata models generalising pushdown automata, most notably higher-order pushdown automata with collapse (CPDA). Instead, we propose to use the Krivine machine model. Compared to CPDA, this model is closer to lambda-calculus, and incorporates nicely many invariants of computations, as for example the typing information. The usefulness of the proposed approach is demonstrated with new proofs of two central results in the field: the decidability of the local and global model checking problems for higher-order schemes with respect to the mu-calculus.
Benjamin Doerr and Mahmoud Fouz. Asymptotically Optimal Randomized Rumor Spreading
Abstract: We propose a new protocol solving the fundamental problem of disseminating a piece of information to all members of a group of $n$ players. It builds upon the classical randomized rumor spreading protocol and several extensions. The main achievements are the following: Our protocol spreads a rumor from one node to all other nodes in the asymptotically optimal time of $(1 + o(1)) \log_2 n$. The whole process can be implemented in a way such that only $O(n f(n))$ calls are made, where $f(n)= \omega(1)$ can be arbitrary. In spite of these quantities being close to the theoretical optima, the protocol remains relatively robust against failures. We show that for \emph{random} node failures, our algorithm again comes arbitrarily close to the theoretical optima. The protocol can be extended to also deal with \emph{adversarial} node failures. The price for this additional robustness is only a constant factor increase in the run-time, where the constant factor depends on the fraction of failing nodes the protocol is supposed to cope with. It can easily be implemented such that only $O(n)$ calls to properly working nodes are made. In contrast to the push-pull protocol by Karp et al. [FOCS 2000], our algorithm only uses push operations, i.e., only informed nodes take active actions in the network. To the best of our knowledge, this is the first randomized push algorithm that achieves an asymptotically optimal running time.
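For comparison with the protocol above, the classical push protocol it refines is a one-liner per round: every informed node calls one node chosen uniformly at random. The simulation below (illustrative code, not the paper's protocol) shows that the observed number of rounds is close to the familiar log2 n + ln n benchmark.

```python
# Simulation of the classical push protocol (illustrative; not the refined protocol of the paper).
import random
from math import log, log2

def push_rumor_spreading(n, seed=None):
    rng = random.Random(seed)
    informed = {0}                                        # one initially informed node
    rounds = 0
    while len(informed) < n:
        informed |= {rng.randrange(n) for _ in informed}  # each informed node calls a random node
        rounds += 1
    return rounds

n = 10_000
print(push_rumor_spreading(n), log2(n) + log(n))          # observed rounds vs. the classical benchmark
```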
Eric Allender, Luke Friedman and William Gasarch. Limits on the Computational Power of Random Strings
Abstract: R, the set of Kolmogorov-random strings, is a central notion in the study of algorithmic information theory, and in recent years R has increasingly been studied in relation to computational complexity theory. This work takes as its starting point three strange inclusions that have been proved since 2002: 1. NEXP is contained in the class of problems NP-Turing-reducible to R. 2. PSPACE is contained in the class of problems poly-time Turing-reducible to R. 3. BPP is contained in the class of problems poly-time truth-table-reducible to R. (These inclusions hold for both of the most widely-studied variants of Kolmogorov complexity: the plain complexity C(x) and the prefix-complexity K(x). They also hold no matter which "universal" Turing machine is used in the definitions of the functions C and K.) These inclusions are "strange" since R is not even computable! Thus it is not at all clear that these are meaningful upper bounds on the complexity of BPP, PSPACE, and NEXP, and indeed it is not at all clear that it is very interesting to consider efficient reductions to noncomputable sets such as R. Our main theorems are that, if we restrict attention to prefix complexity K and the corresponding set of random strings R_K, then the class of decidable problems that are in NP relative to R_K (no matter which universal machine is used to define K) lies in EXPSPACE, and the class of decidable problems that are poly-time truth-table reducible to R_K (no matter which universal machine is used to define K) lies in PSPACE. Thus we can "sandwich" PSPACE between the class of problems truth-table- and Turing-reducible to R_K, and the class of decidable problems that are in NP relative to R_K lies between NEXP and EXPSPACE. The corresponding questions for plain Kolmogorov complexity C are wide open; no upper bounds are known at all for the class of decidable problems efficiently reducible to R_C. These results also provide the first quantitative limits on the applicability of uniform derandomization techniques.
Andreas Cord-Landwehr, Bastian Degener, Barbara Kempkes, Friedhelm Meyer Auf Der Heide, Sven Kurras, Matthias Fischer, Martina Hüllmann, Alexander Klaas, Peter Kling, Marcus Märtens, Kamil Swierkot, Christoph Raupach, Daniel Warner, Christoph Weddemann and Daniel Wonisch. A new Approach for Analyzing Convergence Algorithms for Mobile Robots
Abstract: Given a set of $n$ mobile robots in the $d$-dimensional Euclidean space, the goal is to let them converge to a single not predefined point. The challenge is that the robots are limited in their capabilities. We consider the well-studied family of robotic capabilities where a robot can, upon activation, compute the positions of all other robots using an individual affine coordinate system. The robots are indistinguishable, oblivious and may have different affine coordinate systems, which may even change over time. A very general discrete time model assumes that robots are activated in arbitrary order. Further, the computation of a new target point may happen much earlier than the movement, so that the movement is based on outdated information about other robots' positions. Time is measured as the number of rounds, where a round ends as soon as each robot has moved at least once. In [CP05], the Center of Gravity is considered as the target function. This target function is invariant under the choice of the affine coordinate system. Convergence is proven, and the number of rounds needed for halving the diameter of the convex hull of the robots' positions is shown to be $O(n^2)$ and $\Omega(n)$. In this paper, we present an easy-to-check property of target functions that guarantees convergence and yields upper time bounds. This property intuitively says that when a robot computes a new target point, this point is significantly within the current axis-aligned minimal box containing all robots. (Note that this box is defined relative to a fixed coordinate system and cannot be computed by the robots.) This property holds, for example, for the above-mentioned target function, and improves the above $O(n^2)$ to an asymptotically optimal $O(n)$ upper bound. Our technique also yields a constant time bound for a target function that requires that all robots have the same coordinate system.
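For intuition only, here is a small Python simulation of the baseline Center of Gravity target function mentioned above, with robots activated one at a time in random order and moving to the current center of gravity; it illustrates how the diameter of the configuration shrinks over rounds. It does not implement the paper's new property or its analysis, ignores outdated position information, and all parameters are illustrative.

```python
import random

def simulate_center_of_gravity(n=20, d=2, rounds=10, seed=1):
    """Robots are activated one at a time in random order; each activated
    robot moves to the current center of gravity of all robots. A round ends
    once every robot has been activated. Prints the diameter after each round."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 100.0) for _ in range(d)] for _ in range(n)]

    def diameter(points):
        return max(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for p in points for q in points
        )

    for r in range(1, rounds + 1):
        order = list(range(n))
        rng.shuffle(order)
        for i in order:  # asynchronous activation within the round
            cog = [sum(p[k] for p in pos) / n for k in range(d)]
            pos[i] = cog
        print(f"round {r}: diameter {diameter(pos):.4f}")

if __name__ == "__main__":
    simulate_center_of_gravity()
```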
Ahmed Bouajjani, Roland Meyer and Eike Moehlmann. Deciding robustness against total store ordering
Abstract: Sequential consistency (sc) is the classical model for shared memory concurrent programs. It corresponds to the interleaving semantics where the order of actions issued by a component is preserved. For performance reasons, modern processors adopt relaxed memory models that may execute actions out of order. We address the issue of deciding robustness of a program against the popular total store ordering (tso) memory model, i.e., we check whether the behaviour under tso coincides with the expected sc semantics. We prove that this problem is decidable and only PSPACE-complete. Surprisingly, we detect violations to robustness on pairs of sc computations, without consulting the tso model.
Konstantinos Kollias and Tim Roughgarden. Restoring Pure Equilibria to Weighted Congestion Games
Abstract: Congestion games model several interesting applications, including routing and network formation games, and also possess attractive theoretical properties, including the existence of and convergence of natural dynamics to a pure Nash equilibrium. Weighted variants of congestion games that rely on sharing costs proportional to players' weights do not generally have pure-strategy Nash equilibria. We propose a new way of assigning costs to players with weights in congestion games that recovers the important properties of the unweighted model. This method is derived from the Shapley value, and it always induces a game with a (weighted) potential function. For the special cases of weighted network cost-sharing and atomic selfish routing games (with Shapley value-based cost shares), we prove tight bounds on the price of stability and price of anarchy, respectively.
Diana Fischer and Lukasz Kaiser. Model Checking the Quantitative mu-Calculus on Linear Hybrid Systems
Abstract: In this work, we consider the model-checking problem for a quantitative extension of the modal mu-calculus on a class of hybrid systems. Qualitative model checking has been proved decidable and implemented for several classes of systems, but this is not the case for quantitative questions, which arise naturally in this context. Recently, quantitative formalisms that subsume classical temporal logics and additionally allow one to measure interesting quantitative phenomena were introduced. We show how a powerful quantitative logic, the quantitative mu-calculus, can be model-checked with arbitrary precision on initialised linear hybrid systems. To this end, we develop new techniques for the discretisation of continuous state spaces based on a special class of strategies in model-checking games and show decidability of a class of counter-reset games that may be of independent interest.
Konstantinos Tsakalidis and Gerth Stølting Brodal. Dynamic Planar Range Maxima Queries
Abstract: We consider the dynamic two-dimensional maxima query problem. Let $P$ be a set of $n$ two-dimensional points. A point is \emph{maximal} if it is not dominated by any other point in $P$. We describe two data structures that support the reporting of the $t$ maximal points that dominate a given query point, and allow for insertions and deletions of points in $P$. In the pointer machine model we present a linear space data structure with $O(\log n + t)$ worst case query time and $O(\log n)$ worst case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds in the worst case. The update time is optimal for the achieved query time. The data structure also supports the more general query of reporting the maximal points among the points that lie in a given 3-sided orthogonal range unbounded from above. Queries take $O(\log n + t)$ worst case time, where $t$ is the number of reported points. We can support 4-sided queries in $O(\log^2 n + t)$ worst case time, and $O(\log^2 n)$ worst case update time, using $O(n\log n)$ space. This allows us to improve the worst case deletion time of the dynamic rectangular visibility query problem from $O(\log^3 n)$ to $O(\log^2 n)$. Our data structure can be adapted to the RAM model with word size $w$, where the coordinates of the points are integers in the range $U = \{0, \dots, 2^w - 1\}$. We present a linear space data structure for 3-sided range maxima queries with $O( \frac{\log n}{\log \log n } + t )$ worst case query time and $O( \frac{\log n}{\log \log n } )$ worst case update time. This is the first data structure that achieves sublogarithmic worst case complexity for all operations in the RAM model. The query time is optimal for the achieved update time.
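The contribution of the paper is the dynamic data structures themselves; purely as a reminder of what a maxima query reports, the following sketch computes the maximal (staircase) points of a static planar point set with one sweep. It involves no dynamic updates, and the example points are illustrative.

```python
def maximal_points(points):
    """Return the points of `points` not dominated by any other point,
    i.e. the staircase of maxima, ordered by decreasing x-coordinate."""
    maxima = []
    best_y = float("-inf")
    # Sweep from largest to smallest x; a point is maximal iff its y-coordinate
    # exceeds every y-coordinate seen so far.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima

if __name__ == "__main__":
    pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]
    print(maximal_points(pts))  # [(4, 1), (3, 4), (2, 6)]
```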
Tomas Brazdil, Vaclav Brozek, Kousha Etessami and Antonin Kucera. Approximating the Termination Value of One-counter MDPs and Stochastic Games
Abstract: One-counter MDPs (OC-MDPs) and one-counter simple stochastic games (OC-SSGs) are 1-player, and 2-player turn-based zero-sum, stochastic games played on the transition graph of classic one-counter automata (equivalently, pushdown automata with a 1-letter stack alphabet). A key objective for the analysis and verification of these games is the {\em termination} objective, where the players aim to maximize (minimize, respectively) the probability of hitting counter value $0$, starting at a given control state and given counter value. Recently [BBEKW10, BBE10], we studied {\em qualitative} decision problems (``is the optimal termination value = 1?'') for OC-MDPs (and OC-SSGs) and showed them to be decidable in P-time (in NP$\cap$coNP, respectively). However, {\em quantitative} decision and approximation problems (``is the optimal termination value $\geq p$'', or ``approximate the termination value within $\epsilon$'') are far more challenging. This is so in part because optimal strategies may not exist, and even when they do exist they can have a highly non-trivial structure. It thus remained open even whether any of these quantitative termination problems are computable. In this paper we show that all quantitative {\em approximation} problems for the termination value for OC-MDPs and OC-SSGs are computable. Specifically, given an OC-SSG, and given $\epsilon > 0$, we can compute a value $v$ that approximates the value of the OC-SSG termination game within additive error $\epsilon$, and furthermore we can compute $\epsilon$-optimal strategies for both players in the game. A key ingredient in our proofs is a subtle martingale, derived from solving certain LPs that we can associate with a maximizing OC-MDP. An application of Azuma's inequality on these martingales yields a computable bound for the ``wealth'' at which a ``rich person's strategy'' becomes $\epsilon$-optimal for OC-MDPs.
Anna Adamaszek, Artur Czumaj, Andrzej Lingas and Jakub Onufry Wojtaszczyk. Approximation schemes for capacitated geometric network design
Abstract: We study a capacitated network design problem in a geometric setting. We assume that the input consists of an integral link capacity k and two sets of points in the plane, sources and sinks, each source/sink having an associated integral demand (the amount of flow to be shipped from/to it). The capacitated geometric network design problem is to construct a minimum-length network N that allows the requested flow to be routed from sources to sinks, such that each link in N has capacity k; the flow is splittable and parallel links are allowed in N. The capacitated geometric network design problem generalizes, among others, the geometric Steiner tree problem, and as such it is NP-hard. We show that if the demands are polynomially bounded and the link capacity k is not too large, the single-sink capacitated geometric network design problem admits a polynomial-time approximation scheme. If the capacity is arbitrarily large, then we design a quasi-polynomial-time approximation scheme for the capacitated geometric network design problem allowing for an arbitrary number of sinks. Our results rely on a derivation of an upper bound on the number of vertices different from sources and sinks (the so-called Steiner vertices) in an optimal network. The bound is polynomial in the total demand of the sources.
Piotr Berman, Arnab Bhattacharyya, Konstantin Makarychev, Sofya Raskhodnikova and Grigory Yaroslavtsev. Improved Approximation for the Directed Spanner Problem
Abstract: We give a new $O(\sqrt{n}\log n)$ approximation algorithm for the problem of finding the sparsest spanner of a given {\em directed} graph $G$ on $n$ vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph $G=(V,E)$ with nonnegative edge lengths $d:~E\rightarrow{\mathbb N}$ and a {\em stretch} $k\geq 1$, a subgraph $H = (V,E_H)$ is a {\em $k$-spanner} of $G$ if for every edge $(u,v) \in E$, the graph $H$ contains a path from $u$ to $v$ of length at most $k \cdot d(u,v)$. The previous best approximation ratio was $\tilde{O}(n^{2/3})$, due to Dinitz and Krauthgamer~(STOC '11). We also present an improved algorithm for the important special case of directed 3-spanners with unit edge lengths. The approximation ratio of our algorithm is $\tilde{O}(n^{1/3})$ which almost matches the lower bound of Dinitz and Krauthgamer for the integrality gap of the linear program. The best previously known algorithms for this problem, due to Berman, Raskhodnikova and Ruan (FSTTCS '10) and Dinitz and Krauthgamer, had approximation ratio $\tilde{O}(\sqrt{n})$.
Ryan Harkins and John Hitchcock. Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds
Abstract: This paper extends and improves work of Fortnow and Klivans [FortnowKlivans09], who showed that if a circuit class $\mathcal{C}$ has an efficient learning algorithm in Angluin's model of exact learning via equivalence and membership queries [Angluin88], then we have the lower bound $\mathrm{EXP}^{\mathrm{NP}} \not\subseteq \mathcal{C}$. We use entirely different techniques involving betting games [BvMRSS01] to remove the $\mathrm{NP}$ oracle and improve the lower bound to $\mathrm{EXP} \not\subseteq \mathcal{C}$. This shows that it is even more difficult to design a learning algorithm for $\mathcal{C}$ than the results of Fortnow and Klivans indicated.
Andrew Drucker. A PCP Characterization of AM
Abstract: We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a `PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some hypotheses related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses may be over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size $2^{\Omega(n)}$, then there exist `inapproximable-on-average' optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
Franck Van Breugel and Xin Zhang. A Progress Measure for Explicit-State Probabilistic Model-Checkers
Abstract: Verification of the source code of a probabilistic system by means of an explicit-state model-checker is challenging. In most cases, the probabilistic model-checker will run out of memory due to the infamous state space explosion problem. We introduce the notion of a progress measure for such a model-checker. The progress measure returns a number in the interval [0, 1]. This number captures the amount of progress the model-checker has made towards verifying a particular linear-time property. The larger the number, the more progress the model-checker has made. We prove that the progress measure provides a lower bound on the measure of the set of execution paths that satisfy the property. We also show how to compute the progress measure for checking a particular class of linear-time properties, namely invariants. Keywords: probabilistic model-checking, progress measure, line
Evidence for sex-specific genetic architectures across a spectrum of human complex traits
Konrad Rawlik1,
Oriol Canela-Xandri1 &
Albert Tenesa1,2
Sex differences are a common feature of human traits; however, the role sex determination plays in human genetic variation remains unclear. The presence of gene-by-sex (GxS) interactions implies that trait genetic architecture differs between men and women. Here, we show that GxS interactions and genetic heterogeneity among sexes are small but common features of a range of high-level complex traits.
We analyzed 19 complex traits measured in 54,040 unrelated men and 59,820 unrelated women from the UK Biobank cohort to estimate autosomal genetic correlations and heritability differences between men and women. For 13 of the 19 traits examined, there is evidence that the trait measured is genetically different between males and females. We find that estimates of genetic correlations, based on ~114,000 unrelated individuals and ~19,000 related individuals from the same cohort, are largely consistent. Genetic predictors using a sex-specific model that incorporated GxS interactions led to a relative improvement of up to 4 % (mean 1.4 % across all relevant phenotypes) over those provided by a sex-agnostic model. This further supports the hypothesis of the presence of sexual genetic heterogeneity across high-level phenotypes.
The sex-specific environment seems to play a role in changing genotype expression across a range of human complex traits. Further studies of GxS interactions for high-level human traits may shed light on the molecular mechanisms that lead to biological differences between men and women. However, this may be a challenging endeavour due to the likely small effects of the interactions at individual loci.
Phenotypic differences between men and women are a pervasive feature of quantitative traits. Sex provides two different environmental contexts determined by the hormonal milieu, differential gene expression between the sexes [1, 2], and lifetime systematic differences in general environmental exposures arising, for instance, as a consequence of different gender roles in society. This raises the possibility of sex-specific autosomal genetic effects, induced by gene–environment interactions, and differences in heritability among sexes that contribute to inter-sex phenotypic variation [3–8].
Previous studies have used pedigrees to show that heritability differs between the sexes for a range of, mainly, low-level phenotypes [3]. However, to what extent differences observed in low-level phenotypes affect high-level complex traits and whether such differences can be observed in high-level complex traits remains unclear. Furthermore, differences in heritability do not imply differences in genetic architecture as they could arise as a consequence of differences in environmental variances between the sexes. It is important, therefore, to further examine differences in genetic effects directly or by estimating genetic correlations between sexes.
Studies of high-level complex traits which have examined differences in genetic effects between sexes have, however, been largely restricted to various anthropometric traits. Although familial studies have repeatedly reported differences between the sexes in the genetic architecture of these phenotypes [4, 9], such findings have contrasted with studies on cohorts of unrelated individuals, which have failed to find significant differences in genetic effects [5, 6, 8] or to identify significantly associated sex-specific single-nucleotide polymorphisms (SNPs) for traits such as height and body mass index (BMI) [5, 10, 11]. These differences could potentially arise from biases in familial estimates due to shared environmental variance, from differences in phenotype definition between studies, or simply from lack of power. To understand the nature of such discrepancies, it is important that estimations of sex genetic heterogeneity from related and unrelated individuals are made using large numbers of individuals of the same population and a uniform definition of phenotype.
To assess the extent of gene-by-sex (GxS) interactions in a human population, we tested for differences in genetic effects between men and women across 19 high-level complex traits. Specifically, we demonstrate the presence of GxS interactions by estimating both sex-specific heritabilities and genetic correlations between sexes using individual-level SNP data from ~114,000 unrelated and ~19,000 related individuals genotyped for up to 525,242 SNPs. Finally, we provide further evidence that supports the hypothesis that sex-determined genetic heterogeneity is present in high-level phenotypes by demonstrating that the observed GxS interactions can be utilised in practice to improve prediction of phenotypes based on genotypic information.
To provide a broad overview of sex-specific genetic architecture in a human population we examined the presence of GxS interactions and sex-specific heritabilities across a broad spectrum of quantitative traits in ~114,000 unrelated white British participants in the UK Biobank [12] (Additional file 1: Table S1) who had been genotyped for 319,038 common autosomal SNPs (minor allele frequency (MAF) >5 %; see "Methods"). The 19 phenotypes considered were height, BMI, waist circumference (WC), hip circumference (HC), waist to hip ratio (WHR), body fat percentage (BF%), basal metabolic rate (BMR), age at completion of full time education for individuals without university education (education age), fluid intelligence score (FI score), a cognitive function score (CF score), lifetime reproductive success (LRS), diastolic and systolic blood pressure (BPdia and BPsys), peak expiratory flow (PEF), forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), ratio of FEV1 over FVC (FEV1/FVC), self-assessed overall health (overall health) and extent of cigarette smoking as measured in pack years (Pack years). Education age in the UK Biobank has only been recorded for individuals without university education and care has to be taken in the interpretation of results for this phenotype and comparisons with other studies which use duration of education as a measure for educational attainment. Twelve of these phenotypes showed pronounced differences in distribution between the sexes (Additional file 1: Table S2 and Figure S1).
We evaluated the sex-specific genetic architecture of these traits by modelling male and female observations as occurrences of a phenotype in two different environmental contexts. The model used includes a genetic correlation between the two instances of the phenotype which may differ from one, thus providing evidence for a non-proportional change in the genetic effects between the two [13]. At the same time we allow for differences in heritability between the two sexes, which, on the other hand, can provide evidence for proportional changes in genetic effects or differences in environmental influences. Specifically, we fitted a bivariate linear mixed model (LMM) using the DISSECT software [14] (see "Methods"). The model included independent genetic and residual variances for male and female phenotypes and a genetic correlation. We tested whether genetic correlations were significantly different from one or whether heritabilities differed between men and women separately using likelihood ratio tests.
Sex-specific heritability
Seven traits showed significant differences (P < 0.05) in heritability (Table 1). In addition, the blood pressure traits (BPdia and BPsys) showed more pronounced differences in heritability when individuals with hypertension or taking blood pressure medication were included in the analysis, whilst adjusting for these factors (Additional file 1: Table S3). These differences may arise as a consequence of different environmental contexts, different amounts of genetic variation, or both. However, for five of the seven phenotypes for which we detected a significant difference in heritability, these differences could be explained by larger differences in genetic, rather than residual, variance between the two sexes (Fig. 1). Moreover, in general, larger genetic variances in one sex were observed together with larger residual variance components in the same sex. The exceptions were education age, FI score, LRS, overall health, and BPdia, for which the relationship between the sexes with regard to the size of genetic and residual variance was reversed; that is, we find, for example, that the genetic variance for LRS is larger in women while the residual variance for this trait is larger in men. With the exception of HC, all traits which show significant differences in heritability (BMR, WHR, education age, CF score, LRS, and FEV1/FVC) were found to have larger differences between the sexes for genetic rather than residual variances. Hence, overall the results support the view that differences in heritability are a consequence of a difference in genetic architecture or gene–environment interactions associated with sex rather than arising purely due to a significant difference in environmental variance.
Table 1 Estimates of sex-specific heritabilities and genetic correlations in unrelated white British individuals
Fig. 1 Differences in variance components between the sexes. The fold difference between male and female genetic and residual variance components as estimated using common SNPs in unrelated white British individuals. Values larger than one indicate a larger variance in males, values smaller than one a larger variance in females
Genetic correlations between sexes
For 13 out of the 19 traits studied we found evidence of GxS interactions because the genetic correlation (rg) between the traits measured in men and women was significantly different from one. Estimates of rg for these phenotypes ranged from 0.96 for height to 0.56 for LRS, with six of the phenotypes having an estimated rg below 0.9. Importantly GxS interactions were found across all categories of phenotypes, including anthropometric, cognitive, pulmonary, and cardiovascular. Familial studies [9] have reported evidence supporting GxS interactions across a range of anthropometric phenotypes. However, whilst genome-wide association studies have identified SNP-by-sex interactions for some anthropometric traits like WHR [5, 8], identifying these has proven to be extremely challenging, raising the question of whether the expected interactions exist. Similarly to familial studies, we find evidence of GxS interactions in all the anthropometric traits studied using unrelated samples of the population, albeit our estimates of rg are generally higher than those reported in twin studies [4]. Previous analyses on unrelated individuals for either height or BMI [5, 6, 8] did not find significant differences in genetic effects, suggesting these studies lacked power due to smaller samples sizes or the methodology used. In particular, our results are consistent with the standard errors of Yang et al. [6], who, using a sample size of individual-level data less than half of that used here, did not obtain significant results for either height or BMI using the same methodology.
Beyond anthropometric traits, the low genetic correlation for LRS among sexes is of particular interest due to its implications in, for instance, maintaining genetic variation in the population. A GxS interaction for LRS suggests that the genetic determinants that contribute to the reproductive fitness of men and women may be different or may have different effects, which could play an important role in maintaining genetic variation in the population [15]. In addition, the differences in heritabilities between men and women, which are a consequence of women having twice as much genetic variation for this trait as men, suggest that genetics plays a larger role in the reproductive fitness of women than men.
Effects of study population
To confirm the robustness of our results we performed several additional analyses. First, we re-estimated variance components for the same set of unrelated individuals including rare variants (MAF 0.37–5 %) together with common variants, that is, all available SNPs that passed our quality control (Additional file 1: Figure S2). Second, we performed the analyses, using common variants, on a set of ~19,000 related white British participants which partially overlaps (10,112 individuals) with the unrelated cohort. The results of these analyses do not represent an independent replication but a way of assessing the effect of the changed tagging structure due to long-range linkage disequilibrium and shared environment in these related individuals. In line with expectations [16, 17], the estimates of heritability increased significantly in both alternative analyses (Fig. 2a), whilst estimates of rg remained largely unaffected (Fig. 2b; Additional file 1: Table S4). The correlation between rg estimates based on common variants and combined common and rare variants in unrelated individuals was 0.98, whilst that between related and unrelated individuals was 0.79 (Fig. 2c).
Fig. 2 Effect of relatedness and SNP density on estimates of heritability and genetic correlations. Estimates of male and female heritability obtained using common variants in unrelated individuals against those obtained using (a) related individuals and common variants or (b) both common and rare variants for unrelated individuals. (c) Comparison of estimates of genetic correlations from these two alternative analyses with the estimates obtained on the main set of unrelated individuals with common variants
As four of the phenotypes we considered (education age, FI score, LRS, and overall health) were categorical in nature, we investigated to what extent our results, based on estimates obtained on the observed scale, could be affected by differences in phenotypic distribution between the sexes. To this end we repeated the analysis using common variants on a random subset of the unrelated white British cohort, sampled so as to ensure that phenotypic distributions for both sexes are equal (see "Methods"). The results of this analysis are consistent with those obtained on the whole cohort (Additional file 1: Table S5), suggesting that the observed differences in heritability on the observed scale are not likely to be driven by differences in phenotype distribution in the two sexes. Similarly, to confirm that differences in phenotypic distributions of the considered quantitative traits did not lead to spurious genetic correlations below one, we performed an additional analysis on rank normalized phenotypes (Additional file 1: Additional methods and results). The results were highly consistent with those reported here for untransformed phenotypes (Additional file 1: Figure S3 and Tables S6 and S7). Additionally we performed a simulation study (Additional file 1: Additional methods and results) of traits with differing phenotypic distributions and heritabilities but genetic correlations of one, which further supports our view that these factors do not lead to spurious results (Additional file 1: Table S8 and Figure S4).
Finally, we investigated three other potential sources of bias: the presence of spouses in the UK Biobank cohort [18], differences in socio-economic status among the sampled male and female population, and sex differences in overall health status. In particular, the latter could potentially bias our results due to the possibility of differences in the enrolment of the male and female components of the study population [19]. To investigate the effect of couples, we excluded one individual at random from each genotyped spouse pair and repeated all the main analyses. Despite the reduction in sample size, the results of these analyses were very similar to the main results (Additional file 1: Figure S5, Tables S6 and S9). Likewise, we repeated all the main analyses adjusting for additional factors related to socioeconomic status and health (that is, self-reported overall health status, Townsend Deprivation Index and educational attainment). These results were also highly consistent with the results of the main analysis (Additional file 1: Figure S6, Tables S6 and S10), suggesting that our results are not substantially influenced by these socio-economic and health status factors.
Sex-specific genomic prediction
An alternative way of testing whether the genetic architecture of the two sexes is different or whether there are GxS interactions is to perform genomic predictions under a model that accounts for these interactions (that is, a sex-specific model) and another that does not (that is, a sex-agnostic model). To this end, we estimated separate male and female common SNP effects in the sex-specific bivariate model as well as in the sex-agnostic univariate model [20] (see "Methods"). We used our ~114,000 unrelated white British individuals as the training population and predicted additive genetic values for a separate cohort of ~12,000 genotyped UK Biobank individuals who self-reported as white British but had failed to be confirmed as such by the principal components analysis (PCA; see "Methods"). Prediction accuracy was measured as the correlation between predicted additive genetic value and observed phenotypes adjusted for fixed effects. The sex-specific model outperformed the sex-agnostic model for a majority of phenotypes, i.e., 14 out of 19 (Additional file 1: Table S11). In particular, considering only the ten phenotypes with evidence of genetic heterogeneity between sexes (P < 0.05 for $r_g \neq 1$ or $h^2_m \neq h^2_f$) and substantial heritability ($h^2_m > 0.2$ and $h^2_f > 0.2$), all but one (WC) showed an improvement in prediction accuracy, with BMR showing the largest improvement (>4 %) and with the mean improvement across these traits being 1.4 %. These results add further evidence to the presence of sex-specific genetic effects and, since sex could be considered as a surrogate measure of different environmental factors, provide evidence that the utilisation of gene–environment interactions can improve the accuracy of genomic profiling.
Our analyses of a large cohort using individual-level genotype data provide a broad assessment of differences in genetic architecture between sexes and show that contributions of sex-specific genetic effects, although of modest magnitude, may be found across a broad spectrum of traits. While significance does not imply that these effects are large, we are able to reproduce our results, whilst simultaneously quantifying the impact of these effects, using genomic predictions in an independent cohort. Taking WHR as an example, 291 associations are reported in the GWAS Catalog [21], which may be taken as a lower bound of the total number of associated variants. We may then observe that, based on the assumption that these contribute equally to prediction accuracy in the univariate model, differences in genetic architecture between the sexes are equivalent to a lower bound of about seven additional associated variants.
The general lack of significant SNP-by-sex interactions in genome-wide association studies suggests that these effects may be a consequence of the accumulated effect of many interactions of small effect, identification of which may require even larger sample sizes than used here. Further research to identify the causes that determine sex genetic heterogeneity will need to disentangle whether sex genetic heterogeneity may arise as a consequence of interactions with genetic loci located on the sex chromosomes, differences in gene control due to differences in the sex-specific cellular environment, or more general differences in environmental exposures between the sexes. Furthermore, we demonstrate, using sex as a surrogate measure of environmental exposure, how to incorporate gene-by-environment interactions into genomic prediction models.
Genotype quality control
For our analysis, we used the data for the individuals genotyped in phase 1 of the UK Biobank genotyping program; 49,979 individuals were genotyped using the Affymetrix UK BiLEVE Axiom array and 102,750 individuals using the Affymetrix UK Biobank Axiom array. Details regarding genotyping procedure and genotype calling protocols are provided elsewhere (http://biobank.ctsu.ox.ac.uk/crystal/refer.cgi?id=155580). We performed quality control using the entire set of genotyped individuals before extracting the white British cohort used in our analysis. From the overlapping markers, we excluded those which are multi-allelic and those whose overall missingness rate exceeded 2 % or which exhibited a strong platform-specific missingness bias (Fisher's exact test, $P < 10^{-100}$). We also excluded individuals if they exhibited excess heterozygosity, as identified by UK Biobank internal quality control procedures (http://biobank.ctsu.ox.ac.uk/crystal/refer.cgi?id=155580), if their missingness rate exceeded 5 %, or if their self-reported sex did not match genetic sex estimated from X chromosome inbreeding coefficients. These criteria resulted in a reduced dataset of 151,532 individuals. Finally, we only kept variants that did not exhibit departure from Hardy–Weinberg equilibrium ($P < 10^{-50}$) in the unrelated (subset of individuals with a relatedness below 0.0625) white British subset of the cohort. To define the white British cohort, we performed a PCA of all individuals passing genotypic quality control using a linkage disequilibrium pruned set of 99,101 autosomal markers (http://biobank.ctsu.ox.ac.uk/crystal/refer.cgi?id=149744) that passed our SNP quality control protocol. The white British individuals were defined as those for whom the projections onto the leading 20 genomic principal components fell within three standard deviations of the mean and who self-reported their ethnicity as white British. Those individuals who self-reported as white British but who were excluded based on the PCA analysis formed the test white British sample used in prediction. We furthermore pruned the set of white British individuals, removing one individual from pairs with relatedness above 0.0625 (corresponding to second degree cousins), to obtain a dataset of unrelated, confirmed white British individuals.
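The exact QC pipeline above was run on the full UK Biobank genotype data; the following is only a minimal sketch, assuming a small (individuals x SNPs) genotype matrix with 0/1/2 allele counts and NaN for missing calls, of how the missingness and Hardy–Weinberg filters could be expressed. The chi-square HWE test and the thresholds are illustrative, and the platform-specific missingness test is omitted.

```python
import numpy as np
from scipy.stats import chi2

def hwe_pvalue(genotypes):
    """One-variant Hardy-Weinberg test (1-df chi-square approximation).
    `genotypes` holds 0/1/2 allele counts with NaN for missing calls."""
    g = genotypes[~np.isnan(genotypes)]
    n = g.size
    p = g.mean() / 2.0                                   # allele frequency
    expected = n * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    observed = np.array([(g == 0).sum(), (g == 1).sum(), (g == 2).sum()])
    stat = ((observed - expected) ** 2 / np.maximum(expected, 1e-12)).sum()
    return chi2.sf(stat, df=1)

def qc_filter(G, snp_miss_max=0.02, ind_miss_max=0.05, hwe_p_min=1e-50):
    """Drop SNPs with excess missingness, individuals with excess missingness,
    and SNPs departing strongly from Hardy-Weinberg equilibrium."""
    keep_snps = np.isnan(G).mean(axis=0) <= snp_miss_max
    keep_inds = np.isnan(G).mean(axis=1) <= ind_miss_max
    G = G[np.ix_(keep_inds, keep_snps)]
    hwe_ok = np.array([hwe_pvalue(G[:, j]) >= hwe_p_min for j in range(G.shape[1])])
    return G[:, hwe_ok]
```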
Phenotype quality control
We obtained measures for waist circumference (UKBID 48), hip circumference (UKBID 49), standing height (UKBID 50), BMI (UKBID 21001), body fat percentage (UKBID 23099), basal metabolic rate (UKBID 23105), self-reported age of completion of full-time education for individuals without university education (UKBID 845), number of offspring (UKBID 2405 and 2734), fluid intelligence score (UKBID 20016), diastolic blood pressure (UKBID 4079), systolic blood pressure (UKBID 4080), forced vital capacity (UKBID 3062), forced expiratory volume in 1 s (UKBID 3063), peak expiratory flow (UKBID 3064), and self-reported overall health (UKBID 2178). Further information about these, including details of measurement protocols, can be accessed through the UK Biobank resource (http://biobank.ctsu.ox.ac.uk/crystal/index.cgi) using the provided UKBID. We additionally computed several derived phenotypes based on information contained in the UK Biobank. Specifically, we computed WHR and FEV1/FVC as ratios of WC, HC, FEV1 and FVC, respectively, and furthermore rescaled BMR to have a standard deviation of 1 in the population due to numerical problems in model fitting on the measurement scale. LRS was calculated as the self-reported number of offspring for individuals who had completed their reproductive life. These were defined as men aged over 60 years and women who reported either having had their menopause or undergone a hysterectomy or who were aged over 60 years. We furthermore excluded individuals who reported having had in excess of 15 offspring. We constructed a cognitive function score (CF score) as the first principal component of several cognitive measures, specifically the results of a reaction time test (UKBID 20023) and the time to complete, and number of incorrect guesses during completion of, a pairs matching test (UKBID 400 and 399). Prior to PCA we excluded individuals who were more than 5 standard deviations from the population mean for any of these measures. The number of pack years of smoking was calculated based on smoking history information as described elsewhere (http://biobank.ctsu.ox.ac.uk/crystal/field.cgi?id=20161). Self-reported overall health status was measured as the answer to the question "In general how would you rate your overall health?"; excluding "Do not know"/"Prefer not to answer" responses, we coded the four possible answers as numerical values 1 ("Excellent") to 4 ("Poor") and fitted all models on this observed scale. For the cardiovascular phenotypes (BPdia and BPsys) we excluded all individuals who reported taking blood pressure medication (UKBID 6153 and 6177). We removed outliers from WC, HC, height, BMI, BF%, BMR, WHR, education age, BPdia, BPsys, FVC, FEV1, PEF, and FEV1/FVC, defining outliers as males and females lying more than ±3 standard deviations from their respective gender mean computed over all individuals in the white British cohort.
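As an illustration of the ±3 standard deviation rule applied separately within each sex, the sketch below drops per-sex outliers for a single trait. It is a pandas-based sketch only; the data frame and column names such as `sex` are hypothetical, not UK Biobank field names.

```python
import pandas as pd

def remove_sex_specific_outliers(df, trait, sex_col="sex", n_sd=3.0):
    """Drop individuals whose `trait` value is more than `n_sd` standard
    deviations away from the mean of their own sex."""
    def keep_within_bounds(group):
        mu, sd = group[trait].mean(), group[trait].std()
        return group[(group[trait] - mu).abs() <= n_sd * sd]

    return (
        df.dropna(subset=[trait])
          .groupby(sex_col, group_keys=False)
          .apply(keep_within_bounds)
    )

# Example with a hypothetical data frame `pheno`:
# clean = remove_sex_specific_outliers(pheno, trait="bmi")
```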
Estimation of heritability and genetic correlations
We used a linear mixed model approach [22] to estimate sex-specific variance components and genetic correlations between the sexes. Specifically, we computed restricted maximum likelihood (REML) fits for the bivariate model:
$$ \mathbf{y}=\begin{pmatrix} \mathbf{y}_m \\ \mathbf{y}_f \end{pmatrix} = \begin{pmatrix} \mathbf{X}_m & \mathbf{0} \\ \mathbf{0} & \mathbf{X}_f \end{pmatrix} \begin{pmatrix} \boldsymbol{\beta}_m \\ \boldsymbol{\beta}_f \end{pmatrix} + \begin{pmatrix} \mathbf{G}_m & \mathbf{0} \\ \mathbf{0} & \mathbf{G}_f \end{pmatrix} \begin{pmatrix} \mathbf{a}_m \\ \mathbf{a}_f \end{pmatrix} + \begin{pmatrix} \mathbf{e}_m \\ \mathbf{e}_f \end{pmatrix}, $$
where for sex $x$, $\mathbf{X}_x$ is the incidence matrix for fixed effects $\boldsymbol{\beta}_x$, including a constant column modeling the mean, $\mathbf{G}_x$ is the matrix of standardized genotypes, $\mathbf{a}_x$ is the vector of SNP effects, and $\mathbf{e}_x$ is the vector of residuals. Priors were placed on the SNP effects and residuals:
$$ \begin{pmatrix} \mathbf{a}_m \\ \mathbf{a}_f \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mathbf{0} \\ \mathbf{0} \end{pmatrix},\; \begin{pmatrix} \sigma^2_{g_m} & \rho\sqrt{\sigma^2_{g_m}\sigma^2_{g_f}} \\ \rho\sqrt{\sigma^2_{g_m}\sigma^2_{g_f}} & \sigma^2_{g_f} \end{pmatrix} \otimes \mathrm{I}\right) \quad \text{and} \quad \begin{pmatrix} \mathbf{e}_m \\ \mathbf{e}_f \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mathbf{0} \\ \mathbf{0} \end{pmatrix},\; \begin{pmatrix} \sigma^2_{e_m} & 0 \\ 0 & \sigma^2_{e_f} \end{pmatrix} \otimes \mathrm{I}\right) $$
where $N(\mu,\Sigma)$ is the multivariate normal distribution with mean $\mu$ and covariance $\Sigma$, $\mathrm{I}$ is the identity matrix, and $\otimes$ is the Kronecker product between matrices. Using the estimates of the model parameters, heritabilities for each sex were computed as $h_x^2=\frac{\sigma_{g_x}^2}{\sigma_{g_x}^2+\sigma_{e_x}^2}$, with the reported confidence intervals calculated based on standard errors of the model parameters. We additionally obtained REML fits of the above model under the constraints $\rho = 1$ and $\frac{\sigma_{g_m}^2}{\sigma_{g_m}^2+\sigma_{e_m}^2}=\frac{\sigma_{g_f}^2}{\sigma_{g_f}^2+\sigma_{e_f}^2}$, respectively. For the latter we reparametrized the model in terms of parameters $\sigma_m^2$, $\sigma_f^2$ and $\lambda$, setting $\sigma_{g_m}^2=\sigma_m^2$, $\sigma_{g_f}^2=\sigma_f^2$, $\sigma_{e_m}^2=\lambda\sigma_m^2$ and $\sigma_{e_f}^2=\lambda\sigma_f^2$, and optimized the REML. Using these restricted models, we tested for genetic correlations between the sexes different from unity and for unequal heritabilities in the two sexes using likelihood ratio tests with 1 degree of freedom. All analyses included the leading 20 genomic principal components as fixed effects in order to adjust for population structure. Furthermore, age was included in all analyses as a fixed effect, with the exception of LRS, where we included year of birth, which better captures cohort effects. Finally, analyses of pulmonary phenotypes included further fixed effects; specifically, both height and pack years were included for PEF and FEV1 and only pack years was included for FEV1/FVC. All models were fitted using the DISSECT software (http://www.dissect.ed.ac.uk) [14] on the UK National Supercomputer (ARCHER).
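The REML fits themselves were obtained with DISSECT; the sketch below only illustrates, under assumed inputs, how the quantities defined above follow from the fitted variance components and log-likelihoods: the per-sex heritability and the 1-degree-of-freedom likelihood ratio P value comparing the full bivariate model with a restricted model (ρ = 1 or equal heritabilities). The numerical values are illustrative, not estimates from the study.

```python
from scipy.stats import chi2

def heritability(var_g, var_e):
    """h_x^2 = sigma_g^2 / (sigma_g^2 + sigma_e^2) for one sex."""
    return var_g / (var_g + var_e)

def lrt_pvalue(loglik_full, loglik_restricted, df=1):
    """Likelihood ratio test of a restricted model (rho = 1, or equal
    heritabilities) against the full bivariate model."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return chi2.sf(max(stat, 0.0), df)

# Illustrative values only, not estimates from the paper:
h2_m = heritability(var_g=0.25, var_e=0.55)   # male heritability
h2_f = heritability(var_g=0.30, var_e=0.50)   # female heritability
p_rg_ne_1 = lrt_pvalue(loglik_full=-1000.0, loglik_restricted=-1004.2)
print(round(h2_m, 3), round(h2_f, 3), f"P = {p_rg_ne_1:.3g}")
```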
Alternative analyses
Analyses performed to investigate robustness of the results utilized the following datasets. From the dataset of all individuals who were identified as white British, we extracted the set of individuals who had at least one other white British individual with a relatedness, as estimated based on common SNPs, above 0.0625. This cohort of 19,695 related white British individuals partially overlapped (i.e., 10,112 individuals overlapped) with the unrelated white British cohort used in the main analysis as, for the latter, only one of each pair of related individuals was excluded.
For the additional analysis of categorical phenotypes, we subsampled the set of unrelated white British individuals for each phenotype to maximize the total sample size whilst ensuring that the phenotypic distribution in the two sexes was equal. To this end we stratified the individuals within each sex by the phenotype value and, for each stratum, included all individuals of the sex with fewer samples and randomly sampled an equal number of individuals of the other sex.
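A minimal sketch of this matching procedure, assuming a pandas data frame with hypothetical `sex` and phenotype columns, is given below; strata in which one sex is absent are skipped.

```python
import pandas as pd

def match_phenotype_distributions(df, trait, sex_col="sex", seed=0):
    """For each value of `trait` (a stratum), keep all individuals of the sex
    with fewer samples and a random subsample of the same size from the other
    sex, so that the trait distribution is identical in men and women."""
    pieces = []
    for _, stratum in df.groupby(trait):
        counts = stratum[sex_col].value_counts()
        if len(counts) < 2:          # stratum observed in one sex only: skip it
            continue
        k = counts.min()
        for _, group in stratum.groupby(sex_col):
            pieces.append(group.sample(n=k, random_state=seed))
    return pd.concat(pieces)

# Example with a hypothetical data frame `pheno`:
# matched = match_phenotype_distributions(pheno, trait="overall_health")
```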
Predictions $\widehat{y}_i$ for a phenotype of individual $i$ were computed as:
$$ \widehat{y}_i = \sum_{j=1}^{M} \frac{s_{ij}-\mu_j}{\sigma_j}\, a_j, $$
where $s_{ij}$ is the number of copies of the reference allele at SNP $j$ for individual $i$, $M$ is the total number of SNPs used for prediction, i.e., in our case the number of common SNPs, and $a_j$ is the estimated effect of SNP $j$, while $\mu_j$ and $\sigma_j$ are the mean and standard deviation of the reference allele in the training population, i.e., the genotypically white British individuals. We estimated effects of all SNPs following either a standard univariate approach [20], i.e., fitting an LMM treating male and female phenotypes as one phenotype, or by estimating sex-specific SNP effects based on the bivariate model discussed previously. Specifically, SNP effects were estimated by their posterior mean with variance parameters and fixed effect parameters fixed to their REML estimates. The same fixed effect structure was used in both models, i.e., we included sex interactions in the fixed effects of the univariate model. Prediction accuracies were computed as the correlation between predicted phenotypes and observed phenotypes adjusted for fixed effects using estimated fixed effect coefficients.
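As a small illustration of the prediction equation and the accuracy measure just described (and not of the DISSECT implementation), the sketch below standardizes test-set allele counts with training-population means and standard deviations, accumulates the estimated SNP effects, and correlates the resulting genetic values with phenotypes that have already been adjusted for fixed effects. Array names are illustrative; for the sex-specific model the male and female effect vectors would be applied to the corresponding test individuals.

```python
import numpy as np

def predict_genetic_values(S_test, snp_effects, train_means, train_sds):
    """Compute y_hat_i = sum_j ((s_ij - mu_j) / sigma_j) * a_j for every test
    individual; S_test is an (individuals x SNPs) matrix of allele counts."""
    Z = (S_test - train_means) / train_sds   # standardize with training stats
    return Z @ snp_effects

def prediction_accuracy(y_hat, y_adjusted):
    """Correlation between predicted genetic values and phenotypes that were
    pre-adjusted for the fixed effects of the model."""
    return np.corrcoef(y_hat, y_adjusted)[0, 1]

# Example usage with hypothetical arrays:
# y_hat = predict_genetic_values(S_test, a_female, mu_train, sd_train)
# acc = prediction_accuracy(y_hat, y_female_adjusted)
```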
Carrel L, Willard HF. X-inactivation profile reveals extensive variability in X-linked gene expression in females. Nature. 2005;434:400–4.
Rinn JL, Snyder M. Sexual dimorphism in mammalian gene expression. Trends Genet. 2005;21:298–305.
Weiss LA, Pan L, Abney M, Ober C. The sex-specific genetic architecture of quantitative traits in humans. Nat Genet. 2006;38:218–22.
Schousboe K, Willemsen G, Kyvik KO, Mortensen J, Boomsma DI, Cornes BK, Davis CJ, Fagnani C, Hjelmborg J, Kaprio J, et al. Sex differences in heritability of BMI: a comparative study of results from twin studies in eight countries. Twin Res. 2003;6:409–21.
Randall JC, Winkler TW, Kutalik Z, Berndt SI, Jackson AU, Monda KL, Kilpelainen TO, Esko T, Magi R, Li S, et al. Sex-stratified genome-wide association studies including 270,000 individuals show sexual dimorphism in genetic loci for anthropometric traits. PLoS Genet. 2013;9:e1003500.
Yang J, Bakshi A, Zhu Z, Hemani G, Vinkhuyzen AA, Nolte IM, van Vliet-Ostaptchouk JV, Snieder H, Lifelines Cohort S, Esko T, et al. Genome-wide genetic homogeneity between sexes and populations for human height and body mass index. Hum Mol Genet. 2015;24:7445–9.
Ober C, Loisel DA, Gilad Y. Sex-specific genetic architecture of human disease. Nat Rev Genet. 2008;9:911–22.
Winkler TW, Justice AE, Graff M, Barata L, Feitosa MF, Chu S, Czajkowski J, Esko T, Fall T, Kilpelainen TO, et al. The Influence of age and sex on genetic associations with adult body size and shape: a large-scale genome-wide interaction study. PLoS Genet. 2015;11:e1005378.
Polderman TJ, Benyamin B, de Leeuw CA, Sullivan PF, van Bochoven A, Visscher PM, Posthuma D. Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nat Genet. 2015;47:702–9.
Lango Allen H, Estrada K, Lettre G, Berndt SI, Weedon MN, Rivadeneira F, Willer CJ, Jackson AU, Vedantam S, Raychaudhuri S, et al. Hundreds of variants clustered in genomic loci and biological pathways affect human height. Nature. 2010;467:832–8.
Speliotes EK, Willer CJ, Berndt SI, Monda KL, Thorleifsson G, Jackson AU, Lango Allen H, Lindgren CM, Luan J, Magi R, et al. Association analyses of 249,796 individuals reveal 18 new loci associated with body mass index. Nat Genet. 2010;42:937–48.
Collins R. What makes UK Biobank special? Lancet. 2012;379:1173–4.
Lynch M, Walsh B. Genetics and analysis of quantitative traits. Sunderland: Sinauer Associates; 1998.
Canela-Xandri O, Law A, Gray A, Woolliams JA, Tenesa A. A new tool called DISSECT for analyzing large genomic datasets using a Big Data approach. Nat Commun. 2015;6:10162.
Foerster K, Coulson T, Sheldon BC, Pemberton JM, Clutton-Brock TH, Kruuk LE. Sexually antagonistic genetic variation for fitness in red deer. Nature. 2007;447:1107–10.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, Madden PA, Heath AC, Martin NG, Montgomery GW, et al. Common SNPs explain a large proportion of the heritability for human height. Nat Genet. 2010;42:565–9.
Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, Hunter DJ, McCarthy MI, Ramos EM, Cardon LR, Chakravarti A, et al. Finding the missing heritability of complex diseases. Nature. 2009;461:747–53.
Tenesa A, Rawlik K, Navarro P, Canela-Xandri O. Genetic determination of height-mediated mate choice. Genome Biol. 2016;16:269.
Davey Smith G, Davies NM. Can genetic evidence help us understand why height and weight relate to social position? BMJ. 2016;352:i1224.
Canela-Xandri O, Rawlik K, Woolliams JA, Tenesa A. Accurate genetic profiling of anthropometric traits using a big data approach. bioRxiv. 2015.
Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, Klemm A, Flicek P, Manolio T, Hindorff L, Parkinson H. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2014;42:D1001–6.
Lee SH, Yang J, Goddard ME, Visscher PM, Wray NR. Estimation of pleiotropy between complex diseases using single-nucleotide polymorphism-derived genomic relationships and restricted maximum likelihood. Bioinformatics. 2012;28:2540–2.
This work used the ARCHER UK National Supercomputing Service (http://www.archer.ac.uk) and the Edinburgh Compute and Data Facility (ECDF; http://www.ecdf.ed.ac.uk/). This research has been conducted using the UK Biobank Resource.
This work was mainly supported by The Roslin Institute Strategic Grant funding from the BBSRC. AT also acknowledges funding from the Medical Research Council Human Genetics Unit.
Availability of data and material
The data can be accessed through the UK Biobank (http://www.ukbiobank.ac.uk/). We used the individuals genotyped in phase 1 of the UK Biobank genotyping project, which were released by the UK Biobank in June 2015. The genotype data were downloaded on 5 June 2015. All data used were available under project tag 788, which could be accessed by researchers after approval by the UK Biobank team. The DISSECT software used to perform the analysis is freely available from http://www.dissect.ed.ac.uk/.
All authors contributed to the design of the study, critical discussion of the results, and writing of the manuscript. KR and OCX performed data quality control. KR performed the analysis. All authors read and approved the final manuscript.
The use of the UK Biobank dataset falls within the study's ethical approval from the North West Medical Research Ethics Committee (reference 11/NW/0382) and complies with the conditions of data use, as stipulated by UK BioBank.
The Roslin Institute, Royal (Dick) School of Veterinary Studies, The University of Edinburgh, Easter Bush Campus, Midlothian, EH25 9RG, Scotland, UK
Konrad Rawlik
, Oriol Canela-Xandri
& Albert Tenesa
MRC Human Genetics Unit at the MRC IGMM, University of Edinburgh, Western General Hospital, Crewe Road South, Edinburgh, EH4 2XU, UK
Albert Tenesa
Correspondence to Albert Tenesa.
Additional methods and results. Figure S1 Phenotypic distributions for both sexes for considered phenotypes. Figure S2 SNP MAFs. Figure S3 Comparison of the results of the main analysis with those obtained on rank normalized phenotypes. Figure S4 Comparison of the results of the main analysis with those of simulated phenotypes. Figure S5 Comparison of the results of the main analysis with those obtained after exclusion of spouses. Figure S6 Comparison of the results of the main analysis with those obtained after adjusting for further socio-economic and health covariates. Table S1 Phenotype-specific sample sizes for the sets of unrelated and related white British individuals. Table S2 Sexual dimorphism indices for all phenotypes. Table S3 Comparison of the parameter estimates for alternative analyses of blood pressure phenotypes. Table S4 Estimates of heritabilities and genetic correlations for models fitted to related individuals with common SNPs and unrelated individuals including both common and common and rare SNPs. Table S5 Estimates of heritabilities, genetic correlations and P values for differences in heritability and genetic correlations differing from one, for categorical phenotypes using subsamples of the unrelated white British individuals with matching phenotypic distributions between the sexes. Table S6 Comparison of the results of the main analysis with those obtained on alternative analyses. Table S7 Estimates of heritabilities, genetic correlations and P values for differences in heritability and genetic correlations differing from one, for rank normalized phenotypes. Table S8 Parameter estimates for simulated phenotypes. Table S9 Estimates of heritabilities, genetic correlations and P values for differences in heritability and genetic correlations differing from one, after exclusion of spouses. Table S10 Estimates of heritabilities, genetic correlations and P values for differences in heritability and genetic correlations differing from one, when adjusting for additional socio-economic and health factors. Table S11 Prediction accuracies for bivariate and univariate genomic prediction models for all traits. (DOCX 2206 kb)
Rawlik, K., Canela-Xandri, O. & Tenesa, A. Evidence for sex-specific genetic architectures across a spectrum of human complex traits. Genome Biol 17, 166 (2016) doi:10.1186/s13059-016-1025-x
Gene-by-sex interactions
Sex-specific genetic architecture
Quantum Theory, Groups and Representations
Authors: Woit, Peter
Systematically emphasizes the role of Lie groups, Lie algebras, and their unitary representation theory in the foundations of quantum mechanics
Introduces fundamental structures and concepts of representation theory in an elementary, physically relevant context
Gives a careful treatment of the mathematical subtleties of quantum theory, without obscuring its global mathematical shape
About this Textbook
This text systematically presents the basics of quantum mechanics, emphasizing the role of Lie groups, Lie algebras, and their unitary representations. The mathematical structure of the subject is brought to the fore, intentionally avoiding significant overlap with material from standard physics courses in quantum mechanics and quantum field theory. The level of presentation is attractive to mathematics students looking to learn about both quantum mechanics and representation theory, while also appealing to physics students who would like to know more about the mathematics underlying the subject. This text showcases the numerous differences between typical mathematical and physical treatments of the subject. The latter portions of the book focus on central mathematical objects that occur in the Standard Model of particle physics, underlining the deep and intimate connections between mathematics and the physical world. While an elementary physics course of some kind would be helpful to the reader, no specific background in physics is assumed, making this book accessible to students with a grounding in multivariable calculus and linear algebra. Many exercises are provided to develop the reader's understanding of and facility in quantum-theoretical concepts and calculations.
Peter Woit is a Senior Lecturer of Mathematics at Columbia University. His general area of research interest is the relationship between mathematics, especially representation theory, and fundamental physics, especially quantum field theories like the Standard Model.
"The book presents a large variety of important subjects, including the basic principles of quantum mechanics … . This good book is recommended for mathematicians, physicists, philosophers of physics, researchers, and advanced students in mathematics and physics, as well as for readers with some elementary physics, multivariate calculus and linear algebra courses." (Michael M. Dediu, Mathematical Reviews, June, 2018)
The Group U(1) and its Representations
Two-state Systems and SU(2)
Linear Algebra Review, Unitary and Orthogonal Groups
Lie Algebras and Lie Algebra Representations
The Rotation and Spin Groups in Three and Four Dimensions
Rotations and the Spin $$\frac{1}{2}$$ Particle in a Magnetic Field
Representations of SU(2) and SO(3)
Tensor Products, Entanglement, and Addition of Spin
Momentum and the Free Particle
Fourier Analysis and the Free Particle
Position and the Free Particle
The Heisenberg group and the Schrödinger Representation
The Poisson Bracket and Symplectic Geometry
Hamiltonian Vector Fields and the Moment Map
Quadratic Polynomials and the Symplectic Group
Semi-direct Products
The Quantum Free Particle as a Representation of the Euclidean Group
Representations of Semi-direct Products
Central Potentials and the Hydrogen Atom
The Harmonic Oscillator
Coherent States and the Propagator for the Harmonic Oscillator
The Metaplectic Representation and Annihilation and Creation Operators, $$d=1$$
The Metaplectic Representation and Annihilation and Creation Operators, arbitrary d
Complex Structures and Quantization
The Fermionic Oscillator
Weyl and Clifford Algebras
Clifford Algebras and Geometry
Anticommuting Variables and Pseudo-classical Mechanics
Fermionic Quantization and Spinors
A Summary: Parallels Between Bosonic and Fermionic Quantization
Supersymmetry, Some Simple Examples
The Pauli Equation and the Dirac Operator
Lagrangian Methods and the Path Integral
Multiparticle Systems: Momentum Space Description
Multiparticle Systems and Field Quantization
Symmetries and Non-relativistic Quantum Fields
Quantization of Infinite dimensional Phase Spaces
Minkowski Space and the Lorentz Group
Representations of the Lorentz Group
The Poincaré Group and its Representations
The Klein–Gordon Equation and Scalar Quantum Fields
Symmetries and Relativistic Scalar Quantum Fields
U(1) Gauge Symmetry and Electromagnetic Fields
Quantization of the Electromagnetic Field: the Photon
The Dirac Equation and Spin $$\frac{1}{2}$$ Fields
An Introduction to the Standard Model
https://doi.org/10.1364/OL.430043
Compact quantum random number generation using a linear optocoupler
Ying-Ying Hu, Yu-Yang Ding, Shuang Wang, Zhen-Qiang Yin, Wei Chen, De-Yong He, Wei Huang, Bing-Jie Xu, Guang-Can Guo, and Zheng-Fu Han
Ying-Ying Hu,1,2,3 Yu-Yang Ding,4 Shuang Wang,1,2,3,* Zhen-Qiang Yin,1,2,3 Wei Chen,1,2,3 De-Yong He,1,2,3 Wei Huang,5 Bing-Jie Xu,5 Guang-Can Guo,1,2,3 and Zheng-Fu Han1,2,3
1CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, Anhui 230026, China
2CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
3State Key Laboratory of Cryptology, P. O. Box 5159, Beijing 100878, China
4HeFei GuiZhen Chip Tec Ltd., Hefei, Anhui 230000, China
5Science and Technology on Communication Security Laboratory, Institute of Southwestern Communication, Chengdu, Sichuan 610041, China
*Corresponding author: [email protected]
Shuang Wang https://orcid.org/0000-0003-1822-1613
Wei Chen https://orcid.org/0000-0003-1789-789X
Ying-Ying Hu, Yu-Yang Ding, Shuang Wang, Zhen-Qiang Yin, Wei Chen, De-Yong He, Wei Huang, Bing-Jie Xu, Guang-Can Guo, and Zheng-Fu Han, "Compact quantum random number generation using a linear optocoupler," Opt. Lett. 46, 3175-3178 (2021)
Spotlight Summary by Filippo Miatto
Random number generation (RNG) is a primitive operation that is required in many contexts, such as scientific simulations, computational statistics and, of course, cryptography. Given the crucial role it plays in security, hardware RNGs are preferred to pseudo-random algorithms. The key difference between the two is that hardware RNGs rely on unpredictable physical effects to produce random numbers, and are therefore more secure. In their work, Ying-Ying Hu and colleagues have created a simple and fast hardware RNG that relies on the random recombination of electrons and holes in a light emitting diode. The samples are recorded at 43 Mbps, and they pass the NIST-STS test suite for random number generators with flying colors. Small RNGs like the one in this work could be embedded in IoT devices, wearable devices, and virtually any device where size is of the essence.
Revised Manuscript: May 30, 2021
Manuscript Accepted: June 2, 2021
July 9, 2021 Spotlight on Optics
To date, various quantum random number generation schemes have been demonstrated. However, their cost, size, and final random bit generation rate usually limit their widespread off-the-shelf application. To overcome these limitations, we propose and demonstrate a compact, simple, and low-cost quantum random number generation scheme based on a linear optocoupler. Its integrated structure consists mainly of a light emitting diode and a photodetector. Random bits are generated by directly measuring the intensity noise of the output light, which originates from the random recombination between holes of the p region and electrons of the n region in the light emitting diode. Moreover, our system is robust against fluctuations of the operating environment and can be extended to a parallel structure, which will be of great significance for the practical and commercial application of quantum random number generation. After post-processing with the SHA-256 algorithm, a random number generation rate of 43 Mbps is obtained. Finally, the final random bit sequences have low autocorrelation coefficients, with a standard deviation of $3.16 \times 10^{-4}$, and pass the NIST Statistical Test Suite.
Random numbers play an indispensable role in a wide range of modern commercial and scientific applications, such as stochastic simulations, statistical sampling, lotteries, and cryptographic protocols [1,2]. It is essential to distinguish how the random numbers are generated. In computer software, generating random numbers directly from computational algorithms is called pseudorandom number generation (PRNG), which normally starts from a small string of bits called a seed. Although the output of a PRNG may be statistically indistinguishable from a uniform distribution, it is periodic and predictable, making it unsuitable for areas with high security requirements, where it may result in unexpected errors or open loopholes, for example in cryptography. In contrast, true random number generators (TRNGs) generate random numbers by measuring unpredictable physical processes, so the resulting bit sequences offer high security and true randomness. Quantum random number generators (QRNGs) are an important class of physical TRNGs in which the inherent randomness of a quantum physical phenomenon is extracted; quantum random bits provide the strongest guarantee of high-quality random numbers with provable security. Over the past few decades, various practical QRNG schemes have been proposed and demonstrated, for instance, single-photon splitting by a beam splitter [3,4], homodyne detection of vacuum field noise [5–8], phase diffusion in lasers [9–11], amplified spontaneous emission (ASE) noise [12–15], and the intensity fluctuation of spontaneous emission from light emitting diodes (LEDs) or atoms [16–18]. Other kinds of QRNGs based on untrusted devices have been developed that can in theory resist stronger attacks: device-independent (DI) schemes require the violation of a Bell inequality [19–21], while semi-DI (or self-testing) approaches need only a partial characterization of the device [22–25].
Despite much progress, most QRNG schemes share the same drawbacks: expensive optical or electrical setups, complex system structure, large size, and so on. These disadvantages have prevented them from becoming widespread in practical and commercial off-the-shelf applications. For instance, in a single-photon beam splitting scheme, the final generation rate of random bits is limited mainly by the dead time of the single photon detector (SPD) [4]. For the detection of vacuum field noise, the influence of classical noise and a complex post-processing procedure are key challenges for practical applications [5–7]. For the laser phase noise scheme, a fiber Mach–Zehnder interferometer (MZI) with a length imbalance makes it difficult for the system to remain stable for a long time; moreover, the use of complex and bulky light sources and detector devices also poses great challenges for the integration of the whole system [2]. From a practical point of view, many experimental approaches aim to reduce the size and price and improve the generation rate of QRNGs. Thus, a large number of high-speed, real-time, compact, and integrated QRNG schemes have been demonstrated [26–31].
Fig. 1. (a) Structure of linear optocoupler. LED, light emitting diode; PD, photodetector; 1, anode; 2, cathode; 3, emitter; 4, collector. When the ports of 1, 2 and 3, 4 of the linear optocoupler are respectively applied with appropriate voltages, the photodetector will receive light radiation from the LED and generate the corresponding output signal, which is linear with the light intensity. (b) Process of light emitting from the LED. When the LED is applied with an appropriate voltage, holes in the p region and electrons in the n region will recombine randomly in the active area to emit photons. This process is similar to the spontaneous emission of atoms and belongs to the quantum random process guaranteed by quantum mechanics.
In this work, we present a compact, simple, and low-cost experimental QRNG scheme based on a linear optocoupler. A linear optocoupler is a semiconductor device that features low cost, compactness, and ease of integration. In general, its structure consists mainly of a semiconductor light emitting element at the input and a photoresponse element at the output, arranged on an insulating substrate in such a manner that they face each other. As shown in Fig. 1(a), the light source and receiver are usually an LED and a photodetector (PD), respectively. They are integrated or packaged together such that the p-n junction of the former is perpendicular to the light receiving face of the latter [32]. When an appropriate voltage is applied to the LED, it emits photons, and the PD outputs an electrical signal that is linear in the received light intensity from the LED. The linear optocoupler therefore completes the whole electricity–light–electricity conversion process.
Before presenting the experimental setup and the QRNG scheme based on a linear optocoupler, we recall the light emission process of an LED, which is composed mainly of a p-n junction, as shown in Fig. 1(b). When an appropriate voltage is applied to the LED, holes in the p region and electrons in the n region recombine randomly in the active region, during which photons are emitted. This process is similar to the spontaneous emission of atoms, where particles in the excited state spontaneously transition to the ground state and photons are emitted. Therefore, the light emission process of an LED is also a quantum random process guaranteed by quantum mechanics. The inherent randomness of the LED is reflected in the fluctuation of the light intensity. The light intensities of two consecutive measurements are considered mutually independent when the time interval between the measurements exceeds the coherence time ${\tau _c}$ of the light source, which is set by the spectral width $\Delta \nu$ of the LED through ${\tau _c} \simeq \frac{1}{{\pi \Delta \nu}}$ [9]. The probability distribution of the number of emitted photons $n$ is given by [16]
(1)$$P(n) = \frac{{{e^{- \bar n}}{{\bar n}^n}}}{{n!}},$$
where $\bar n$ is the average number of photons of the LED.
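As a rough numerical illustration (not taken from the Letter), the coherence-time relation and the Poisson photon-number distribution of Eq. (1) can be evaluated as below; the spectral width and mean photon number are invented placeholder values.

```python
import numpy as np

# Hypothetical LED spectral width (Hz); a placeholder value, not taken from the Letter.
delta_nu = 1e13

# Coherence time tau_c ~ 1 / (pi * delta_nu), as stated in the text.
tau_c = 1.0 / (np.pi * delta_nu)
print(f"coherence time ~ {tau_c:.2e} s")

# Poisson photon-number distribution of Eq. (1): P(n) = exp(-nbar) nbar^n / n!
nbar = 5.0                      # assumed mean photon number per measurement window
n = np.arange(0, 30)
log_nfact = np.array([np.sum(np.log(np.arange(1, k + 1))) for k in n])  # log(n!)
p = np.exp(-nbar + n * np.log(nbar) - log_nfact)
print("sum of P(n) over n = 0..29:", p.sum())   # ~1, as expected for a normalized distribution
```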
According to the above analysis, the light emission of the LED integrated into the linear optocoupler is an inherently quantum phenomenon guaranteed by quantum mechanics [16]. When the light from the LED is detected by the PD, the output electrical signal is linear in the emission intensity owing to the linearity of the optocoupler. Thus, quantum random numbers can be generated by measuring the intensity fluctuations of the LED in a linear optocoupler. In this Letter, we propose and realize a simple, compact, and low-cost QRNG scheme based on a linear optocoupler, in which both light emission and detection are completed within a single packaged device. This unique setup and its advantages play an important role in promoting the practical, commercial, and compact development of QRNG technology.
Fig. 2. Design block diagram of the experimental model and its actual picture integrated on the printed circuit board. ADC, an 8-bit analog-to-digital converter. The process of light emission from the LED and detection of the PD is completed in the linear optocoupler. Then, the output signals are finally amplified and digitized by the amplifier and the ADC, respectively. The final random bits are generated after post-processing from raw sequences.
Figure 2 shows the experimental setup, including the design block diagram and a photograph of the device integrated on the printed circuit board (PCB), illustrating the simple and compact design. The design block diagram of the QRNG module consists mainly of three parts. The first comprises the quantum source and detector, where the light intensity of the LED is detected by the PD integrated in the linear optocoupler. Then, the output detection signal from the PD is amplified by an external electrical amplifier to obtain a higher signal-to-noise ratio (SNR). Finally, the signal is sampled, digitized, and stored by an analog-to-digital converter (ADC) with 8-bit vertical resolution, and the post-processing procedure is applied on a computer. The sampling rate is 9.77 MSa/s, and the amplitude of the temporal waveform is adjusted so that most of the signal voltages fall within the 8-bit vertical resolution. Furthermore, the noise from the PD and amplifier circuit is measured with the LED switched off. As shown in Fig. 3(a), the measured temporal waveforms of signal and noise show that the quantum signal noise is dominant. The output signal oscillates irregularly, indicating good randomness of the intensity fluctuation. The blue histogram in Fig. 3(b) shows the distribution of the signal, whose symmetry is evident from the comparison with the red fitting curve.
Fig. 3. (a) Temporal waveforms measurement of the optical signal and noise from the PD and the amplifier circuit, where the quantum noise is dominant. (b) The blue histogram distribution of the signal voltage shows good symmetry compared with the red fitting curve. (c) Influence of power input voltage on min-entropy, where min-entropy remains almost stable as the input voltage of the power changes. (d) Influence of operating environment temperature on min-entropy. When the temperature changes, min-entropy basically stays stable in our design.
In addition, to precisely evaluate the quality of the randomness contained in the measured raw data, the min-entropy is introduced as an essential parameter. The min-entropy of the raw bits is defined as ${H_\infty} = - \log_2 \max P(i)$, where $\max P(i)$ is the probability of the most likely outcome of the distribution. According to the measured distribution of the output detection signal, each 8-bit raw sample contains a min-entropy of 4.49 bits. To evaluate the robustness of the setup, the influence of the operating environment, including the power supply voltage and operating temperature, was considered. Owing to the special design of the regulator circuit in our setup, the output signal is not noticeably affected by the power supply voltage. As shown in Fig. 3(c), the corresponding min-entropy remains almost stable as the power supply voltage changes. Moreover, the min-entropy stays essentially stable as the environment temperature changes, as shown in Fig. 3(d). Therefore, our setup is robust against fluctuations of the operating environment.
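A minimal sketch of how a min-entropy figure like the 4.49 bits quoted above could be estimated from 8-bit samples is given below; the samples are synthetic stand-ins, not the authors' measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the digitized 8-bit ADC samples (not the authors' data).
samples = rng.normal(loc=128, scale=20, size=1_000_000).clip(0, 255).astype(np.uint8)

# Empirical probability of each of the 256 possible ADC codes.
counts = np.bincount(samples, minlength=256)
p = counts / counts.sum()

# Min-entropy H_inf = -log2(max_i P(i)), as defined in the text.
h_min = -np.log2(p.max())
print(f"estimated min-entropy: {h_min:.2f} bits per 8-bit sample")
```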
Finally, we employ a random number extraction procedure based on the SHA-256 algorithm, run on the computer, to remove the bias of the raw bits and improve the quality of the output random sequences; it extracts 4.41 bits per eight raw bits. The final generation speed of the obtained random bits is determined by the sampling rate and the min-entropy. The NIST Statistical Test Suite (STS) is used to test the quality of the final random bits. The results show that all of the random bit streams pass the test at a significance level of 0.01, as shown in Table 1. Furthermore, to quantify the independence of adjacent or delayed bits in the final random sequences, we calculate the autocorrelation coefficient ${\rho _k}$ at lag $k$ bits over 10 Mbits, where the formula is expressed as [12]
(2)$${\rho _k} = \frac{{\frac{1}{n}\sum\nolimits_{i = 1}^n {{b_i}{b_{i + k}}} - {{\big(\frac{1}{n}\sum\nolimits_{i = 1}^n {{b_i}}\big)}^2}}}{{\frac{1}{n}\sum\nolimits_{i = 1}^n {b_i^2} - {{\big(\frac{1}{n}\sum\nolimits_{i = 1}^n {{b_i}}\big)}^2}}},$$
where ${b_i}$ denotes the $i$th random bit in the test sequences, $k$ denotes the delayed bit, and $n$ is the total length of random bits in the test. The results with 200-bit delay indicate low autocorrelation coefficients, as shown in Fig. 4.
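For reference, Eq. (2) can be implemented directly as below; the bit stream here is synthetic, so the resulting coefficients only illustrate the calculation rather than reproduce the reported $3.16 \times 10^{-4}$ standard deviation.

```python
import numpy as np

def autocorrelation(bits: np.ndarray, k: int) -> float:
    """Autocorrelation coefficient rho_k of a 0/1 bit sequence at lag k, following Eq. (2)."""
    b = bits.astype(float)
    n = len(b) - k                       # number of (b_i, b_{i+k}) pairs actually available
    x, y = b[:n], b[k:k + n]
    mean = x.mean()
    num = (x * y).mean() - mean ** 2
    den = (x ** 2).mean() - mean ** 2
    return num / den

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=1_000_000)    # synthetic stand-in for the extracted bit stream
print([round(autocorrelation(bits, k), 5) for k in (1, 2, 5, 10, 200)])
```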
Table 1. Results of the NIST Statistical Randomness Test Suite, using 1000 samples of 1 Mb each and significance level $\alpha = 0.01$
Fig. 4. Autocorrelation coefficients of ${10^7}$ bits extracted random numbers after SHA-256 procedure with 200-bit delay, which has a standard deviation of $3.16 \times {10^{- 4}}$.
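The SHA-256 extraction step mentioned above (keeping roughly 4.41 output bits per 8 raw bits) might look schematically like the following; the block size is my own illustrative assumption chosen to reproduce that ratio, not the authors' exact parameter.

```python
import hashlib
import numpy as np

def extract(raw: bytes, in_block: int = 58) -> bytes:
    """Hash consecutive blocks of raw samples with SHA-256 and concatenate the digests.

    With 58 raw bytes (464 bits) per 256-bit digest, roughly 4.41 bits are kept
    per 8 raw bits, matching the extraction ratio quoted in the text.
    """
    out = bytearray()
    for i in range(0, len(raw) - in_block + 1, in_block):
        out += hashlib.sha256(raw[i:i + in_block]).digest()
    return bytes(out)

rng = np.random.default_rng(2)
raw = rng.integers(0, 256, size=58_000, dtype=np.uint8).tobytes()   # synthetic raw samples
extracted = extract(raw)
print(len(extracted) * 8, "extracted bits from", len(raw) * 8, "raw bits")
```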
In summary, we propose and implement a compact, simple, and low-cost QRNG scheme based on a linear optocoupler. The intensity fluctuation of the LED is measured directly by a PD whose output amplitude is linear in the emission intensity of the light source. The quantum randomness originates from the random recombination between holes of the p region and electrons of the n region of the LED, which is similar to the spontaneous emission of atoms. There is no need for an additional optical coupling system for detection, a bulky detector, or integration technology. In addition, our measurements show that the system is robust against fluctuations of the operating environment. This scheme has the potential to be extended to a parallel and real-time design using a field programmable gate array (FPGA) for a higher generation rate, while still maintaining its low cost and compact structure. This will be of great significance in promoting the practical and commercial development of QRNG.
National Cryptography Development Fund (MMJJ20170120); National Natural Science Foundation of China (61475148, 61575183, 61622506, 61627820, 61675189); National Key Research and Development Program of China (2018YFA0306400); Anhui Initiative in Quantum Information Technologies.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002). [CrossRef]
2. M. Herrero-Collantes and J. C. Garcia-Escartin, Rev. Mod. Phys. 89, 015004 (2017). [CrossRef]
3. J. G. Rarity, P. C. M. Owens, and P. R. Tapster, J. Mod. Opt. 41, 2435 (1994). [CrossRef]
4. T. Jennewein, U. Achleitner, G. Weihs, H. Weinfurter, and A. Zeilinger, Rev. Sci. Instrum. 71, 1675 (2000). [CrossRef]
5. C. Gabriel, C. Wittmann, D. Sych, R. Dong, W. Mauerer, U. L. Andersen, C. Marquardt, and G. Leuchs, Nat. Photonics 4, 711 (2010). [CrossRef]
6. M. Jofre, M. Curty, F. Steinlechner, G. Anzolin, J. P. Torres, M. W. Mitchell, and V. Pruneri, Opt. Express 19, 20665 (2011). [CrossRef]
7. Z. Zheng, Y. Zhang, W. Huang, S. Yu, and H. Guo, Rev. Sci. Instrum. 90, 043105 (2019). [CrossRef]
8. D. Milovančev, N. Vokić, C. Pacher, I. Khan, C. Marquardt, W. Boxleitner, H. Hübel, and B. Schrenk, IEEE J. Sel. Top. Quantum Electron. 26, 6400608 (2020). [CrossRef]
9. B. Qi, Y.-M. Chi, H.-K. Lo, and L. Qian, Opt. Lett. 35, 312 (2010). [CrossRef]
10. F. Xu, B. Qi, X. Ma, H. Xu, H. Zheng, and H.-K. Lo, Opt. Express 20, 12366 (2012). [CrossRef]
11. M. Huang, Z. Chen, Y. Zhang, and H. Guo, Appl. Sci. 10, 2431 (2020). [CrossRef]
12. C. R. S. Williams, J. C. Salevan, X. Li, R. Roy, and T. E. Murphy, Opt. Express 18, 23584 (2010). [CrossRef]
13. A. Argyris, E. Pikasis, S. Deligiannidis, and D. Syvridis, J. Lightwave Technol. 30, 1329 (2012). [CrossRef]
14. A. Martin, B. Sanguinetti, C. C. W. Lim, R. Houlmann, and H. Zbinden, J. Lightwave Technol. 33, 2855 (2015). [CrossRef]
15. T. Tomaru, Appl. Opt. 59, 3109 (2020). [CrossRef]
16. B. Sanguinetti, A. Martin, H. Zbinden, and N. Gisin, Phys. Rev. X 4, 031056 (2014). [CrossRef]
17. E. Amri, Y. Felk, D. Stucki, J. Ma, and E. R. Fossum, Sensors 16, 1002 (2016). [CrossRef]
18. Q. Zhang, D. Kong, Y. Wang, H. Zou, and H. Chang, Opt. Lett. 45, 304 (2020). [CrossRef]
19. S. Pironio, A. Acín, S. Massar, A. B. de La Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Nature 464, 1021 (2010). [CrossRef]
20. X. Ma, X. Yuan, Z. Cao, B. Qi, and Z. Zhang, npj Quantum Inf. 2, 16021 (2016). [CrossRef]
21. Y. Zhang, L. K. Shalm, J. C. Bienfang, M. J. Stevens, M. D. Mazurek, S. W. Nam, C. Abellán, W. Amaya, M. W. Mitchell, H. Fu, C. A. Miller, A. Mink, and E. Knill, Phys. Rev. Lett. 124, 010505 (2020). [CrossRef]
22. H.-W. Li, Z.-Q. Yin, Y.-C. Wu, X.-B. Zou, S. Wang, W. Chen, G.-C. Guo, and Z.-F. Han, Phys. Rev. A 84, 034301 (2011). [CrossRef]
23. T. Lunghi, J. B. Brask, C. C. W. Lim, Q. Lavigne, J. Bowles, A. Martin, H. Zbinden, and N. Brunner, Phys. Rev. Lett. 114, 150501 (2015). [CrossRef]
24. F. Xu, J. H. Shapiro, and F. N. C. Wong, Optica 3, 1266 (2016). [CrossRef]
25. D. Drahi, N. Walk, M. J. Hoban, A. K. Fedorov, R. Shakhovoy, A. Feimov, Y. Kurochkin, W. S. Kolthammer, J. Nunn, J. Barrett, and I. A. Walmsley, Phys. Rev. X 10, 041048 (2020). [CrossRef]
26. C. Abellan, W. Amaya, D. Domenech, P. Muñoz, J. Capmany, S. Longhi, M. W. Mitchell, and V. Pruneri, Optica 3, 989 (2016). [CrossRef]
27. X. Guo, C. Cheng, M. Wu, Q. Gao, P. Li, and Y. Guo, Opt. Lett. 44, 5566 (2019). [CrossRef]
28. Q. Zhou, R. Valivarthi, C. John, and W. Tittel, Quantum Eng. 1, e8 (2019). [CrossRef]
29. Q. Luo, Z. Cheng, J. Fan, L. Tan, H. Song, G. Deng, Y. Wang, and Q. Zhou, Opt. Lett. 45, 4224 (2020). [CrossRef]
30. S. J. U. White, F. Klauck, T. T. Tran, N. Schmitt, M. Kianinia, A. Steinfurth, M. Heinrich, M. Toth, A. Szameit, I. Aharonovich, and A. S. Solntsev, J. Opt. 23, 01LT01 (2020). [CrossRef]
31. N. Leone, D. Rusca, S. Azzini, G. Fontana, F. Acerbi, A. Gola, A. Tontini, N. Massari, H. Zbinden, and L. Pavesi, APL Photon. 5, 101301 (2020). [CrossRef]
32. G. Yu, K. Pakbaz, and A. J. Heeger, J. Electron. Mater. 23, 925 (1994). [CrossRef]
Quantized refrigerator for an atomic cloud
Wolfgang Niedenzu1, Igor Mazets2,3, Gershon Kurizki4, and Fred Jendrzejewski5
1Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 21a, A-6020 Innsbruck, Austria
2Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, TU Wien, 1020 Vienna, Austria
3Wolfgang Pauli Institute, Universität Wien, 1090 Vienna, Austria
4Department of Chemical Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
5Heidelberg University, Kirchhoff-Institut für Physik, Im Neuenheimer Feld 227, D-69120 Heidelberg, Germany
Updated version: The authors have uploaded version v4 of this work to the arXiv which may contain updates or corrections not contained in the published version v3. The authors left the following comment on the arXiv:
11 pages, 4 figures; v4: changes in the affiliations and the acknowledgements
We propose to implement a quantized thermal machine based on a mixture of two atomic species. One atomic species implements the working medium and the other implements two (cold and hot) baths. We show that such a setup can be employed for the refrigeration of a large bosonic cloud starting above and ending below the condensation threshold. We analyze its operation in a regime conforming to the quantized Otto cycle and discuss the prospects for continuous-cycle operation, addressing the experimental as well as theoretical limitations. Beyond its applicative significance, this setup has a potential for the study of fundamental questions of quantum thermodynamics.
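As background for the "quantized Otto cycle" mentioned in the abstract, the standard textbook relations for a harmonic-oscillator Otto refrigerator can be sketched as follows; these expressions are general results from the quantum Otto cycle literature, not formulas taken from this paper, and all numerical parameters are invented placeholders.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def n_th(omega, T):
    """Thermal occupation of a harmonic oscillator at angular frequency omega and temperature T."""
    return 1.0 / np.expm1(hbar * omega / (kB * T))

# Made-up illustrative parameters: working-medium trap frequencies and bath temperatures.
omega_c, omega_h = 2 * np.pi * 1e3, 2 * np.pi * 5e3   # rad/s (cold and hot strokes)
T_c, T_h = 100e-9, 300e-9                              # K (cold and hot baths)

dn = n_th(omega_c, T_c) - n_th(omega_h, T_h)   # occupation change; must be > 0 for cooling
Q_c = hbar * omega_c * dn                      # heat extracted from the cold bath per cycle
W = hbar * (omega_h - omega_c) * dn            # net work input per cycle
print("cooling condition omega_c/T_c < omega_h/T_h:", omega_c / T_c < omega_h / T_h)
print(f"COP = Q_c / W = {Q_c / W:.2f} = omega_c / (omega_h - omega_c)")
```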
@article{Niedenzu2019quantized,
  doi = {10.22331/q-2019-06-28-155},
  url = {https://doi.org/10.22331/q-2019-06-28-155},
  title = {Quantized refrigerator for an atomic cloud},
  author = {Niedenzu, Wolfgang and Mazets, Igor and Kurizki, Gershon and Jendrzejewski, Fred},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {3},
  pages = {155},
  month = jun,
  year = {2019}
}
Are there any many to one encryption frameworks available?
I am reasonably new to cryptography. For my use case I need a method whereby a receiver accepts encrypted messages from a range of senders (each having their own, say, public key). My use case also requires that some of these users have only temporary access, which can be revoked externally, i.e. without explicitly telling the receiver, while ensuring that the receiver can no longer honour the encrypted messages from those senders.
It's very similar to the way public Wi-Fi hotspots work: assume the receiver is the Wi-Fi router, and each sender has their own passphrase, which is short-lived, lasting only for the duration of the subscription.
However, in this use case there is no single receiver but a network of receivers, all of which talk to each other using a common encryption key. What I am looking for is to have:
an encryption key that is a function of multiple senders' public keys.
all of the receivers and the senders can decrypt the messages which were encrypted with that encryption key.
Adding a new sender should be fairly simple (I guess the encryption key needs to be updated in this case, but if any of the receivers missed the update, how can they get attached again?)
Removing a sender should be fairly simple (again, I guess the encryption key needs to be updated, which raises the same question of how to deal with receivers missing such an update).
Could you please point me to such an encryption technique, one that is already in place and, ideally, proven to be secure? Many thanks, and please ask if you need further information to answer my query.
Aravind
What you ask for is possible, but only (I'm pretty certain, although I don't have any sort of formal proof) if there is a central authority who has a separate, secured connection to each user. Also, the concept of public keys becomes unnecessary. If the encryption and decryption key is a function of "public keys", then any user who at any point was a member of the system will be able to encrypt or decrypt messages. To show what I'm talking about, let's build the following system.
We have users $u_1, \ldots u_n$. Let's say that $u_1$ is the authority, and can revoke any user's access to the system. We have a shared encryption key, $k_s$, which is known to all $u_i$ and $k_s$ is determined by some key derivation function $KDER(pub_{u_1}, \ldots pub_{u_n})$.
If $KDER$ is public (i.e., the new key is derived by each user from the public keys), then we have the following problem. Let's say the user getting booted is $u_{n}$. If $KDER$ is public, it is trivial for $u_n$ to attack the system, as they know $KDER$ and know $pub_{u_1}, \ldots pub_{u_{n-1}}$, and can therefore derive the new key.
So the concept of using public keys doesn't quite make sense. So instead of referring to them as $pub_{u_i}$, let's consider them as $tok_{u_i}$, which is a private token randomly generated by the central authority. Now, $KDER$ is private, and $u_1$ must distribute the new encryption key using his secure channels. At this point, the users can all use the new key for sending messages, and the booted user won't be able to attack the system.
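As a purely illustrative sketch of the scheme described above: a central authority holds one random token per member, derives the shared key from the tokens of the current members only, and redistributes the new key over each member's pre-existing secure channel whenever someone is added or removed. The keyed-hash derivation and all names below are my own choices, not something specified in the question or answer.

```python
import os
import hmac
import hashlib

class Authority:
    """Toy central authority: one secret token per member, group key derived from the current tokens."""

    def __init__(self):
        self.tokens = {}             # member id -> secret token, delivered over a per-member secure channel

    def add_member(self, uid: str) -> bytes:
        self.tokens[uid] = os.urandom(32)
        return self.tokens[uid]

    def revoke_member(self, uid: str) -> None:
        self.tokens.pop(uid, None)   # the next group key no longer depends on uid's token

    def group_key(self) -> bytes:
        # KDER: keyed hash over the sorted tokens of the *current* members.
        material = b"".join(self.tokens[u] for u in sorted(self.tokens))
        return hmac.new(b"group-key-v1", material, hashlib.sha256).digest()

auth = Authority()
for uid in ("alice", "bob", "carol"):
    auth.add_member(uid)
k1 = auth.group_key()                # distributed to all current members over their secure channels
auth.revoke_member("carol")          # carol's access is revoked externally
k2 = auth.group_key()                # fresh key; carol cannot derive it from anything she has seen
print(k1 != k2)
```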
Having a central authority seems like it should work for you, because of the line:
My use-case also needs that some of these users only have temporary access which can be revoked externally
Which seems to imply that there is some sort of authoritative actor within or without the system. But if this is not the case I don't believe what you're asking for is possible. For this to be decentralized, every user within the system, including the user who was just removed, must have the ability to create the new key. Crypto has some nice tools for granting access after an amount of time $t$ (proof-of-work systems and so on, see this question), however I do not believe there are any systems which grant access for some amount of time that do not require a central authority.
I really hope I'm mistaken, because such a system would have some very interesting use cases, but if you think about it, it doesn't quite make sense: someone needs to decide when a user's access is up.
sju
Surface continuity and discontinuity bias the perception of stereoscopic depth
Ross Goutcher; Eilidh Connolly; Paul B. Hibbard
Ross Goutcher
Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, UK
[email protected]
Eilidh Connolly
Paul B. Hibbard
Department of Psychology, University of Essex, Essex, UK
[email protected]
Journal of Vision November 2018, Vol.18, 13. doi:10.1167/18.12.13
Ross Goutcher, Eilidh Connolly, Paul B. Hibbard; Surface continuity and discontinuity bias the perception of stereoscopic depth. Journal of Vision 2018;18(12):13. doi: 10.1167/18.12.13.
Binocular disparity signals can provide high acuity information about the positions of points, surfaces, and objects in three-dimensional space. For some stimulus configurations, however, perceived depth is known to be affected by surface organization. Here we examine the effects of surface continuity and discontinuity on such surface organization biases. Participants were presented with a series of random dot surfaces, each with a cumulative Gaussian form in depth. Surfaces varied in the steepness of disparity gradients, via manipulation of the standard deviation of the Gaussian, and/or the presence of differing forms of surface discontinuity. By varying the relative disparity between surface edges, we measured the points of subjective equality, where surfaces of differing steepness and/or discontinuity were perceptually indistinguishable. We compare our results to a model that considers sensitivity to different frequencies of disparity modulation. Across a series of experiments, the observed patterns of change in points of subjective equality suggest that perceived depth is determined by the integration of measures of relative disparity, with a bias toward sharp changes in disparity. Such disparities increase perceived depth when they are in the same direction as the overall disparity. Conversely, perceived depth is reduced by the presence of sharp disparity changes that oppose the sign of the overall depth change.
An extensive body of physiological and computational work suggests that the perception of depth from binocular disparity is derived from a dense map of disparity measurements, encoded in retinal coordinates at the early stages of visual cortex (DeAngelis, Ohzawa, & Freeman, 1991; Fleet, Wagner, & Heeger, 1996; Goncalves & Welchman, 2017; Nienborg, Bridge, Parker, & Cumming, 2004; Ohzawa, DeAngelis, & Freeman, 1990; Prince, Cumming, & Parker, 2002; Qian & Zhu, 1997; Read & Cumming, 2007). This conception of a point-by-point disparity map matches current approaches for the initial stages of disparity estimation in computer vision (e.g., Hirschmüller, 2008), and is supported by results from several psychophysical studies, which show that proposed mechanisms for dense disparity estimation are sufficient to account for performance in a range of tasks (Allenmark & Read, 2010, 2011; Banks, Gepshtein, & Landy, 2004; Filippini & Banks, 2009; Goutcher & Hibbard, 2014). Several other avenues of research suggest, however, that while dense disparity maps may be an important initial processing step, they cannot account for multiple aspects of our perception of the three-dimensional (3-D) structure of our environment.
Recently, researchers have shown that the perception of quantitative depth in binocular stimuli depends upon surrounding disparity information, with the presence of continuous gradations in disparity resulting in a reduction in perceived depth (Cammack & Harris, 2016; Deas & Wilcox, 2014, 2015; Hornsey, Hibbard, & Scarfe, 2016). These recent findings are consistent with much earlier results, which showed that disparity discrimination thresholds are increased for pairs of vertical lines when intervening horizontal lines create a closed figure (McKee, 1983; Mitchison & Westheimer, 1984). Similarly, such thresholds are reduced for random-dot stereograms (RDSs) of curved surfaces containing gaps between surface segments (Vreven, McKee, & Verghese, 2002). Deas and Wilcox (2014, 2015) suggested that these reductions in perceived depth were due to the effects of Gestalt grouping principles of similarity and good continuation. While these papers provided compelling evidence of the effects of grouping rules on depth perception, they did not provide a mechanism through which such rules might influence quantitative estimates of depth.
One potential mechanism was proposed by Cammack and Harris (2016), who suggested that reductions in perceived depth might be a consequence of spatial averaging procedures operating to improve the signal-to-noise ratio of absolute (i.e., retinal coordinate) disparity estimates. To account for reductions in perceived depth, Cammack and Harris (2016) measured the size of the spatial window over which such averaging would operate. These were found to be very large (around 90% of the size of their stimulus), although their modeling does not provide a mechanism, or a computational reason, for deciding the area over which any averaging should occur. While an intriguing possibility, the absence of a mechanism through which to determine the area for averaging means that Cammack and Harris's (2016) results provide no means to predict when we should find reductions in perceived depth in novel stimuli.
An alternative possibility is that biases in the perceived depth of continuous surfaces are due to disparity measurement mechanisms operating at the level of relative disparities. One such mechanism was proposed by Tyler (1975, 2013; Tyler & Kontesevich, 2001), who adopted the terminology hypercyclopean to describe cells selective for specific frequencies of disparity modulation, analogous to frequency-tuned cells in the luminance domain. These hypercyclopean channels have been used to account for cyclopean-level tilt and size aftereffects (Tyler, 1975) as well as anisotropies in stereoacuity for disparity corrugations (Bradshaw & Rogers, 1999; Hibbard, 2005; Serrano-Pedraza & Read, 2010; Tyler & Kontesevich, 2001).
To provide new insight into possible mechanisms governing surface-related reductions in perceived depth, this paper examines the interaction between factors of surface continuity (Cammack & Harris, 2016; Deas & Wilcox, 2014, 2015), considered in terms of the steepness of disparity variation across a stimulus, and surface discontinuity. Previous research on surface discontinuities has shown contradictory effects. In some cases, the presence of discontinuous edges can enhance perceived depth compared to continuous disparity changes (Cammack & Harris, 2016; Deas & Wilcox, 2014, 2015), and can improve slant discrimination thresholds (Wardle & Gillam, 2016). Conversely, some discontinuous surface arrangements lead to reductions in perceived depth, as in the depth variant of the Craik-O'Brien-Cornsweet illusion (Anstis, Howard, & Rogers, 1978; Rogers & Graham, 1983). Here, we examine the effects of combined surface steepness and surface discontinuity manipulations on perceived depth.
Together, these experiments provide a test of both disparity averaging and good continuation accounts of perceived depth biases. We further test the results of these experiments against a model of hypercyclopean-level processing. Our results suggest that, while neither averaging nor good continuation explanations can account for the range of observed effects, the smoothing effect of hypercyclopean filtering plays a critical role in determining perceived depth for continuous surfaces. The effects of surface discontinuities suggest, however, that understanding perceived depth depends upon the encoding and integration of relative disparities by any hypercyclopean-like mechanism, rather than their capacity to smooth estimates of absolute disparity.
Six experiments were conducted to examine the roles of surface steepness and surface discontinuity parameters in biasing the perception of depth. In each, participants were presented with an RDS, depicting a surface or set of surfaces with binocular disparity-defined depth. Each RDS was presented as part of a two-interval forced-choice (2IFC) design, where participants were asked to select the interval containing the greater depth difference between the far-left and far-right edges of the stimulus (referred to here as the edge-to-edge depth).
Data were collected for five participants in each experiment, with the exception of Experiments 2 and 6, where there were four participants. All participants had normal or corrected-to-normal vision, and were experienced psychophysical observers, including authors RG and EC. Each participant was screened for functional stereoscopic vision using the Random Dot 2 Stereo Acuity Test (Vision Assessment Corp., Elk Grove Village, IL), with each demonstrating stereoacuity of at least 60 arcsecs on this test. All gave written, informed consent for their participation, with ethical approval granted by a local University of Stirling ethics board, in accordance with the guidelines of the British Psychological Society and the Declaration of Helsinki. Author RG participated in all six experiments, with author EC participating in Experiments 1 and 3 through 6. One participant participated in Experiments 1 through 5, another in Experiments 1 through 4, and another in Experiments 1, 2, and 6. The remaining participants took part in only one experiment each, resulting in data collection for a total of 10 participants across the six experiments.
The base stimulus for each experiment was a random dot surface, with disparity defined by the error function, adjusted to conform to a cumulative Gaussian profile, scaled between ±1 (Equation 1).
\begin{equation}\tag{1}\delta = d\left[\mathrm{erf}\!\left(\frac{x/s}{\sqrt{2}}\right)\Big/n\right] + r\end{equation}
For each stimulus, the disparity δ of each dot depended upon its x coordinate, together with a scaling factor d to vary overall change in disparity, and a steepness factor s, which altered the standard deviation of the cumulative Gaussian function. A normalization parameter n divided the scaled function by the absolute maximum of the function prior to disparity scaling. This ensured that the edge-to-edge depth of the final surface was always defined by the scaling parameter only, irrespective of the standard deviation of the function. To ensure that participants could not simply respond to the absolute disparity of surface end points, the disparity of the whole surface was shifted in depth by a random value r on a trial-by-trial basis, where r was selected between limits of ±2 arcmin. As a further guard against this possibility, the direction of the change in disparity was randomized for each stimulus, such that either the left or right side of the surface could be the section nearer the observer.
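As a concrete illustration, the disparity profile of Equation 1 could be computed as follows (a minimal sketch assuming NumPy/SciPy; the function name, the sampling grid, and the example parameter values are illustrative rather than taken from the authors' code):

```python
import numpy as np
from scipy.special import erf


def dot_disparity(x, d, s, r):
    """Dot disparity following Equation 1.

    x : horizontal dot positions in arcmin, relative to the stimulus center
    d : scaling factor (the resulting surface spans -d to +d in disparity)
    s : steepness factor, the standard deviation of the cumulative Gaussian
    r : whole-surface depth shift, drawn per trial from +/-2 arcmin
    """
    profile = erf((x / s) / np.sqrt(2.0))   # cumulative-Gaussian-shaped profile
    n = np.max(np.abs(profile))             # normalization: edge values become exactly +/-1
    return d * (profile / n) + r


# Example: a 4.7 deg (282 arcmin) wide surface with SD = 55 arcmin
x = np.linspace(-141.0, 141.0, 501)
delta = dot_disparity(x, d=5.5, s=55.0, r=np.random.uniform(-2.0, 2.0))
```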
Stimulus elements were white circular dots of diameter 7.7 arcmin. Stimuli covered an area of 4.7° × 4.7°, with the exception of those in Experiments 2 and 6, where lateral displacements and changes to stimulus size altered the horizontal extent for some conditions. Dot density was kept constant for all stimuli, at a value of 13.5 dots per degree squared. All stimuli were displayed against a mid-gray background for a period of 2 s, preceded by the 500-ms presentation of a central white fixation cross and a pair of white flanking lines. Both the flanking lines and the constituent elements of the cross measured 16.5 arcmin in length. Flanking lines were presented 16.5 arcmin above and below the area where the stimulus appeared. The fixation cross was not visible during stimulus presentation, leaving only the flanking lines as a fixation aid and cue to the depth of the screen plane. This minimal fixation aid ensured that participants could not make judgments using relative disparity information between the stimulus edges and any surrounding fixation stimulus (e.g., Wardle & Gillam, 2016).
Experiments 1 and 2 varied the steepness of surfaces, either keeping stimulus width constant (Experiment 1) or scaling width in line with surface steepness (Experiment 2). Experiments 3 and 4 examined the impact of central stimulus regions, either by removing them (Experiment 3) or by replacing the central region with a frontoparallel surface (Experiment 4). Finally, Experiments 5 and 6 examined the role of surface depth discontinuities (Experiment 5) and lateral discontinuities (Experiment 6). These manipulations are summarized in Figure 1, and described in detail in the sections below.
Summary of stimulus manipulations and comparisons used across all experiments.
Design and procedure
Stimuli for all experiments were programmed using Matlab (Mathworks, Inc.), in conjunction with the Psychophysics Toolbox extensions (Brainard, 1997; Kleiner, Brainard, & Pelli, 2007; Pelli, 1997). Stimulus presentation was controlled using a MacPro computer coupled with a 49 × 31 cm Apple Cinema HD display with a resolution of 1,920 × 1,200 pixels and a refresh rate of 60 Hz. At the 76.4-cm viewing distance, each pixel measured 1.1 arcmin. The display was calibrated to a linear grayscale using a Spyder2Pro calibration device (DataColor, Dietlikon, Switzerland), resulting in a luminance range of 0.18 cdm−2 to 45.7 cdm−2. The presentation of images containing binocular disparity was enabled through the use of a four-mirror modified Wheatstone stereoscope, with head movements restricted using a Headspot chinrest (UHCO, Houston, TX). All experiments were conducted in a darkened laboratory.
In each experiment, participants completed a 2IFC task to determine which interval contained the stimulus with the larger edge-to-edge depth difference. Edge-to-edge disparity was systematically varied in each experiment using a method of constant stimuli to allow for the recovery of psychometric functions defining the proportion of times the standard stimulus was judged as having greater depth. The point of subjective equality (PSE) was measured for each function. A minimum of 20 repeated trials were collected for each participant on each stimulus condition over multiple sessions, with each experimental session containing the randomized presentation of five repeated trials of each stimulus condition. Responses were made via a key press on a standard computer keyboard. Each key press initiated the presentation of the next trial.
Modeling perceived depth
To provide a model of perceived depth in our tasks, we convolved 1D versions of the disparity profile for each stimulus with a weighted set of hypercyclopean filters, describing a disparity frequency sensitivity function (Figure 2). The form of this sensitivity function followed existing psychophysical estimates, covering a range of 0.05 to 1.2 cpd, with a peak at 0.3 cpd (Serrano-Pedraza & Read, 2010). This disparity sensitivity function was defined for all spatial frequencies in this range by using piecewise cubic spline interpolation of the data provided for horizontal disparity variation by Serrano-Pedraza and Read (2010) in their figure 6.
Disparity frequency sensitivity function, derived from estimates in Serrano-Pedraza and Read (2010).
This convolution process was performed by taking the Fourier transform of the disparity profile of each stimulus, multiplying this by the disparity tuning function, and then taking the inverse Fourier transform. The perceived depth of each stimulus was taken as the difference between maximum and minimum responses at surface edges. For Experiments 3 and 6, there was a gap in the stimulus over which disparity was not defined. In these cases, the gap was filled using linear interpolation. To allow for trial-by-trial variation in response, perceived depth estimates were corrupted by a random additive noise term of ±1 arcmin for each stimulus. Simulations show the results of 400 repeated trials of each experimental condition. Model results are shown alongside human psychophysical data as filled yellow circles throughout the paper.
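A minimal sketch of this filtering model is given below (Python/NumPy assumed). The sensitivity samples are placeholders standing in for the values read from Serrano-Pedraza and Read (2010), the max-minus-min readout is a simplification of the edge-response difference described above, and the function names are illustrative rather than the authors' own:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder sensitivity samples (cycles/deg -> weight), standing in for the
# horizontal-corrugation data of Serrano-Pedraza and Read (2010), figure 6.
freq_samples = np.array([0.05, 0.1, 0.3, 0.6, 1.2])
sens_samples = np.array([0.3, 0.7, 1.0, 0.6, 0.2])
sensitivity = CubicSpline(freq_samples, sens_samples)


def model_depth(disparity_profile, x_deg, noise_arcmin=1.0, rng=None):
    """Edge-to-edge depth predicted by the hypercyclopean filtering sketch.

    disparity_profile : 1-D disparity values (arcmin) sampled at positions x_deg (deg).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = disparity_profile.size
    dx = x_deg[1] - x_deg[0]
    freqs = np.abs(np.fft.fftfreq(n, d=dx))                # cycles per degree
    in_band = (freqs >= 0.05) & (freqs <= 1.2)
    weights = np.where(in_band, sensitivity(freqs), 0.0)   # weight each disparity frequency
    filtered = np.real(np.fft.ifft(np.fft.fft(disparity_profile) * weights))
    depth = filtered.max() - filtered.min()                # approximates the edge-response difference
    return depth + rng.uniform(-noise_arcmin, noise_arcmin)
```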
Experiments 1 and 2 examined the effects of manipulations of surface steepness on perceived depth. Results from both Deas and Wilcox (2014, 2015), and Cammack and Harris (2016) suggest that perceived depth should decrease as surface steepness is reduced. In Experiment 1, we test this hypothesis while keeping stimulus width constant. In Experiment 2, stimulus width was varied as a function of surface steepness (i.e., surfaces were constant in terms of the displayed number of standard deviations from the mean).
In Experiment 1 surface steepness was varied by manipulating s, the standard deviation of the cumulative Gaussian function. Standard deviations of 11, 55, and 110 arcmin were used to define RDS stimuli in the variable intervals, with the standard stimulus fixed at a standard deviation of 55 arcmin, and an edge-to-edge disparity of 11 arcmin. For the variable intervals, edge-to-edge disparity ranged from 0.9 to 16.7 arcmin for the 11 arcmin SD, from 2.2 to 28.6 arcmin for the 55 arcmin SD, and 4.4 to 44 arcmin for the 110 arcmin SD. Edge-to-edge disparity levels were sampled uniformly across the range for each standard deviation. Examples of the stimuli for Experiment 1 are shown in Figure 3.
Examples of the stimuli used in Experiment 1. (a–c) Stimulus containing the change in disparity for the standard surface, of standard deviation 55 arcmin. (d–f) Stimulus showing a steeper change in disparity, through the use of a smaller surface standard deviation of 11 arcmin. (g–i) Stimulus containing a more gradual change in disparity, through the use of a larger (110 arcmin) surface standard deviation. All examples are shown as a free-fusion pair, red–green anaglyph and as a 1-D illustration.
Stimuli for Experiment 2 followed the same structure as Experiment 1, except that stimulus width varied with surface steepness, such that each surface covered a distance of ±3.3 SDs from the center of the screen. Stimulus widths were thus 1.2°, 6°, and 12° in 11, 55, and 110 arcmin conditions, respectively. All other stimulus parameters, including dot density and vertical extent, were identical to those used in Experiment 1. Example stimuli for Experiment 2 are shown in Figure 4.
Examples of the stimuli used in Experiment 2. (a–b) Stimulus with a steeper change in disparity, defined by a function with standard deviation 11 arcmin. (c–d) Stimulus showing a more gradual change in disparity, through the use of a larger 110 arcmin surface SD. All stimuli cover a range of ±3.3 SDs. Other than changes in width, surface layout is identical to Experiment 1.
By examining conditions where stimulus width is constant alongside conditions where it varies with surface standard deviation, we may determine whether surface steepness effects depend upon stimulus edge regions. In cases where width is kept constant, changes in surface steepness alter the extent of near-frontoparallel areas at stimulus edges. Such areas may be critically important for disparity measurement (Allenmark & Read, 2010, 2011; Banks et al., 2004), resulting in improved disparity estimates for sharp disparity changes and impaired estimates for more gradual changes.
Experiments 1 and 2 examined the effects of surface steepness manipulations on perceived depth. Data for each surface standard deviation were fitted to a decreasing cumulative Gaussian, with the 0.5 threshold—the PSE—extracted. These functions, together with the PSEs, are shown in Figure 5a through c, averaged across all five participants. PSEs were calculated based on 1,000 bootstrapped fits for each participant.
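For illustration, the fitting and bootstrapping step might look as follows (a sketch assuming SciPy; the lapse-free decreasing cumulative Gaussian and the parametric bootstrap are assumptions about implementation details not specified in the text):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm


def decreasing_cg(x, pse, sigma):
    # Proportion of "standard greater" responses falls as the variable
    # stimulus' edge-to-edge disparity x grows; the PSE is the 0.5 point.
    return 1.0 - norm.cdf(x, loc=pse, scale=sigma)


def fit_pse(disparities, n_standard_greater, n_trials, n_boot=1000, rng=None):
    """Fit a decreasing cumulative Gaussian and bootstrap the PSE."""
    rng = np.random.default_rng() if rng is None else rng
    p = n_standard_greater / n_trials
    popt, _ = curve_fit(decreasing_cg, disparities, p,
                        p0=[np.median(disparities), 5.0], maxfev=5000)
    boot = []
    for _ in range(n_boot):
        # Parametric bootstrap: resample binomial responses from the fit and refit.
        p_sim = rng.binomial(n_trials, decreasing_cg(disparities, *popt)) / n_trials
        try:
            b, _ = curve_fit(decreasing_cg, disparities, p_sim, p0=popt, maxfev=5000)
            boot.append(b[0])
        except RuntimeError:
            continue
    return popt[0], np.std(boot)
```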
Results of Experiments 1 (a–c) and 2 (d–f). Psychometric functions plot proportion "standard greater" responses against edge-to-edge disparity for each surface standard deviation and are shown for example participants (a, d: error bars show binomial standard errors) as well as averaged across all five participants in each experiment (b, e). PSEs for each surface standard deviation are shown for each experiment (c, f), averaged across all five participants. Error bars show the standard error on the mean. There is a clear increase in PSE with decreasing surface steepness. Filled yellow circles show the predictions of the hypercyclopean filtering model.
As is evident from these graphs, increasing the standard deviation of the surface increased the PSE. Thus, for stimuli with the same edge-to-edge disparity, steeper changes in disparity, as in the 11 arcmin SD condition, were reliably judged as having greater depth than the standard interval. In the same way, stimuli in the 110 arcmin SD condition were reliably judged as having less depth than the standard. Average PSEs were 6.5, 10.8, and 25.4 arcmin for, respectively, the 11, 55, and 110 arcmin conditions. The effects of these manipulations of surface steepness were statistically significant on a repeated-measures ANOVA, F(2, 8) = 17.74, p = 0.0011. Pairwise comparisons, using related-samples t tests with Holm-Bonferroni corrections (Holm, 1979), showed significant differences between all surface standard deviations, t(4) = 2.38, 4.42, and 4.26, p = 0.038, 0.0057, and 0.0065 on a one-tailed test, for differences between 55 and 11 arcmin, 110 and 11 arcmin, and 110 and 55 arcmin conditions, respectively. Similar effects were seen in Experiment 2 (Figure 5d through f), where a repeated-measures ANOVA also showed a significant effect of surface standard deviation, F(2, 6) = 22.58, p = 0.0016. Pairwise comparisons, using related-samples t tests with Holm-Bonferroni corrections, showed significant effects for the difference between 11 and 110 arcmin, and 55 and 110 arcmin conditions, t(3) = 8.74, p = 0.0016, t(3) = 3.66, p = 0.0176 on one-tailed tests, although not between 11 and 55 arcmin conditions, t(3) = 1.73, p = 0.0913.
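The pairwise testing procedure used here can be sketched as follows (SciPy assumed; the one-tailed conversion and the alpha level are my assumptions based on the reported statistics, and the function name is illustrative):

```python
import numpy as np
from scipy import stats


def holm_pairwise(pses, labels, alpha=0.05, one_tailed=True):
    """Related-samples t tests for all condition pairs with Holm-Bonferroni correction.

    pses : array of shape (n_participants, n_conditions); columns ordered so that
           later conditions are predicted to yield larger PSEs (the one-tailed direction).
    """
    pairs, tvals, pvals = [], [], []
    n_cond = pses.shape[1]
    for i in range(n_cond):
        for j in range(i + 1, n_cond):
            t, p = stats.ttest_rel(pses[:, j], pses[:, i])
            if one_tailed:
                p = p / 2.0 if t > 0 else 1.0 - p / 2.0
            pairs.append((labels[i], labels[j]))
            tvals.append(t)
            pvals.append(p)
    # Holm-Bonferroni: step through p values in ascending order, stopping at the first failure.
    order = np.argsort(pvals)
    m = len(pvals)
    rejecting, results = True, []
    for rank, idx in enumerate(order):
        reject = rejecting and pvals[idx] < alpha / (m - rank)
        rejecting = reject
        results.append((pairs[idx], tvals[idx], pvals[idx], reject))
    return results
```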
These experiments confirm the findings of both Deas and Wilcox (2014, 2015) and Cammack and Harris (2016), showing that more gradual changes in disparity are perceived as having less depth than sharper changes. By manipulating surface steepness while either holding surface width constant, or allowing it to vary as a function of steepness, we have shown that these changes in perceived depth cannot be attributed to either the lateral edge-to-edge distance, or the extent of near-frontoparallel regions in the stimulus. Instead, surface steepness must directly contribute to perceived depth, with central areas of the stimulus impacting upon edge-to-edge disparity measurements.
While these effects of steepness are consistent with both disparity averaging and good continuation accounts of perceived depth, they are also well-accounted for by our model of hypercyclopean processing. PSEs for the model are 5.5, 10, and 18.8 arcmin in both Experiments 1 and 2, for 11, 55, and 110 arcmin conditions, respectively. These PSEs provide a close quantitative match to human performance. The observed effects of surface steepness manipulations are therefore consistent with the action of putative hypercyclopean channels in smoothing out lower frequency disparity modulations, with perceived depth determined by higher frequency modulations.
While the manipulation of surface steepness in Experiments 1 and 2 confirmed earlier findings (Cammack & Harris, 2016; Deas & Wilcox, 2014), they did not allow us to distinguish between disparity averaging, hypercyclopean and good continuation accounts of perceived depth biases. Experiments 3 and 4 addressed this issue. In these experiments, we examined the contribution of the central stimulus region by measuring the consequences of its omission (Experiment 3), or its replacement with a task-irrelevant frontoparallel plane (Experiment 4).
Stimuli for Experiment 3 were identical to those in Experiment 1, except that the central portion of each was removed. Stimulus dots with x coordinates within 48 arcmin of the center of the stimulus were removed from each image prior to the addition of disparity (i.e., dots were removed from both left and right images leaving no unmatched dots). An example stimulus is shown in Figure 6a and b, and schematically in Figure 6c.
Examples of the stimuli used in Experiments 3 and 4. (a–c) Free-fusion pair, red–green anaglyph and 1-D illustration of a stimulus containing a gap between surface edges, as used in Experiment 3. (d–f) Free-fusion pair, red–green anaglyph, and 1-D illustration of a stimulus where the central portion has a frontoparallel profile in depth, as used in Experiment 4.
As in Experiment 1, the standard interval contained an RDS with a standard deviation of 55 arcmin, and an edge-to-edge disparity of 11 arcmin. Note, however, that the central portion of the stimulus was also removed for the standard interval. Variable intervals contained stimuli with standard deviations of 11 arcmin or 110 arcmin only, covering an edge-to-edge disparity range of 0.9 to 16.7 arcmin and 4.4 to 44 arcmin, respectively, across seven uniformly sampled levels.
Experiment 4 examined the effects of manipulations of surface steepness for stimuli containing intervening surface discontinuities. Stimuli for Experiment 4 were RDS surfaces similar to those in Experiments 1 and 3, with the exception that the central portion of the stimulus removed in Experiment 3 was replaced, for both standard and variable intervals, with a frontoparallel plane, whose depth was defined by the random depth shift parameter r. As in Experiment 3, this central portion included all dots with x coordinates within 48 arcmin of the stimulus center. The presence of this frontoparallel central region created depth discontinuities between the outer and inner portions of each stimulus surface. An example stimulus is shown in Figure 6d and e and schematically in Figure 6f. Surface standard deviations and edge-to-edge disparities were identical to those used in Experiment 3.
Results for Experiment 3, where stimuli contained a central gap, followed a similar pattern to those in Experiments 1 and 2. Once again larger standard deviation stimuli were judged as having less depth than equivalent stimuli with a steeper disparity change. Psychometric functions for these conditions are plotted in Figure 7a for an example participant, and in Figure 7b averaged over participants. Average PSEs, shown in Figure 7c, were 6.6 and 31.1 arcmin for, respectively, 11 and 110 arcmin standard deviation conditions. The difference between PSEs was significant on a related samples t-test, t(4) = 5.34, p = 0.003 on a one-tailed test.
Results of Experiments 3 (a–c) and 4 (d–f), Psychometric functions plot the proportion "standard greater" responses against edge-to-edge disparity for each surface standard deviation for example participants (a, d: error bars show binomial standard errors) and averaged across all five participants (b, e). PSEs are plotted for each surface standard deviation, for each experiment (c, f), averaged across all five participants. Error bars show standard errors on the mean. Filled yellow circles show the predictions of the hypercyclopean filtering model.
The effects of manipulating surface steepness were also evident in the results of Experiment 4 (Figure 7d through f), in which the gap was replaced by a frontoparallel plane. Here, average PSEs were 9.1 and 18 arcmin, for 11 and 110 arcmin SD conditions, respectively. The difference between PSEs was significant on a related samples t-test, t(4) = 5.3, p = 0.003 on a one-tailed test. Note that the effects of surface smoothness manipulations appear to differ in magnitude between these experiments. To examine this effect, we compared PSEs for 11 and 110 arcmin SD conditions between Experiments 3 and 4. A repeated-measures ANOVA, with experiment as a between-participants variable, showed an expected main effect of standard deviation, F(1, 8) = 46.74, p = 0.0001, and a significant interaction, F(1, 8) = 10.174, p = 0.0128. Pairwise comparisons were conducted via Holm-Bonferroni corrected t tests (related samples for within-participants effects and two sample for between-participants effects). These tests show expected significant effects of standard deviation manipulations in each experiment, supporting the analysis above, t(4) = 5.34 and 5.29, p = 0.0030 and 0.0031 on a one-tailed test for the difference between 11 and 110 arcmin conditions, and significant differences between Experiments 3 and 4 for both the 11 arcmin, t(8) = 2.69, p = 0.028, and 110 arcmin, t(8) = 2.80, p = 0.023, conditions. The reduction in perceived depth for surfaces with larger standard deviations was greater when there was a gap in the stimulus than when this gap was filled with a frontoparallel plane.
The observed changes between these experiments are difficult to reconcile with either disparity averaging (Cammack & Harris, 2016) or good continuation (Deas & Wilcox, 2014, 2015) accounts of perceived depth. If good continuation processes underpinned the surface steepness effects found in Experiments 1 through 4, one would expect a reduction in bias for both Experiments 3 and 4, since the inclusion of the surface gap and the frontoparallel region both serve to weaken surface continuity. Our results show instead that the effects of surface steepness manipulations persist despite changes to the central structure of the stimulus; a reduction in bias was only observed with the addition of depth discontinuities in Experiment 4. This is contrary to Deas and Wilcox (2015), where a reduction in the number of dots describing a line in depth increased perceived depth (see also Vreven et al., 2002). This suggests that the absence of the central stimulus region in Experiment 3 (a region present in the manipulations used by both Deas & Wilcox, 2015, and Vreven et al., 2002) may be critical for the maintenance of biases in perceived depth.
Our findings also pose difficulties for the disparity averaging account proposed by Cammack and Harris (2016). These authors suggested that surface smoothness biases arise due to averaging processes occurring as part of local binocular disparity estimation. Such disparity estimation processes operate in an absolute coordinate frame, encoding retinal offsets, not relative depth. For averaging of absolute disparities to account for the surface smoothness effects found in Experiments 1 through 3, however, any averaging process would have to operate over a smaller area in Experiment 4 than in Experiment 3 (i.e., the bias in perceived depth is smaller in Experiment 4 than in Experiment 3).
To better understand this point, let us consider the principles under which any averaging process must function. For any monotonic function, such as the modified scaled cumulative Gaussian surfaces used in Experiments 1 through 4, estimated depth varies inversely with the area over which disparity measurements are averaged. Increasing the area for disparity averaging will necessarily reduce estimated depth. Surfaces with more gradual depth changes will show the greatest reduction, as disparity varies more over local areas. The consistency in biases across Experiments 1 through 3 thus requires comparable averaging areas. The reduced bias in Experiment 4 can only be accounted for, however, by a smaller averaging area, despite the increase in measurable disparities for stimuli in this experiment. For absolute disparity averaging to account for our findings, more stimulus information (at smaller disparities) must somehow translate to less averaging. Without a means for determining, a priori, the area over which averaging should occur, such processes cannot match the patterns of results observed in Experiments 1 through 4.
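This point can be checked numerically. The sketch below (NumPy/SciPy assumed; window sizes, stimulus parameters, and the simple boxcar average are illustrative choices, not the authors' model) averages the disparity within a window at each edge of a normalized cumulative Gaussian profile, and shows that larger windows, and shallower surfaces, yield smaller edge-to-edge estimates:

```python
import numpy as np
from scipy.special import erf


def averaged_edge_depth(sd_arcmin, window_arcmin, width_arcmin=282.0, d=5.5):
    """Edge-to-edge depth after boxcar-averaging absolute disparities at each edge."""
    x = np.linspace(-width_arcmin / 2, width_arcmin / 2, 1001)
    profile = erf((x / sd_arcmin) / np.sqrt(2.0))
    profile = d * profile / np.max(np.abs(profile))   # edges at exactly -d and +d
    win = max(1, int(window_arcmin / (x[1] - x[0])))
    # The "measured" depth at each edge is the mean disparity over the averaging window.
    return profile[-win:].mean() - profile[:win].mean()


# Larger windows, and shallower surfaces (larger SD), give smaller estimated depth.
for sd in (11, 55, 110):
    print(sd, [round(averaged_edge_depth(sd, w), 2) for w in (11, 55, 110)])
```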
The hypercyclopean model similarly does little to predict the observed changes in performance. For the 110 arcmin condition in Experiment 3, hypercyclopean responses significantly underestimate the bias in perceived depth, t(4) = 2.846, p = 0.047 on a two-tailed related-samples test. While human and model performance are very similar in Experiment 4, this simply serves to mask the fact that the manipulations across Experiments 1 through 4 have no effect on observed biases in the hypercyclopean model's estimates of perceived depth. PSEs were 5.4 and 18.8 arcmin for 11 and 110 arcmin conditions in Experiment 3, and 5.5 and 18.8 arcmin for equivalent conditions in Experiment 4, effectively unchanged from PSEs for Experiments 1 and 2. The observed changes in human performance between these experiments would not, therefore, appear to be attributable to the smoothing effects of hypercyclopean filtering on absolute disparity estimates. Other factors, such as surface interpolation processes and the presence of surface discontinuities, may instead play a critical role in determining perceived depth. We consider the role of such discontinuities in Experiments 5 and 6, below.
The results of Experiment 4 suggest that surface discontinuities may play an important role in determining perceived depth. In that experiment, the addition of surface discontinuities reduced the impact of manipulations of surface standard deviation. It is unclear, however, whether this was due to a direct effect of discontinuity on perceived depth, or the impact of discontinuities on surface steepness effects. Here, we further addressed the role of surface discontinuities, examining the effects of both depth discontinuities (Experiment 5) and lateral discontinuities (Experiment 6) on perceived depth.
To examine surface discontinuity effects on perceived depth, participants in Experiment 5 were presented with stimuli containing a single depth discontinuity at the centre of each RDS. Unlike Experiments 1 through 4, there were no manipulations of surface steepness. Instead, all stimuli in Experiment 5 were scaled cumulative Gaussian curves in depth, with a fixed standard deviation of 55 arcmin. Edge-to-edge disparity ranged from 2.2 to 28.6 arcmin in the variable interval, with the standard interval again having an edge-to-edge disparity of 11 arcmin.
Depth discontinuities were added to this basic stimulus by shifting the left and right halves in opposite directions in depth. The near half of the stimulus was shifted away from the observer in depth, while the far half was brought forward. Discontinuity sizes were 0, ±2.2, and ±4.4 arcmin, with the standard interval containing no discontinuity. Note that, in Experiment 5, the calculation of edge-to-edge disparities was adjusted to take depth discontinuities into account. Thus, rather than scaling by the edge-to-edge disparity d, as in Equation 1, surfaces were scaled by a value of d plus the relevant discontinuity size, resulting in a stimulus with edge-to-edge disparity of 2d (equivalent to the edge-to-edge disparity for stimuli in Experiments 1 through 4) once the depth discontinuity was introduced. An example stimulus is shown in Figure 8a and b and schematically in Figure 8c; a sketch of this construction follows the figure caption below.
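The sketch below (NumPy/SciPy assumed) illustrates this stimulus construction; the interpretation of the quoted discontinuity size as a per-half depth shift, and the sign conventions, are my reading of the description rather than details confirmed by the authors:

```python
import numpy as np
from scipy.special import erf


def discontinuous_profile(x, d, s, shift):
    """Cumulative-Gaussian disparity profile with a central depth discontinuity.

    Each half is displaced in depth by +/-shift (my reading of the quoted
    'discontinuity size'); scaling by d + shift keeps the edge-to-edge
    disparity at 2d. Positive disparity is taken here to code 'near'.
    """
    base = erf((x / s) / np.sqrt(2.0))
    base = (d + shift) * base / np.max(np.abs(base))
    out = base.copy()
    out[x < 0] += shift    # far half brought forward
    out[x >= 0] -= shift   # near half pushed away
    return out             # edges sit at -d and +d; the central step opposes the overall change
```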
Examples of the stimuli used in Experiments 5 and 6. (a–c) Free-fusion pair, red–green anaglyph and 1-D illustration of a stimulus containing a depth discontinuity, as used in Experiment 5. (d–f) Free-fusion pair, red–green anaglyph, and 1-D illustration of a stimulus containing a lateral discontinuity, as used in Experiment 6.
In Experiment 6, participants were presented with stimuli containing discontinuities produced by opposing lateral shifts of each half of the RDS surface. As with Experiment 5, stimuli in Experiment 6 were always of SD = 55 arcmin, with the standard interval having an edge-to-edge disparity of 11 arcmin. Edge-to-edge disparity ranged from 2.2 to 28.6 arcmin in the variable interval, across seven uniformly sampled levels. Lateral shifts were of size 0, ±17.6, and ±35.2 arcmin for the variable intervals, and were added by shifting left and right stimulus halves in opposite directions. This increased the horizontal extent of stimuli to 5.3° and 5.9° for, respectively, ±17.6 and ±35.2 arcmin shifts. The standard interval contained no lateral shifts. An example stimulus is shown in Figure 8d and e, and schematically in Figure 8f.
Data for each discontinuity size in Experiment 5 were fit to a decreasing cumulative Gaussian curve, as in Experiments 1 through 4, with PSEs recovered based on 1,000 bootstrapped fits. Fitted curves and associated PSEs are plotted in Figure 9a through c. The presence of a depth discontinuity increased the value of the PSE, relative to a continuous surface. Thus, for two surfaces with equivalent edge-to-edge disparities, a discontinuous surface was judged as having less depth than a continuous surface. PSEs for ±2.2 and ±4.4 arcmin discontinuities were 16.4 and 18.9 arcmin, respectively, compared to a PSE of 10.3 arcmin for continuous surfaces. Differences between continuous and discontinuous surface PSEs were significant on a repeated-measures ANOVA, F(2, 8) = 16.77, p = 0.0014. Pairwise comparisons, conducted via related samples t tests with Holm-Bonferroni corrections, indicated significant differences between the continuous condition and each discontinuity size, but not between discontinuities, t(4) = 7.06, 4.14, and 1.82, p = 0.0021, 0.0144, and 0.1424 on a two-tailed test for differences between 0 and ±2.2, 0 and ±4.4, and ±2.2 and ±4.4 arcmin conditions, respectively. Unlike depth discontinuity manipulations, however, the addition of lateral surface discontinuities in Experiment 6 had no effect on PSEs (Figure 9d through f). Average PSEs across all participants were 10.70, 8.26, and 11.47 arcmin for 0, ±17.6, and ±35.2 arcmin lateral discontinuities, with no statistically significant differences on a repeated-measures ANOVA, F(2, 6) = 1.17, p = 0.37.
Results of Experiments 5 (a–c) and 6 (d–f). Fitted psychometric functions plot the proportion of "standard greater" responses against edge-to-edge disparity for each depth discontinuity in Experiment 5, and lateral discontinuity in Experiment 6. Fitted functions show the results for example participants (a, d: error bars show binomial standard errors) and averaged across participants (b, e). PSEs are shown for each depth discontinuity in Experiment 5 (c) and lateral discontinuity in Experiment 6 (f), averaged across participants. Error bars show the standard error on the mean. Filled yellow circles show the predictions of the hypercyclopean filtering model.
The changes in perceived depth in Experiment 5 are in line with the depth Cornsweet illusion first reported by Anstis et al. (1978). Importantly, our results show that this depth Cornsweet effect does not just occur when edge-to-edge disparity is close to zero. Instead, we found that the presence of a depth discontinuity reduced perceived depth for much larger disparities. Several of our participants did, however, report an illusory reversal of depth, consistent with a depth Cornsweet illusion, for some stimuli in this experiment. The shift in PSEs observed in Experiment 5 suggests that depth Cornsweet effects will be evident for stimuli with edge-to-edge disparities of less than around 7 arcmin. This is substantially larger than the effect reported by Anstis et al. (1978), which was around 2 arcmin for the most comparable viewing distance.
The direction of bias observed in Experiment 5 is difficult to reconcile with Deas and Wilcox's (2014, 2015) good continuation effects, where the presence of surface discontinuities should increase, rather than decrease, perceived depth. For the same reason, good continuation cannot explain why increases in perceived depth are not found for the lateral discontinuities used in Experiment 6. A disparity averaging account also struggles with these discrepancies. In Experiment 6, an averaging account would predict that increases in lateral separation result in less averaging, increasing perceived depth. Similarly, our hypercyclopean model also failed to predict the observed change in perceived depth. PSEs for this model showed a very slight decrease with increasing discontinuity size for both Experiment 5 (10, 9.2, and 8.4 arcmin for 0, ±2.2, and ±4.4 arcmin discontinuities) and Experiment 6 (19, 8.1, and 7.1 arcmin for 0, ±17.6, and ±35.2 arcmin discontinuities). Observed decreases in perceived depth would not, therefore, appear to be attributable to the action of these channels.
What factors might, then, drive the reduction in perceived depth observed in Experiment 5? One possibility might be that the sharp depth discontinuities at the centre of the stimulus violate the disparity gradient limit (Burt & Julesz, 1980; McKee & Verghese, 2002), and thus produce regions of diplopia, reducing depth sensitivity within these areas. Analysis of the local gradients in the stimulus suggests, however, that violations of the gradient limit are more a property of the functions underlying each RDS than of the dot patterns that define them. While maximum gradients in Experiment 5 fall around a value of 1, similar maximum gradient values are also found in Experiment 4. In that case, however, discontinuities are associated with an increase in perceived depth. Diplopia does not, therefore, seem able to explain both of these effects. Instead, we would seek to explain changes in perceived depth in both Experiments 4 and 5, and the absence of effects in Experiment 6, through consideration of the disparity gradients within these stimuli, rather than any diplopia that might potentially arise. Discontinuities with disparity gradients in the same direction as the overall change in disparity appear to add to perceived depth, while disparity gradients of opposing sign reduce it. The lateral discontinuities added in Experiment 6 have no effect as the gradients they introduce are equal to zero. We discuss these ideas in detail below.
The experiments reported in this paper examined the combined effects of surface steepness and surface discontinuity manipulations on perceived depth from binocular disparity. The results of these experiments provide new insight into earlier findings showing that more gradual changes in disparity are perceived as having less depth than stimuli containing steep disparity changes (Cammack & Harris, 2016; Deas & Wilcox, 2014, 2015; McKee, 1983; Mitchison & Westheimer, 1984). In Experiments 1 through 4, changing PSEs indicated a reduction in perceived depth with decreasing surface steepness. In Experiments 5 and 6, manipulations of surface discontinuity also led to reductions in perceived depth, but only if the discontinuity was in depth. While this set of findings is consistent with earlier results (Anstis et al., 1978; Rogers & Graham, 1983), it appears contrary to the effects of manipulating surface steepness. These results show that previously established surface-level effects on stereoacuity (Anstis et al., 1978; McKee, 1983; Mitchison & Westheimer, 1984; Rogers & Graham, 1983; Wardle & Gillam, 2016) extend to discrimination judgements for suprathreshold depth.
The observed pattern of results across experiments cannot be accounted for by existing good continuation (Deas & Wilcox, 2014, 2015) and disparity averaging (Cammack & Harris, 2016) accounts, which predict contrary effects for stimuli containing surface discontinuities (Experiments 4 through 6), or surface gaps (Experiment 3). Similarly, while a model of hypercyclopean processing indicates that sensitivity to different disparity frequencies can account for basic effects of surface steepness manipulations (Experiments 1 and 2), it fails to predict changes in these effects, or the effects of surface discontinuities. This suggests that, while the role of hypercyclopean processing in smoothing absolute disparity estimates is important, effects of surface steepness and surface discontinuity manipulations depend critically on mechanisms measuring the relative disparity at surface discontinuities. Such relative disparity processing has previously been implicated in stereoacuity judgements relative to reference planes (Glennerster & McKee, 1999, 2004) and in the resolution of binocular correspondence (Goutcher & Hibbard, 2010; Mitchison & McKee, 1987). While hypercyclopean-level detectors are conceptually suited to the encoding of relative disparities, our current modelling cannot be interpreted in these terms.
Perceived depth from relative disparities
Unlike disparity averaging and good continuation accounts, an interpretation of our observed biases in terms of relative disparity content reveals a consistent pattern of results. Effects of surface standard deviation manipulations suggest that perceived edge-to-edge depth is driven primarily by sharp local changes in relative disparity—that is, by large disparity gradients. The impact of these large gradients seems particularly important for discontinuous surfaces, where observed biases cannot be accounted for by the smoothing effects of hypercyclopean processing. Instead, perceived depth seems to conform to a rule where sharp changes in disparity increase perceived depth when they have the same sign as the overall change in disparity, and decrease perceived depth when they are of opposite sign. This rule would also appear to be consistent with the findings of Deas and Wilcox (2015), where additive disparity noise leads to an increase in perceived depth, dependent on the magnitude of the noise. The noise in Deas and Wilcox's (2015) experiments systematically increased the range of disparity gradients within a stimulus, as a function of the size of the noise. While the average gradient of the noise would be zero, leading to no additional biasing of perceived depth, the increased range of disparity gradients would, under the rule proposed here, lead to a general increase in perceived depth.
Relative disparity effects of these kinds could be implemented by considering the responses of hypercyclopean units, modelled after relative disparity selective cells in cortex. Neurons selective for relative disparity are found in multiple visual areas (cf. Parker, 2007), including V2 (Bredfeldt & Cumming, 2006; Thomas, Cumming, & Parker, 2002) and V4 (Fang et al., 2018; Umeda, Tanabe, & Fujita, 2007). Such neurons could be used to encode disparity differences at multiple scales, following the disparity frequency sensitivity approach applied here (Hibbard, 2005; Serrano-Pedraza & Read, 2010; Tyler, 1975, 2013; Tyler & Kontesevich, 2001). Critically, however, our results suggest that perceived depth requires the integration of relative disparity measurements across the stimulus, perhaps following the association field approach used for contour perception (Field, Hayes, & Hess, 1993; Hess & Field, 1995). Such an approach may also help to explain apparent grouping effects (e.g., Deas & Wilcox, 2014), which cannot be accounted for solely in terms of disparity measurements.
Roles for averaging and grouping processes?
Above, we have argued against a general disparity averaging or good continuation-based account of biases in perceived depth, in favor of an explanation based on the integration of relative disparity estimates across the stimulus, with hypercyclopean channels as a potential basis for such encoding. Such an argument does not, however, rule out possible roles for both averaging and good-continuation processes in the perception of disparity-defined depth. Deas and Wilcox (2014, 2015) demonstrate grouping-based effects that fall outside of the scope of manipulations presented here. Deas and Wilcox (2014), for example, used similarity-based grouping to show effects on perceived depth, which suggests that grouping may still play a role in determining the stimulus elements used for measuring relative depth. Similarly, the manipulation of element number in Deas and Wilcox (2015) cannot be accounted for by a change in local disparity gradients, given the linear disparity change of their stimulus. An explanation of these effects would require further elaboration of any relative disparity-based account.
As with grouping principles, our results also do not rule out possible roles for averaging processes acting on absolute disparities, such as those proposed by Cammack and Harris (2016). While averaging alone is not sufficient to account for our findings, it may still play a role in biasing depth estimates. Our findings suggest, however, that such averaging must be contingent on relative disparity stimulus content, take into account the smoothing effects of hypercyclopean processing and consider areas of surface discontinuity. Similar proposals have been made previously, with the suggestion that the detection of surface discontinuities can provide coarse disparity measurements to be used as the basis for more precise estimates (Gillam & Borsting, 1988; Wilcox & Lakra, 2007). Discontinuity detection could thus delimit any averaging processes, helping improve signal-to-noise ratios as suggested by Cammack and Harris (2016). Such averaging could, however, operate on disparity estimates encoded in either absolute or relative coordinate domains.
This paper examined how manipulations of both the steepness of changes in disparity and the presence of surface discontinuities affected perceived depth. Our results show that, while a decrease in surface steepness leads to a decrease in perceived depth, the effects of surface discontinuity manipulations depend critically on discontinuity structure. These results are not consistent with existing accounts of continuity-related biases in perceived depth, which focus on grouping processes or disparity averaging, and are not predicted by the smoothing effects of hypercyclopean channels. Instead, our results are consistent with processes that integrate relative disparity measurements across the stimulus. Such processes are biased toward steeper changes in disparity, with such changes reducing perceived depth when their sign opposes the overall change in depth across the stimulus, and enhancing perceived depth when they are of the same sign. Future research should seek to further constrain the principles governing the encoding and integration of relative disparities in the perception of stereoscopic depth in order to allow for a full computational account of these processes.
Corresponding author: Ross Goutcher.
Email: [email protected].
Address: Psychology, Faculty of Natural Sciences, University of Stirling, UK.
Allenmark, F., & Read, J. C. A. (2010). Detectability of sine-versus square-wave disparity gratings: A challenge for current models of depth perception. Journal of Vision, 10 (8): 17, 1–16, https://doi.org/10.1167/10.8.17. [PubMed] [Article]
Allenmark, F., & Read, J. C. A. (2011). Spatial stereoresolution for depth corrugations may be set in primary visual cortex. PLoS Computational Biology, 7 (8), e1002142, https://doi.org/10.1371/journal.pcbi.1002142.
Anstis, S. M., Howard, I. P., & Rogers, B. (1978). A Craik-O'Brien-Cornsweet illusion for visual depth. Vision Research, 18 (2), 213–217.
Banks, M. S., Gepshtein, S., & Landy, M. S. (2004). Why is spatial stereoresolution so low? Journal of Neuroscience, 24 (9), 2077–2089.
Bradshaw, M. F., & Rogers, B. J. (1999). Sensitivity to horizontal and vertical corrugations defined by binocular disparity. Vision Research, 39 (18), 3049–3056.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.
Bredfeldt, C. E., & Cumming, B. G. (2006). A simple account of cyclopean edge responses in macaque V2. Journal of Neuroscience, 26 (29), 7581–7596.
Burt, P., & Julesz, B. (1980, May 9). A disparity gradient limit for binocular fusion. Science, 208, 615–617.
Cammack, P., & Harris, J. M. (2016). Depth perception in disparity-defined objects: Finding the balance between averaging and segregation. Philosophical Transactions of the Royal Society of London: B, 371 (1697), 20150258.
DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1991, July 11). Depth is encoded in the visual cortex by a specialized receptive field structure. Nature, 352 (6331), 156–159.
Deas, L. M., & Wilcox, L. M. (2014). Gestalt grouping via closure degrades suprathreshold depth percepts. Journal of Vision, 14 (9): 14, 1–13, https://doi.org/10.1167/14.9.14. [PubMed] [Article]
Deas, L. M., & Wilcox, L. M. (2015). Perceptual grouping via binocular disparity: The impact of stereoscopic good continuation. Journal of Vision, 15 (11): 11, 1–13, https://doi.org/10.1167/15.11.11. [PubMed] [Article]
Fang, Y., Chen, M., Xu, H., Li, P., Han, C., Hu, J., … Lu, H. D. (2018). An orientation map for disparity-defined edges in area V4. Cerebral Cortex, https://doi.org/10.1093/cercor/bhx348.
Field, D. J., Hayes, A., & Hess, R. F. (1993). Contour integration by the human visual system: Evidence for a local "association field." Vision Research, 33, 173–193.
Filippini, H. R., & Banks, M. S. (2009). Limits of stereopsis explained by local cross-correlation. Journal of Vision, 9 (1): 8, 1–18, https://doi.org/10.1167/9.1.8. [PubMed] [Article]
Fleet, D. J., Wagner, H., & Heeger, D. J. (1996). Neural encoding of binocular disparity: Energy models, position shifts and phase shifts. Vision Research, 36 (12), 1839–1857.
Gillam, B., & Borsting, E. (1988). The role of monocular regions in stereoscopic displays. Perception, 17 (5), 603–608.
Glennerster, A., & McKee, S. P. (1999). Bias and sensitivity of stereo judgements in the presence of a slanted reference plane. Vision Research, 39, 3057–3069.
Glennerster, A., & McKee, S. P. (2004). Sensitivity to depth relief on slanted surfaces. Journal of Vision, 4 (5): 3, 378–387, https://doi.org/10.1167/4.5.3. [PubMed] [Article]
Goncalves, N. R., & Welchman, A. E. (2017). "What not" detectors help the brain see in depth. Current Biology, 27 (10), 1403–1412.
Goutcher, R., & Hibbard, P. B. (2010). Evidence for relative disparity matching in the perception of an ambiguous stereogram. Journal of Vision, 10 (12): 35, 1–16, https://doi.org/10.1167/10.12.35. [PubMed] [Article]
Goutcher, R., & Hibbard, P. B. (2014). Mechanisms for similarity matching in disparity measurement. Frontiers in Psychology, 4, 1014.
Hess, R. F., & Field, D. J. (1995). Contour integration across depth. Vision Research, 35 (12), 1699–1711.
Hibbard, P. B. (2005). The orientation bandwidth of cyclopean channels. Vision Research, 45 (21), 2780–2785.
Hirschmüller, H. (2008). Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30 (2), 328–341.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6 (2), 65–70.
Hornsey, R. L., Hibbard, P. B., & Scarfe, P. (2016). Binocular depth judgments on smoothly curved surfaces. PLoS One, 11 (11), e0165932, https://doi.org/10.1371/journal.pone.0165932.
Kleiner, M., Brainard, D., & Pelli, D. (2007). What's new in Psychtoolbox-3. Perception, 36S, 14.
McKee, S. P. (1983). The spatial requirements for fine stereoacuity. Vision Research, 23 (2), 191–198.
McKee, S. P., & Verghese, P. (2002). Stereo transparency and the disparity gradient limit. Vision Research, 42, 1963–1977.
Mitchison, G. J., & McKee, S. P. (1987). The resolution of ambiguous stereoscopic matches by interpolation. Vision Research, 27 (2), 285–294.
Mitchison, G. J., & Westheimer, G. (1984). The perception of depth in simple figures. Vision Research, 24 (9), 1063–1073.
Nienborg, H., Bridge, H., Parker, A. J., & Cumming, B. G. (2004). Receptive field size in V1 neurons limits acuity for perceiving disparity modulation. Journal of Neuroscience, 24 (9), 2065–2076.
Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1990, August 31). Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science, 249 (4972), 1037–1041.
Parker, A. J. (2007). Binocular depth perception and the cerebral cortex. Nature Reviews Neuroscience, 8, 379–391.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
Prince, S. J. D., Cumming, B. G., & Parker, A. J. (2002). Range and mechanism of encoding of horizontal disparity in macaque V1. Journal of Neurophysiology, 87 (1), 209–221.
Qian, N., & Zhu, Y. (1997). Physiological computation of binocular disparity. Vision Research, 37 (13), 1811–1827.
Read, J. C. A., & Cumming, B. G. (2007). Sensors for impossible stimuli may solve the stereo correspondence problem. Nature Neuroscience, 10 (10), 1322–1328.
Rogers, B. J., & Graham, M. E. (1983, September 30). Anisotropies in the perception of three-dimensional surfaces. Science, 221 (4618), 1409–1411.
Serrano-Pedraza, I., & Read, J. C. A. (2010). Multiple channels for horizontal, but only one for vertical corrugations? A new look at stereo anisotropy. Journal of Vision, 10 (12): 10, 1–11, https://doi.org/10.1167/10.12.10. [PubMed] [Article]
Thomas, O. M., Cumming, B. G., & Parker, A. J. (2002). A specialization for relative disparity in V2. Nature Neuroscience, 5 (5), 472–478.
Tyler, C. W. (1975). Stereoscopic tilt and size aftereffects. Perception, 4, 187–192.
Tyler, C. W. (2013). Shape processing as inherently three-dimensional. In Dickinson S. J. & Pizlo Z. (Eds.), Shape perception in human and computer vision (pp. 357–372). London, United Kingdom: Springer-Verlag.
Tyler, C. W., & Kontsevich, L. L. (2001). Stereoprocessing of cyclopean depth images: Horizontally elongated summation fields. Vision Research, 41, 2235–2243.
Umeda, K., Tanabe, S., & Fujita, I. (2007). Representation of stereoscopic depth based on relative disparity in macaque area V4. Journal of Neurophysiology, 98, 241–252.
Vreven, D., McKee, S. P., & Verghese, P. (2002). Contour completion through depth interferes with stereoacuity. Vision Research, 42, 2153–2162.
Wardle, S. G., & Gillam, B. J. (2016). Gradients of relative disparity underlie the perceived slant of stereoscopic surfaces. Journal of Vision, 16 (5): 16, 1–13, https://doi.org/10.1167/16.5.16. [PubMed] [Article]
Wilcox, L. M., & Lakra, D. C. (2007). Depth from binocular half-occlusions in stereoscopic images of natural scenes. Perception, 36 (6), 830–839.
Copyright 2018 The Authors
Basic Electrical Quantities: Energy, Charge, Voltage
These three basic electrical quantities—energy, charge, and voltage—are closely related. It is difficult to visualize or measure energy directly because it is an abstract quantity that represents the ability to do work. Electrical charge can be positive or negative, and it can do work when it moves from a point of higher potential to one of lower potential. Voltage is a measure of the energy per unit of charge and can be measured easily with common instruments. Voltage is one of the electrical quantities that you will work with in most renewable energy systems.
Work is done whenever an object is moved by applying a force over some distance. To do work, you must supply energy.
Energy is the ability or capacity for doing work; it comes in three major forms: potential, kinetic, and rest.
Stored energy is called potential energy and is the result of work having been done to put the object in that position or in a configuration such as a compressed gas. For example, the water stored behind a dam has stored (potential) energy because of its position in a gravitational field.
Kinetic energy is the ability to do work because of motion. The moving matter can be a gas, a liquid, or a solid. For example, the wind is a gas in motion. Falling water is a liquid in motion; a moving turbine is a solid in motion. Each of these is an example of kinetic energy because of the motion involved.
Rest energy is the energy equivalent of matter by virtue of its mass. Einstein, in his famous equation E = mc², showed that mass and energy are equivalent.
Unit of Energy
Because energy is the ability to do work, energy and work are measured in the same units. In all scientific work, the International System of Units (SI) is used. SI stands for Système International, from French. These units are the basic units of the metric system.
Energy, force, and many other units are derived units in the SI standard, which means they can be expressed as a combination of the seven fundamental units. The most common derived units are built from three of these fundamental units—the meter, kilogram, and second—and form the basis of the mks system of units. Another set of derived units is based on the centimeter, gram, and second; these smaller units are referred to as the cgs system.
The SI unit for energy is the joule (J), which is defined to be the work done when 1 newton of mechanical force is applied over a distance of 1 meter. A newton is a small unit of force, equivalent to only 0.225 pounds. The symbol W is used for energy, and we will use WPE or WKE to specify potential energy and kinetic energy, respectively, to be consistent with W. (You may see E for energy in some cases, such as Einstein's E = mc², or PE and KE for potential energy and kinetic energy.) The equation for gravitational potential energy is
${{W}_{PE}}=mgh$
WPE = potential energy in J
m = mass in kg
g = acceleration due to gravity (approximately 9.81 m/s²)
h = height in m
The equation for kinetic energy is
${{W}_{KE}}=\frac{1}{2}m{{v}^{2}}$
WKE = kinetic energy in J
v = velocity in m/s
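As a quick check of these two equations, here is a short Python sketch; the 2,000 kg mass, 50 m height, and 12 m/s speed are illustrative values chosen here, not figures from the text:

```python
# Potential and kinetic energy in SI units (joules); example values are illustrative.
G = 9.81  # acceleration due to gravity in m/s^2

def potential_energy(m_kg, h_m):
    """W_PE = m * g * h, in joules."""
    return m_kg * G * h_m

def kinetic_energy(m_kg, v_mps):
    """W_KE = (1/2) * m * v^2, in joules."""
    return 0.5 * m_kg * v_mps ** 2

print(potential_energy(2000, 50))  # 981000.0 J stored by 2,000 kg of water raised 50 m
print(kinetic_energy(2000, 12))    # 144000.0 J for the same mass moving at 12 m/s
```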
Charles-Augustin de Coulomb (1736–1806) was the first to measure the electrical forces of attraction and repulsion of static electricity. Coulomb formulated the basic law that bears his name and states that the force between two point charges is proportional to the product of the charges and inversely proportional to the square of the distance between them. His name was also given to the unit of charge, the coulomb (C).
Coulomb's law works for like charges or unlike charges. If the signs (+ or −) of both charges are the same, the force is repulsive; if the signs are different, the force is attractive. Long after Coulomb's work with static electricity, J. J. Thomson, an English physicist, discovered the electron and found that it carried a negative charge.
The electron is the basic atomic particle that accounts for the flow of charge in solid conductors. The charge on the electron is very, very tiny, so literally many trillions of electrons are involved in practical electrical circuits. The charge on an electron was first measured by Robert Millikan, an American physicist, and found to be only 1.60 × 10⁻¹⁹ C. The power of ten, 10⁻¹⁹, means that the decimal point is moved back 19 decimal places.
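In symbols, the law stated above is F = kq₁q₂/r², where k ≈ 8.99 × 10⁹ N·m²/C² is the Coulomb constant. The short sketch below evaluates it for two electrons; the one-nanometer separation is an illustrative choice, not a value from the text:

```python
# Coulomb's law: F = k * q1 * q2 / r**2
# A positive result means repulsion (like charges); negative means attraction.
K = 8.99e9                   # Coulomb constant in N*m^2/C^2
ELECTRON_CHARGE = -1.60e-19  # charge of an electron in coulombs

def coulomb_force(q1_c, q2_c, r_m):
    """Force in newtons between two point charges r_m meters apart."""
    return K * q1_c * q2_c / r_m ** 2

# Two electrons one nanometer apart: like charges, so the force is repulsive.
print(coulomb_force(ELECTRON_CHARGE, ELECTRON_CHARGE, 1e-9))  # about 2.3e-10 N
```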
Voltage (V) is defined as energy (W) per unit charge (Q). The volt is the unit of voltage symbolized by V. For example, a battery may produce twelve volts, expressed as 12 V. The basic formula for voltage is
$V=\frac{W}{Q}$
One volt is the potential difference between two points when one joule of energy is required to move one coulomb of charge from one point to another.
Sources of Voltage
Various sources supply voltage, such as a photovoltaic (solar) cell, a battery, a generator, and a fuel cell, as shown in Figure 1. Huge arrays of solar modules can provide significant power for supplying electricity to the grid.
Figure 1 Sources of Voltage
Figure 2 DC Voltage Source
Voltage is always measured between two points in an electrical circuit. Many types of voltage sources produce a steady voltage, called dc or direct current voltage, which has a fixed polarity and a constant value. One point always has positive polarity and the other always has negative polarity. For example, a battery produces a dc voltage between two terminals, with one terminal positive and the other negative, as shown in Figure 2(a). Figure 2(b) shows a graph of the ideal voltage over time. Figure 2(c) shows a battery symbol. In practice, the battery voltage decreases somewhat over time. Solar cells and fuel cells also produce dc voltage.
Electric utility companies provide a voltage that changes direction or alternates back and forth between positive and negative polarities with a certain pattern. AC generators produce this alternating voltage, called alternating current (ac) voltage. In one cycle of the voltage pattern, the voltage goes from zero to a positive peak, back to zero, to a negative peak, and back to zero. One cycle consists of a positive and a negative alternation (half-cycle). The cyclic pattern of ac voltage is called a sinusoidal wave (or sine wave) because it has the same shape as the trigonometric sine function in mathematics.
In North America, ac voltage completes one full cycle 60 times per second; in most other parts of the world, it is 50 times per second. The number of complete cycles that occur in one second is known as the frequency (f). Frequency is measured in units of hertz (Hz), named for Heinrich Hertz, a German physicist. Figure 3 illustrates the definition of frequency for the case of three cycles in one second, or 3 Hz.
Figure 3 Example of an AC Sinusoidal Voltage. The frequency is 3 Hz, and the period (T) is ⅓ s.
The period (T) of a sine wave is the time required for 1 cycle. For example, if there are 3 cycles in one second, each cycle takes one-third of a second. This is illustrated in Figure 3, where one cycle is shown with a heavier curve. From this definition, you can see that there is a simple relationship between frequency and period, which is expressed by the following formulas:
$f=\frac{1}{T}$
$T=\frac{1}{f}$
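The waveform in Figure 3 can also be generated numerically using the standard textbook form v(t) = Vp sin(2πft). The sketch below assumes a 1 V peak (an arbitrary choice) and the 3 Hz frequency from Figure 3:

```python
# Sample one cycle of an ac voltage v(t) = Vp * sin(2*pi*f*t).
import math

Vp = 1.0     # peak voltage in volts (arbitrary illustrative value)
f = 3.0      # frequency in hertz, as in Figure 3
T = 1.0 / f  # period in seconds

for k in range(9):                            # nine evenly spaced samples over one period
    t = k * T / 8
    v = Vp * math.sin(2 * math.pi * f * t)
    print(f"t = {t:.4f} s  v = {v:+.3f} V")
```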
What is the voltage if 100 J of energy is available to move a total charge of 5 C?
\[V=W/Q=100J/5C=20\,V\]
If the period of an ac voltage 0.01 s, determine the frequency.
\[f=1/T=1/0.01s=100Hz\]
If the frequency of an ac voltage is 60 Hz, determine the period.
\[T=1/f=1/60Hz=0.0167s=16.7ms\]
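The three worked examples above can be checked with a few lines of Python; the helper function names below are our own, not from the article:

```python
# V = W / Q, f = 1 / T, and T = 1 / f, applied to the worked examples above.

def voltage(energy_joules, charge_coulombs):
    return energy_joules / charge_coulombs

def frequency(period_seconds):
    return 1.0 / period_seconds

def period(frequency_hertz):
    return 1.0 / frequency_hertz

print(voltage(100, 5))        # 20.0 V
print(frequency(0.01))        # 100.0 Hz
print(round(period(60), 4))   # 0.0167 s, i.e. 16.7 ms
```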
What is energy, and what is its unit?
What is the smallest particle of negative electrical charge?
What is the unit of electrical charge?
What is voltage, and what is its unit?
Name two types of voltage.
Define frequency and period.
Energy is the ability or capacity for doing work; it is measured in joules in the SI system.
The electron
The coulomb
Energy per charge; the unit is the volt, symbolized by V.
DC voltage and AC voltage
Frequency is the number of cycles per second, measured in hertz (Hz). The period is the time for one cycle, measured in seconds.
Entropic Forces
In 2009, Erik Verlinde argued that gravity is an entropic force. This created a big stir—and it helped him win about $6,500,000 in prize money and grants! But what the heck is an 'entropic force', anyway?
Entropic forces are nothing unusual: you've felt one if you've ever stretched a rubber band. Why does a rubber band pull back when you stretch it? You might think it's because a stretched rubber band has more energy than an unstretched one. That would indeed be a fine explanation for a metal spring. But rubber doesn't work that way. Instead, a stretched rubber band mainly has less entropy than an unstretched one—and this too can cause a force.
You see, molecules of rubber are like long chains. When unstretched, these chains can curl up in lots of random wiggly ways. 'Lots of random ways' means lots of entropy. But when you stretch one of these chains, the number of ways it can be shaped decreases, until it's pulled taut and there's just one way! Only past that point does stretching the molecule take a lot of energy; before that, you're mainly decreasing its entropy.
So, the force of a stretched rubber band is an entropic force.
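As a toy version of this counting argument, consider a one-dimensional 'molecule' of N links that each point left or right—a deliberately crude stand-in for a real rubber chain, not a model taken from this post. The number of shapes with a given end-to-end extension is a binomial coefficient, and its logarithm (the entropy, up to Boltzmann's constant) drops as the chain is stretched, hitting zero when the chain is pulled taut:

```python
# Count the shapes of a one-dimensional chain of N unit links, each pointing
# +1 or -1.  An end-to-end extension x needs (N + x)/2 links pointing +1,
# so the number of shapes is C(N, (N + x)/2); entropy ~ log(shapes).
from math import comb, log

N = 100  # number of links (an illustrative choice)

for x in range(0, N + 1, 20):          # extensions 0, 20, 40, ..., 100
    shapes = comb(N, (N + x) // 2)     # microstates with this extension
    print(f"extension {x:3d}: {shapes} shapes, log(shapes) = {log(shapes):.1f}")

# The unstretched chain (x = 0) has the most shapes; the fully taut chain
# (x = N) has exactly one, so log(shapes) = 0.  The entropic force T dS/dx
# therefore pulls the chain back toward zero extension.
```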
But how can changes in either energy or entropy give rise to forces? That's what I want to explain. But instead of talking about force, I'll start out talking about pressure. This too arises both from changes in energy and changes in entropy.
Entropic pressure — a sloppy derivation
If you've ever studied thermodynamics you've probably heard about an ideal gas. You can think of this as a gas consisting of point particles that almost never collide with each other—because they're just points—and bounce elastically off the walls of the container they're in. If you have a box of gas like this, it'll push on the walls with some pressure. But the cause of this pressure is not that slowly making the box smaller increases the energy of the gas inside: in fact, it doesn't! The cause is that making the box smaller decreases the entropy of the gas.
To understand how pressure has an 'energetic' part and an 'entropic' part, let's start with the basic equation of thermodynamics:

$dU = T \, dS - P \, dV$

What does this mean? It means the internal energy $U$ of a box of stuff changes when you heat or cool it, meaning that you change its entropy $S$, but also when you shrink or expand it, meaning that you change its volume $V$. Increasing its entropy raises its internal energy at a rate proportional to its temperature $T$. Increasing its volume lowers its internal energy at a rate proportional to its pressure $P$.
We can already see that both changes in energy and changes in entropy can affect the pressure $P$. Pressure is like force—indeed it's just force per area—so we should try to solve for $P$.
First let's do it in a sloppy way. One reason people don't like thermodynamics is that they don't understand partial derivatives when there are lots of different coordinate systems floating around—which is what thermodynamics is all about! So, they manipulate these partial derivatives sloppily, feeling a sense of guilt and unease, and sometimes it works, but other times it fails disastrously. The cure is not to learn more thermodynamics; the cure is to learn about differential forms. All the expressions in the basic equation are differential forms. If you learn what they are and how to work with them, you'll never get in trouble with partial derivatives in thermodynamics—as long as you proceed slowly and carefully.
But let's act like we don't know this! Let's start with the basic equation

$dU = T \, dS - P \, dV$

and solve for $P$. First we get

$P \, dV = T \, dS - dU$
This is fine. Then we divide by $dV$ and get

$P = T \, \frac{dS}{dV} - \frac{dU}{dV}$
This is not so fine: here the guilt starts to set in. After all, we've been told that we need to use 'partial derivatives' when we have functions of several variables—and the main fact about partial derivatives, the one that everybody remembers, is that these are written with curly d's, not ordinary letter d's. So we must have done something wrong. So, we make the d's curly:

$P = T \, \frac{\partial S}{\partial V} - \frac{\partial U}{\partial V}$
But we still feel guilty. First of all, who gave us the right to make those d's curly? Second of all, a partial derivative like $\partial S / \partial V$ makes no sense unless $V$ is one of a set of coordinate functions: only then we can talk about how much some function changes as we change $V$ while keeping the other coordinates fixed. The value of $\partial S / \partial V$ actually depends on what other coordinates we're keeping fixed! So what coordinates are we using?
Well, it seems like one of them is $V$, and the other is… we don't know! It could be $S$ or $T$ or $U$ or perhaps even $P$. This is where real unease sets in. If we're taking a test, we might in desperation think something like this: "Since the easiest things to control about our box of stuff are its volume and its temperature, let's take these as our coordinates!" And then we might write

$P = T \left( \frac{\partial S}{\partial V} \right)_T - \left( \frac{\partial U}{\partial V} \right)_T$
And then we might do okay on this problem, because this formula is in fact correct! But I hope you agree that this is an unsatisfactory way to manipulate partial derivatives: we're shooting in the dark and hoping for luck.
Entropic pressure and entropic force
So, I want to show you a better way to get this result. But first let's take a break and think about what it means. It means there are two possible reasons a box of gas may push back with pressure as we try to squeeze it smaller while keeping its temperature constant. One is that the energy may go up:

$- \left( \frac{\partial U}{\partial V} \right)_T$

will be positive if the internal energy goes up as we squeeze the box smaller. But the other reason is that entropy may go down:

$T \left( \frac{\partial S}{\partial V} \right)_T$

will be positive if the entropy goes down as we squeeze the box smaller, assuming $T > 0$.
Let's turn this fact into a result about force. Remember that pressure is just force per area. Say we have some stuff in a cylinder with a piston on top. Say the position of the piston is given by some coordinate $x$ and its area is $A$. Then the stuff will push on the piston with a force

$F = P A$

and the change in the cylinder's volume as the piston moves is

$dV = A \, dx$

So, the formula

$P = T \left( \frac{\partial S}{\partial V} \right)_T - \left( \frac{\partial U}{\partial V} \right)_T$

gives us

$F = T \left( \frac{\partial S}{\partial x} \right)_T - \left( \frac{\partial U}{\partial x} \right)_T$

So, the force consists of two parts: the energetic force

$- \left( \frac{\partial U}{\partial x} \right)_T$

and the entropic force:

$T \left( \frac{\partial S}{\partial x} \right)_T$
Energetic forces are familiar from classical statics: for example, a rock pushes down on the table because its energy would decrease if it could go down. Entropic forces enter the game when we generalize to thermal statics, as we're doing now. But when we set $T = 0$, these entropic forces go away and we're back to classical statics!
Entropic pressure—a better derivation
Okay, enough philosophizing. To conclude, let's derive

$P = T \left( \frac{\partial S}{\partial V} \right)_T - \left( \frac{\partial U}{\partial V} \right)_T$

in a less sloppy way. We start with

$dU = T \, dS - P \, dV$
which is true no matter what coordinates we use. We can choose 2 of the 5 variables here as local coordinates, generically at least, so let's choose $T$ and $V$. Then

$dU = \left( \frac{\partial U}{\partial T} \right)_V dT + \left( \frac{\partial U}{\partial V} \right)_T dV$
and similarly

$dS = \left( \frac{\partial S}{\partial T} \right)_V dT + \left( \frac{\partial S}{\partial V} \right)_T dV$
Using these, our equation $dU = T \, dS - P \, dV$ becomes

$\left( \frac{\partial U}{\partial T} \right)_V dT + \left( \frac{\partial U}{\partial V} \right)_T dV = T \left( \frac{\partial S}{\partial T} \right)_V dT + T \left( \frac{\partial S}{\partial V} \right)_T dV - P \, dV$
If you know about differential forms, you know that the differentials of the coordinate functions, namely $dT$ and $dV$, form a basis of 1-forms. Thus we can equate the coefficients of $dV$ in the equation above and get:

$\left( \frac{\partial U}{\partial V} \right)_T = T \left( \frac{\partial S}{\partial V} \right)_T - P$
and thus:

$P = T \left( \frac{\partial S}{\partial V} \right)_T - \left( \frac{\partial U}{\partial V} \right)_T$
which is what we wanted! There should be no bitter aftertaste of guilt this time.
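As a quick check of this formula—and a preview of the ideal-gas exercise mentioned just below—here is a symbolic computation with SymPy. It assumes the textbook monatomic ideal gas, with U = (3/2)NkT and S = Nk(ln V + (3/2)ln T + const); neither formula appears in this post, so treat them as imported assumptions:

```python
import sympy as sp

T, V, N, k, c = sp.symbols('T V N k c', positive=True)

U = sp.Rational(3, 2) * N * k * T                            # internal energy: no V dependence
S = N * k * (sp.log(V) + sp.Rational(3, 2) * sp.log(T) + c)  # ideal gas entropy, up to a constant

energetic_part = -sp.diff(U, V)      # -(dU/dV) at fixed T
entropic_part = T * sp.diff(S, V)    # T (dS/dV) at fixed T

print(energetic_part)                # 0
print(sp.simplify(entropic_part))    # N*k*T/V -- the ideal gas law
```

The energetic term vanishes because U doesn't depend on V, so the entire pressure NkT/V comes from the entropic term.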
That's almost all I want to say: a simple exposition of well-known stuff that's not quite as well-known as it should be. If you know some thermodynamics and are feeling mildly ambitious, you can now work out the pressure of an ideal gas and show that it's completely entropic in origin: only the first term in the right-hand side above is nonzero. If you're feeling a lot more ambitious, you can try to read Verlinde's papers and explain them to me. But my own goal was not to think about gravity. Instead, it was to ponder a question raised by Allen Knutson: how does the 'entropic force' idea fit into my ruminations on classical mechanics versus thermodynamics?
It seems to fit in this way: as we go from classical statics (governed by the principle of least energy) to thermal statics at fixed temperature (governed by the principle of least free energy), the definition of force familiar in classical statics must be adjusted. In classical statics we have

$F = -\frac{dU}{dx}$

where $U$ is the energy as a function of some coordinates $x$ on the configuration space of our system, some manifold $Q$. But in thermal statics at temperature $T$, our system will try to minimize, not the energy $U$, but the Helmholtz free energy

$A = U - TS$

where $S$ is the entropy. So now we should define force by

$F = -\frac{dA}{dx}$

and we see that force has an entropic part and an energetic part:

$F = -\frac{dU}{dx} + T \, \frac{dS}{dx}$

When $T = 0$, the entropic part goes away and we're back to classical statics!
I'm subject to the natural forces. – Lyle Lovett
18 Responses to Entropic Forces
Suggestion for next post: Deriving the force-extension curve of a freely-jointed chain model for a polymer ("rubber band"). Then the worm-like-chain model, and compare with single-molecule DNA-stretching experiments.
Mike Stay says:
So there should be a similar splitting of the momentum, with a part due to the free action and a part due to quantropy.
1 February, 2012 at 10:37 pm
Darn, you beat me to it. Shh!
Yes, the nice thing about having two analogies to play with (classical statics versus thermal statics, thermal statics versus quantum dynamics) is that one can generate a lot of ideas; it takes longer for both analogies to 'saturate' than if you have just one.
I'm busy writing a post on quantropy, where I try to work it out in an example so we can explore in detail ideas like the one you mentioned. It's hard to develop a good intuition for quantropy without looking at some examples. Of course one can follow the analogies and make a lot of very good guesses about it. But the hands-on feel for entropy that I've built up through many calculations, I'm still lacking for quantropy.
Arrow says:
Shouldn't there be a + sign in the equation for the entropic force?
Anyway I always have trouble with entropy and especially with the notion of it as a fundamental quantity (same goes for information).
For example let's look at the simplest case I can think of – a one-dimensional piston of length L with just one molecule of ideal gas going back and forth between the walls. The molecule will hit walls with certain average frequency dependent on the average momentum (i.e., temperature). So if I understand it correctly in this case entropy is directly related to the length of a piston since to describe the microscopic state we have to specify the position of the molecule and its direction. So decreasing the piston length L while keeping temperature (and therefore avg momentum) constant will decrease the entropy and also result in the molecule hitting the walls more frequently so the avg. force exerted by the molecule on the walls will increase.
Ok, so one could say that the average force increased because of the decrease in entropy, but while correct that is an abstract statement which seems (to me anyway) much less informative than stating that the average force increased due to the decrease in piston length. Here the piston length seems like a fundamental parameter of the problem and entropy is just an abstract concept derived from it.
Now I understand the usefulness of entropy when talking about macroscopic processes since it allows us to abstract from the details of microscopic behavior so we can calculate useful quantities even when we don't have good grasp of the details of microscopic behavior in our problem. But I don't see it's usefulness at the microscopic level where quantities like space, time, momentum and energy seem much more fundamental and relevant.
This is also why the notion of "gravity as an entropic force" seems much less appealing to me then gravity as spacetime curvature (if only other forces could be derived from spacetime geometry…btw I've seen papers that show EM can be seen as a manifestation of spacetime torsion, is this a valid approach?).
Arrow wrote:
No, not if we're talking about the same equation. But you may indeed have noticed an inconsistency in what I wrote, due to a typo. I wrote:

$$F_{\mathrm{entropic}} = -T\,\frac{\partial S}{\partial q}$$

But that last minus sign was wrong. In fact

$$F_{\mathrm{entropic}} = T\,\frac{\partial S}{\partial q}$$

In other words, the entropic force points in the direction of increasing entropy (at least if $T > 0$, which is true except in rather unusual circumstances, which I will ignore henceforth).
So if I understand it correctly, in this case entropy is directly related to the length of the piston, since to describe the microscopic state we have to specify the position of the molecule and its direction. So decreasing the piston length L while keeping temperature (and therefore average momentum) constant will decrease the entropy and also result in the molecule hitting the walls more frequently, so the average force exerted by the molecule on the walls will increase.
It sounds like you're saying the force decreases if the entropy increases as we expand the piston. The equations I'm throwing around say the force is positive if the entropy increases as we expand the piston:

$$F_{\mathrm{entropic}} = T\,\frac{\partial S}{\partial L} > 0 \quad \text{when} \quad \frac{\partial S}{\partial L} > 0$$

It's true that as you expand a piston full of ideal gas, the force pushing on its top decreases. But my blog post is talking about the force $F$ itself, not how this force changes as you change $L$ (the length of the piston). Obviously the force decreases as $L$ grows: for your single-molecule piston it goes like $F = kT/L$.
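To put rough numbers on that single-molecule piston (a quick sketch of my own, not part of the original reply):

```python
# With S(L) = k*ln(L) for one molecule in a 1D piston, the entropic force on the
# wall is F = T*dS/dL = k*T/L: at fixed temperature, shorter piston, bigger force.
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # temperature, K

for L in (1.0, 0.5, 0.25):             # piston lengths in metres
    print(f"L = {L:4.2f} m  ->  F = {k * T / L:.3e} N")
```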
I will avoid discussing gravity, except for this:
btw I've seen papers that show EM can be seen as a manifestation of spacetime torsion, is this a valid approach?
I'd never seen such an idea, despite spending an unhealthy amount of time thinking about 'teleparallel gravity', a theory that's almost equivalent to general relativity, but in which gravity is described using torsion rather than curvature. Now that you mention it, I see a paper that claims you can describe gravity coupled to electromagnetism and spinors using torsion. I can see that it's not the work of a crackpot, but I can't assure you that it's correct.
WebHubTelescope says:
This is an excellent post, and a jumping off point for lots of discussion.
Here is one —
If we were to use the rubber band analogy in terms of the greenhouse gas theory, how would it work?
I would suggest that a greenhouse gas serves to limit the outgoing radiation into bands of wavelength. This reduces the space of allowable energy states and thus reduces the entropy of the subsystem. However, we still must maintain an energy balance with the external system, and so the entropic part of the decrease in free energy is exactly compensated by a temperature increase.
At the most elemental level, that is why greenhouse gases raise the temperature of a planet's surface. We can talk all we want about variability in climate dynamics and atmospheric lapse rate, etc, but this is the heart of the argument.
Stretching the rubber band is like putting notches in the emission spectrum. That decreases entropy of the photonic volume, and temperature has to compensate. Mathematically, this is calculated by rescaling the Planck gray-body response.
I bring this up because the complexity of the gravity=entropic force argument makes this look simple in comparison.
So now we have four very similar equations:
Along the minimal-action path, Hamilton's principal function satisfies
d(Action) = Momentum * d(Position) – Energy * d(Time)
where all of these are functions of time.
The one you talked about here is
d(Energy) = kT * d(Entropy) – Force * d(Position).
If we have a statistical ensemble of paths and need to choose one based on a constraint on the mean action and, say, the mean position at a given time, we have
d(Action) = Lambda * d(Entropy) – Momentum * d(Position)
When we do quantum superpositions rather than statistical ensembles, we get your notion of quantropy.
If we have a rubber band under tension and increase the temperature (like in this heat engine described by Feynman) then the rubber band contracts:
d(Entropy) = Force * d(ThermalExpansionCoefficient) – Energy * d(Coolness)
Can we describe these last three in a similar way to the first? As we change the position of the piston, do the temperature and entropy change as though they were a particle moving in phase space with energy playing the role of Hamilton's principal function? Similarly, if we change the temperature, do the force and thermal expansion coefficient change as though they were a particle moving in a phase space with entropy playing the role of the principal function?
Mike wrote:
As we change the position of the piston, do the temperature and entropy change as though they were a particle moving in phase space with energy playing the role of Hamilton's principal function? Similarly, if we change the temperature, do the force and thermal expansion coefficient change as though they were a particle moving in a phase space with entropy playing the role of the principal function?
Yes, I believe so! Blake mentioned some examples of this phenomenon here, where he wrote:
Here's M. J. Peterson (1979), "Analogy between thermodynamics and mechanics" American Journal of Physics 47, 6: 488, DOI:10.1119/1.11788.
We note that equations of state—by which we mean identical relations among the thermodynamic variables characterizing a system—are actually first‐order partial differential equations for a function which defines the thermodynamics of the system. Like the Hamilton‐Jacobi equation, such equations can be solved along trajectories given by Hamilton's equations, the trajectories being quasistatic processes which obey the given equation of state. This gives rise to the notion of thermodynamic functions as infinitesimal generators of quasistatic processes, with a natural Poisson bracket formulation. This formulation of thermodynamic transformations is invariant under canonical coordinate transformations, just as classical mechanics is, which is to say that thermodynamics and classical mechanics have the same formal structure, namely a symplectic structure.
The boldface sentence is a way of saying 'yes' to your question in a bunch of thermodynamic examples. I'm pretty sure it's a very general fact.
Hello, John! Are your posts (for example this one) available as PDFs? Some of them, like the network theory series, are on the Azimuth wiki, which can produce a PDF, but not this one. I wanted to read it on an e-book reader, but this HTML doesn't fit it really well, especially the LaTeX rendered as images.
Hi! No, I haven't made them available as PDFs. You can get these series of posts on my website:
• Information Geometry.
• Network Theory.
I think they look better there than here—just click the box on top to get the jsmath set up and the box will go away.
I haven't put my posts on quantropy or 'thermodynamics versus classical mechanics' onto my website yet, but I will, and I'll let people know when I do. It takes a bit of work. I'll probably put them into a single series, because they belong together. (In fact all this stuff fits together into a big story, but that's going to take a while for me to flesh out!)
I'm writing a paper based on the Network Theory series, and I plan to write a paper on quantropy too. They'll be more polished than these blog posts…
One reason people don't like thermodynamics is that they don't understand partial derivatives…
Well, I do love thermodynamics, but the most difficult thing for me is to decide what sign to put in front of the work term. And which work is it — the work done by the system or by the environment? Maybe there is some trick to remember?
Anyway, I hope what follows will be right. So consider a rubber band of length $L$ — let it be the only geometric parameter describing the band. Let $f$ be the force that pulls your hand when you are stretching the band. So if it pulls, it is positive. Then:

$$dE = T\,dS + f\,dL, \qquad dA = d(E - TS) = -S\,dT + f\,dL$$

Hence:

$$\left(\frac{\partial f}{\partial T}\right)_L = -\left(\frac{\partial S}{\partial L}\right)_T > 0$$

Thus if you heat the rubber band it will pull harder; it shrinks. I was just curious whether I could prove it :-) Maybe I failed but the fact still holds. One needs to know this property of rubber in order to explain the direction of rotation of a rubber-band heat engine.
Hi! I don't think there's any 'trick' to remembering the sign of work. I agree that it's an annoying issue. But it just means I need to spend a minute deciding whether I'm talking about the work the system is doing on the environment or the work the environment is doing on the system, which has the opposite sign.
I find it much more annoying when people tell me set my watch "forward" one hour when Daylight Saving Time starts in the spring. Do they mean to set my watch to an earlier time, or a later time? The word "forward" is confusing. The "forward" of a book is near the front, but as you read "forwards" through the book you move toward the back. Similarly, ancient history is the study of the time when everything was a lot younger than it is now!
I had to learn category theory to really understand this stuff.
Of course, one can try to choose a convention and stick with it. President Kennedy famously said "ask not what your country can do for you—ask what you can do for your country!" So he preferred to always think about the work the system (you) did on its environment (your country).
Thus if you heat the rubber band it will pull harder, it shrinks. I was just curious whether I could prove it :-)
I think your argument is correct, and it's nice! My argument would be to use the formula I gave:

$$F = T\,\frac{\partial S}{\partial L} - \frac{\partial E}{\partial L}$$
The force of a rubber band or stretched spring has an entropic part (the first term) and an energetic part (the second term). The entropic part is proportional to temperature, so it gets bigger when it's hot. The energetic part doesn't change.
The entropic part is proportional to temperature, so it gets bigger when it's hot. The energetic part doesn't change.
Before proceeding, note that my $f$ is opposite to your $F$ — when the band pulls, your $F$ is negative (pressure is negative here, unlike the gas piston). So, to my point. Actually both parts depend on temperature and they both can change. So from your formula one should carefully work out how the whole of $F$ changes with $T$.
So both derivations are indeed identical (despite the notation difference, $f = -F$).
But I'd like to emphasize again what really matters — the sign
$\frac{\partial S}{\partial L} < 0 $
For a metal rod or a piston (and I guess for a spring) it is positive. Stretching these systems increases the phase space allowed for the system so the entropy increases. Meanwhile if you heat the systems mentioned they expand. Well, just like we were taught at school "when a substance is heated it expands".
The story is opposite for a rubber band. If it is stretched, its entropy decreases. If it is heated it contracts. So the whole thing was to demonstrate how these two "anomalies" are interconnected.
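A toy model makes the link between these two rubber "anomalies" explicit. The following sympy sketch is my own illustration (the Gaussian-chain free energy $A = \tfrac{1}{2} a T L^2$ is a standard textbook idealization, not something taken from this comment thread):

```python
# For an entropic spring with free energy A = (1/2)*a*T*L**2:
#   tension f = dA/dL grows with T   (a heated band pulls harder), and
#   entropy S = -dA/dT falls as L grows (stretching orders the chains).
import sympy as sp

a, T, L = sp.symbols('a T L', positive=True)
A = sp.Rational(1, 2) * a * T * L**2

f = sp.diff(A, L)          # a*T*L
S = -sp.diff(A, T)         # -a*L**2/2
print(sp.diff(f, T))       # a*L   > 0 : heating increases the pull
print(sp.diff(S, L))       # -a*L  < 0 : stretching decreases the entropy
```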
Thermodynamic identities « Peeter Joot's Blog. says:
[…] John Baez. Entropic forces, 2012. URL https://johncarlosbaez.wordpress.com/2012/02/01/entropic-forces/. […]
Rubber and Rubber Balloons: Paradigms of Thermodynamics | Enteropia says:
[…] The thing with rubber is that the elastic forces you experience are entropic, that is, when you stretch a rubber band you (roughly speaking) do not increase its internal energy, you decrease its entropy. That's because rubber molecules are long twisted chains, and when you expand rubber you straighten them, thus ordering them (decreasing their entropy). A simple kinetic theory of rubber based on entropic reasoning is presented in the book. For a quick introduction to rubber thermodynamics I suggest John Baez's post about entropic forces. […]
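For readers who want to follow the freely-jointed-chain calculation suggested in the first comment above, here is a minimal numerical sketch (my own toy parameters; the mean extension under a force $f$ is $x = Nb\,\mathcal{L}(fb/kT)$ with $\mathcal{L}(y)=\coth y - 1/y$, the standard Langevin-function result):

```python
# Freely-jointed chain: N segments of length b. The restoring force is purely
# entropic, so at fixed extension the tension scales linearly with temperature.
import math

def langevin(y):
    return 1.0 / math.tanh(y) - 1.0 / y

k, T = 1.380649e-23, 300.0    # J/K, K
b, N = 1e-9, 1000             # segment length (m), number of segments

for f in (1e-13, 1e-12, 1e-11):               # applied force, newtons
    x = N * b * langevin(f * b / (k * T))     # mean end-to-end extension
    print(f"f = {f:.0e} N  ->  x = {x*1e9:7.1f} nm of {N*b*1e9:.0f} nm contour length")
```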
amarashiki says:
I have been thinking about this post for a long time, John. The reason is that your expression for the force as the sum of an entropic term plus a potential (energy) term looks pretty similar (but not identical) to the expression of the force in Lagrangian dynamics with dissipation. The big "but" is that the dissipative part is generally assumed to take the form of the so-called Rayleigh dissipative function D:

$$D = \frac{1}{2}\sum_{i,j} c_{ij}\,\dot q_i\,\dot q_j$$
Therefore, if we identify the entropic part with the dissipative term related to the Rayleigh function, we have

$$T\,\frac{\partial S}{\partial q} = -\frac{\partial D}{\partial \dot q}$$
Does this last equation make sense?
I don't think it makes sense to identify an entropic force with a frictional force coming from a Rayleigh function, because a frictional force is almost always velocity-dependent while an entropic force is often not.
Furthermore, the entropic force

$$T\,\frac{\partial S}{\partial q}$$

involves a partial derivative with respect to the position $q$, while the frictional force

$$-\frac{\partial D}{\partial \dot q}$$

involves a partial derivative with respect to the velocity $\dot q$.

Furthermore, the entropic force is proportional to the temperature $T$, while the frictional force is not.
They seem very different.
Life and the Second Law – Yet another Blog says:
[…] Starting with the discussion of the inclined plane in school, most people are used to energetic forces. These are forces that arise due to the gradient of an energy function, often also called a potential function or just potential. Examples are the electric forces drawing electrons through wires due to the potential difference created by a battery in simple circuits, or the force of gravity. Think again of our red ball on the slope, whose acceleration and deceleration is simply due to the gradient in its gravitational potential function, which just happens to be the slope at its current position. However, many seemingly familiar forces actually arise not from the drive to decrease potential energy, but from a gain in entropy. A very familiar example is the force pulling a rubber band back together when you stretch it. In its relaxed state, the long polymers making up the band can curl up and wiggle in many more ways, compared to when they are all stretched in parallel. Thus, the force pulling the band back together is not the potential energy of the molecular bonds, which would have a very different characteristic, but indeed the potential increase in entropy, i.e. the increased volume of the accessible microscopic phase space in the relaxed state. And even seemingly more familiar forces, such as the force created by an expanding gas pushing on a cylinder, are in fact of entropic origin. A very insightful, and mathematically explicit discussion of entropic forces is given in this excellent blog post by John Baez. […]
What if we lived near a boundary of the universe?
This answer and its follow-up discussion with the OP made me think, and I'll boil it down to the essentials:
We live in a universe that's either infinite in spatial extent or unbounded, and wrap-around effects are neglected. That is, even if space is finite, no matter where you are you can still move in any direction: there is no border to experience from the inside.
But what if there was a border? By that I mean a border that can be experienced from the inside. This is distinct from what a higher-dimensional map would show as topological features.
For purposes of science fiction that is at least intelligent if not truly "hard" to the degree of Greg Egan, what could the edge be like?
On a macroscopic scale of gas and spaceships, it could be "a wall". But for the laws of physics, gravity, light, etc. what would it be like?
I can think of two general cases: an impassable boundary or not. Imagine an edge you could fall off!
So, what's at the end of the universe?
An earlier question with the same sentiment was closed as "too broad" but was actually poorly asked and was not given much thought by the OP.
But to be clear (and not infinitely broad), I'm considering what kinds of boundary or edge would be other-than-hopeless in an intelligent SF story. Our SF is rather mundane in this respect, with even Discworld being "large" like ours.
How this affects the people living near it is important for storytelling. If the astronomers pointed out that we lived near the edge, like how in our universe we point out the structure of filaments and voids, everything else just keeps happening. For such a feature to be meaningful to the story, the nature of it might be important to the people living there. So besides what's there, I ask, why do they care?
See also this hard-science question.
When I posted this, I was thinking of large enclosing borders of space. But for cataloging and exploring the sci-fi possibilities of physics at a boundary, it generalizes to small inclusions as well.
For a long time I've pondered a story where a small piece of the universe gets walled off, and I even started writing a story but boxed myself in since I didn't know what the people studying it would be finding!
But there are really two cases when it comes to storytelling. If the border was truly up close so people could probe it and experiment hands-on, the low-level physics is detailed and interesting to the story (e.g. the superconductor of heat in Prof.⊕'s investigation) and needs to have detail that doesn't make a wreck of the fictional universe.
A boundary that is cosmological can be seen but never explored directly, as with distant galaxy clusters. It will interact with the nearby space though and will affect the detailed structure. People in the story might themselves wonder what happens if magnetic fields cross it, but can't walk up to try. So some lack of detail is possible on the scientific end, but we say "so what?" What is it about the cosmology that relates to a story?
Originally (as in earlier today, before I was reminded that The Pearl is really a kind of border too) I was thinking that FTL-type space exploration might interact with it, getting up close and bothering the explorers, or having something to do with how their FTL technology works.
reality-check universe spacetime-dimensions
JDługosz
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – Tim B♦ Dec 1 '16 at 16:08
A few possibilities (bearing in mind that you can choose whatever you like):
An impenetrable barrier, absolutely unyielding. A supernova won't scratch it, a black hole can't eat it, it's just a wall. That doesn't fit your model of physics? Tough, tell it to the wall.
It may be glassy smooth and frictionless, and either perfectly flat or infinitesimally concave. It may be rippled or craggy or fractal, with pine-tree protrusions the size of galactic superclusters, or maybe just little grape-sized bumps here and there.
It may be static. It may evolve, with waves and whorls that creep along its surface as slowly as glaciers— or faster than light, that's allowed, since we can't alter or impede them. Get in their way and they'll crush you effortlessly. There could be discontiguous incursions, big blobs of wall-stuff that appear, grow, contort, shrink and vanish.
It may be black. Perfectly black, absorbing photons (and perhaps other massless particles) and giving nothing back. It may be white, refracting incident particles at random angles with no change in energy. It may be a blackbody, featureless but glowing with heat at any temperature you like, from barely above absolute zero (absolute zero being the aforementioned black) to red-hot to sun-bright and beyond, but bear in mind that it's big and you probably don't want to roast the universe. It could give off any kind of radiation you can think of, even short-lived particles that exist nowhere else. It could be opaque but with colors, different in different places, fractal patterns, changing colors, opalescence, polarization, coherence, writing, anything you want. Note that with some of these variations it can be very difficult to judge your distance to the wall, so approach with caution.
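To get a feel for the "you probably don't want to roast the universe" caveat, here is a back-of-the-envelope sketch (my own numbers, not the answer's) of the radiant flux from a blackbody wall at a few temperatures:

```python
# Stefan-Boltzmann law: radiated power per unit area of a blackbody is j = sigma * T**4.
SIGMA = 5.670374419e-8            # W m^-2 K^-4

for T in (3.0, 300.0, 5800.0):    # near-CMB cold, room temperature, sun-surface hot
    print(f"T = {T:7.1f} K  ->  j = {SIGMA * T**4:.3e} W/m^2")
```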
Whole races of superintelligent scientists could spend aeons studying some of these walls.
Take some of these walls to extremes and you get:
Not the nice, predictable randomness of blackbody radiation, but a downright horrible region of flux and perversity. Its exact location is difficult — and dangerous — to measure. Approaching it is insane. Even looking at it can be bad for your health. Greg Bear's City at the End of Time comes to mind.
A MIRROR
Flat and impenetrable, but a mirror. From a great distance you can see that the sky is symmetrical. If you approach with great care you can stare your "reflection" in the face, and touch its hand. It feels like a wall of glass, but whether it's just a shiny wall presenting a mirror image, or that really is another you — or even just you — is a matter of fierce debate among cosmologists and philosophers.
A SINK
Looks like a black wall, but it swallows matter too. Poke it with a stick, and you have half a stick.
A GRAVITY SLOPE
Looks black. Things that go that way accelerate, red-shift and disappear. You can venture that way and come back, if you have powerful thrusters.
There's just… space. Black sky. You can throw a rock and watch it dwindle in the distance. No stars, no photons at all except from the rocks and space probes we've sent. You can travel out there as far and as fast as you like, and come back if you have enough fuel. In some ways this is the most haunting prospect of all. Let's go home…
answered Dec 7 '15 at 1:22
Beta
$\begingroup$ Nice list. All make for good stories on a macroscopic scale, but I wonder what happens to fields and quantum phenomena near the edge? $\endgroup$ – JDługosz Dec 7 '15 at 6:26
$\begingroup$ PS I like that you discuss the inhabitants' exploration and scientific/philosophical relationship, too. $\endgroup$ – JDługosz Dec 7 '15 at 6:43
$\begingroup$ @JDługosz: It depends how hard you like your sci-fi. Wall: infinite potential barrier in quantum, violates GR. Small-scale roughness can give it almost any optical properties, and a boundary impermeable to photons is impermeable to EM fields. Superluminal ripples will cause awesome Cherenkov radiation, I think. Space: requires a tweaking of cosmology; it might just be a void that extends to the edge of the visible universe (and has done so long enough to contain no fossil light). $\endgroup$ – Beta Dec 7 '15 at 15:21
$\begingroup$ @JDługosz: The sink violates quantum theory and GR, and the gravity slope violates theories of gravity (both Newton and Einstein), so if either of those exists then the scientists in the story must admit that those theories are wrong, or at least incomplete. Chaos forces them to admit that their ideas about the universe are mostly wrong. The mirror is 100% legal; all fields and space-time curvature are symmetrical at the boundary, or equivalently the boundary condition allows no normal components of anything. $\endgroup$ – Beta Dec 7 '15 at 15:23
$\begingroup$ You could go with DC's wall at the edge of the universe; a solid, mostly flat wall, except where the people who attempted to study it or pass through it are trapped. $\endgroup$ – Xavon_Wrentaile Sep 11 '16 at 5:16
The end of the universe would appear to be a wall of randomized radiation and charged particles.
The universe is literally everything. So beyond that wall is nothingness. Visualizing nothingness is something that's hard for humans to do - in cinema and novels it's often portrayed as being grey/average, or incomprehensible.
But we do have examples of nothingness in reality. It's the space between a point and itself, between 1 and 1 or 0 and 0. If you could press two quarks directly together, what would be between them? Nothing.
And that's how the edge of the universe works. Because beyond it is nothingness, that means each point of the boundary is simultaneously contiguous to every other point on the boundary. Effectively, the entire thing is a single point. Any matter or energy that exits it is randomly distributed and re-enters the universe somewhere else on the boundary. In general this makes it useless for travel, as most organisms cannot survive having their component particles redistributed amongst the entire universe.
Note: the similarity of this answer to Cosmic Background Radiation is almost certainly just coincidence, as I don't believe this to be hard sci-fi at all.
Dan Smolinske
$\begingroup$ Stephen Hawking plays with a similar holographic "information wall" around black holes where, in theory, there may be matter or something on the other side, but information about it is simply unknowable. He is working on papers to demonstrate that this can be found consistent with QM, but that's a work in progress. $\endgroup$ – Cort Ammon Dec 6 '15 at 22:19
$\begingroup$ Your reasoning doesn't make sense (maybe it was the way you came to the ideas, but "that means..." is not logical), but I like the idea of randomizing what crosses it. $\endgroup$ – JDługosz Dec 7 '15 at 6:16
$\begingroup$ @JDługosz: The idea is that "nothing" is defined as being what's between contiguous points in the universe. So if there's a big, outside "nothing" outside of our universe, it is "between" each part of the boundary, so they're all "next" to each other. Randomization comes from the fact that particles/energy are equally as likely to go anywhere. $\endgroup$ – Dan Smolinske Dec 7 '15 at 6:18
$\begingroup$ Sounds like the idea of a point at infinity. (BTW this does indeed have nothing to do with the CMB.) $\endgroup$ – David Z Dec 7 '15 at 6:21
$\begingroup$ @CortAmmon Did he ever finish those? $\endgroup$ – wizzwizz4 Jan 8 at 22:15
A white bubble
Could be the inverse of the black hole — a white bubble.
A black hole is something whose gravity is so strong that nothing can escape — once something touches its event horizon, it will never go back.
On the other hand, the white bubble has an anti-gravity so strong that nothing can reach it. If you send some light towards the white bubble, the light will be deflected by anti-gravitational lensing back into your universe. This way, the bubble wall is an impenetrable event horizon that confines everything inside your universe without allowing anything to leave.
So, in some sense, a white bubble is a black hole turned inside out. It could arguably be a black hole viewed from the inside, if you are somehow able to reject the mainstream theory that black holes are gravitational singularities and replace it with a theory that black holes are gravitational bubbles from which imprisoned objects can't leave, and where spacetime distorts in a way that it can be measured as larger on the inside than it is on the outside. This way, black holes would be spherical one-way wormhole entrances viewed from outside, and white bubbles would be spherical one-way wormhole exits viewed from inside.
White bubble evaporation
Further, you might know that black holes are predicted to eventually evaporate. Viewed from the inside, this could be either the Big Rip or the Big Crunch. Also, the time when the black hole forms in the outer universe is when we get a Big Bang in the inner universe. This also solves an interesting problem: mainstream physics doesn't clearly explain what the cause of the Big Bang would be, but a white bubble theory could.
Playing with anti-matter
It is unknown in physics whether anti-matter has standard gravity or whether it features anti-gravity. Most mainstream physics predicts that it should feature standard gravity, but anti-matter featuring anti-gravity remains a viable possibility that can't be ruled out. If you join the concept of anti-matter anti-gravity with the white bubble wall concept, anti-matter runs away from black holes and accelerates towards the white bubble wall, never to be seen again, which would explain why there is almost no observable anti-matter in our visible universe.
Also, this makes the white bubble unavoidable for anti-matter near its edge, while black holes would be impenetrable to it. An "anti-black hole" and an "anti-white bubble" would be the opposite objects. Also, this makes the black hole and anti-white-bubble event horizons a one-way entrance for matter and a one-way exit for anti-matter, while the anti-black hole and the white bubble would feature the opposite.
To keep the symmetry from breaking for photons, you would need to propose the existence of anti-photons. Anti-photons would be indistinguishable from photons except for their behaviour in a gravitational field. Also, anti-photons would be as rare as anti-matter, which would explain why we don't know about them so far: they can't be distinguished from ordinary photons if you don't have something to gravitationally lens them, and they are far too few to be revealed by those gravitational lensing effects.
Of course, to be able to realize this theory, you would need to speculate a lot about unsolved cosmological problems proposing some not-mainstream solutions. However, since you are already proposing a big wall to your universe, I think that this is OK.
Victor Stafusa
$\begingroup$ The beauty of the white hole bubble is that there is no boundary you can actually hit or poke with a stick. You can get closer and closer using more and more energy, but can never hit the boundary, unless you are a photon; then you hit the event horizon, which would look like a mirror. $\endgroup$ – Cano64 Dec 8 '15 at 14:41
$\begingroup$ I think this shows two different cases: the rest of the universe is unreachable, which is like our real horizon; compared with the idea that the universe is finite and the edge is the boundary. $\endgroup$ – JDługosz Dec 8 '15 at 19:05
$\begingroup$ @JDługosz Yes, that is correct. But for any practical purposes (especially if you reject the part about anti-matter), this probably can't make any difference for those who are already confined in the inner universe. $\endgroup$ – Victor Stafusa Dec 8 '15 at 19:13
$\begingroup$ While it can't be ruled out, I would personally avoid relying on anti-matter displaying 'anti-gravity' properties. What experiments ALPHA has done have indicated otherwise, and it was really only the level of accuracy with which they could measure that kept them from calling it conclusive. If I was a betting man I wouldn't place a dime on it. In saying that, while perhaps not reality-check friendly, the white-bubble answer is a wonderful thing to play with and may actually help me with a small side project of my own :) $\endgroup$ – Firelight Jun 15 '17 at 14:05
If you want it to be something less than mundane, why not give your universe a metric similar to that of the Poincare disk representation of the hyperbolic plane?
In other words, equip your universe with a metric of the form $$ds=\frac{2dr}{1-r^2}$$ where $r$ is the distance from the center of the circle in Euclidean space, and $s$ is the distance from the center of the circle in this hyperbolic space. This leads to the relationship $$s=2\tanh^{-1}\left(r\right)$$ where $\tanh^{-1}\left(\right)$ refers to the inverse hyperbolic tangent function. The point of this is that as one moves farther and farther from the center, one gets closer and closer to the edge of the circle, but can never quite reach it. You can get arbitrarily close in the Euclidean picture, but measured in the intrinsic metric the edge always remains infinitely far away.
Your universe has a boundary, but you can never reach it.
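A quick numerical illustration of that blow-up (my own sketch, using only the formula above):

```python
# Intrinsic distance from the center of the Poincare disk to Euclidean radius r is
# s = 2 * artanh(r). As r creeps toward the rim, s grows without bound, so the rim
# is never reached even though it looks only one unit away from outside.
import math

for r in (0.9, 0.99, 0.999, 0.999999):
    s = 2 * math.atanh(r)
    print(f"Euclidean radius r = {r:<9} ->  intrinsic distance s = {s:6.2f}")
```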
HDE 226868♦
$\begingroup$ I'm not convinced this tells us anything about what an observer in the universe would think - intrinsically, in hyperbolic space, there is no notion of a boundary. An observer near what you call the edge would see space curving exactly as an observer at the center. The fact that a model takes place in a compact set doesn't tell us much about the space as a whole. (One might also note that one could also describe models of Euclidean space which exist in an open disk - again letting people outside think of some kind of boundary, even though we know that within the space there is none) $\endgroup$ – Milo Brandt Dec 6 '15 at 22:21
$\begingroup$ I agree with Milo. Unless "some things" used the Poincare metric and other things used the underlying meaning of space, the inhabitants would never know of the embedding. $\endgroup$ – JDługosz Dec 7 '15 at 6:19
$\begingroup$ Wait, does "tanh()^-1" mean coth or arctanh? If the former, you're fine. If the latter, the accepted convention is to put the ^-1 before the parenthesis. $\endgroup$ – No Name Apr 6 '18 at 7:38
$\begingroup$ @NoName Yup, that was an error on my part. Fixed. $\endgroup$ – HDE 226868♦ Aug 30 '18 at 2:34
I don't think any sort of "hard" wall would make sense without violating conservation of mass/energy/momentum, but perhaps you could imagine the edge of the universe as a broad singularity, kind of like a black hole, except instead of being a point we look down into, it's a shell we look up out at.
You could re-tell all the qualitative stories about falling into a conventional black hole (e.g. a traveller crossing the event horizon would appear to slow down and fade away) only in an inverted geometry.
spraff
$\begingroup$ Oh yeah. Like if you take a sphere and put a black hole in it, it would look like the universe was a disk with singularity boundary (you would have to generalize this to three dimensions). $\endgroup$ – PyRulez Dec 6 '15 at 21:38
$\begingroup$ "I don't think any sort of "hard" wall would make sense without violating conservation of mass/energy/momentum" --- like @Beta said, tell it to the wall. $\endgroup$ – AgapwIesu Dec 8 '15 at 13:49
So the theory of Brane Cosmology suggests that we picture the universe like a bubble, with all of our spacetime as the skin of the bubble. One theory suggests that the big bang was two branes colliding and a new brane forming as our universe, like two bubbles bumping into each other and a third bubble forming.
The "bubble" that makes up our universe is expanding faster than the speed of light, making everything in the spacetime skin move farther apart. If there is an "edge" of the universe, it is moving away from you faster than you'd be able to see it, even if it was a place you could teleport directly to.
So instead imagine a static universe (brane) that isn't expanding, and it's touching other branes, so if you could see it from the outside (the bulk) it would look like soap suds.
Each brane might have wildly different properties and laws of physics; instead of being a 3-dimensional space it might be 7-dimensional. Dividing a circle's circumference by its diameter might get you 4 instead of 3.1416 in some weird non-Euclidean geometry where spacetime is not locally flat.
So if you were at an interface where two branes are touching, you might be able to look across and see wild things. A universe where time runs backwards to our frame of reference for instance. Time as a loop, endlessly repeating. Time only passes when things are in motion. The book Einstein's Dreams has a lot of examples of possibilities in the dimension of time. Other dimensions/forces could be affected equally.
The boundary may be impenetrable except to light, or it may be possible to push through at high enough levels of energy. This should be done with caution since the physical and psychological effects caused by traveling to a universe with a different number of dimensions might not be compatible with life.
AndyD273
$\begingroup$ Funny you should mention «universe where time runs backwards to our frame of reference ». I posted this question about 6 weeks later. $\endgroup$ – JDługosz Dec 3 '16 at 6:11
$\begingroup$ Anyway, your idea is that we might see stuff on the other side of the boundary, a region with different physical laws. That implies some degree of compatibility; it has light or something that gets translated into light as it crosses. $\endgroup$ – JDługosz Dec 3 '16 at 6:16
$\begingroup$ @JDługosz I'm trying to imagine how it would work. Would it just be reversed, with things starting out old and getting young, or more likely would it just be reversed compared to our frame of reference? So it looks like everything is going backwards from our side, but if you crossed over then our side would look like it's going backwards. Could be used as a way to travel backward in time... Which is what I think your question is implying. $\endgroup$ – AndyD273 Dec 4 '16 at 16:09
$\begingroup$ In Egan's The Arrows of Time, he can't see light from backwards stars because from our point of view the photons are all emitted from various objects and head to the sun where they are absorbed. $\endgroup$ – JDługosz Dec 4 '16 at 17:42
undefined behavior
Let's just say nobody knows what happens at the end of the world.
At least that's the case in Minecraft (image of the distant "Far Lands" terrain, taken from http://minecraft-de.gamepedia.com/Datei:Ferne_L%C3%A4nder.png):
The world in Minecraft is continuously generated as you explore it based on a seed value given to a pseudo random number generator. The thing is, if you walk in one direction for like a month (or just manipulate your position), you get to the edge of the world. And you experience a lot of weird effects:
The game hangs
A lot of calculations overflow
The generated terrain looks weird (because of the overflows)
Edit: Well, what is an overflow? The computer just strictly follows its algorithm, crunching numbers. As the numbers get bigger and you get closer to the "end" of the world, the results of these calculations become too big to fit in the fixed-size number types the game uses. Numbers in computer games usually have a fixed size, depending on which type was chosen. The most common size is 32 bits, so the values that can be held range from −2,147,483,648 to +2,147,483,647.
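A tiny sketch of what a signed 32-bit wraparound looks like (my own illustration in Python, not Minecraft's actual code):

```python
# How a signed 32-bit counter wraps around once it passes its maximum value.
def to_int32(x):
    """Interpret an integer as a signed 32-bit value (two's-complement wraparound)."""
    x &= 0xFFFFFFFF          # keep only the low 32 bits
    return x - 0x100000000 if x >= 0x80000000 else x

pos = 2147483647             # largest value a signed 32-bit int can hold
print(to_int32(pos + 1))     # -2147483648: one step "past the edge" lands far on the negative side
```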
Note: in recent versions of minecraft this has been fixed, but you can read more about it here: http://minecraft.gamepedia.com/Far_Lands
MarcDefiant
$\begingroup$ I don't like "undefined", as that would mess too severely with what happens near the edge and farther afield as we are communicating with that region. But a gradual change to the laws of physics, such as making Planck's constant coarser, is an interesting idea. $\endgroup$ – JDługosz Dec 7 '15 at 8:57
$\begingroup$ Hmm, if it's like being near a black hole (but inverted) it might actually work out. Back to the Poincare disk: the underlying smallest scale goes with the enclosing metric, so we feel lower resolution as we approach the (infinitely far) edge. $\endgroup$ – JDługosz Dec 7 '15 at 8:59
$\begingroup$ As interesting as that analogy may be, I'm not sure you are actually answering the question. Can you explicitly explain how the "game hanging", "calculation overflow" etc. can be understood in terms of our universe and general physics? Or do we live in a computer simulation, and no-one informed me about it? $\endgroup$ – bilbo_pingouin Dec 7 '15 at 9:23
$\begingroup$ @bilbo_pingouin Well, we might live in a computer simulation. I just think it's interesting to see a computer's default "solution" for something, strictly following its algorithm. Like if you implement a division by subtracting until you reach 0 and count the times you've subtracted, and then divide by 0, you've got yourself an infinite loop. These "solutions" might either be complete garbage, quite funny or actually useful. $\endgroup$ – MarcDefiant Dec 7 '15 at 10:19
$\begingroup$ Nitpicking: The strange effects haven't been fully fixed in newer versions, they're just not as apparent anymore. Look at an end portal a few million blocks out or let a sand block fall and you'll see lots of weird stuff happening. $\endgroup$ – Fabian Röling Jan 8 at 10:57
If I may take Dan Smolinske's answer and mutate it a bit, I'd like to suggest a living boundary. In this particular line of thinking, the universe itself is alive, like an overarching Gaia of galaxies or like the Dao. After all, this is World Building. Why not stretch ourselves with a more exotic universe? Boundaries are meant to be stretched! In doing so, I get to tie in a nice detail: while it's fun to pay attention to the boundary itself, we often see echoes and shimmers of that boundary when we look outwards, warning us of where those boundaries are. Curious that... there's no real reason for it, but yet everywhere we find danger, we find little warning signs like breadcrumbs. Surely those are important.
This living entity clearly has boundaries exactly like those of Dan's world: anything outside of the boundaries is so utterly alien that we are simply incapable of predicting what happens to anything that crosses out into it, and we perceive nothing but randomness coming in. No information goes in, so thus no usable energy or matter. The difference here is that, unlike Dan's world, this edge is highly fluid, constantly changing as the cosmic Gaia shifts and shapes itself, responding to the alien forces around it that are beyond our comprehension. The boundary may be so steady that it appears exactly as Dan's world might, when things are going well for the Gaia, but may flex dramatically as the world outside it upsets its delicate balances.
Of course, such a theory would be incomplete without some concept of what is going through the mind of such a Gaia beast. Otherwise the flight of fancy does little good. Consider our Gaia as not a massive mother of all, as we view her from the inside, but as one small fragment of a much larger, more chaotic world than any of us have ever seen. Out there, somewhere, is something far more insidious than mere randomness and noise: there is an intelligence which slashes at our beautiful Gaia. This Gaia knows that this force is the one that it had always feared. It's a force which is indistinguishable from randomness, but sinister in nature. Left unchecked, it would snarl its way in with apparent randomness until one swift moment where all apparent randomness would disappear, and it would be in control. (Why do you think science is so extremely sensitive to non-random factors in its noise?)
Our Gaia had determined a long time ago this was not a provably winnable fight. If she were to go toe to toe with this intelligence, it would slowly beat her, battle by battle, until nothing remained. So, in an act of great beauty, great will, and great desperation, she took on life. She permitted herself to have one kernel of unknown deep in her core -- no more would there be provably winnable fights and provably lost fights. Every fight would be an unknown from here on out.
And so she guards us, nurturing us, allowing us to find the solution to the battle she could not provably win. Typically we are unaware of her guiding forces, except for the curious cosmic radiation surrounding us when we look outwards. It's only when the world outside shudders that we see any change. When the fight is going well, she lets us expand our boundaries outwards, gazing at the stars wondering what makes them all burn so bright. As she does, a little of the Other leaks in, and we see it in our wars and in our weapons. When it does, she draws back, taking the stars with her, pulling the Other away with her while we wrestle with the little bit that snuck by. Do the stars not feel further from us when we are at war, deep in the trenches? Surely we wrestle alongside her. Even in the greatest darkness we see light, working with us to contain it.
And this is how we see that this universe is different from that of a universe bounded by true randomness. In the careful ebb and flow of the boundary around us, we find more good than bad. We find a curious pattern to the noise that didn't show up before until we stopped and really listened. Then, we will be ready to announce to the extra-universe around us that we are ready to demonstrate the most powerful weapon our Gaia has ever devised: the ability not to overcome and devour the Other intelligence, but to merge with it until it cannot tell the difference between us and them, and we cannot tell the difference between them and us. It's certainly worked before, though the world is always at least twice as strange afterwards.
One day, we will take control of war, take control of hate. We will decide it simply cannot be, and declare it gone. Only then may we rap on the perfectly random walls of the sky, just to hear, for the first time, a response to our call. What message do we send? That's for the future to decide, but I have my suspicions. It would be a claim to a birthright. A final message to our beloved Gaia that her careful protection of us has not been in vain.
This is my home
I'm coming home
$\begingroup$ I don't follow. You say as the border shifts it reveals stars and such that were present but walled off before? $\endgroup$ – JDługosz Dec 7 '15 at 6:23
$\begingroup$ @JDługosz Yes. As content is deemed "safe enough" for our universe, it is allowed through. Content which brings too much risk of corruption is "nuked" into thermal noise before it reaches our universe. $\endgroup$ – Cort Ammon Dec 7 '15 at 15:36
$\begingroup$ So there's more galaxies and stuff and a curtain that can move, and moving the curtain doesn't shred what it passes over, but looking through the curtain or objects moving through (as distinct from the curtain moving) will randomize the stuff? $\endgroup$ – JDługosz Dec 7 '15 at 15:45
$\begingroup$ @JDługosz May randomize, yes, depending on the "conscious will" of the Gaia around the universe. The exercise I love playing with here is trying to develop a construct which is "living," by our definition of the word, but which matches the observations of the world around us. How exotic can the world around us be before science starts to catch on? Meanwhile, in a flight of fancy, might our emotions be more in tune with the universe than our science, and able to realize something is amiss that science just cannot see. $\endgroup$ – Cort Ammon Dec 7 '15 at 15:54
$\begingroup$ The joy of random variables in statistics is that no result is impossible, just highly improbable. If Douglas Adams can pop into our existence and give us the holy answer to life, the universe, and everything, the world is clearly pretty improbable already... what's a little more? =) $\endgroup$ – Cort Ammon Dec 7 '15 at 15:56
Consider the universe as an encapsulation
Clearly, anything that we can observe is inside the universe. But what if the universe was something that kept us in place?
Consider a 1-dimensional ant that you want to keep in place. The ant can move along only 1 axis at a time. You want a natural feel for the ant, to keep it as happy as possible. To do this, you employ a circle.
As in the picture, the ant can move in the positive or negative direction, as much as it wants! How natural. The circle guides the ant (in other words, the ant is constrained to the circle), and to a surveyor (unbeknownst to the ant), the ant is always touching the circle, and is unable to perceive anything inside. To the ant, the universe is wrapped.
What about a 2-Dimensional ant? An ant which can move in 2 directions instead of 1? There are 2 ways to encapsulate this ant!
The first method is un-natural. Your ant comes to an untimely stop when it reaches the edge of the circle. The ant would be sad that it is contained, and realises the finiteness of its universe. This ant, however, is free to move along its plane as it pleases. The second 'natural' solution constrains the ant to a surface in 3D (specifically, a sphere), and the ant will never be any the wiser about what its world is.
Likewise, for us 3-Dimensional beings, we could be trapped inside, and constrained to a 4-Dimensional, wrapped surface. We could also be inside of a 3 dimensional sphere, where there is a hard edge. Either way, what's on the outside of either of these is a mystery.
tuskiomi
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – HDE 226868♦ Dec 1 '16 at 13:21
I once asked Brian Greene (physicist, Columbia U) this question. He was lecturing on his multiverse ideas inspired by the arbitrary values of things like the fine structure constant. He subscribes to a weak anthropic principle. I asked whether the intersection of two universes was possible and whether we would notice.
At one level, the definition of "universe" is problematic because if there's something else, then we should change the definition of "universe". For this question to have any meaning, we have to define "universe" as the region where our laws of physics hold.
Under his theory, from a math standpoint, yes, intersection is possible, and we would probably see it in the cosmic microwave background data. We don't see anything anomalous in the big sky survey, so if it can happen, it hasn't happened yet.
Since the two universes would be expanding into each other, we would see the effect as some sort of wavefront, probably ripples. Further reading on this topic leads me to believe that those ripples would be either bands of chaos or order, depending upon whether the intersection amplified or cancelled terms in various equations. Given the delicate balance of universal constants, any intersection would probably in my opinion have bands of nothingness where matter became unglued.
Related note: Our universe appears to be bounded in at least one dimension, time, by the Bang on one end, beyond which physics ceases. We can use that boundary for theorizing what a boundary in other dimensions would look like.
SRM
There have been some amazing answers to this question, and it's a fun one to ponder, so I will take a stab at it.
I like to try to relate science-fiction type environments to real-world constructs. I think they are easier to imagine, explain and dramatize in a story setting. With that in mind...
I imagine this boundary would be very much like a seam, like what one might find when two different pieces of fabric are sewn together in a whip-stitch pattern. The "pieces of fabric" in this case would be our universe (identified via elements of our laws of physics), the fabric of another universe (identified by areas where the laws of physics go haywire because the other universe has a different set of laws), and bound by the following three "threads:" 1) Time 2) Space 3) Light. This is where things get a little bit creative, a little more fiction than science.
Visually, this would appear like a corkscrew pattern running through a section of space where strands of pure light, pure space, and pure time were wrapping around, consistently digging into something, a void, as it were, where the laws of physics shift to reflect the neighboring universe.
If you were to travel across this pattern, it might go something like this:
1st Stop: Pure light. Can't get right inside this area because it's devoid of space and time, and I need both to continue to exist, so I'll just hover above and try to take some readings from this section of reality. Maybe I can learn a thing or two about light that I don't already know.
2nd Stop: Pure space. Still can't get inside, because I need time to exist but nothing moves. Absolutely nothing. It is a solid brick of space: it's not a black hole, because there's no gravity well, but reality there is infinitely dense. Let's ping off this section of reality and see what we find, shall we?
3rd Stop: Pure Time: Constant, frenzied, frenetic fluctuation. Too chaotic; I'd be everywhen if I entered this string - simultaneously experiencing every moment of reality all at once. My head would explode.
4th Stop: Our universe - Typical laws of physics, everything is normal. Space, Time, and Light all co-mingle in a familiar harmony. I could hang out here all my life long if I have enough resources.
5th Stop: Pure Light again. 6th Stop: Pure Space again. 7th Stop: Pure Time again.
8th Stop: Oh, hey, light, space and time are all re-combining in a new and unique way... Kinda cool. I can hang out here, learn a few things, though "here" feels fundamentally different from everywhere else. I'm a little thirsty after all this travel across the seam, so maybe a little water... and why is the water flowing up my throat? Definitely got to run some tests here to learn the new laws of physics.
This pattern would continue to repeat ad infinitum. The picture below might help visualize what I'm describing. Only, in addition to the one yellow strand (light), it also has a blue (space) and a green one (time).
Cadence
$\begingroup$ I like the idea of there being a seam with its own finite extent. But "light" doesn't make sense as a thing without spacetime. $\endgroup$ – JDługosz Nov 28 '16 at 23:18
$\begingroup$ Which is part of the reason why it becomes more fiction than science at that point. Of course, it doesn't necessarily have to be light; it could be another basic component of the constructs of our reality. $\endgroup$ – Cadence Nov 29 '16 at 2:35
Maybe you can adjust your notion of what a "universe" is. Is a universe literally everything, observable and not, or is it something else? What if we defined the "universe" as everything affected by the same gravity well?
It might be interesting to have the boundary of the universe as a plane of zero thickness that light and matter can pass through but gravitational effects can not. This would allow you to discover the existence of the boundary, and have it be interactive in interesting ways. The fact that the boundary is there wouldn't be immediately obvious because you can still observe the other universe. This adds a discovery element to the story which is nice because the readers and the characters both go through the same enlightenment process together organically.
I think that we can use this model to make some interesting stories by playing with that core concept.
Orbits between universes
Perhaps two stars are on opposite sides of the boundary, but the planets orbiting them have orbits that cross the boundary. This would lead to strange orbits, due to stars effectively playing catch with planets as the planets go in and out of their spheres of influence. Multiple confirmed observations of this "orbit" are made with telescopes, which prompts multiple conflicting theories. Our heroes are sent on a science mission to figure out which theory is correct.
Different gravitational constants
Perhaps each universe has its own slightly different gravitational constant. This variance in the gravitational constant affects chemistry/physics in subtle but fundamental ways. These changes could really be anything you want, from causing people to go "space crazy" over time when sent to the delta quadrant because some chemical in their brain becomes slightly toxic, to the opposite: the body's repair/self-healing faculties are improved, so people return from the delta quadrant healthier and apparently more youthful. Or maybe the ship starts going haywire and our heroes need to abort and return to their universe. The possibilities are truly endless.
Universe boundaries aren't static
Previously I was assuming that the boundary was static. Gravitational force still can't cross the boundary, but what if the location of the boundary moves steadily in one direction? Everyone knows that gas giant XYZ doesn't have any moons, which is quite odd since the inner gas giants all have moons. Then one year someone notices a strange anomaly where a puff of its atmosphere retains its momentum, converts its angular velocity into a straight-line velocity, and drifts off into space. People dismiss this as experimental error. However, the next time the gas giant is in the same position, a satellite retains its momentum, converts its angular velocity into a straight-line velocity, and drifts off into space. As it is flying away it sends proof back to the home world that a large portion of the gas giant's atmosphere is leaving too. Your heroes go to investigate and discover the boundary. Experiments over time show it is creeping toward their home world, and to their horror it seems to be accelerating....
Erik
Since the universe is expanding quite fast, we wouldn't stay near the boundary for very long.
For a pre-industrial civilization, one side of the sky would probably always "look dark" while the other side has stars.
For a space civilization, the boundary might move so fast that it would be physically impossible to "reach it".
I'm not a PhD in astrophysics, but my understanding of the boundary of the universe is that it's not really some sort of wall but rather something abstract. The concepts of space and time don't exist outside of it, so I don't think we can really understand it with references to our normal world.
$\begingroup$ The universe in question is hypothetical, so comparing it to the real universe isn't always a good idea. $\endgroup$ – HDE 226868♦ Dec 5 '16 at 0:57
Something not discussed so far is that we don't know the topology of our universe. This and some other answers are listed in this Wikipedia article, although the concept of the topology is not much discussed.
In short, it could mean that we don't travel through space as straight as we think we do, but wander around in circles. So when we reach the end of the universe, we could just happen to come out at the other side. Like we don't move in the inside of a 3-dimensional sphere (as at least my instinctive understanding of the universe is), but on the surface of a 4-dimensional sphere.
An example may be given with one dimension less. Imagine a flatlander, who lives in a 2-dimensional universe. To his knowledge there was a big bang and light goes in all directions, so he would just assume that the universe is like the inside of a circle, expanding in all directions. But in fact he lives on the surface of a 3-dimensional sphere, and if he could just travel fast enough in one direction, he would be back where he started. But he can't, because the sphere he is travelling on expands faster than he could ever move.
Ayutac
But that is an "unbounded" surface, not what the question is about. – JDługosz Nov 28 '16 at 3:37
The same objections to tuskiomi's answer hold. This universe has no boundary; it's just finite. – HDE 226868♦ Nov 29 '16 at 0:26
@JDługosz On the contrary, the universe is bounded, just in a direction that cannot be easily accessed by its inhabitants. Which makes the question much more interesting, as I see it. It would be a bound we cannot actually see in any way, nothing we can "drive against" in the classical sense. So the actual answer to the question would be "you can't see it". This would very much be the case with taking time as one dimension of the bound, just like SRM noted in his answer. I know a wall would be very shiny and maybe more exciting for the majority, but this version excites me more :D – Ayutac Nov 30 '16 at 21:58
That's not what "unbounded" means in a manifold. Its embedding in higher dimensions has boundaries. I'm aware that this is the normal model. The question explicitly asks about intrinsic borders, affecting the connectivity of the space. So the question is "We all know about A, but what about B instead?" and your answer is "A! A! Let me explain A again!" – JDługosz Nov 30 '16 at 22:36
@JDługosz fair enough – Ayutac Nov 30 '16 at 23:27
There will be void.
The most sophisticated sensor arrays won't read any background noise. There'll be no stars, nothing, because there's literally no space.
You can build your most powerful spaceship and head in that direction at full speed; you will never reach it.
Why? Because I think of it as similar to the escape velocity you need to escape Earth or the solar system, but at a much larger, universal scale: you'd have to overcome the gravity of the universe to reach the void itself, using up all the energy in the universe, which practically cannot happen.
Alexander von Wernherr
It would be easier for us to see the other Universe through the tourist binoculars.
And for them to see us...
RonJohn
Another approach you could take that conforms to all known physics is to simply place your civilisation near the edge of the currently known universe (CKU).
Remember that the CKU is expanding outwards from the big bang so its edge is simply how far it has reached at a given point in time. Astronomers in such a civilisation would look in one direction and see the core of the universe receding from them and in the other, nothing. Until one day, they notice a faint glimmer in the dark...
$\begingroup$ They would be in the center of their own currently known universe. What does it matter if they are near a horizon defined as communication time with us? Either I'm totally misunderstanding you, or you're thinking the big bang is expanding through space from some center point. All points are just as "far" from the big bang: zero. $\endgroup$ – JDługosz Dec 7 '15 at 12:12
$\begingroup$ The CKU occupies a sphere with a maximum of radius 13.8 billion light years and expanding (?). We have observed galaxies over 13.1 billion years old. If these all lie in the same(ish) direction then we may assume that is towards the core of the CKU and the opposite is towards the boundary. There is no logic reason to think that another civilisation can't be much closer to the boundary which was the OP question. $\endgroup$ – Paul Smith Dec 8 '15 at 13:55
$\begingroup$ @PaulSmith actually, the observable universe is larger than 13.8 billion ly radius, its 45.7 billion. "The best estimate of the age of the universe as of 2015 is 13.799±0.021 billion years but due to the expansion of space humans are observing objects that were originally much closer but are now considerably farther away (as defined in terms of cosmological proper distance, which is equal to the comoving distance at the present time) than a static 13.8 billion light-years distance." $\endgroup$ – Draco18s Dec 8 '15 at 15:23
1. golden dam
How many bricks of 24-karat gold can you fit into the Orlík dam? What would be the pressure acting on a brick placed at the deepest point? The dimensions of a brick are 10 cm, 3 cm and 1 cm.
other, mathematics
Karel wants to be rich.
2. unstoppable terminator
How fast does the boundary between regions with and without sunlight move on the surface of the Moon? Is it possible to run away from the dark when you are at the equator?
Karel was watching Futurama
3. bubble in a pipeline
A horizontal pipeline with a flowing liquid contains a small bubble of gas. How do the dimensions of this bubble change when it reaches a narrower point of the pipeline? Can you find some applications of this phenomenon? What problems could it cause? Assume that the flow is laminar.
Karel was thinking about air fresheners.
4. cube in a pool
A large ice cube placed at the bottom of an empty pool starts to melt. Assume that the process is isotropic in the sense that the cube remains geometrically similar at all times. What fraction of the cube needs to melt before it starts to float in the water? The surface area of the pool floor is $S$, and the length of an edge of the cube before it started melting was $a$.
Lukáš was staring at a frozen town.
5. a bead
A small bead of mass $m$ and charge $q$ is free to move in a horizontal tube. The tube is placed between two spheres with charges $Q=-q$. The spheres are separated by a distance $2a$. What is the frequency of small oscillations around the equilibrium point of the bead? You can neglect any friction in the tube.
Hint: When the bead is only slightly displaced, the force acting on it changes negligibly.
Radomír was rolling in a pipe.
P. speed of light
What would the world be like if the speed of light were only $c=1000\;\mathrm{km}\cdot h^{-1}$ while all the other fundamental constants stayed unchanged? What would be the impact on life on Earth? Would it even be possible for people to exist in such a world?
relativistic physics, magnetic field, electric field, quantum physics
Karel came up with an unsolvable problem.
E. bend it but don't bend it!
Your task is to measure the spacing of a diffraction grating using the light from three different LED diodes. In case you're interested, send us an email at [email protected] and we will send you the LED diodes, a resistor, wires, and, of course, the diffraction grating. The only thing you will need to buy is a 9 V battery.
Karel spent all of our budget.
S. relativity
Any theory of quantum gravity is useful only when we deal with very small distances where the effects of gravitation are comparable to quantum effects. Gravitation is characterized by the gravitational constant, quantum mechanics by the Planck constant, and special relativity by the speed of light. Look up numerical values of these constants, and, using standard algebraic operations, combine them to obtain a quantity with the dimensions of length. This is the length scale where both quantum mechanics and gravitation are important.
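For a quick numerical check of this combination, here is a minimal sketch in Python (constant values rounded to SI; the variable names are ours):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m s^-1

planck_length = math.sqrt(hbar * G / c**3)   # the only combination of the three with dimensions of length
print(f"{planck_length:.2e} m")              # ~1.6e-35 m
```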
Prove that the special Lorentz transform (i.e. a change of the reference frame to one that is moving with speed $v$ in the $x^1$ direction)
$$x^0_\mathrm{new}=\frac{x^0-\frac{v}{c}x^1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}\,,\quad x^1_\mathrm{new}=\frac{x^1-\frac{v}{c}x^0}{\sqrt{1-\left(\frac{v}{c}\right)^2}}\,,\quad x^2_\mathrm{new}= x^2\,,\quad x^3_\mathrm{new}= x^3$$
leaves the spacetime interval invariant. Then set $\Delta x^2=\Delta x^3=0$ in the definition of the spacetime interval. You should get
$$(\Delta s)^2 = -\left(\Delta x^0\right)^2 + \left(\Delta x^1\right)^2\,.$$
What is the region of the plane $(\Delta x^0,\Delta x^1)$ where the spacetime interval $(\Delta s)^2$ is positive? Where is it negative? What is the curve $(\Delta s)^2=0$?
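One way to see the invariance, writing $\beta = v/c$ and $\gamma = 1/\sqrt{1-\beta^2}$ (a sketch of the check, with $\Delta x^2 = \Delta x^3 = 0$):
$$\begin{aligned} (\Delta s_\mathrm{new})^2 &= -\left(\Delta x^0_\mathrm{new}\right)^2 + \left(\Delta x^1_\mathrm{new}\right)^2 \\ &= -\gamma^2\left(\Delta x^0 - \beta\,\Delta x^1\right)^2 + \gamma^2\left(\Delta x^1 - \beta\,\Delta x^0\right)^2 \\ &= \gamma^2\left(1-\beta^2\right)\left[-\left(\Delta x^0\right)^2 + \left(\Delta x^1\right)^2\right] = (\Delta s)^2\,. \end{aligned}$$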
Measuring efficiency of governmental hospitals in Palestine using stochastic frontier analysis
Samer Hamidi1
Cost Effectiveness and Resource Allocation volume 14, Article number: 3 (2016)
The Palestinian government has been under increasing pressure to improve the provision of health services while seeking to employ its scarce resources effectively. Governmental hospitals remain the leading costly units, as they consume about 60 % of the governmental health budget. A clearer understanding of the technical efficiency of hospitals is crucial to shape future health policy reforms. In this paper, we used stochastic frontier analysis to measure the technical efficiency of governmental hospitals, the first study of its kind nationally.
We estimated the maximum likelihood, random-effects, time-invariant efficiency model developed by Battese and Coelli (1988). The numbers of beds, doctors, nurses, and non-medical staff were used as the input variables, and the sum of the numbers of treated inpatients and outpatients was used as the output variable. Our dataset includes balanced panel data of 22 governmental hospitals over a period of 6 years. Cobb–Douglas, translog, and multi-output distance functions were estimated using STATA 12.
The average technical efficiency of hospitals was approximately 55 %, and ranged from 28 to 91 %. Doctors and nurses appear to be the most important factors in hospital production: a 1 % increase in the number of doctors or nurses results in an increase in hospital production of 0.33 and 0.51 %, respectively. If hospitals increased all inputs by 1 %, their production would increase by 0.74 %, so the hospital production process exhibits decreasing returns to scale.
Despite continued investment in governmental hospitals, they remained relatively inefficient. Using the existing amount of resources, the amount of delivered outputs could be improved by 45 %, which provides insight into the mismanagement of available resources. To address hospital inefficiency, it is important to increase the numbers of doctors and nurses, while the number of non-medical staff should be reduced. Offering the option of early retirement, limiting hiring, and transferring staff to primary health care centers are possible options. It is crucial to maintain a rich clinical skill-mix when implementing such measures. Adopting interventions to improve the quality of management in hospitals will improve efficiency. International benchmarking provides more insight into sources of hospital inefficiency.
Occupied Palestinian territories (OPT) consist of two geographically separated areas, the West Bank (WB) and the Gaza Strip (GS), and are administered by the Palestinian National Authority. OPT cover an area of about 6860 km2 (6500 km2 in WB and 360 km2 in GS). The total population of OPT in 2013 was about 4.5 million inhabitants, 50.8 % male and 49.2 % female. The demographic distribution indicates that the society is very young: about 41 % of inhabitants are under 15 years of age. OPT comprise 16 governorates and are very densely populated, with more than 650 inhabitants per square kilometer. Between 1980 and 2013, life expectancy at birth increased by about 10.4 years to reach 72.6 years. The crude death rate per 1000 inhabitants decreased from 4.1 in 1993 to 2.5 in 2013. The infant mortality rate per 1000 live births also decreased, from 32 in 1993 to 19 in 2013. OPT are in transition politically as well as epidemiologically, and are suffering the double burden of both infectious and chronic diseases. The total full-time health workforce in 2011 was about 23,888, of which 68 % were employed in WB and 32 % in GS; the Palestinian Ministry of Health (MOH) employs about 60 % of them. The number of doctors and nurses per capita increased substantially in OPT over the past two decades, to reach 24 physicians per 10,000 inhabitants and 25 nurses per 10,000 inhabitants in 2013 [1].
Over the last two decades the Palestinian government has carried out concrete steps to increase the effectiveness and efficiency of hospitals in OPT and to contain their cost. MOH is the main entity responsible for governing, regulating, delivering, and financing the health system. The total bed capacity in 2013 was 5619 beds, which translates into 13 beds per 10,000 inhabitants. Beds are distributed across 80 hospitals; 30 are in WB with 3263 beds, making up 58 % of total hospital beds, and the remainder are in GS. About 70 % of hospitals and 47 % of hospital beds in OPT are private and not-for-profit. MOH owns and operates 53 % of total hospital bed capacity (2979 beds) distributed across 24 hospitals [1]. Within governmental hospitals, there are considerable differences from hospital to hospital: some smaller hospitals provide only basic specialist care, while others are specialty hospitals, which limit their care to selected illnesses or patient groups. The average occupancy rate in MOH hospitals is estimated at 85 % in the WB and 78 % in the GS. There are disparities between regions in terms of occupancy rates in the governmental hospitals. The occupancy rate for all Palestinian hospitals, however, is estimated at 65 %, indicating that there is under-utilized service capacity in the private sector.
Health spending as a percentage of GDP increased from 9.2 % in 2000 to 12.3 % in 2011 [2]. This percentage is higher than in any country in the region, and in fact very few countries in the world spend more. While there is a positive correlation between spending on health and income per capita, the higher spending observed in OPT does not seem primarily attributable to greater income. Current health expenditure (CHE) increased from $384.3 million in 2000 to $1262 million in 2012, and CHE per capita grew from $137 in 2000 to $308 in 2011, rising to 226 % of its 2000 level, while GDP per capita increased from $1498 to $2506 in the same years, rising to 167 % of its 2000 level. Based on national income and population size, a linear regression would predict that OPT health spending would be about $229 per capita or 9.1 % of GDP, far less than is actually observed. In fact, CHE increased markedly from 2000 to 2011, driven by increasing salaries to finance excessive health sector employment, the cost of pharmaceuticals, and outsourced health services [3]. About a quarter of the MOH budget was spent on health services outsourced from other providers, and about half of the budget was spent on salaries. Hospitals remain the leading costly units in the Palestinian health system. On average hospitals consumed about 36 % of CHE during the period 2000–2011. Governmental hospitals, however, consumed about 60 % of the MOH budget, yet are inferior to private hospitals in terms of efficiency and quality [4]. In most OECD countries, hospitals also accounted for the highest share of CHE, on average 36 %, ranging from 26 % in Slovakia to 45 % in Denmark. Hospitals in Qatar and Dubai accounted for about 40 and 48 % of CHE, respectively [5].
The economic challenges, in terms of a high rate of poverty and limited financial support, and the political challenges presented by Israeli occupation atrocities against the Palestinian people, the separation and fragmentation of Palestinian communities, and closures remain the main determinants of health in OPT. The ongoing conflict with the Israeli occupation forces has caused measurable deterioration in health status and health services delivery as a result of constrained access to health facilities, health professionals, medical equipment, and pharmaceuticals. The Apartheid Wall encloses about 120,000 inhabitants, hinders their access to hospitals, and confiscates their livelihoods, forcing them into poverty. The separation of health care delivery between WB and GS, and the control of all movement, complicate the ability of MOH to coordinate its activities, often leading to duplication and loss of efficiency. Besides, most hospitals are located inside cities, and many people face difficulties reaching them, especially in Alquds (Jerusalem). Hospitals are overwhelmed by the number of casualties, and their inability to keep up with housekeeping and sterilization has increased the rate of infections reported after discharge from hospitals [3].
The main objective of the study is to measure technical efficiency of governmental hospitals and quantify the effects of number of beds, doctors, nurses and non-medical staff on their technical efficiency.
It has been well established in the literature that inefficiencies in health spending are large [6]. The World Health Organization (WHO) estimates that about 20–40 % of resources spent on health are wasted [7]. The most common causes of inefficiency include inappropriate and ineffective use of medicines, medical errors, suboptimal quality of care, waste, corruption, and fraud [8]. Because of inefficiencies, many countries could achieve the same level of health outcomes with a lower level of spending [9]. Hospital productivity is one measure of the effective use of resources and measures outputs relative to the inputs needed to produce them. Efficiency is the degree to which the observed use of resources to produce outputs of a given quality matches the optimal use of resources to produce outputs of a given quality. So efficiency is a component of productivity and refers to the comparison between actual and optimal amounts of inputs and products. In general, efficiency is productivity adjusted for the impact of environmental factors on performance. Effectiveness is the extent to which the outputs of service providers meet the objectives set for them. Economists distinguish among three main measures of efficiency, namely technical, allocative, and total economic efficiency. Technical efficiency refers to the manner in which resources are employed so as to lead to the greatest level of output; as such, technical efficiency emphasizes the technological aspects of an organization. In the case of a hospital, technical efficiency concerns how the inputs, which are essentially the physical assets, labor, and financial resources, are used to produce both intermediate and final outputs, where examples of the former include the number of patients, waiting time, and so on, while the latter include mortality rates, quality of life measures, and so on [10]. Scale efficiency is a component of technical efficiency. Constant returns to scale (RTS) signify perfect scale efficiency; if a hospital is operating at either increasing or decreasing RTS, it is not scale efficient [11]. Allocative efficiency refers to how an organization is able to use inputs in an optimal manner based on their respective prices and technology. As such, allocative efficiency measures how an organization is able to select the optimal combination of inputs to produce the greatest level of outputs. Total economic efficiency is the combined impact of technical and allocative efficiency.
The literature to date has tended to use a number of different methods to estimate the efficiency of hospitals. In some cases the measures of efficiency are influenced by government policy. A typical example of this is the UK where the National Health Service has developed efficiency performance indicators and labour productivity measures to benchmark the different providers so as to produce rankings [12]. The problem with efficiency measures is that their selection can be subjective, and the final value is highly dependent on the weights used. A more objective and economics based approach is to estimate the production possibility frontier (PPF) which is a locus of potentially efficient output combinations that an organization can employ at a particular point in time. The PPF is the most used method to estimate the efficiency of hospitals. The PPF is considered the efficient frontier as any hospital production at that level is able to achieve an efficient combination of inputs. Similarly, a hospital that does not produce on the efficient frontier is considered to be technically inefficient.
There is no consensus on the best method to measure technical efficiency. Previous studies have identified two methods, namely non-parametric methods initiated by Charnes et al. (1978) [13] and a parametric technique developed by Aigner (1977) [14]. Parametric methods focus on economic optimization, while non-parametric techniques examine technological optimization. The most common estimation technique under the non-parametric approach is data envelopment analysis (DEA). The major advantage of DEA is that it avoids having to measure output prices, which are not available for transactions, services, and fee-based outputs. However, the DEA method is non-stochastic and does not capture random noise such as strikes, and any deviation from the estimated frontier is interpreted as being due to inefficiency. With DEA it is also not possible to conduct statistical tests of hypotheses regarding the inefficiency scores.
On the other hand, the main models under the parametric approach include the Stochastic Frontier Analysis (SFA) of Battese and Coelli (1992; 1995) [15, 16] and Huang and Liu (1994) [17]. While DEA does not separate out the effects of a stochastic error term, SFA disentangles the two sources of error, due to inefficiency and random noise. In SFA approaches it is possible to conduct statistical tests of hypotheses regarding the inefficiency scores. The main advantage of SFA is that it accounts for the traditional random error of regression. SFA represents the production function of the standard regression model but with a composite disturbance term equal to the sum of the two error components [14, 18].
The stochastic frontier production function indicates the existence of technical inefficiency of production [16, 19]. The stochastic frontier divides the distance to the frontier into random error and inefficiency. The random error takes into account exogenous shocks. Criticisms of SFA include the need to specify in advance the mathematical form of the production function and the distributional form of the inefficiency term.
SFA is a parametric technique of frontier estimation that assumes a given functional form for the relationship between inputs and an output [20]. Some SFA modeling approaches for panel data assume a uniform variation for all Decision Making Units (DMUs), such as Battese and Coelli (1992) [16]; others, such as Greene (2005) [21], allow for stochastic variation without any correlation over time. The latter models include three stochastic components, respectively, for efficiency, random noise, and time-invariant heterogeneity. Goudarzi et al. (2014) applied the SFA method to estimate the efficiency of twelve teaching hospitals by analyzing 12 years of panel data, and found remarkable waste of resources [22].
An output-oriented distance function is used to measure the difference between potential and observed output, usually denoted as technical inefficiency. The distance from an observation to the frontier is the measure of technical efficiency. Gerdtham et al. (1999) used multiple-output stochastic ray analysis and panel data on 26 hospitals over 7 years to investigate the effect of reimbursement reform on technical efficiency [23]. Ferrari (2006) used distance functions and panel data on 52 hospitals over 6 years to evaluate the impact of introducing internal competition on technical efficiency [24]. Daidone and D'Amico (2009) adopted a distance function approach while measuring the technical efficiency level with stochastic frontier techniques; they evaluated how the productive structure and level of specialization of a hospital affect technical efficiency by analyzing 6 years of panel data [25].
This paper analyzed the technical efficiency of governmental hospitals, selected on the basis of the most recently available comparable data. Our dataset includes balanced panel data of 22 governmental hospitals over a period of 6 years (2006, 2007 and 2009–2012), providing 132 observations. The two governmental psychiatric hospitals were excluded because their inputs and outputs are different from those of other hospitals. Data were not available for the year 2008. Data were collected from the annual reports of the MOH for 2006, 2007, and 2009–2012 [26–31]. The variables used are defined in Table 1, along with summary statistics. The data consist of inputs to hospital production in the form of capital and labour, and outputs from production. Labour inputs are measured by the number of people employed in each hospital, using full-time equivalent (FTE) staff. Four input variables were included in the efficiency analysis: (1) the number of hospital beds in each hospital in the year, used as an index of capital input; (2) the number of FTE doctors; (3) the number of FTE nurses; and (4) the number of FTE non-medical staff, which included all staff other than nurses and doctors. The categorization of the health workforce into the three categories of doctors, nurses, and non-medical staff was based on the evidence that these categories of resources have different roles in patient care and service delivery [22].
Table 1 Descriptive statistics of the input and output variables
The output of hospital production consists of the sum of inpatients and outpatients in each hospital. Emergency visits were also counted as outpatient visits. Inpatients were measured as the total number of admitted patients within a year. Outpatients were counted as the total yearly number of attendances at outpatient clinics in each hospital. Standard SFA models are limited to only one output. This limitation necessitates aggregation of inpatient and outpatient workload into one variable. Since the 22 hospitals are very different in terms of size and the kind of health care provided, the sum of the numbers of treated inpatients and outpatients might not be adequate: a hospital can have a lower efficiency score because of its mix of products in terms of specialization and not because of resource misuse. To address this issue we used a multi-output distance function model within the SFA to estimate technical efficiency.
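A minimal sketch of this variable construction, assuming a hypothetical CSV with one row per hospital-year (the file and column names are illustrative, not the MOH's):

```python
import numpy as np
import pandas as pd

# Hypothetical balanced panel: 22 hospitals x 6 years = 132 rows.
panel = pd.read_csv("moh_hospitals_2006_2012.csv")   # columns: hospital, year, beds, doctors, nurses, nonmedical, inpatients, outpatients

# Aggregate output used in the Cobb-Douglas and translog models.
panel["output"] = panel["inpatients"] + panel["outpatients"]

# Log-transform inputs and output for the frontier regressions.
for col in ["beds", "doctors", "nurses", "nonmedical", "output"]:
    panel["ln_" + col] = np.log(panel[col])
```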
The two forms of production function most used in the literature to measure hospital inefficiency are the Cobb–Douglas and translog functional forms. In previous studies of hospital efficiency the parametric production function has been represented by a Cobb–Douglas function [32], representing unitary elasticity of substitution. While the Cobb–Douglas form is easy to estimate, its main drawback is that it assumes constant input elasticities and RTS for all hospitals. On the other hand, the translog form does not impose these restrictions but is susceptible to degrees-of-freedom problems and multicollinearity. In this study we estimated three models: the Cobb–Douglas form, the translog form, and the multi-output distance form. The three models used the normal-truncated normal maximum likelihood (ML) random-effects model with time-invariant efficiency developed by Battese and Coelli 1988 [33]. The models were estimated using the xtfrontier command of STATA 12.
The empirical model of Cobb–Douglas function form is given by Eq. 1.
$$\ln\left( y_{it} \right) = \beta_{0} + \sum_{j = 1}^{k} \beta_{j}\, \ln x_{j,it} + \left( V_{it} - U_{it} \right)$$
where $j$ indexes the independent variables, $i$ the decision making units (hospitals), and $t$ the time in years. $\ln$ represents the natural logarithm, $y_{it}$ represents the output of the $i$-th hospital at time $t$, $x_{j,it}$ is the corresponding level of input $j$ of the $i$-th hospital at time $t$, and $\beta$ is a vector of unknown parameters to be estimated. The $v_{it}$ is a symmetric random error that accounts for statistical noise, with zero mean and unknown variance $\sigma_v^2$. The $u_{it}$ is the non-negative random variable associated with the technical inefficiency of hospital $i$; its mean is $m_i$ and its variance is $\sigma_u^2$.
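To make the two error components concrete, the following sketch simulates data from a frontier of the form in Eq. 1; all parameter values are illustrative, not estimates from this study:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(42)
n, k = 132, 4
beta0 = 1.0
beta = np.array([0.2, 0.3, 0.4, 0.1])     # illustrative input coefficients
mu, sigma_u, sigma_v = 0.6, 0.3, 0.15     # illustrative inefficiency and noise parameters

ln_x = rng.normal(size=(n, k))                               # log inputs
v = rng.normal(0.0, sigma_v, size=n)                         # symmetric statistical noise v_it
a = (0.0 - mu) / sigma_u                                     # truncation point so that u_it >= 0
u = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma_u, size=n)  # truncated-normal inefficiency u_it
ln_y = beta0 + ln_x @ beta + v - u                           # composed error (v_it - u_it)
```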
We tailored the Cobb–Douglas function form to the purpose of the current study. So, the Cobb–Douglas production function form is presented in Eq. 2.
$$ln\left( {Outpatient_{it} + Inpatient_{it} } \right) = \beta_{0} + \beta_{1 } \,lnBed_{it} + \beta_{2 }\,lnDoctor_{it} + \beta_{3 }\,lnNurse_{it} + \beta_{4 }\,lnNonmedical_{it}$$
The translog function is very commonly used and it is a generalization of the Cobb–Douglas function. It is a flexible functional form providing a second order approximation. The empirical model of translog function form is given by Eq. 3.
$$\ln\left( y_{it} \right) = \beta_{0} + \sum_{j = 1}^{k} \beta_{j}\,\ln x_{j,it} + \frac{1}{2}\sum_{j = 1}^{k} \sum_{h = 1}^{k} \beta_{jh}\,\ln x_{j,it}\,\ln x_{h,it} + \left( V_{it} - U_{it} \right)$$
where $j$ and $h$ index the independent variables, $i$ the decision making units (hospitals), and $t$ the time in years. $\ln$ represents the natural logarithm, $y_{it}$ represents the output of the $i$-th hospital at time $t$, $x_{j,it}$ is the corresponding level of input $j$ of the $i$-th hospital at time $t$, $x_{j,it}$ times $x_{h,it}$ is the interaction of the corresponding levels of inputs $j$ and $h$ of the $i$-th hospital at time $t$, and $\beta$ is a vector of unknown parameters to be estimated. The $v_{it}$ is a symmetric random error that accounts for statistical noise, with zero mean and unknown variance $\sigma_v^2$. The $u_{it}$ is the non-negative random variable associated with the technical inefficiency of hospital $i$; its mean is $m_i$ and its variance is $\sigma_u^2$.
We tailored the translog function form to the purpose of the current study as presented in Eq. 4.
$$\begin{aligned} ln\left( {Outpatient_{it} + Inpatient_{it} } \right) = \beta_{0} &+ \beta_{1}\,lnBed_{it} + \beta_{2}\,lnDoctor_{it} + \beta_{3}\,lnNurse_{it} + \beta_{4}\,lnNonmedical_{it} \\ &+ \beta_{12}\left( lnBed_{it} \times lnDoctor_{it} \right) + \beta_{13}\left( lnBed_{it} \times lnNurse_{it} \right) \\ &+ \beta_{14}\left( lnBed_{it} \times lnNonmedical_{it} \right) + \beta_{23}\left( lnDoctor_{it} \times lnNurse_{it} \right) \\ &+ \beta_{24}\left( lnDoctor_{it} \times lnNonmedical_{it} \right) + \beta_{34}\left( lnNurse_{it} \times lnNonmedical_{it} \right) \\ &+ 0.5\,\beta_{11}\left( lnBed_{it} \times lnBed_{it} \right) + 0.5\,\beta_{22}\left( lnDoctor_{it} \times lnDoctor_{it} \right) \\ &+ 0.5\,\beta_{33}\left( lnNurse_{it} \times lnNurse_{it} \right) + 0.5\,\beta_{44}\left( lnNonmedical_{it} \times lnNonmedical_{it} \right) \end{aligned}$$
where, β0 is the intercept of the constant term, β1, β2, β3, β4 are first order derivatives, β11, β22, β33, β44 are own second order derivatives and β12, β13, β14, β23, β24, β34, are cross second order derivatives. As a double log form model (where both the dependent and explanatory variables are in natural logs), the estimated coefficients show elasticities between dependent and explanatory variables. The stochastic frontier production function and the technical inefficiency models are jointly estimated by the maximum-likelihood method. We tested the null hypothesis that the Cobb–Douglas function form is an adequate representation of the data.
When using a translog production function the values of the input coefficients themselves do not have an easily interpretable meaning, so to truly assess input effects the marginal effects for each input were estimated using Eq. 5, where the marginal product is equal to the elasticity of scale for each input.
$$e_{j} = \frac{\partial \ln\left( y_{i} \right)}{\partial \ln\left( x_{ji} \right)} = \beta_{j} + \sum_{h = 1}^{4} \beta_{jh} \ln x_{h} + \beta_{jt}$$
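A minimal sketch of this calculation with hypothetical coefficients (not the estimates in Table 2) and the mean input levels from Table 1, ignoring any time term:

```python
import numpy as np

# Hypothetical translog coefficients, ordered [beds, doctors, nurses, non-medical staff].
beta = np.array([0.60, 0.90, -0.40, -1.10])        # first-order coefficients beta_j
B = np.array([                                      # symmetric second-order matrix beta_jh
    [ 0.05, -0.09,  0.00,  0.00],
    [-0.09,  0.07,  0.13, -0.19],
    [ 0.00,  0.13,  0.06, -0.12],
    [ 0.00, -0.19, -0.12,  0.15],
])
ln_x_mean = np.log([120.0, 90.0, 119.0, 152.0])     # mean inputs from Table 1

elasticities = beta + B @ ln_x_mean                 # Eq. 5 evaluated at the sample mean
rts = elasticities.sum()                            # > 1 increasing, = 1 constant, < 1 decreasing RTS
print(elasticities.round(2), round(rts, 2))
```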
The third model estimated was the multi-output distance function. Using a multi-output distance function will allow the specified model of hospital production and inefficiency to be explored without aggregating inpatient and outpatient visits. We tailored the multi-output distance function to the purpose of the current study as presented in Eq. 6.
$$\begin{aligned} ln\left( {Outpatient_{it} } \right) = \beta_{0} &+ \beta_{1}\,lnBed_{it} + \beta_{2}\,lnDoctor_{it} + \beta_{3}\,lnNurse_{it} + \beta_{4}\,lnNonmedical_{it} \\ &+ \beta_{12}\left( lnBed_{it} \times lnDoctor_{it} \right) + \beta_{13}\left( lnBed_{it} \times lnNurse_{it} \right) \\ &+ \beta_{14}\left( lnBed_{it} \times lnNonmedical_{it} \right) + \beta_{23}\left( lnDoctor_{it} \times lnNurse_{it} \right) \\ &+ \beta_{24}\left( lnDoctor_{it} \times lnNonmedical_{it} \right) + \beta_{34}\left( lnNurse_{it} \times lnNonmedical_{it} \right) \\ &+ 0.5\,\beta_{11}\left( lnBed_{it} \times lnBed_{it} \right) + 0.5\,\beta_{22}\left( lnDoctor_{it} \times lnDoctor_{it} \right) \\ &+ 0.5\,\beta_{33}\left( lnNurse_{it} \times lnNurse_{it} \right) + 0.5\,\beta_{44}\left( lnNonmedical_{it} \times lnNonmedical_{it} \right) \\ &+ \beta_{5}\,\ln Y_{it}^{*} + \beta_{51}\left( lnBed_{it} \times \ln Y_{it}^{*} \right) + \beta_{52}\left( lnDoctor_{it} \times \ln Y_{it}^{*} \right) \\ &+ \beta_{53}\left( lnNurse_{it} \times \ln Y_{it}^{*} \right) + \beta_{54}\left( lnNonmedical_{it} \times \ln Y_{it}^{*} \right) + 0.5\,\beta_{55}\left( \ln Y_{it}^{*} \times \ln Y_{it}^{*} \right) \end{aligned}$$
where, Y* is the ratio of outpatient visits to inpatient admissions. β5, is first order derivative, β55 are own second order derivatives and β51, β52, β53, β54, are cross second order derivatives.
Estimating technical efficiency
The technical efficiency of a hospital is defined as the ratio of the observed output ($Y_{it}$) to the maximum feasible output, defined by a certain level of inputs used by the hospital. Thus, the technical efficiency of hospital $i$ at time $t$ can be expressed as in Eq. 7.
$$TE_{it} = E\left[ \exp\left( -u_{it} \right) \mid \left( v_{it} - u_{it} \right) \right]$$
$U_{it}$ represents hospital-specific fixed effects or time-invariant technical inefficiency, and $V_{it}$ is a normally distributed random error term that is uncorrelated with the explanatory (independent) variables.
Since $U_{it}$ is a non-negative random variable, the technical efficiencies lie between 0 and unity, where unity indicates that the hospital is technically efficient. The value of $U_{it}$ is positive and it decreases the efficiency of a unit, which is why it enters with a negative sign ($-U_{it}$). The method of ML is used for estimation of the unknown parameters, with the stochastic frontier and the inefficiency effects estimated simultaneously. Maximum feasible output is determined by the firms with an inefficiency effect equal to 0 ($u_{it} = 0$). Equation 7 was estimated as $E\left[\exp(-u_{it}) \mid e_{it}\right]$ following Battese and Coelli (1988), using the te option of STATA 12.
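For readers who want to reproduce this step outside STATA, a sketch of the cross-sectional version of the Battese–Coelli (1988) estimator is given below; the panel (time-invariant) version replaces the composed residual with the hospital mean residual and rescales the variances accordingly, so this is an approximation rather than the exact xtfrontier computation:

```python
import numpy as np
from scipy.stats import norm

def technical_efficiency(eps, mu, sigma_u, sigma_v):
    """Battese-Coelli (1988) point estimator TE = E[exp(-u) | eps], with eps = v - u,
    for the normal / truncated-normal stochastic frontier (cross-sectional form)."""
    s2 = sigma_u**2 + sigma_v**2
    mu_star = (mu * sigma_v**2 - sigma_u**2 * eps) / s2    # conditional mean of u given eps
    sigma_star = np.sqrt(sigma_u**2 * sigma_v**2 / s2)     # conditional std. dev. of u given eps
    z = mu_star / sigma_star
    return norm.cdf(z - sigma_star) / norm.cdf(z) * np.exp(-mu_star + 0.5 * sigma_star**2)

# Illustrative residuals, combined with the variance estimates reported in the Results below.
te = technical_efficiency(np.array([-0.4, 0.0, 0.3]), mu=0.627,
                          sigma_u=np.sqrt(0.086), sigma_v=np.sqrt(0.022))
```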
Descriptive analysis
Before interpreting the results of the SFA and technical efficiency, the descriptive analysis of the input and output variables is presented in Table 1. The annual average number of inpatient admissions per hospital was about 14,045. The annual average number of outpatient visits per hospital was about 99,765. On an annual basis, each hospital had approximately 120 beds, 90 doctors, 119 nurses, and 152 non-medical staff.
The average levels of all inputs remained almost unchanged from 2006 until 2009. A noticeable increase in all inputs occurred in 2009, after which no further increase is observed in 2010–2012, as shown in Fig. 1.
Fig. 1 Number of beds, doctors, nurses and non-medical staff per hospital, 2006–2012
The first step of the model specification tests concerned the validity of the translog over the Cobb–Douglas specification within the ML framework, using the null hypothesis H0: β11 = β22 = β33 = β44 = β12 = β13 = β14 = β23 = β24 = β34 = 0. With 10 degrees of freedom and a critical value of 18.3, the null hypothesis was rejected, and it was concluded that the translog form (LR = 28.886, p < 0.0001) was more appropriate for the stochastic frontier model than the Cobb–Douglas form (LR = 9.975, p < 0.0001).
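A sketch of this generalized likelihood-ratio test (the log-likelihood values are hypothetical placeholders, not those reported in Table 2):

```python
from scipy.stats import chi2

ll_translog, ll_cobb_douglas = -20.0, -34.4   # hypothetical maximized log-likelihoods of the two fits
lr = 2 * (ll_translog - ll_cobb_douglas)      # LR statistic for the 10 joint restrictions
critical = chi2.ppf(0.95, df=10)              # ~18.31, the critical value quoted above
print(lr, critical, lr > critical)            # True -> reject the Cobb-Douglas restriction
```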
The second step is testing whether there is significant technical inefficiency, using the null hypothesis H0: γ = 0, which tests whether the observed variations in efficiency are simply random or systematic. If gamma (γ) is close to zero, the differences in production are entirely related to statistical noise, while a value of γ close to one reveals the presence of technical inefficiency. The estimate of the parameter γ (0.792), which measures the relative variability of the two sources of error, suggests that 79 % of the total variation in production is related to the inefficiency error term and 21 % is attributable to stochastic random errors. This implies that the variation in total production among the different hospitals was largely due to differences in their production inefficiencies, indicating that the traditional ordinary least squares (OLS) production function is not an adequate representation of our data. We applied the log-likelihood ratio test to assess whether SFA should be used instead of OLS. The null hypothesis that the OLS regression was as appropriate as SFA was rejected, indicating that inefficiency effects should be included. The presence of inefficiency is also confirmed by the high value of the contribution of the inefficiency term (u) to the total error.
The third step of testing concerned the distribution of the inefficiency effects, using the null hypothesis H0: μ = 0. The null hypothesis specifies that each hospital is operating on the technically efficient frontier and that the asymmetric and random technical inefficiency effects are zero. The null hypothesis that the technical inefficiency effects have a half-normal distribution (i.e., μ = 0) was rejected in favour of the alternative that the technical inefficiency effects have a truncated normal distribution. The coefficient mu (μ = 0.627; p < 0.05), the estimate of the mean of the truncated normal distribution (the mean of the error component relating to inefficiency), is statistically significant, indicating that the truncated normal distribution is more appropriate than the half-normal distribution. These results are also confirmed by comparing the values of the variance of the technical inefficiency term (sigma_u2) and the variance of the random error (sigma_v2).
Stochastic frontier analysis
The ML estimates of the stochastic frontier production function were obtained by applying the normal-truncated normal ML random-effects model with time-invariant efficiency developed by Battese and Coelli 1988 [33]. The results obtained with the Cobb–Douglas, translog, and multi-output distance functions are presented in Table 2. The focus was on a stochastic frontier model which assumes time-invariant inefficiencies. This was done because the length of the panel is short and we hoped not to confound a time trend capturing productivity change with one capturing efficiency change. The first section of Table 2 presents the frontier function with four parameters for the Cobb–Douglas function, fourteen parameters for the translog function, and 20 parameters for the multi-output distance function. The second section presents the variance parameters, the value of the log-likelihood function, and the likelihood ratio (LR) test. Table 2 reports estimates of the parameters sigma_u2 (0.086), sigma_v2 (0.022), sigma2 (0.109), lnsigma2 (−2.216), gamma (0.792), ilgtgamma (1.338), and mu (0.627). Sigma_u2 (0.086) is the estimate of $\sigma_u^2$. Sigma_v2 (0.022) is the estimate of $\sigma_v^2$. Sigma2 (0.109) is the estimate of $\sigma_s^2 = \sigma_u^2 + \sigma_v^2$. Because $\sigma_s^2$ must be positive, the optimization is parameterized in terms of $\ln(\sigma_s^2)$, and this estimate is reported as lnsigma2 (−2.216). Gamma is the estimate of $\gamma = \sigma_u^2/\sigma_s^2$. Because $\gamma$ must be between 0 and 1, the optimization is parameterized in terms of the inverse logit of $\gamma$, and this estimate is reported as ilgtgamma (1.338).
Table 2 Maximum likelihood estimates of the stochastic frontier models
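The reported variance parameters are internally consistent up to rounding, as a quick check shows:

```python
import numpy as np

sigma_u2, sigma_v2 = 0.086, 0.022
sigma2 = sigma_u2 + sigma_v2              # 0.108, close to the reported 0.109
gamma = sigma_u2 / sigma2                 # ~0.80, close to the reported 0.792
lnsigma2 = np.log(sigma2)                 # ~ -2.23, close to the reported -2.216
ilgtgamma = np.log(gamma / (1 - gamma))   # gamma on the logit scale, close to the reported 1.338
print(sigma2, gamma, lnsigma2, ilgtgamma)
```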
Output elasticities of input variables
We concluded earlier that the translog model is more appropriate for our data. The results of the translog model indicate that the first order coefficients are not conclusive, as they do not provide much information on the responsiveness of the output to the various inputs. Based on this argument, the output elasticities of each of the inputs at their mean values were calculated using Eq. 5, as shown in Table 3.
Table 3 Output elasticities of input variables (Scale elasticity)
Technical efficiency
Following Battese and Coelli (1988), technical efficiencies were estimated using a one-step maximum likelihood estimation (MLE) procedure, by incorporating the model for technical efficiency effects into the production function. Because a hospital can have a lower efficiency score because of its mix of products in terms of specialization and not because of resource misuse, we estimated technical efficiency using both the translog function and the multi-output distance function, and compared them, as shown in Fig. 2. Panel-data analysis allowed the enlargement of a small cross-section of 22 hospitals into a sample of 132 observations over a period of 6 years.
Fig. 2 Average technical efficiencies of 22 hospitals using both translog and multi-output distance functions
Results from the translog function revealed that the average technical efficiency of hospitals was 55 %, ranging from 28 to 91 %, with a median of 51 %. Results from the multi-output distance function revealed an average technical efficiency of 53 %, ranging from 44 to 91 %, which indicates a 47 % potential for improvement. No hospital was fully efficient during the entire study period. About 5 % of the hospitals (1 hospital) had a technical efficiency between 0.80 and 1.0, and about 36 % of the hospitals (8 hospitals) had a technical efficiency between 0.6 and 0.80. About 59 % of the hospitals (13 hospitals) had a technical efficiency between 0.2 and 0.6.
A paired t test was run on the sample of 22 hospitals over 6 years to determine whether there was a statistically significant mean difference between the technical efficiency obtained using the translog function and that obtained using the multi-output distance function. Technical efficiency of hospitals was lower when using the multi-output distance function (0.527 ± 0.15) than when using the translog function (0.547 ± 0.15); a statistically non-significant decrease of 0.02 (95 % CI −0.055 to 0.014), t = 1.2, p = 0.23. Table 4 shows the average technical efficiency scores for both the standard and multi-output distance functions.
Table 4 Average technical efficiency scores of all 22 hospitals over period of study
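The comparison can be reproduced with a standard paired test once per-hospital scores are available; a minimal sketch with placeholder scores (not the values in Table 4):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
te_translog = rng.uniform(0.28, 0.91, size=22)            # placeholder per-hospital TE scores (translog)
te_distance = te_translog - rng.normal(0.02, 0.08, 22)    # placeholder multi-output distance scores

t_stat, p_value = ttest_rel(te_translog, te_distance)     # paired t test across the 22 hospitals
print(round(t_stat, 2), round(p_value, 3))
```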
The study used balanced cross-sectional time-series panel data of 22 governmental hospitals over a period of 6 years. Results from the Cobb–Douglas model indicate that a 1 % increase in the number of beds and in the number of doctors results in 0.621 and 0.263 % increases, respectively, in hospital production measured by the number of treated inpatients and outpatients. The number of nurses and the number of non-medical staff were not significant for production. However, we concluded from the SFA that the translog model represents the data better than the Cobb–Douglas model, so we rely on the results of the translog model. The first order coefficients in the translog model are not conclusive, as they do not provide much information on the responsiveness of the output to the various inputs, because the translog functional form used precludes normalization of the outputs in the production function to the mean vector. If the variables in the translog model had been mean-corrected to zero, then the first order coefficients would be the estimates of the elasticities at the mean input levels; however, they were not. Consequently, the first order coefficients on the input variables in the translog model are used to calculate the output elasticity with respect to each input in the production function at its mean value. By using mean-scaled variables, it is possible to interpret the first-order coefficients of the translog function as the partial elasticities of production at the sample mean.
The output elasticities measure the responsiveness of output to a change in inputs. Table 3 presents the estimated output elasticities at the mean values of the inputs, or scale elasticities of the inputs. The measure of RTS represents the percentage change in output due to a proportional change in the use of all inputs, and it is estimated as the sum of the output elasticities of all inputs. If this estimate is greater than, equal to, or less than one, we have increasing, constant, or decreasing RTS, respectively. These estimates were 0.15, 0.33, 0.51 and −0.25 for beds, doctors, nurses and non-medical staff, respectively. Doctors and nurses appear to be the most important factors in hospital production. Apparently doctors have a positive influence on the productivity of other inputs, so that their net contribution is positive. This is consistent with the conventional notion that doctors direct the use of non-doctor resources in hospitals. Consistent with our a priori expectation, all inputs except non-medical staff make significant contributions to the optimum production scale. If there is a 1 % increase in the number of beds, number of doctors and number of nurses, holding the number of non-medical staff unchanged, then hospitals exhibit constant RTS (1.0). The sum of all four elasticity coefficients is equal to 0.74, which indicates that the production process has decreasing returns to scale (DRS). So, if hospitals increased all inputs by 1 %, production would increase by about 0.74 %. In other words, hospitals have not operated at the optimum production scale, and the majority of hospitals do not fully achieve the potential scale economies.
The second order coefficients and the interaction-term coefficients in the translog model are almost all statistically significant, except the interaction between beds and nurses and the interaction between beds and non-medical staff. Most interaction coefficients turned out to be highly significant, indicating that the usage levels of the four inputs were interdependent. The results of the SFA show that the number of doctors has a significant effect on production both on its own and through its quadratic and interaction terms, confirming the consistently strong influence of doctors on production in the translog SFA.
Doubling the use of inputs means using these inputs once again in the hospital for the purpose of increasing productivity. Therefore, squaring (doubling) the number of doctors, nurses, and non-medical staff increases hospital output by 0.663, 0.137, and 1.526 units per unit of output, respectively, through the marginal effect. So, investment in doctors, nurses, and non-medical staff yields increasing returns to scale. However, we notice that the coefficient on the number of non-medical staff is significant and negative while the coefficient on the square of the number of non-medical staff is significant and positive. This indicates that hospitals with a lower number of non-medical staff are more productive than hospitals with a higher number of non-medical staff: a decrease in the number of non-medical staff in each hospital will result in an improvement of production. It is interesting to note that the coefficient on the number of doctors and the coefficient on the square of the number of doctors are both significant and positive. This indicates that hospitals with a lower number of doctors are less productive than hospitals with a higher number of doctors: an increase in the number of doctors in each hospital will result in an improvement of production. We also notice that the coefficient on the number of nurses is negative and not significant, while the coefficient on the square of the number of nurses is significant and positive. This indicates that an increase in, or doubling of, the number of nurses will result in a large improvement of hospital production.
The coefficient of interaction between beds and doctors is negative when both first order coefficients are positive. The number of beds has two effects on hospital output, through the direct effect, the number of beds directly and positively affects output, and through the indirect effect, the number of beds changes the effect of number of doctors, nurses, and non-medical staff on the output. The negative sign on interaction between beds and doctors indicates some substitutability of doctors and hospital beds. The results indicate that 1 % increase in number of beds should reduce the number of doctors required by 0.9 %. This means that in the presence of more than required beds, doctors productivity could be reduced leading to lower output level. This may reflect a higher tendency for doctors to keep patients hospitalized longer and to utilize more ancillary services, which may reduce the number of treated patients. However, the interactions of beds with nurses and non-medical staff were not significant, suggesting that inclusion of outpatients as a component in the output reduces the importance of hospital beds in the production of the hospital.
Doctors and nurses are complementary, as indicated by the positive sign observed for the interaction between doctors and nurses. The first order coefficients are positive for doctors and negative for nurses. The number of doctors has two effects on hospital output: through the direct effect, the number of doctors directly and positively affects output, and through the indirect effect, the number of doctors changes the effect of the number of nurses on output.
The results indicate that 1 % increase in number of doctors should increase the number of nurses required by 1.3 %.
The number of non-medical staff has two effects on hospital output. Through the direct effect, the number of non-medical staff directly and negatively affects output, and through the indirect effect, the number of non-medical staff changes the effect of the numbers of doctors and nurses on output. The negative sign on the interaction between doctors and non-medical staff indicates that they are substitutes, suggesting that health care delivery in the hospital involves many other tasks than just the direct interaction of doctors and patients, and reflects the importance of non-medical staff. The results indicate that a 1 % increase in the number of non-medical staff should reduce the numbers of doctors and nurses required by 1.89 and 1.2 %, respectively. Key strategies for increasing non-medical staff productivity include better management of overtime and sickness absence, and maintaining a rich clinical skill-mix when reducing the overall numbers.
The average technical efficiency of hospitals calculated with the standard translog function was 55 %, and ranged from 28 to 91 %, which indicates a 45 % potential for improvement through more effective use of the input bundle given the present state of technology. These technical efficiency scores are comparable to those reported by Goudarzi et al. (2014) [22], where technical efficiency was about 59 %. However, the average technical efficiency score is considerably low compared with the efficiency of hospitals in Saudi Arabia, with a technical efficiency score of 84.6 % [34], and OECD countries, with technical efficiency scores ranging from 62 to 96 % [35]. A study in the Netherlands demonstrates that the average efficiency of Dutch hospitals is 84 % [36].
Similar to previous studies on hospital efficiency, our study suffers from several limitations. First, because a simple empirical model was used in this study, there is a possibility of an omitted variables problem, which may bias the estimation of the time-invariant component of hospital production efficiency. The inputs and outputs used in this research allowed us to perform efficiency analyses of hospitals, but we should also recognize their weaknesses. The number of beds is used to proxy capital inputs, but hospitals may also use different technologies, so we assumed that the compared hospitals use similar levels of technology. Also, using the number of beds instead of active beds may result in huge differences across hospitals in efficiency in terms of occupancy rates and duration of stay. On the other hand, the labour input we use is a good standard measure and sufficiently captures the variation of labour inputs between hospitals. Turning to the output measures, the output measure is not adjusted for quality or case mix, and differences in the severity of cases may affect the number of cases hospitals dealt with relative to their staff numbers and could therefore have an impact on the results of the analysis. Research highlights the need for using case-mix adjusted data in analyzing hospital efficiency [37]. Second, a relatively small sample size and a short time interval of 6 years may limit the generalizability and estimation efficiency of our results. Despite our best efforts to obtain the necessary information to construct our production function model, panel data were only available for 6 years and for governmental hospitals. Data were not available for private hospitals. As a result, our sample is not representative of all OPT hospitals. Finally, the model outlined above following Battese and Coelli, 1988 [33] assumes that the inefficiency effects are time invariant. In a panel with many years this assumption may be questionable, so we wish to return to this topic in future research and estimate a model that assumes time-varying inefficiency for comparison purposes. Despite these limitations, there is evidence to suggest that there are considerable efficiency gains yet to be made by MOH.
Technical efficiency analysis is used as a review tool to assess decisions regarding the allocation of human and capital resources. This study measured the technical efficiency of governmental hospitals using SFA. The average technical efficiency of governmental hospitals was approximately 55 %, meaning that about 45 % of the production factors are wasted during the service delivery process in the hospitals. Using the existing amount of resources, the amount of delivered outputs can be doubled, which can significantly impact patient outcomes. Despite continued government investment in the hospital sector through capital hospital expansion, workforce hiring, and the promotion of new technology, hospitals have remained relatively inefficient. The efficiency scores provide insight into the mismanagement of available resources. Improving efficiency while containing cost is a key policy challenge in OPT. Higher spending on hospitals will not necessarily translate into effective results if spending is not directed towards the most cost-effective interventions. A variety of strategic options are available, and governmental hospitals show varying capacity to adopt these options. To address inefficiency in hospitals, policy makers may increase output in terms of treated patients, reduce inputs, and change organization and processes in hospitals. Interventions to improve the quality of management in hospitals could help to improve efficiency. International benchmarking of hospital efficiency helps to provide more insights into the sources of hospital inefficiency. Given the positive effect of increasing the numbers of doctors and nurses on efficiency, these numbers should be increased; key strategies for increasing non-medical staff efficiency, by contrast, include reducing their numbers. Offering the option of early retirement, limiting hiring, and transferring staff to primary health care centers are possible options. It is crucial to maintain rich clinical skill-mix teams of the health workforce, and to effectively manage overtime and sickness absence, when implementing such measures.
This article was an attempt to measure the technical efficiency of governmental hospitals in OPT in order to inform future health policy making and health planning. Internationally, the results contribute to the growing literature on SFA methodology. It is also an invitation to other researchers in the field to apply further quantitative techniques to provide deeper insight into how governmental and private hospitals manage their human and capital resources. Only this kind of understanding can assure us that we are moving forward in the effort to enhance the efficiency of the health system.
Palestinian Ministry of Health. National Health Strategy 2014–2016. 2014.
Palestinian Central Bureau of Statistics. National Health Accounts 2000–2008. Main Findings. 2011.
World Bank. West bank and Gaza health system resiliency strengthening project. 2015.
Sabella A, Kashou R, Omran O. Assessing quality of management practices in Palestinian hospitals. Int J Organ Anal. 2015;23(2):213–32.
Hamidi S. Evidence from the national health account: the case of Dubai. Risk Manag Healthc Policy. 2014;7(1):163–75.
Gupta S, Verhoeven M. The efficiency of government expenditures: experiences from Africa. J Policy Model. 2001;23:433–67.
World Health Organization (WHO). World Health Organization (WHO) global health expenditure atlas. Geneva: World Health Organization; 2014. Available from: http://www.who.int/health-accounts/atlas2014.pdf [cited 2015 August 8].
World Health Organization (WHO). World health report 2010, health systems: improving performance. Geneva: World Health Organization; 2010.
Clements B, Coady D, Gupta S. The economics of public health care reform in advanced and emerging economies. Washington: International Monetary Fund, IMF Publications; 2012.
Palmer ST, Torgerson D. Economic notes: definitions of efficiency. BMJ. 1999;318(7191):1136.
Salerno C. What we know about the efficiency of higher education institutions: the best evidence. Netherlands: University of Twente; 2003. Netherlands External Research Report: http://doc.utwente.nl/47097/1/bhw-99-bgo99.pdf.
Hollingsworth BP, Parkin D. Developing efficiency measures for use in the NHS, a report to the NHS executive northern & Yorkshire R&D directorate health economics group. Newcastle: University of Newcastle; 1998.
Charnes A, Cooper WW, Rhodes E. Measuring the efficiency of decision making units. Eur J Oper Res. 1978;2:429–44.
Aigner DJ, Lovell CAK, Schmidt P. Formulation and estimation of stochastic frontier production function models. J Econom. 1977;6(1):21–37.
Battese GE, Coelli TJ. Frontier production functions, technical efficiency and panel data: with application to paddy farmers in India. J Product Anal. 1992;3:153–69.
Battese GE, Coelli TJ. A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empir Econ. 1995;20:325–32.
Huang CJ, Liu JT. Estimation of a non-neutral stochastic frontier production function. J Product Anal. 1994;5:171–80.
Meeusen W, Van Den Broeck J. Efficiency estimation from Cobb–Douglas production functions with composed error. Int Econ Rev. 1977;18(2):435–44.
Kumbhakar SC, Lovell KCA. Stochastic frontier analysis. Cambridge: Cambridge University Press; 2000.
Coelli TJ, Rao DSP, O'Donnell CJ, Battese GE. An introduction to efficiency and productivity analysis. 2nd ed. New York: Springer; 2005.
Greene W. Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. J Econom. 2005;126:269–303.
Goudarzi R, Pourreza A, Shokoohi M, Askari R, Mahdavi M, Moghri J. Technical efficiency of teaching hospitals in Iran: the use of stochastic frontier analysis, 1999–2011. Int J Health Policy Manag. 2014;3(2):91–7.
Gerdtham UG, Lothgren M, Tambour M, Rehnberg C. Internal markets and health care efficiency: a multiple-output stochastic frontier analysis. Health Econ. 1999;8:151–64.
Ferrari A. The internal market and hospital efficiency: a stochastic distance function approach. Appl Econ. 2006;38:2121–30.
Daidone S, D'Amico F. Technical efficiency, specialization and ownership form: evidences from a pooling of Italian hospitals. J Product Anal. 2009;32(3):203. doi:10.1007/s11123-009-0137-7.
Palestinian Ministry of Health. Ministry of Health Annual report 2006. 2007.
Mortimer D. A systematic review of direct DEA vs SFA/DFA comparisons. Centre for health and evaluation. Australia: 2002. Working paper 136.
Battese GE, Coelli TJ. Prediction of firm-level technical inefficiencies with a generalized frontier production function and panel data. J Econom. 1988;38:387–99.
AbouEl-Seoud M. Measuring efficiency of reformed public hospitals in Saudi Arabia: an application of data envelopment analysis. Int J Econ Manag Sci. 2013;2(9):44–53.
Varabyova Y, Schreyogg J. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and nonparametric approaches. Health Policy. 2013;112(1–2):70–9.
Ludwig M. Efficiency of Dutch hospitals [Doctoral Thesis]. Netherlands: Maastricht University; 2008.
Gannon B. Technical efficiency of hospitals in Ireland, research program on health services, health inequalities and health and social gain. 2005. Working paper no 18.
Samer Hamidi, Chair of Health Studies Department, School of Health and Environmental Studies, Hamdan Bin Mohammed Smart University, P.O. Box 71400, Dubai, United Arab Emirates.
Correspondence to Samer Hamidi.
Hamidi, S. Measuring efficiency of governmental hospitals in Palestine using stochastic frontier analysis. Cost Eff Resour Alloc 14, 3 (2016) doi:10.1186/s12962-016-0052-5
June 2011, 6(2): 195-240. doi: 10.3934/nhm.2011.6.195
Convergence of discrete duality finite volume schemes for the cardiac bidomain model
Boris Andreianov 1, Mostafa Bendahmane 2, Kenneth H. Karlsen 3 and Charles Pierre 4
1. Laboratoire de Mathématiques CNRS UMR 6623, Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, France
2. Université Victor Ségalen - Bordeaux 2, 146 rue Léo Saignat, BP 26, 33076 Bordeaux, France
3. Centre of Mathematics for Applications, University of Oslo, P.O. Box 1053, Blindern, N–0316 Oslo, Norway
4. Laboratoire de Mathématiques et Applications, Université de Pau et du Pays de l'Adour, Av. de l'Université, BP 1155, 64013 Pau Cedex, France
Received October 2010 Revised March 2011 Published May 2011
We prove convergence of discrete duality finite volume (DDFV) schemes on distorted meshes for a class of simplified macroscopic bidomain models of the electrical activity in the heart. Both time-implicit and linearised time-implicit schemes are treated. A short description is given of the 3D DDFV meshes and of some of the associated discrete calculus tools. Several numerical tests are presented.
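For orientation only, here is a hedged sketch of the macroscopic bidomain system in a commonly used simplified form; the notation is illustrative, and the precise model, assumptions and boundary conditions are those stated in the paper. Writing $u_i, u_e$ for the intra- and extracellular potentials, $v=u_i-u_e$ for the transmembrane potential, $M_i, M_e$ for the anisotropic conductivity tensors and $h$ for a cubic-type ionic (reaction) term, the system reads
$$\partial_t v - \nabla\cdot\big(M_i(x)\nabla u_i\big) + h(v) = I_{\mathrm{app}}, \qquad \partial_t v + \nabla\cdot\big(M_e(x)\nabla u_e\big) + h(v) = I_{\mathrm{app}},$$
supplemented with initial data for $v$ and no-flux boundary conditions. The degenerate parabolic character comes from the fact that only the difference $v=u_i-u_e$, and not $u_i$ or $u_e$ separately, appears under the time derivative.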
Keywords: 3D DDFV convergence, cardiac electrical activity, bidomain model, degenerate parabolic PDE, finite volume schemes.
Mathematics Subject Classification: Primary: 65M08, 65M12; Secondary: 92C30, 35K65.
Citation: Boris Andreianov, Mostafa Bendahmane, Kenneth H. Karlsen, Charles Pierre. Convergence of discrete duality finite volume schemes for the cardiac bidomain model. Networks & Heterogeneous Media, 2011, 6 (2) : 195-240. doi: 10.3934/nhm.2011.6.195
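As a much-simplified illustration of the kind of discretisation involved, and not the DDFV scheme analysed in the paper, the following sketch discretises a 1D monodomain-type reaction-diffusion caricature of the bidomain system with cell-centred finite volumes, two-point fluxes, zero-flux boundaries and a linearised time-implicit step (diffusion implicit, reaction explicit); all parameter values are invented for illustration.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# dv/dt = d * v_xx - v (v - a)(v - 1) on (0, L), zero-flux boundaries.
L, N, T = 1.0, 200, 2.0            # domain length, number of cells, final time
dx, dt = L / N, 1e-3
d, a = 1e-3, 0.1                   # diffusion coefficient, excitation threshold

x = (np.arange(N) + 0.5) * dx      # cell centres
v = np.where(x < 0.1, 1.0, 0.0)    # initial stimulus on the left part of the domain

# Implicit diffusion operator (I - dt*d*Lap_h) with two-point fluxes and
# zero-flux (Neumann) boundary faces.
mu = dt * d / dx**2
main = np.full(N, 1.0 + 2.0 * mu)
main[0] = main[-1] = 1.0 + mu
off = np.full(N - 1, -mu)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lu, piv = lu_factor(A)             # factor once, reuse at every time step

for _ in range(int(T / dt)):
    reaction = -v * (v - a) * (v - 1.0)        # reaction treated explicitly
    v = lu_solve((lu, piv), v + dt * reaction) # implicit diffusion solve

print("approximate front position:", x[np.argmax(v < 0.5)])

The DDFV construction of the paper differs in essential ways: it works on general distorted 2D and 3D meshes, carries unknowns on both cells and vertices, and relies on a discrete duality between gradient and divergence operators rather than on the simple two-point fluxes used above.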