Comparing the speed and accuracy of approaches to betweenness centrality approximation
John Matta (ORCID: orcid.org/0000-0002-7666-1409)1,
Gunes Ercal1 &
Koushik Sinha2
Many algorithms require doing a large number of betweenness centrality calculations quickly, and accommodating this need is an active open research area. There are many different ideas and approaches to speeding up these calculations, and it is difficult to know which approach will work best in practical situations.
The current study judges the performance of betweenness centrality approximation algorithms by running them under conditions that practitioners are likely to experience. For several approaches to approximation, we run two tests, clustering and immunization, on identical hardware, along with a process to determine appropriate parameters. This allows an across-the-board comparison of techniques along the dimensions of speed and accuracy.
Overall, the time required to compute betweenness centrality can be reduced by several orders of magnitude by using approximation algorithms. We find that the speeds of individual algorithms can vary widely based on input parameters. The fastest algorithms utilize parallelism, either in the form of multi-core processing or GPUs. Interestingly, getting fast results does not require an expensive GPU.
The methodology presented here can guide the selection of a betweenness centrality approximation algorithm depending on a practitioner's needs and can also be used to compare new methods to existing ones.
Many centrality measures exist, utilizing both local and global information for quantifying the importance of a node in a network. Local measures such as degree and closeness centrality can be computed in linear time [1], but are limited in their usefulness. Global measures have a much higher time complexity, but have a wider variety of applications. Betweenness centrality [2] is an important global measure based on the shortest paths through an entire network. Given a graph \(G = (V, E)\), where V is a set of vertices and \(E \subset \left( {\begin{array}{c}V\\ 2\end{array}}\right)\) is a set of edges, the betweenness centrality of vertex \(v \in V\) is defined as
$$\begin{aligned} \mathrm {BC}(v) = \sum _{s\ne v\ne t} \frac{\sigma _{st}(v)}{\sigma _{st}}, \end{aligned}$$
where \(\sigma _{st}\) is the number of shortest paths from node s to node t, and \(\sigma _{st}(v)\) is the number of those paths that pass through vertex v.
Betweenness centrality has been applied to problems as diverse as cancer diagnosis [3], network flow [4], and the measurement of influence in social networks [5]. There are many applications involving a large number of betweenness centrality calculations, such as power grid contingency analysis [6] (which is so computationally intensive that it must be done on a supercomputer), Girvan–Newman community detection [7], network routing [8], skill characterization in artificial intelligence [9], analysis of terrorist networks [10], node-based resilience clustering [11], and the modeling of immunization strategies [12]. In these applications, the vertex with the highest betweenness centrality must be determined for a continually changing graph O(n) times.
The most efficient known algorithm for calculating exact betweenness centralities is Brandes' algorithm [13]. For unweighted graphs, this algorithm is based on a modification of breadth-first search (BFS) and can calculate the betweenness centrality of every node in a graph with a time complexity of O(|V||E|). Many algorithms used in the biological and social sciences, such as those mentioned above, rely on repeatedly calculating betweenness centrality for every node in a network. The relevant networks examined, such as online social networks, can become arbitrarily large, making Brandes' algorithm impractical for many realistic scenarios. Therefore, the problem of performing a large number of betweenness centrality calculations efficiently is an active open research area [14] to which this work contributes.
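To make the structure of Brandes' algorithm concrete, the following is a minimal Python sketch for unweighted, undirected graphs (a networkx graph is used only as an adjacency container). It follows the standard two-phase pattern of a breadth-first search from each source followed by a reverse accumulation of pair dependencies; it is a sketch for illustration, not the optimized C++ implementations used later in this paper.

```python
from collections import deque
import networkx as nx

def brandes_betweenness(G):
    """Exact betweenness centrality for an unweighted, undirected graph G (Brandes 2001)."""
    bc = dict.fromkeys(G, 0.0)
    for s in G:
        # Phase 1: BFS from s, counting shortest paths (sigma) and recording predecessors.
        sigma = dict.fromkeys(G, 0)
        dist = dict.fromkeys(G, -1)
        preds = {v: [] for v in G}
        sigma[s], dist[s] = 1, 0
        order = []                       # vertices in non-decreasing distance from s
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in G.neighbors(v):
                if dist[w] < 0:          # w discovered for the first time
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:   # edge (v, w) lies on a shortest path from s
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: back-propagate dependencies in reverse BFS order.
        delta = dict.fromkeys(G, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each unordered pair {s, t} is accumulated from both endpoints; halving matches
    # networkx's unnormalized convention for undirected graphs.
    return {v: b / 2.0 for v, b in bc.items()}

if __name__ == "__main__":
    # Small illustrative check against networkx on a random graph.
    G = nx.erdos_renyi_graph(200, 0.05, seed=1)
    ours = brandes_betweenness(G)
    ref = nx.betweenness_centrality(G, normalized=False)
    assert all(abs(ours[v] - ref[v]) < 1e-6 for v in G)
```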
We wish to compare different ways of decreasing the execution time of the above algorithms, and both speed and accuracy are important in the assessment of outcomes. Simply comparing the results of approximate betweenness centralities to the true values is not necessarily useful for several reasons. First, several of the aforementioned algorithms use betweenness centrality as a measure of relative importance with which to rank vertices, and only the correct ranking is required. An approximation that produces some incorrect values but preserves the correct ranking will perform just as well as exact calculations. Second, for some of the algorithms, even the exact centrality ordering is not critical. For example, it is noted in works such as [15, 16] that only the top vertex, or the top k vertices, with the highest betweenness centrality are required. In light of such considerations, approximation algorithms for betweenness centrality that consistently and efficiently discover the highest betweenness node or nodes may be more useful than slower traditional methods computing the betweenness centrality for every node exactly.
The first major study to attempt to comprehensively benchmark betweenness centrality calculations is by Al-Ghamdi et al. [17]. They use a supercomputer to compute exact betweenness centralities for large graphs, and then compare the results from approximations to these "gold standard" values. They run tests by benchmarking seven different approximation methods, including methods by Brandes and Pich [18], Bader et al. [19], Riondato and Kornaropoulos [20], Geisberger et al. [21], Riondato and Upfal [22], Pfeffer and Carley [23], and Everett and Borgatti [24]. As we are interested in comparing the benchmarking performances obtained in [17] with our results, we examine the first four of the listed methods in this paper along with several other algorithms. Whereas the focus of the work by Al-Ghamdi et al. [17] is on the process of benchmarking itself, the focus of our work is to clarify conditions under which the approximation algorithms are sufficiently accurate and fast for the important applications considered. For example, most methods use parameters which control the trade-off between speed and accuracy. Results in [17] are based on the use of only one set of parameters, while our work follows a methodical process which yields insight into the process of finding and using the best parameters given a preferred outcome.
The present work is an extension of the conference paper A Comparison of Approaches to Computing Betweenness Centrality for Large Graphs [25]. Whereas the previous paper compared only two approaches to increasing the speed of a large number of betweenness centrality calculations, this paper compares many different approaches: eight are fully tested, and experiences with several others are reported upon. The previous paper used a clustering test as the benchmark for comparison. Here we add a second benchmark, a test based on the immunization problem. A third addition is the comparison of our results to [17]. While [17] is an important related work, we come to very different conclusions, and the reasons for the differences are analyzed. Finally, because the trade-off between speed and accuracy of approximation methods is determined by user-supplied parameters, this paper gives detailed notes on parameter selection meant to be useful to practitioners.
The rest of the paper is organized as follows. "Background" discusses research in the area of betweenness centrality calculation and provides a useful survey of many different heuristic and approximation methods. "Methods" describes and justifies the tests and graphs used, and the process of carrying out the experiments. "Experimental results" describes individual results for ten different estimation techniques and summarizes performance based on both speed and accuracy. A description of each algorithm including its time complexity is given in this section. Finally, "Conclusion" contains a brief summary of the paper.
Attempts to increase the speed of betweenness centrality calculations go back at least to Brandes's original faster algorithm in 2001 [13]. Brandes's paper notably establishes a time complexity for exact calculation of O(|V||E|) (for unweighted graphs) against which many other algorithms are compared. Brandes is also known for his paper on variants of betweenness centrality, which provides ideas for efficient implementations [26].
Betweenness centrality is a shortest-paths based measure, and computing it involves solving a single-source shortest paths (SSSP) problem. One heuristic strategy for speeding the overall computation is to solve the SSSP problem for a smaller set of vertices. Brandes and Pich's 2007 work (hereafter referred to as Brandes2007) creates estimates that "are based on a restricted number of SSSP computations from a set of selected pivots" [18]. The idea of using a sample, based on an earlier technique for speeding closeness centrality calculations [27], is tested using different sampling strategies, such as uniformly at random, random proportional to degree, and maximizing/minimizing distances from previous pivots. It is determined that a randomly chosen sample gives best results.
Bader et al. [19] introduce an adaptive sampling algorithm (Bader2007) which, like Brandes2007, reduces the number of SSSP computations through choosing representative samples. The sampling is called adaptive because "the number of samples required varies with the information obtained from each sample." For example, a vertex whose neighbors induce a clique has betweenness centrality of zero, and its inclusion in the sample does not provide much information. Geisberger, Sanders and Schultes develop a method termed Better Approximation in [21] (Geisberger2008), in which the process for selecting the pivot nodes is improved. To further increase speed, if an arbitrary number k of vertices is selected, the algorithm can be parallelized for up to k processors. The overall idea of sampling is generalized by Chehreghani in [28], where a framework is given for using different sampling techniques, and a strategy for choosing sampling techniques that minimize error is discussed. Further discussion of a large number of sampling strategies can be found in [29].
Riondato and Kornaropoulos introduce a different idea for sampling in [20] (Riondato2016). In this sampling algorithm, betweenness centrality computations are based not on pivots as in Bader2007 and Brandes2007, but on a predetermined number of samples of node pairs. This fixed sample size results in faster computation with the same probabilistic guarantees on the quality of the approximation. Riondato and Upfal present another technique, called ABRA, which uses Rademacher averages to estimate betweenness centrality [22].
KADABRA (the ADaptive Algorithm for Betweenness via Random Approximation) is described by Borassi and Natale in [30] (Borassi2016). With KADABRA, speed is achieved by changing the way breadth-first search is performed and by taking a different approach to sampling. The use of parallelization also helps to make it very fast. In addition to computing betweenness centrality for every vertex, the algorithm can be configured to compute the set of the k most-central vertices. This turns out to be an important property that gives the algorithm great speed in applications where only the top-k vertices are needed. Along these same lines, work by Mumtaz and Wang focuses on identifying the top-k influential nodes in networks using an estimation technique which is based on progressive sampling with early stopping conditions [31].
In general, the time complexity of the sampling algorithms (such as those described above) is based on the O(|V||E|) time complexity of Brandes' algorithm, but is proportional to the number of samples. The algorithms therefore have a practical time complexity of O(k|E|), where k is the number of samples used. The fact that all of these worst-case time complexities are the same makes the actual performance of the algorithms difficult to judge and says nothing about the accuracy obtained.
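Because the practical cost is driven by the number of sampled sources, the effect is easy to illustrate with networkx, whose betweenness_centrality function accepts a k parameter that samples k source vertices and rescales the result by |V|/k. The graph and sample count below are illustrative only; what matters for the ranking-based applications discussed above is how well the top vertices are preserved.

```python
import networkx as nx

# Illustrative graph; not one of the networks used in this paper.
G = nx.erdos_renyi_graph(1000, 0.02, seed=3)

exact = nx.betweenness_centrality(G, normalized=False)
approx = nx.betweenness_centrality(G, k=100, normalized=False, seed=7)  # 100 sampled sources

# For ranking-based uses, check how well the top vertices are preserved.
top_exact = sorted(exact, key=exact.get, reverse=True)[:10]
top_approx = sorted(approx, key=approx.get, reverse=True)[:10]
print("overlap in top-10:", len(set(top_exact) & set(top_approx)))
```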
A non-sampling strategy for reducing calculation time is to exploit the properties of the networks being considered. In these algorithms, the graph is partitioned in ways that preserve betweenness centralities, but allow them to be computed on smaller graphs. One such algorithm is BADIOS (Bridges, Articulation, Degree-1, and Identical vertices, Ordering, and Side vertices), introduced by Sariyüce et al. [32] and developed further in [33]. BADIOS relies on "manipulating the graph by compressing it and splitting into pieces so that the centrality computation can be handled independently for each piece". Erdös et al. present an exact, divide-and-conquer algorithm in [34]. This algorithm also relies on partitioning, causing all computations to be run "over graphs that are significantly smaller than the original graph" [34]. Baglioni et al. rely on the sparseness of social networks to reduce the size of graphs [35], and work by Li et al. [36] relies on the inherent community structure of real-world networks to do "hierarchical decomposition" of a graph. These strategies speed computation by shrinking the size of the graph that must be computed. Chehreghani et al. [37] have an algorithm that combines both graph reduction and sampling techniques, although it applies only to directed graphs.
As with the sampling algorithms, the time complexity for the network-reducing algorithms is, in the worst case, the same O(|V||E|) as computing the entire graph, and performance must be measured empirically.
The calculation of many elements of betweenness centrality can be run in parallel, and this is a widely attempted strategy. Sariyüce et al. have developed algorithms for use with heterogeneous architectures, by which they mean using multiple CPU cores and a GPU concurrently. Their algorithm is called "gpuBC" [38], and they borrow ideas from BADIOS, as well as introducing new ideas specific to GPU architecture.
In addition to [21, 30, 38], ideas for distributed methods include those by Wang and Tang [39], Shi and Zhang [40], and works by David Bader with Adam McLaughlin [41] and other authors [42, 43]. McLaughlin and Bader introduce Hybrid-BC in [41] (McLaughlin2014). The term hybrid refers to the use of a combination of approaches. The first approach is edge-parallel, where each edge is assigned to a different thread. The second approach is called work-efficient, where each vertex is handled by a separate thread in a manner that minimizes redundant or wasted work. At each iteration, either an edge-parallel or work-efficient algorithm is chosen depending on the overall structure of the graph. Other algorithms exploiting extreme parallelism involving multiple GPUs are explored by Bernaschi et al. [44, 45]. In [46], Ostrowski explores using big-data techniques such as MapReduce to decompose a graph, resulting in faster computations.
Some ideas are radically different and offer unique perspectives on the problem of speeding up betweenness centrality calculations. Yoshida's [1] paper on adaptive betweenness centrality is such a work. The term adaptive betweenness centrality is defined in [12] and refers to computing betweenness centrality for a succession of vertices "without considering the shortest paths that have been taken into account by already-chosen vertices" [1]. In effect, adaptive betweenness centrality means that once the vertex with top betweenness centrality is chosen, it is removed from the network along with its incident edges. Subsequent betweenness centralities are calculated based on this newly configured network. This description matches exactly the method of calculation used in our benchmark problems. Like Yoshida's method, the HEDGE algorithm introduced in [47] works with evolving networks and has provable bounds on the accuracy of the approximation. An additional algorithm with applicability to dynamic networks is described in [48].
One last speed enhancing strategy is to substitute a different measure that is easier to compute, but has results that correlate with betweenness centrality. This has been attempted using such measures as Katz centrality [49], closeness centrality [1], \(\kappa\)-path centrality [50], and routing betweenness centrality [51]. A variation on this strategy is to use local measures to estimate global betweenness centralities [52].
While numerous other proposals may exist for speeding up betweenness centrality calculations, the aforementioned algorithms are a good representative sample. The prominent ideas are summarized as follows:
Compute fewer shortest paths, by extrapolating information from pivots, or otherwise limiting the number of SSSP problems solved [18,19,20,21,22].
Partition the graph in a way that preserves betweenness centralities, but allows them to be computed on a smaller vertex set [33, 34, 38].
Use multiple cores or a GPU to compute shortest paths simultaneously, or otherwise parallelize an algorithm [21, 30, 38, 41].
Use an adaptive approach, which, instead of removing nodes and recalculating in a loop, attempts to calculate directly the succession of top betweenness centrality vertices, taking into account the removal of previous vertices [1].
Identify influential nodes using easier to compute measures that are correlated with betweenness centrality such as coverage centrality [1], or \(\kappa\)-path centrality [50].
The goal of this work is to compare the speed and accuracy of several different approximation algorithms for betweenness centrality in the context of important applications. For each algorithm, we attempt to perform two tests. First, a clustering test is performed on a series of six 10,000-node graphs, with parameters chosen strategically to balance speed and accuracy; if results are promising, we continue the clustering test with six 100,000-node graphs. Second, we run the immunization test on three popular graphs from SNAP [53]: email-Enron, p2p-Gnutella31, and soc-sign-epinions. If results are exceptional, we run the immunization test on a fourth graph, web-Google. In this section we describe the clustering test, the immunization test, the graphs used, and the strategy for choosing parameters.
The scope of the experiment
An important consideration in the design of the experiments is the size of the networks being tested. Because different runs can take different amounts of time, even on the same computer, we wanted to test graphs large enough that the algorithms took at least a few minutes to compute. This way, the magnitude of the duration is being examined more than the exact duration. In initial testing, graphs of approximately 5000 nodes took only a few seconds to run and were considered too small to give an accurate impression of the speedup of the techniques tested. Initial experiments on a graph with 10,000 nodes (Graph 6, described below) took approximately 48 h. Most of the estimation techniques were able to reduce this time to less than an hour. In addition, works such as [17] use graphs of approximately 10,000, 100,000, and 1,000,000 nodes, so it was thought that using these graph sizes would facilitate comparison of results. With the real-life datasets, we followed the same strategy, using the Enron and Gnutella datasets with approximately 10,000 nodes, Epinions with approximately 100,000 nodes, and Google, which approaches 1,000,000 nodes. The difficulty of the clustering test is great, and even the fastest approximation techniques would take weeks to complete it on million-node graphs. Therefore, only the immunization test was attempted on the largest graph.
The clustering test
The first evaluation of approximation approaches is conducted with the clustering test, which uses a graph clustering methodology called node-based resilience measure clustering (NBR-Clust). A specific variant of NBR-Clust is first discussed in [54], and the full framework is defined in [11]. As implied by the name, node-based resilience clustering takes a resilience measure R(G) as a functional parameter computed on the input graph G. Every node-based resilience measure operates by computing a limited size attack set S of vertices whose removal causes substantial disruption to the graph with respect to the resulting connected component size distribution. NBR-Clust takes the resulting components to be the foundational clusters, and completes the clustering of the input graph G by subsequently adding each attack set node in S to its appropriately chosen adjacent cluster. The description of the NBR-Clust framework follows.
The NBR-Clust clustering framework
1. Approximate a resilience measure R(G) with acceptable accuracy and return the corresponding attack set S whose removal results in some number (\(\ge 2\)) of candidate clusters. In these experiments we will use the resilience measure integrity [55] as R(G).
2. Create \(G' = G \setminus \{S\}\), which must consist of two or more candidate clusters.
3. Adjust the number of clusters. Datasets used in this experiment come with automatically generated ground truth data, including the number of clusters the graph contains. If more clusters have been obtained than that indicated by the ground truth data, clusters are joined based on their adjacencies. If there are not enough clusters, choose the least resilient of the clusters (based on the resilience measure R) and repeat steps 1 and 2 on that cluster. Continue until the correct number of clusters is obtained.
4. Assign attack set nodes to clusters.
The dependence of NBR-Clust on betweenness centrality calculations, in turn, hinges upon the calculation of the node-based resilience measure R. Many resilience measures exist across a large number of applications. One example is the controllability of a complex network [56, 57], which uses control theory to determine a set of driver nodes that are capable of controlling the network and is applicable to static networks as well as networks with changing topologies [58, 59]. A specific measure is control centrality which quantifies a node's ability to perform as a driver. Such resilience measures can be used as R(G) with the NBR-Clust framework. In [58], it is noted that the control centrality of a node is not determined by its degree or betweenness centrality, but by other factors. We are looking to measure the speed of calculating betweenness centrality, and therefore do not use such measures in our experiments.
Computational aspects of many important node-based resilience measures are considered in [60], including vertex attack tolerance (VAT) [61, 62], integrity [55], tenacity [63, 64], toughness [65], and scattering number [66]. As all of these measures have associated computational hardness results, the performance of heuristics was considered on several representative networks in [60]. Amongst the algorithms considered in [60], a betweenness centrality-based heuristic called Greedy-BC exhibited high-quality performance, particularly on the most relevant measures such as integrity, VAT, and tenacity. Integrity quantifies resilience by measuring the largest connected component after removal of an attack set. Integrity is defined as
$$\begin{aligned} I(G) = \min _{S \subset V} \left\{ |S| + C_{max}(V-S) \right\} , \end{aligned}$$
where S is an attack set and \(C_{max}(V-S)\) is the size of the largest connected component in \(V-S\). Because integrity is a function of the connected component size distribution after node removal, it shares a similarity with the immunization problem, and well-studied methods for obtaining good results on both problems use betweenness centrality.
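As a concrete illustration, the small sketch below evaluates the bracketed quantity in Eq. 2 for a single attack set; the minimization over all attack sets is what makes integrity hard to compute, and the Greedy-BC heuristic described next only approximates it.

```python
import networkx as nx

def integrity_term(G, attack_set):
    """|S| + C_max(V - S): the quantity minimized over attack sets S in Eq. 2."""
    H = G.copy()
    H.remove_nodes_from(attack_set)
    c_max = max((len(c) for c in nx.connected_components(H)), default=0)
    return len(attack_set) + c_max
```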
With Greedy-BC, the highest betweenness vertex is calculated and removed from the graph. This process is repeated until all nodes have been removed, and the attack set whose removal results in the smallest resilience measure value is returned. Given a resilience measure R, the resilience measure of a specific graph G denoted by R(G), and the resilience measure of a specific graph G with attack set S removed denoted by R(S, G), the steps of Greedy-BC are as follows:
The Greedy-BC heuristic
\(R_{min} = R(G)\), \(S_{min} = \{\}\)
repeat |V| times
\(v = \mathrm{argmax}_{u \in V}\, \mathrm{BC}(u)\)
\(G = G \setminus \{v\}\) and \(S = S \cup \{v\}\)
if \(R(S,G) < R_{min}\) then \(R_{min} = R(S,G)\) and \(S_{min} = S\)
return \(S_{min}.\)
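A direct Python rendering of the Greedy-BC steps above is sketched below, using exact betweenness centrality for clarity and integrity (Eq. 2) as R; in the experiments of this paper, each approximation algorithm is substituted for the betweenness computation inside the loop.

```python
import networkx as nx

def greedy_bc(G):
    """Greedy-BC with integrity as R: repeatedly remove the current highest-betweenness
    vertex and return the attack set minimizing |S| + C_max(V - S)."""
    def integrity_term(num_removed, H):
        c_max = max((len(c) for c in nx.connected_components(H)), default=0)
        return num_removed + c_max

    H = G.copy()
    S, S_min = [], []
    r_min = integrity_term(0, H)
    while H.number_of_nodes() > 0:
        # Any of the approximation algorithms surveyed in this paper can replace this call.
        bc = nx.betweenness_centrality(H, normalized=False)
        v = max(bc, key=bc.get)              # current highest-betweenness vertex
        H.remove_node(v)
        S.append(v)
        r = integrity_term(len(S), H)
        if r < r_min:
            r_min, S_min = r, list(S)
    return S_min, r_min
```

Note that this loop performs up to |V| full betweenness computations, which is exactly why fast approximations matter for the clustering test.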
Amongst the various resilience measures considered, integrity was found to be particularly useful with respect to the application to clustering in NBR-Clust due to the higher accuracy and one-step clustering it provides when the number of clusters is not known a priori [11]. Therefore, the resilience measure we use here with NBR-Clust is integrity [55]. Here, I(G) from Eq. 2 serves as the resilience measure R(G).
NBR-Clust is a good test case for algorithms that speed up betweenness centrality calculations because it shares many of the properties of practical problems that practitioners face. First, the exact values being calculated are not important. The algorithm does not use the exact value, except to determine the highest ranking vertex. Therefore, it is the ranking of vertices that is important, and small variations of ranking do not change the result. Second, because a large number of calculations are required (conceivably \(O(n^2)\) betweenness centralities must be calculated), scalability is tested. Third, the usefulness of the results can be judged by the accuracy of the resulting clustering, which allows for presentation of results along two axes of speed and accuracy. Because of its similarities to many algorithms, results from NBR-Clust can be generalized to other algorithms that also rely on a large number of betweenness centrality calculations.
One limitation of the clustering test is that the datasets used require a ground truth against which accuracy results can be determined. This limits the types of data that can be used. In this work, we use generated datasets with meaningful ground truth data. We note that, to achieve high accuracy, the clustering test must match a particular predetermined outcome, confirming the difficulty of this test.
The immunization test
The immunization problem is well known and well studied in the complex networks literature, with applications to diffusion of information [67], identification of important nodes [68], and also to the spread of disease and computer viruses [69]. The immunization problem involves attacking a network by removing vertices, presumably in the order most likely to cause damage to the network. The problem is made more difficult by limiting the number of nodes that can be attacked. The success of the attack is measured by the size of the largest remaining component, also called the giant component. There are several well-known strategies for choosing the attack order for the immunization problem. Four of the strategies detailed in [12] are as follows:
(i) Select vertices in order of decreasing degree in the original graph and remove vertices in that order.
(ii) Use an adaptive strategy which involves selecting the highest degree vertex in a graph, removing it, selecting the highest degree vertex in the newly configured graph, and continuing in that fashion.
(iii) Select vertices in order of decreasing betweenness centrality in the original graph and remove vertices in that order.
(iv) Use an adaptive strategy which involves selecting the highest betweenness centrality vertex in a graph, removing it, selecting the highest betweenness centrality vertex in the newly configured graph, and continuing in that fashion.
When the immunization test is conducted in this paper, the adaptive betweenness strategy (strategy iv above) is used to remove nodes from the network until either (1) all nodes have been removed, (2) some predetermined number of nodes has been removed, or (3) the betweenness centralities of the remaining nodes are all zero. Obviously, if all nodes are removed, the largest connected component will have size zero. We are more interested in seeing if removing smaller numbers of nodes can result in small components. Therefore, our results show the maximum component size after a predetermined number of removals, usually 10% of the nodes in the network.
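A sketch of this adaptive betweenness attack, with the stopping conditions above and a default budget of 10% of the nodes, is given below; the exact betweenness call is again a stand-in for whichever approximation algorithm is being evaluated.

```python
import networkx as nx

def immunization_curve(G, budget_fraction=0.10):
    """Adaptive betweenness attack (strategy iv): repeatedly remove the current
    highest-betweenness vertex and record the largest-component fraction."""
    H = G.copy()
    n = H.number_of_nodes()
    budget = int(budget_fraction * n)
    curve = []
    for _ in range(budget):
        bc = nx.betweenness_centrality(H, normalized=False)  # stand-in for an approximation
        v, top = max(bc.items(), key=lambda kv: kv[1])
        if top == 0.0:        # stopping condition (3): all remaining centralities are zero
            break
        H.remove_node(v)
        giant = max((len(c) for c in nx.connected_components(H)), default=0)
        curve.append(giant / n)
    return curve
```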
Graphs used with the clustering test
There are many generative models for graphs, such as Erdös–Rényi random graphs, the planted l-partition model [70], the Gaussian random partition generator [71], and the LFR benchmark network [72]. We test using the LFR benchmark because it is considered state of the art [73] and was created specifically to test community detection algorithms. The LFR Net can be generated based on several parameters, such as minimum and maximum cluster size, average and maximum degree, and mixing factor. The mixing factor controls the percentage of edges that leave a cluster, also called boundary edges. A higher mixing factor means less tightly bound clusters that are more difficult to detect. One strength of the LFR model is that it accounts for "heterogeneity in the distributions of node degrees and community sizes" [72], and it is often used to generate scale-free graphs. We controlled both the degree and the community sizes as described below to keep most graph properties constant while varying the mixing factor, and our graphs are not scale free. The graphs are generated according to the following sequence, which is described more fully in [72, 73].
A sequence of community sizes is extracted. This is done randomly, although it is controlled by parameters specifying minimum and maximum community sizes.
Each vertex i of a community is assigned an internal degree of \((1-\mu )d_i\), where \(d_i\) is the degree of vertex i, and \(\mu\) is the previously described mixing factor. The degree is chosen randomly, although it is subject to parameters controlling its minimum and maximum values. The node is considered to have \((1-\mu )d_i\) stubs, where a stub is a potential edge.
Stubs of vertices of the same community are randomly attached until all stubs are used.
Each vertex i receives \(\mu d_i\) additional stubs that are randomly attached to vertices of different communities, until all stubs are used.
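For readers who want to generate graphs of this general type, networkx ships an LFR generator. Note that it requires power-law exponents for the degree and community-size distributions, so it does not reproduce the controlled-degree graphs used in this paper; the parameter values below are the illustrative ones from the networkx documentation, not those of Table 1, and the generator can fail to converge for some parameter combinations.

```python
import networkx as nx

# Illustrative parameters (from the networkx documentation), not those of Table 1.
G = nx.LFR_benchmark_graph(
    n=250,
    tau1=3,              # degree distribution exponent
    tau2=1.5,            # community size distribution exponent
    mu=0.1,              # mixing factor: fraction of inter-community edges per node
    average_degree=5,
    min_community=20,
    seed=10,
)
# Ground-truth communities are attached to the nodes by the generator.
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(len(communities), "communities")
```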
Twelve LFR network graphs are used with the clustering test. Half have 10,000 nodes and half have 100,000 nodes. All graphs are generated randomly, to have approximately 40 communities. The six 10,000-node graphs are generated to have degree distributions from 19 to 64 and community sizes from 100 to 500. Based on extensive experimentation, it was discovered that the mixing factor parameter, denoted \(\mu\), had the greatest influence on the ease of clusterability of a graph. Our graphs have mixing factors \(\mu\) ranging from 0.01 to 0.1. Complete details are given in Table 1.
Table 1 10,000-node randomly generated LFR nets used
The 10,000-node graphs 1 through 6 keep properties such as degree structure and average community size constant while changing \(\mu\). This allows a test based on clustering difficulty without complicating factors such as number of edges (which would require more computation time in edge-based sampling methods). Most of the approximation methods tested use parameters which control the trade-off between speed and accuracy. It is anticipated that succeeding in the clustering test will require much less accuracy for Graph 1 than for Graph 6. Therefore, Graph 1 was used as a baseline accuracy test. It was expected that as accuracy parameters were increased, graphs 2 through 5 would become clusterable, with Graph 6 being the ultimate test. It is noted that Graph 6 is much more difficult to cluster than the first five.
A second series of LFR networks with 100,000 nodes was also generated. Details for these graphs are presented in Table 2. The six graphs can best be viewed as two series of three graphs. Graphs 1–3 are characterized by low average degree and low number of edges. They are in order of increasing difficulty. With approximation methods that are based on the number of edges, these graphs should be substantially faster to cluster than graphs 4–6, which have three times as many edges. This gives an effective test of the scalability of the algorithms. To test the power of mixing factor \(\mu\) over clusterability, graphs 1–3 and 4–6 represent independent progressions of mixing factor. There should be cases where an accuracy parameter setting will fail to cluster Graph 3 but will succeed in clustering Graph 4. However, Graph 4 may take longer because of its larger number of edges. With the largest mixing factor, and a large number of edges, Graph 6 is the most difficult to cluster on all counts. Our goal in setting parameters was to find a combination that would cluster Graph 6 with at least 90% accuracy, although that did not always happen.
Table 2 100,000-node randomly generated LFR nets used
Graphs used with the immunization test
For the immunization test, we used four graphs taken from the SNAP repository [53]. Their information is shown in Table 3.
Table 3 Graphs used in the immunization test
Choosing parameters
The approximation methods use parameters to control the trade-off between speed and accuracy. For each method that was evaluated, we followed a structured process to find an optimal speed/accuracy trade-off. The steps followed are listed below:
We took note of suggested parameters in the source papers, particularly parameters that the authors had used in their own experiments.
We attempted to use parameters discovered in step 1 to cluster the 10,000-node Graph 1, hoping to obtain results of 100% accuracy. If the graph was not clustered to 100% accuracy, we changed the parameters to increase accuracy. This process was repeated until an appropriate setting was found. Often, these parameters successfully clustered graphs 1–6, with varying degrees of accuracy. If possible, we tried to find the fastest setting that gave 100% accuracy with Graph 1.
Having found the minimum accuracy settings, we increased the accuracy parameter until Graph 6 was clustered with at least 90% accuracy (although we could not always achieve this in a reasonable time).
For comparison, we chose a parameter setting between the settings discovered in steps 2 and 3. We ran the clustering test for all six graphs for all three parameter settings.
Moving to the 100,000-node graphs, we tried the same settings used for the 10,000-node graphs. The most common problem was that the maximum accuracy setting for the 10,000-node graphs was very slow when used with the 100,000-node graphs. In that case, we tried slightly faster/less accurate settings. In all cases, we tried to find a setting that would cluster 100,000-node Graph 6 with at least 90% accuracy. If that did not succeed, we tried to find a setting where the clustering for Graph 6 at least did not fail. Preliminary work for this paper appeared in [25], and those timings were used as an indication of what should be considered reasonable.
The goal of the experiments is to reproduce the steps a practitioner with a real problem to solve might take to use one of the approximation algorithms, although under more controlled circumstances. In real life, exact calculations of graphs of tens of thousands of nodes are going to take days to run, at least with current, readily available technology. For example, executing the clustering test with 10,000-node Graph 6 took approximately 48 h using Brandes's exact faster algorithm [13]. We did not have access to a supercomputer and assume many practitioners do not.
All experiments were conducted on two identical computers. Each had an Intel i7-7700K CPU running at 4.20 GHz with four cores and eight logical processors, and 16 GB RAM. Both computers also had Nvidia GPU cards, which are described later. We attempted to use many different betweenness centrality approximation methods, compiled from a variety of sources, and code from the original authors was used where available. All implementations of algorithms used are available from documented public sources. All were written in C++ (or CUDA and C++) and compiled to optimize speed.
Results of the clustering test depend to an extent on the difficulty of the graph being clustered. The speed and accuracy under optimal conditions of the top four algorithms for the 10,000-node graphs are compared in Fig. 1. Results for Graph 5 are shown in Fig. 1a, and results for Graph 6 are shown in Fig. 1b. Algorithms offering the best combinations of speed and accuracy will appear in the upper left-hand corner of the chart. On the easier-to-cluster Graph 5, McLaughlin2014 is most successful. Borassi2016 offers a slight gain in speed at the cost of some accuracy, and Geisberger2008 gives consistently good accuracy, but takes more than twice as long. For the more difficult \(\mu =0.1\) Graph 6 shown in Fig. 1b, Borassi2016 matches McLaughlin2014 in speed, and in one case beats it in accuracy as well. Two additional runs of the Borassi2016 algorithm (detailed with all runs in Table 18), with times of 62 s at 50% accuracy and 50 s at 44% accuracy, did not have high enough accuracy to be shown in Fig. 1b.
Best performing algorithms for the clustering test on 10,000-node graphs. These results represent optimal conditions. McLaughlin2014 is run using an Nvidia Titan V GPU, and Borassi2016 is configured to return five betweenness centralities at one time
The overall clustering test results for 100,000-node graphs are shown in Fig. 2. Results for the sparse-edge Graph 3 are shown in Fig. 2a and for the denser Graph 6 in Fig. 2b. The 100,000-node clustering test is a good indication of the scalability of the algorithms. Note that on the most difficult examples, the Riondato2016 algorithm has dropped off the chart. On the less dense Graph 3, Borassi2016 performs best, both in terms of speed and accuracy. On the denser Graph 6, McLaughlin2014 is the most consistent combination of speed and accuracy, although again some accuracy can be traded for gains in speed with Borassi2016. In both cases, Geisberger2008 is slower but with consistently high accuracy.
Best performing algorithms for the clustering test on 100,000-node graphs. These results represent optimal conditions. McLaughlin2014 is run using an Nvidia Titan V GPU, and Borassi2016 is configured to return five betweenness centralities at one time
Because of the exceptional performance of McLaughlin2014, Borassi2016 and Geisberger2008, we tested each further for scalability using the immunization test on the Google dataset, which has 875,713 nodes and over 5 million edges. Results are shown in Fig. 3. Borassi2016 performs very well here, having the best speed, and also the best speed/accuracy combination. McLaughlin2014 was run on an Nvidia Titan V GPU, and performed well with speed, but had trouble achieving accuracy. Results from the circled runs in Fig. 3a, which are shown in Fig. 3b, show that for quick results Borassi2016 performs best. Borassi2016 removes less than 2% of nodes to reduce the largest remaining component to approximately 1% of graph size. Geisberger2008 requires attacking almost twice as many nodes to achieve the same result. McLaughlin2014 never achieves the dramatic drop of the other two, reducing the largest component only to about 12% of graph size in roughly the same amount of time.
Scalability test for the top performing algorithms on the 875K-node Google graph
Individual algorithm and parameter selection results
Following is a list of tested algorithms. For each algorithm we describe the parameters, show the parameters selected, and, to the extent available, display the results of the clustering and immunization tests. All results include a time component, which is given in seconds. Hopefully, this will aid the practitioner in selecting parameters given graph size, available time, and desired performance. We understand that many algorithms are relevant because they contain important new ideas, and that all algorithms do not need to be fastest. We also understand that sometimes those algorithms and ideas will later be incorporated into new algorithms that will surpass previous algorithms in speed or accuracy. All of the algorithms below have made important contributions to the understanding of the mechanics of calculating betweenness centrality. We test them for speed, although a slower speed does not diminish their importance or usefulness.
Brandes and Pich 2007 (Brandes2007)
This algorithm is taken from [18], and we used code from [17] to run it. The algorithm uses a heuristic: shortest paths are calculated for a group of samples, called pivots, instead of for all nodes in the graph. The paper states that the ultimate result would be that "the results obtained by solving an SSSP from every pivot are representative for solving it from every vertex in V" [18]. They test several different strategies for selecting the pivots, and find that uniform random selection works best. The algorithm tested here uses uniform random selection. The worst-case time complexity of this approach is O(|V||E|). The running time is directly proportional to the number of pivots, giving a practical time complexity of O(k|E|), where k is the number of pivots. Results are shown in Table 4. For the 10,000-node graphs, 50 samples gave good results on Graph 1 and Graph 2, but accuracies fell for the remaining graphs. Note that 200 samples were required to obtain uniformly high accuracies, and even then did not meet 90% for Graph 6. The 100,000-node graphs got good results with only 25 samples. The 10- and 50-sample results are shown mostly to demonstrate speed. Note that with Graph 3, doubling the number of samples roughly doubles the amount of time required.
It is useful to compare these results to the Bader2007 algorithm (shown in the next section in Table 7). Note that on the 10,000-node graphs, Brandes2007 takes sometimes twice or even three times as long as Bader2007, even though the latter algorithm required a larger number of samples. With the 100,000-node graphs, Brandes2007 was able to cluster Graph 6 with 200 samples, while Bader2007 required 250 samples. Nonetheless, the time for Bader2007 is shorter and the accuracy higher. Both algorithms have the same theoretical time complexity, making the large difference in running times an interesting result.
Numerical results for the immunization test are shown in Table 5. Brandes2007 does very well with 100 and 250 samples. The time for the large Epinions dataset with 100 samples is less than 3 h. Again, it is interesting to compare to Bader2007. Accuracy results for both algorithms are visualized in Fig. 4. Note that concerning accuracy, both algorithms are almost identical. With the Enron test, both algorithms achieve a cluster of less than 1% with removal of about 8.5% of vertices. With the Epinions dataset, both leave a largest cluster of less than 1% with the removal of approximately 7% of vertices. Note that speed-wise Brandes2007 takes roughly twice as long as Bader2007, using the same numbers of samples.
Table 4 Brandes2007 clustering results for LFR nets of 10,000 and 100,000 nodes
Table 5 Brandes2007 run time and max cluster size immunization results
Visualization of results for Brandes2007 versus Bader2007 immunization test
Bader et al. 2007 (Bader2007)
Like Brandes2007, this algorithm estimates centrality by solving the SSSP problem on a sampled subset of vertices. The sampling is referred to as adaptive, because the number of samples required can vary depending on the information obtained from each sample [19]. As with other sampling algorithms, the running time is directly proportional to the number of shortest-path computations performed. In the worst case, all vertices would be sampled, and so the time complexity of this algorithm is O(|V||E|) in the worst case, and O(k|E|), where k is the number of samples, in the practical case. There are two parameters to this algorithm. The first is a constant c that controls a stopping condition. The second parameter is an arbitrary vertex. For optimum performance, the arbitrary vertex should have high betweenness centrality. The algorithm is of great interest because its benchmark time was the fastest in [17], where the algorithm is referred to as GSIZE. We used code from [17] to run tests of four approximation methods, calculating betweenness centrality values for the Enron dataset. Results from this experiment are shown in Table 6.
Table 6 Initial testing of four algorithms on the Enron dataset
It seems very clear that the fastest method here is Brandes2007, and yet results presented in [17] show that the Bader method is overall ten times faster than the others. Interestingly, by experimenting with the arbitrary vertex parameter, we were able to obtain a range of execution times from 430 to 3 s, depending on the choice of vertex. Obviously, this makes benchmarking the speed of the algorithm difficult.
The benchmarking method in [17] only calculates once on the initial graph. Therefore, it is easy to pick one high-betweenness vertex, perhaps by guessing based on degree. The clustering and immunization tests in this work calculate betweenness centralities repeatedly on a changing graph. There is no way to know what the highest betweenness vertices are at a given time. One easy-to-calculate proxy is vertex degree. Running the algorithm with a parameter of \(c = 2\), and pivoting on the highest degree vertex at each step, our 10,000-node Graph 1 was clustered at 100% accuracy in 8960 s. Compare this to 290 s for Brandes2007. A second attempt was made to speed the algorithm up by, at each iteration, choosing the second highest betweenness centrality vertex from the previous iteration. This also did not speed things up. Bader et al. state in [19] that they set a limit on the number of samples at \(\frac{n}{20}\). We tried this and noticed that with Graph 1 the stopping condition was never reached—500 samples were always used. This configuration clustered perfectly again, but was still not fast at 1112 s. Last, we tried simply changing the limits on the number of samples. With limits of 50, 100 and 250 we got acceptable speeds and accuracies. All tests were run with parameter \(c = 2\), although the parameter probably was not used as the number of samples in fact controlled the stopping point.
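For reference, a simplified sketch of the adaptive sampling estimate for a single vertex of interest is shown below, combining the c-based stopping rule with a hard cap on the number of samples as discussed above. The single-source routine is the BFS and dependency-accumulation phase of Brandes' algorithm; the double counting of unordered pairs in undirected graphs is ignored here for brevity.

```python
from collections import deque
import random
import networkx as nx

def single_source_dependency(G, s):
    """BFS from s plus reverse accumulation, returning the dependency delta_s(v) for all v."""
    sigma = dict.fromkeys(G, 0)
    dist = dict.fromkeys(G, -1)
    preds = {v: [] for v in G}
    sigma[s], dist[s] = 1, 0
    order, queue = [], deque([s])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in G.neighbors(v):
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                queue.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
                preds[w].append(v)
    delta = dict.fromkeys(G, 0.0)
    for w in reversed(order):
        for p in preds[w]:
            delta[p] += (sigma[p] / sigma[w]) * (1.0 + delta[w])
    return delta

def adaptive_bc_estimate(G, v, c=2.0, max_samples=None):
    """Adaptive-sampling estimate of BC(v): sample sources until the accumulated
    dependency of v exceeds c*n, or until a hard cap on samples is reached."""
    n = G.number_of_nodes()
    if max_samples is None:
        max_samples = max(1, n // 20)       # the n/20 cap mentioned in the text
    nodes = list(G.nodes())
    total, k = 0.0, 0
    while k < max_samples:
        s = random.choice(nodes)
        total += single_source_dependency(G, s)[v]
        k += 1
        if total >= c * n:                  # adaptive stopping rule
            break
    return n * total / k                    # scale the sample mean up to all n sources
```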
Results for the clustering test are shown in Table 7. With only 50 samples, this algorithm clustered the 10,000-node Graph 1 in 171 s with 100% accuracy. Clustering the most difficult graph, the 100,000-node Graph 6, to over 90% accuracy took 8.75 h. On the immunization tests, the 50-sample runs were largely ineffective, while the 250-sample runs were much more successful, leaving a largest remaining cluster of only 1.1% with approximately 9% of vertices removed. These results can be seen in Table 8. Accuracy results for the immunization test are compared with Brandes2007 in Fig. 4.
Table 7 Bader2007 clustering results for LFR nets of 10,000 and 100,000 nodes
Table 8 Bader2007 run time and max cluster size immunization results
Geisberger et al. better approximation (Geisberger2008)
This algorithm is developed by Geisberger et al. [21]. It is based on the pivot method originally proposed by Brandes and Pich, where calculations are run on some number k of sampled pivots. The Brandes2007 method is said to overstate the importance of nodes near a pivot, a problem that this method solves by using a linear scaling function to reestimate betweenness centrality values based on the distance from a node to a pivot. Overall time complexity of the method is O(|V||E|), where the practical running time is proportional to the number of pivots, k, that are sampled. What will perhaps turn out to be the true power of this algorithm is realized by the authors' statement that the "algorithms discussed are easy to parallelize with up to k processors" [21]. We used the implementation in NetworKit [74], in the version which parallelizes the computations. The parallelization is programmed using OpenMP. The only parameter is the number of samples, which is chosen by the user. In [21] the number of samples is chosen as powers of 2, ranging from 16 to 8192. We had surprisingly good luck with sample sizes as small as 2. Results for the clustering test are shown in Table 9. This is a quick method to use for clustering with high accuracies on both the 10,000- and 100,000-node graphs. Note especially the performance on the most difficult graphs. This is an algorithm that scales to a difficult graph with a large number of edges much better than those we have seen so far. See Figs. 1 and 2 for a comparison of Geisberger2008 performance among the fastest of the algorithms tested.
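For reference, a minimal NetworKit call corresponding to this configuration might look like the following; the class name EstimateBetweenness and the argument order reflect the NetworKit Python interface as we recall it and should be verified against the installed version (additional normalization and parallelism flags exist but are omitted here).

```python
import networkit as nk

# Illustrative graph; nSamples plays the role of the number of pivots k.
G = nk.generators.ErdosRenyiGenerator(10000, 0.002).generate()

est = nk.centrality.EstimateBetweenness(G, 16)   # 16 sampled pivots
est.run()
print(est.ranking()[:5])                         # top five (node, score) pairs
```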
Results from the immunization test are listed in Table 10. It is interesting to note that, while the improvement from 4 samples to 8 is large for both Enron and Epinions, the improvement from 8 to 16 samples is not as large. Overall, this is a highly accurate algorithm. This is especially evident in Fig. 5.
Table 9 Geisberger2008 clustering results for LFR nets of 10,000 and 100,000 nodes
Table 10 Geisberger2008 run time and max cluster size immunization results
Visualization of results for Geisberger2008 immunization test
Riondato–Kornaropoulos fast approximation through sampling (Riondato2016)
Riondato and Kornaropoulos introduce this method in [20], which is based on sampling a predetermined number of node pairs (chosen uniformly at random). The sample size is based on the vertex diameter of the graph, denoted VD(G), which is the minimum number of nodes in the longest shortest path in G, and may vary from (diameter + 1) if the graph is weighted. The sample size can be chosen to guarantee that the estimated betweenness values for all vertices are within an additive factor \(\epsilon\) of the real values, with probability at least \(1-\delta\), where \(0< \delta < 1\).
The time complexity of this algorithm is \(O(r(|V|+|E|))\), where r is determined by the equation:
$$\begin{aligned} r = \frac{c}{\epsilon ^2}\Big ( \lfloor {\text {log}}_2 (\text {VD(G)} - 2) \rfloor + 1 + \text {ln} \frac{1}{\delta } \Big ). \end{aligned}$$
In Eq. 3, c is a universal positive constant that is estimated in [20] to be 0.5, VD(G) is the vertex diameter of the graph, \(\epsilon\) is the additive factor by which the approximation may vary from the true betweenness centrality value, and \(\delta\) is the probability that an estimated value fails to be within \(\epsilon\) of the true value (so the guarantee holds with probability at least \(1-\delta\)).
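As a worked example of Eq. 3, the snippet below computes r for several values of \(\epsilon\) with \(\delta = 0.1\) and c = 0.5 (the constant estimated in [20]); the vertex diameter used is a hypothetical value chosen only for illustration.

```python
import math

def riondato_sample_size(vd, eps, delta, c=0.5):
    """Number of samples r from Eq. 3, given the vertex diameter VD(G)."""
    return (c / eps ** 2) * (math.floor(math.log2(vd - 2)) + 1 + math.log(1.0 / delta))

# Hypothetical vertex diameter of 12; r grows quadratically as epsilon shrinks.
for eps in (0.1, 0.05, 0.02):
    print(eps, round(riondato_sample_size(vd=12, eps=eps, delta=0.1)))
```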
We used the implementation in NetworKit [74]. There are three parameters, the first of which, a constant c, is set at 1.0 in NetworKit. The other parameters are \(\epsilon\), which is the maximum additive error, and \(\delta\), which is the probability that the values are within the error guarantee. Our experiments use \(\delta =0.1\) for a probability of 90%, and vary \(\epsilon\), where higher \(\epsilon\) values mean lower accuracy but also greater speed. Results are shown in Table 11. A couple of things are interesting. First, notice that relatively good accuracies are obtained even at relatively high values of \(\epsilon\). The algorithm displays good speeds on the simpler 100,000-node graphs, taking only 556 s to cluster Graph 1 compared to 2808 s for Geisberger2008. However, Riondato2016 does not scale as well—the most difficult graph takes about three times as long as Geisberger2008 to cluster with at least 90% accuracy.
Results from the immunization test are shown in Table 12 and Fig. 6.
Overall, Riondato2016 is the fourth most successful of our survey, and offers a theoretical guarantee of accuracy. A similar theoretical guarantee is also offered by Borassi2016.
Table 11 Riondato2016 clustering results for LFR nets of 10,000 and 100,000 nodes
Table 12 Riondato2016 run time and max cluster size immunization results
Visualization of results for Riondato2016 Immunization Test
McLaughlin and Bader Hybrid-BC (McLaughlin2014)
Hybrid-BC is introduced by McLaughlin and Bader in [41]. It offers an approximation that uses a sampling of k vertices to determine the structure of the graph. As k is increased, calculations are slower, but accuracy is greater.
McLaughlin2014 is a GPU-based algorithm, and calculations were initially performed and timed on a computer with an Nvidia GeForce GTX 1080 GPU, which has 2560 CUDA cores, 20 streaming multiprocessors, and a base clock of 1607MHz. This GPU is at this writing found in good-quality gaming computers, and is at a price point that most practitioners can afford. It is arguably Nvidia's second or third most powerful GPU. We used code downloaded from Adam McLaughlin's website [41]. Our hardware setup was capable of running 1024 threads simultaneously.
Results for the clustering test are shown in Table 13. Across the board, these are the best results we have seen so far. In fact, McLaughlin2014 is able to cluster the most difficult graph with a high 98% accuracy in approximately 60% of the time of Geisberger2008. It is noted that McLaughlin2014 is one of the fastest and most accurate of all algorithms on the clustering test.
Table 13 McLaughlin2014 clustering results for LFR nets of 10,000 and 100,000 nodes with Nvidia GTX 1080 GPU
Results for the immunization test are shown in Table 14. The times are the fastest seen so far, and the max cluster sizes are small. As can be seen in Fig. 7, 64 samples produced the best result in all cases, but 32 samples also performed well. Notice the large difference in accuracy between those and the test with 16 samples. Also notice that due to somewhat lower accuracy on the Enron test, we have changed the scale of the x-axis. The McLaughlin algorithm is interesting because it scales very well time-wise, but accuracies are not as good. It does better on the clustering test, but is also very successful on the immunization test.
Table 14 McLaughlin2014 run time and max cluster size immunization results
Visualization of results for McLaughlin2014 immunization test
The McLaughlin2014 results are so impressive that we wanted to determine the extent to which the properties of the GPU influenced calculation times. We replaced the Nvidia GeForce GTX 1080 GPU with an Nvidia Titan V GPU, which has 5120 CUDA cores (twice as many as the GTX1080), 80 streaming multiprocessors (four times as many as the GTX1080), and a base clock of 1200 MHz (interestingly, slower than the GTX1080). Results from rerunning the clustering test with this new configuration are shown in Table 15. For the 10,000-node graphs, clustering times are reduced by about 70%. The time to process 100,000-node Graph 6 with k = 64 went from 10917 to 6007 s, a savings of almost 82 min off an already fast 3-h processing time. These are among the best times that we will see.
The performance of McLaughlin2014 on the clustering tests with the Titan V GPU is compared to the other top algorithms in Figs. 1 and 2. A comparison of immunization test results using the large Google graph is shown in Fig. 3. Interestingly, although it is still a top performer, it is sometimes bested by other algorithms in terms of speed and accuracy. At the time of this writing, the price of the Titan V was approximately five times the cost of the GTX 1080.
Table 15 McLaughlin2014 clustering results for LFR nets of 10,000 and 100,000 nodes with Nvidia Titan V GPU
Borassi and Natale KADABRA (Borassi2016)
ADaptive Algorithm for Betweenness via Random Approximation (KADABRA) is described by Borassi and Natale in [30]. These authors achieve speed in two different ways. First, distances are computed using balanced bidirectional breadth-first search, in which a BFS is performed "from each of the two endpoints s and t, in such a way that the two BFSs are likely to explore about the same number of edges, and ... stop as soon as the two BFSs touch each other." This reduces the time complexity of the part of the algorithm that samples shortest paths from O(|E|) to \(O(|E|^{\frac{1}{2}+ o(1)})\). A second improvement is that they take a different approach to sampling, one that "decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error." The algorithm has a theoretical guarantee of error less than \(\epsilon\) with probability 1 − \(\delta\). For our experiments, we kept \(\delta\) at 0.1, which implied a 90% probability of being within the specified error, and varied \(\epsilon\).
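A simplified sketch of the balancing idea behind bidirectional BFS is shown below: at each step the side whose frontier is cheaper to expand (fewer outgoing edges) is advanced one level, and the search stops as soon as the two frontiers touch. KADABRA's actual sampler is more involved, since it samples shortest paths rather than just distances, so this only illustrates the balancing idea.

```python
import networkx as nx

def bidirectional_bfs_distance(G, s, t):
    """Unweighted distance from s to t via balanced bidirectional BFS
    (expand the side whose frontier has fewer outgoing edges)."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    frontier_s, frontier_t = [s], [t]
    while frontier_s and frontier_t:
        # Expand the cheaper side, so both searches explore similar numbers of edges.
        cost_s = sum(G.degree(v) for v in frontier_s)
        cost_t = sum(G.degree(v) for v in frontier_t)
        if cost_s <= cost_t:
            frontier, dist, other = frontier_s, dist_s, dist_t
        else:
            frontier, dist, other = frontier_t, dist_t, dist_s
        next_frontier = []
        for v in frontier:
            for w in G.neighbors(v):
                if w in other:                    # the two searches touch
                    return dist[v] + 1 + other[w]
                if w not in dist:
                    dist[w] = dist[v] + 1
                    next_frontier.append(w)
        if cost_s <= cost_t:
            frontier_s = next_frontier
        else:
            frontier_t = next_frontier
    return None                                    # s and t are not connected
```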
Experiments with Borassi2016 were run using the original code as referenced in [30]. Results are shown in Table 16. Borassi2016 code is parallelized using OpenMP. Note that it is very fast, and that the accuracies are impressive, even at what seem to be high error tolerances. For example, the clustering time for the easiest 10,000-node graph is 60 s with no special hardware, whereas the same graph is clustered using an expensive GPU in 68 s. Notice that with the 100,000-node graphs, the \(\epsilon =0.05\) results from Borassi2016 are comparable with, and sometimes faster than, the k = 32 results from McLaughlin2014 with the Titan V GPU. The only issue we notice with the Borassi2016 algorithm is that the time seems to be very sensitive to small changes in \(\epsilon\). For example, our attempts to run the 100,000-node graphs at \(\epsilon =0.01\) were going to take hours longer than at \(\epsilon =0.025\). Clustering 100,000-node Graph 6 with \(\epsilon =0.025\) takes over twice as long as with the high-powered GPU, raising questions about scalability; however, it appears that if one is willing to trade a small amount of accuracy, the speed of Borassi2016 rivals that of a GPU algorithm.
Results from the immunization test are shown in Table 17. Notice the extremely small largest components for all three networks when \(\epsilon =0.01\), and the small sizes when \(\epsilon =0.05\). Times are shorter than for any algorithm but McLaughlin2014. Accuracy results for the immunization test are shown in Fig. 8. Notice particularly in Fig. 8b the rapid decrease in size of the largest component with only 6% of nodes removed, and that results for \(\epsilon =0.05\) and \(\epsilon =0.01\) are very similar. These results are so impressive that we ran the algorithm on the 875k-node Google graph. It achieved the following results: \(\epsilon =0.1\): 53,706 s and 27.9% accuracy, \(\epsilon =0.05\): 58,405 s and 11.7% accuracy, and \(\epsilon =0.025\): 85,277 s and 8.7% accuracy. A comparison of Borassi2016 with the other two algorithms tested on the Google graph is shown in Fig. 3a. Notice that this is arguably the most successful of the three algorithms in terms of speed and accuracy. In Fig. 3b it shows by far the most rapid decline in largest component size, decreasing to less than 1% of graph size with less than 2% of nodes removed.
Table 16 Borassi2016 KADABRA clustering results for LFR nets of 10,000 and 100,000 nodes
Table 17 Borassi2016 run time and max cluster size immunization results
Visualization of results for Borassi2016 immunization test
The Borassi2016 algorithm can be configured to return the top k vertices. In previous experiments we used it to return only the top vertex. Because the exact order of vertices chosen is not of the utmost importance in the clustering test, we wondered if the speed of this algorithm could be improved even further by choosing a set of nodes at each iteration. We tested this on the 10,000-node graphs by using the algorithm to find the top five vertices, while all other factors remained the same. Results can be seen in Table 18. The 10,000-node baseline graph is clustered in 36 s, compared to 68 s for the GPU algorithm. The most difficult graph took 4402 s, which is 27 min shorter than the clustering time with the GPU.
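The batched removal strategy can be sketched as follows. This is a hypothetical outline of the loop described above, not code from the KADABRA distribution; `estimate_top_k` and the networkx-style `remove_node` call are stand-ins for the actual top-k query and graph update.

```python
def batched_greedy_removal(graph, budget, k=5, estimate_top_k=None):
    # Sketch of batched node removal: instead of re-estimating betweenness
    # after every single removal, take the estimated top-k vertices at once.
    # `estimate_top_k(graph, k)` is a hypothetical stand-in for a call to an
    # approximation routine such as KADABRA's top-k mode; `graph` is assumed
    # to behave like a networkx graph.
    removed = []
    while len(removed) < budget:
        for v in estimate_top_k(graph, k):
            if len(removed) >= budget:
                break
            graph.remove_node(v)
            removed.append(v)
    return removed
```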
Charts showing the overall performance of Borassi2016 compared to the other three top algorithms are contained in Figs. 1 and 2. Figure 2a is particularly interesting, as Borassi2016 beats all other algorithms with its combination of speed and accuracy.
Table 18 Borassi2016 KADABRA clustering results for LFR nets of 10,000 and 100,000 nodes with top five vertices chosen
Yoshida's adaptive betweenness centrality (Yoshida2014)
Yoshida2014 adaptive betweenness centrality refers to computing betweenness centrality for a succession of vertices "without considering the shortest paths that have been taken into account by already-chosen vertices" [1]. This description matches exactly the Greedy-BC heuristic used with NBR-Clust. Once the array of betweenness centralities is calculated, the node-removal order is known and the resilience measures are calculated for the graph at each configuration. The configuration with the lowest resilience measure value is chosen. Yoshida's adaptive betweenness centrality algorithm has an expected total runtime of \(O((|V| + |E|)k + hk + hk \log |V|)\), where h is a probabilistic parameter that in most graphs is much less than \((|V| + |E|)\), and k is the number of times the graph is sampled. The algorithm runs faster or slower depending on the parameter k, which also controls the expected error. It is suggested that k should be chosen to be \(O(\log |V| / \epsilon ^2)\), where \(\epsilon\) is the expected error. Our experiments used different values of k to help establish a link between the speed and accuracy of this algorithm.
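The configuration-selection step described above can be illustrated with a short sketch, assuming a networkx-style graph and a user-supplied resilience measure; it is not Yoshida's code, only the surrounding greedy loop.

```python
def greedy_bc_configuration(graph, removal_order, resilience):
    # Walk down a precomputed node-removal order (e.g. from adaptive
    # betweenness centrality), evaluate a resilience measure after each
    # removal, and keep the configuration with the lowest value.
    # `resilience(g, removed)` is a hypothetical callable (e.g. integrity
    # or VAT), and `graph` is assumed to be a networkx graph.
    g = graph.copy()
    removed, best_removed = [], []
    best_value = float("inf")
    for v in removal_order:
        g.remove_node(v)
        removed.append(v)
        value = resilience(g, removed)
        if value < best_value:
            best_value, best_removed = value, list(removed)
    return best_removed, best_value
```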
We used the original code downloaded from Yoshida's website. Results for the clustering test are shown in Table 19. We had success with this algorithm in [25], but here we were unable to cluster the 100,000-node Graph 6 with 90% accuracy, even with \(k=800,000\). Also, the corresponding times were comparatively long, and we did not continue with the immunization test.
Table 19 Yoshida2014 clustering results for LFR nets of 10,000 and 100,000 nodes
Sariyüce et al. BADIOS (Sariyüce2012)
BADIOS (Bridges, Articulation, Degree-1, and Identical vertices, Ordering, and Side vertices) is introduced by Sariyüce et al. [32]. BADIOS is not actually an approximation technique, but a speed enhancement. The algorithm splits and shrinks a graph in ways that do not affect the accuracy of the betweenness centrality calculation; because the resulting pieces are smaller, scaling problems are reduced. For example, the Degree-1 reduction exploits the fact that all degree-1 vertices have a betweenness centrality of zero: they can be removed from a graph, shrinking its size without affecting the accuracy of the final calculation. It sounds simple, but the reductions interact: even after a graph is shattered by removing an articulation vertex, that vertex may become a degree-1 vertex and can itself be removed. The different strategies are repeated until all options are exhausted.
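As an illustration, a minimal sketch of the degree-1 reduction alone is shown below (assuming a networkx graph); the full BADIOS algorithm additionally keeps bookkeeping weights so that the removed vertices' path contributions are credited back to the remaining vertices, which is omitted here.

```python
import networkx as nx

def prune_degree_one(g):
    # Repeatedly strip vertices of degree <= 1; such vertices have a
    # betweenness centrality of zero.  (BADIOS additionally keeps weights so
    # that paths starting at removed vertices are still credited to the
    # remaining vertices; that bookkeeping is omitted in this sketch.)
    g = g.copy()
    while True:
        leaves = [v for v in g.nodes if g.degree(v) <= 1]
        if not leaves:
            break
        g.remove_nodes_from(leaves)
    return g

# Example: a star with three pendant vertices collapses to its center,
# and once the center becomes isolated it is removed as well.
print(prune_degree_one(nx.star_graph(3)).number_of_nodes())  # 0
```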
In the worst case, manipulations to the graph fail or are few (for example, in a graph with many large cliques), and the time complexity remains O(|V||E|). In the best case, run times are greatly improved. Our 10,000-node Graph 1 took approximately 5 h with BADIOS. This is a substantial improvement over the original exact calculation time of 48 h; however, the savings in time do not compare to those of the other approximation algorithms discussed.
Sariyüce et al. gpuBC (Sariyüce2013)
Sariyüce et al. introduced gpuBC in 2013. It contains some ideas from BADIOS, such as removal of degree-1 vertices and graph ordering. It also introduces new ideas. First, vertex virtualization replaces high-degree vertices with virtual vertices that have at most a predetermined maximum degree. One of the original problems with GPU parallelization of betweenness centrality is whether to parallelize over edge or node traversals; by evening out the degrees of vertices, this method sidesteps the problem. Second, they use graph storage techniques that improve the speed of memory access.
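The data-layout idea behind vertex virtualization can be sketched in a few lines. This is only an illustration of how adjacency lists might be chunked into virtual vertices of bounded degree, not the gpuBC CUDA code; the dictionary-of-lists input format is an assumption.

```python
def virtualize_vertices(adj, max_degree=4):
    # Chunk each vertex's adjacency list into pieces of at most `max_degree`
    # edges and create one "virtual vertex" per chunk.  A GPU kernel can then
    # assign one thread per virtual vertex so every thread handles a similar
    # number of edges.  This only illustrates the data layout, not the CUDA code.
    virtual_owner = []   # virtual id -> original vertex
    virtual_edges = []   # virtual id -> its share of the adjacency list
    for v, neighbours in adj.items():
        for i in range(0, max(len(neighbours), 1), max_degree):
            virtual_owner.append(v)
            virtual_edges.append(neighbours[i:i + max_degree])
    return virtual_owner, virtual_edges
```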
The software is available on their website and is written for CUDA 4.2.9. We could not get it to run with more recent versions (at this writing, CUDA 9.1 is the most recent version). We were able to run gpuBC for the 10,000-node graphs after installing older versions of Linux and CUDA. Due to memory errors that we could not correct, we were not able to run gpuBC on the 100,000-node graphs.
We used the version of the software with ordering and degree-1 vertex removal techniques enabled, and virtual-vertex based GPU parallelism with strided access. Two parameters are available: the maximum degree of the virtual vertices, which we set to 4, and the number of BFS runs to be executed, which we varied over 100, 200, and 1000. Results for the clustering test are shown in Table 20, and they are good, although we could not get Graph 6 to cluster to 90% accuracy.
Table 20 Sariyüce2013 gpuBC clustering results for LFR nets of 10,000 nodes
Kourtellis et al. \(\kappa\)-path centrality (Kourtellis2013)
\(\kappa\)-path centrality is introduced in [50], where it is shown empirically that nodes with high \(\kappa\)-path centrality are highly correlated with nodes with high betweenness centrality. This randomized algorithm runs in time \(O(\kappa ^{3}|V|^{2-2\alpha }\log |V|)\), where graph exploration is limited to a neighborhood of \(\kappa\) hops around each node, and \(\alpha \in [-\frac{1}{2}, \frac{1}{2}]\) is a parameter that controls the tradeoff between speed and accuracy. The algorithm outputs, for each vertex v, an estimate of its \(\kappa\)-path centrality up to additive error of \(\pm |V|^{1/2+ \alpha }\) with probability \(1-1/|V|^2\) [50].
Recommended values for the parameters are \(\alpha = 0.2\) and \(\kappa = \ln (|V|+|E|)\). For the 10,000-node Graph 1, the recommended \(\kappa\) would be 13. Our graphs generally do not have long shortest paths, and it was observed that lowering \(\kappa\) sped up the run time considerably. Our tests used \(\kappa = 10\), and varied \(\alpha\) from 0.1 to 0.3. Results from the clustering test for 10,000-node graphs are shown in Table 21. Even with the suggested parameters we could not get Graph 1 to cluster to 90% accuracy. Because accuracies are generally low and run times high, we did not test further.
Table 21 Kourtellis2013 \(\kappa\)-path centrality clustering results for LFR nets of 10,000 nodes
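For reference, the parameter relationships quoted above can be computed directly; the helper below is our own illustration of those formulas, not code from [50].

```python
import math

def kpath_parameters(n, m, alpha=0.2):
    # kappa = ln(|V| + |E|) as recommended; the additive error bound is
    # +/- |V|^(1/2 + alpha) and the running time scales as
    # kappa^3 * |V|^(2 - 2*alpha) * log|V|.
    kappa = round(math.log(n + m))
    additive_error = n ** (0.5 + alpha)
    work_estimate = kappa ** 3 * n ** (2 - 2 * alpha) * math.log(n)
    return kappa, additive_error, work_estimate
```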
Many different factors will affect the choice of an algorithm, and which is best is often in the eye of the beholder. Among the algorithms considered here, the easiest to install are those contained in NetworKit, which is an ongoing project, supports not only UNIX but also MacOS and Windows 10, and contains both the Riondato2016 and Geisberger2008 algorithms. Notes contained in NetworKit suggest that if you do not need the guaranteed error bounds of Riondato2016, Geisberger2008 is thought to work better in practice. Our results agree with that conclusion, although both algorithms are in the top four for performance in terms of the combinations of speed and accuracy they provide.
Parallelization is a key concept in increasing the speed of betweenness centrality calculations, and many of the algorithms parallelize well. The top three algorithms in terms of speed and performance have parallel implementations, Geisberger2008 and Borassi2016 by using CPU cores, and McLaughlin2014 as a GPU algorithm. Fast results are obtained by using a powerful GPU and an algorithm like McLaughlin2014, which performed even better with the high-performance Nvidia Titan V card. In the original paper [25], we concluded by stating that the results "almost certainly underestimate the potential of GPU algorithms." With the additional results in this paper, we see the complete potential.
The best alternative to the GPU-based algorithm, and perhaps the best all-around algorithm, is Borassi2016 (KADABRA), which offers results that rival and even beat the GPU with certain configurations. In one instance, Borassi2016 clustered a 10,000-node graph in half the time of the GPU algorithm. Borassi2016 was especially useful in its ability to predictably trade accuracy for additional speed. Also, the speed is gained without the added cost and programming skill that running a GPU algorithm requires.
Although some algorithms offer a more desirable combination of speed and accuracy, all algorithms examined here bring new ideas and insights to the search for faster betweenness centrality computations.
We briefly note the plausible similarity of this process with that of computing node-based resilience measures such as integrity and VAT, which also limit the size of both the attack set and the resulting giant component. Therefore, some commonalities of their betweenness centrality-based computations may be expected.
Yoshida Y. Almost linear-time algorithms for adaptive betweenness centrality using hypergraph sketches. In: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining. ACM. 2014, p. 1416–25.
Freeman LC. A set of measures of centrality based on betweenness. Sociometry. 1977;40:35–41.
Jothi R. A betweenness centrality guided clustering algorithm and its applications to cancer diagnosis. In: International conference on mining intelligence and knowledge exploration. Springer. 2017, p. 35–42.
Borgatti SP. Centrality and network flow. Soc Netw. 2005;27(1):55–71.
Rusinowska A, Berghammer R, De Swart H, Grabisch M. Social networks: prestige, centrality, and influence. In: International conference on relational and algebraic methods in computer science. Springer. 2011, p. 22–39.
Jin S, Huang Z, Chen Y, Chavarría-Miranda D, Feo J, Wong PC. A novel application of parallel betweenness centrality to power grid contingency analysis. In: 2010 IEEE international symposium on parallel & distributed processing (IPDPS). IEEE. 2010, p. 1–7.
Girvan M, Newman ME. Community structure in social and biological networks. Proc Natl Acad Sci. 2002;99(12):7821–6.
Yan G, Zhou T, Hu B, Fu ZQ, Wang BH. Efficient routing on complex networks. Phys Rev E. 2006;73(4):046108.
Şimşek, Ö, Barto AG. Skill characterization based on betweenness. In: Advances in neural information processing systems. 2009, p. 1497–504.
Carpenter T, Karakostas G, Shallcross D. Practical issues and algorithms for analyzing terrorist networks. In: Proceedings of the Western Simulation MultiConference. 2002.
Matta J, Obafemi-Ajayi T, Borwey J, Wunsch D, Ercal G. Robust graph-theoretic clustering approaches using node-based resilience measures. In: 2016 IEEE 16th international conference on data mining (ICDM). 2016, p. 320–9. https://doi.org/10.1109/ICDM.2016.0043.
Holme P, Kim BJ, Yoon CN, Han SK. Attack vulnerability of complex networks. Phys Rev E. 2002;65(5):056109.
Brandes U. A faster algorithm for betweenness centrality. J Math Sociol. 2001;25:163–77.
Bonchi F, De Francisci Morales G, Riondato M. Centrality measures on big graphs: exact, approximated, and distributed algorithms. In: Proceedings of the 25th international conference companion on World Wide Web, international World Wide Web Conferences Steering Committee. 2016, p. 1017–20.
Chong WH, Toh WSB, Teow LN. Efficient extraction of high-betweenness vertices. In: 2010 international conference on advances in social networks analysis and mining (ASONAM). IEEE. 2010, p. 286–90.
Ufimtsev V, Bhowmick S. Finding high betweenness centrality vertices in large networks. In: CSC14: The sixth SIAM workshop on combinatorial scientific computing. 2014, p. 45.
AlGhamdi Z, Jamour F, Skiadopoulos S, Kalnis P. A benchmark for betweenness centrality approximation algorithms on large graphs. In: Proceedings of the 29th international conference on scientific and statistical database management. ACM. 2017, p. 6.
Brandes U, Pich C. Centrality estimation in large networks. Int J Bifur Chaos. 2007;17(07):2303–18.
Bader DA, Kintali S, Madduri K, Mihail M. Approximating betweenness centrality. WAW. 2007;4863:124–37.
Riondato M, Kornaropoulos EM. Fast approximation of betweenness centrality through sampling. Data Mining Knowl Discov. 2016;30(2):438–75.
Geisberger R, Sanders P, Schultes D. Better approximation of betweenness centrality. In: Proceedings of the meeting on algorithm engineering & experiments. Society for Industrial and Applied Mathematics. 2008, p. 90–100.
Riondato M, Upfal E. Abra: Approximating betweenness centrality in static and dynamic graphs with rademacher averages. arXiv preprint arXiv:1602.05866. 2016.
Pfeffer J, Carley KM. k-centralities: local approximations of global measures based on shortest paths. In: Proceedings of the 21st international conference on World Wide Web. ACM. 2012, p. 1043–50.
Everett M, Borgatti SP. Ego network betweenness. Soc Netw. 2005;27(1):31–8.
Matta J. A comparison of approaches to computing betweenness centrality for large graphs. In: International workshop on complex networks and their applications. Springer. 2017, p. 3–13.
Brandes U. On variants of shortest-path betweenness centrality and their generic computation. Soc Netw. 2008;30(2):136–45.
Eppstein D, Wang J. Fast approximation of centrality. In: Proceedings of the twelfth annual ACM-SIAM symposium on discrete algorithms. Society for Industrial and Applied Mathematics. 2001, p. 228–9.
Chehreghani MH. An efficient algorithm for approximate betweenness centrality computation. Comput J. 2014;57(9):1371–82.
Bromberger S, Klymko C, Henderson K, Pearce R, Sanders G. Improving estimation of betweenness centrality for scale-free graphs. Livermore: Lawrence Livermore National Lab; 2017.
Borassi M, Natale E. Kadabra is an adaptive algorithm for betweenness via random approximation. arXiv preprint arXiv:1604.08553. 2016.
Mumtaz S, Wang X. Identifying top-k influential nodes in networks. In: Proceedings of the 2017 ACM on conference on information and knowledge management. ACM. 2017, p. 2219–22.
Sariyüce AE, Saule E, Kaya K, Çatalyürek ÜV. Shattering and compressing networks for betweenness centrality. In: Proceedings of the 2013 SIAM international conference on data mining. SIAM. 2013, p. 686–94.
Sariyüce AE, Kaya K, Saule E, Çatalyürek ÜV. Graph manipulations for fast centrality computation. ACM Trans Knowl Discov Data. 2017;11(3):26.
Erdős D, Ishakian V, Bestavros A, Terzi E.: A divide-and-conquer algorithm for betweenness centrality. In: Proceedings of the 2015 SIAM international conference on data mining. SIAM. 2015, p. 433–41.
Baglioni M, Geraci F, Pellegrini M, Lastres E. Fast exact computation of betweenness centrality in social networks. In: Proceedings of the 2012 international conference on advances in social networks analysis and mining (ASONAM 2012). IEEE Computer Society. 2012, p. 450–6.
Li Y, Li W, Tan Y, Liu F, Cao Y, Lee KY. Hierarchical decomposition for betweenness centrality measure of complex networks. Sci Rep. 2017;7:46491.
Chehreghani MH, Bifet A, Abdessalem T. Efficient exact and approximate algorithms for computing betweenness centrality in directed graphs. arXiv preprint arXiv:1708.08739. 2017.
Sariyüce AE, Kaya K, Saule E, Çatalyürek ÜV. Betweenness centrality on gpus and heterogeneous architectures. In: Proceedings of the 6th workshop on general purpose processor using graphics processing units. ACM. 2013, p. 76–85.
Wang W, Tang CY. Distributed estimation of betweenness centrality. In: 2015 53rd Annual allerton conference on communication, control, and computing (Allerton). IEEE. 2015, p. 250–7.
Shi Z, Zhang B. Fast network centrality analysis using gpus. BMC Bioinf. 2011;12(1):149.
McLaughlin A, Bader DA. Scalable and high performance betweenness centrality on the gpu. In: Proceedings of the international conference for high performance computing, networking, storage and analysis. IEEE Press. 2014, p. 572–83.
Bader DA, Madduri K. Parallel algorithms for evaluating centrality indices in real-world networks. In: International conference on parallel processing, 2006. ICPP 2006. IEEE. 2006, p. 539–50.
Pande P, Bader DA. Computing betweenness centrality for small world networks on a gpu. In: 15th Annual High performance embedded computing workshop (HPEC). 2011.
Bernaschi M, Carbone G, Vella F. Scalable betweenness centrality on multi-gpu systems. In: Proceedings of the ACM international conference on computing frontiers. ACM. 2016, p. 29–36.
Bernaschi M, Bisson M, Mastrostefano E, Vella F. Multilevel parallelism for the exploration of large-scale graphs. IEEE transactions on multi-scale computing systems. 2018.
Ostrowski DA. An approximation of betweenness centrality for social networks. In: 2015 IEEE international conference on semantic computing (ICSC). IEEE. 2015, p. 489–92.
Mahmoody A, Tsourakakis CE, Upfal E. Scalable betweenness centrality maximization via sampling. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM. 2016, p. 1765–73.
Bergamini E, Meyerhenke H. Approximating betweenness centrality in fully dynamic networks. Internet Math. 2016;12(5):281–314.
Lü L, Chen D, Ren XL, Zhang QM, Zhang YC, Zhou T. Vital nodes identification in complex networks. Phys Rep. 2016;650:1–63.
Kourtellis N, Alahakoon T, Simha R, Iamnitchi A, Tripathi R. Identifying high betweenness centrality nodes in large social networks. Soc Netw Anal Mining. 2013;3(4):899–914.
Dolev S, Elovici Y, Puzis R. Routing betweenness centrality. JACM. 2010;57(4):25.
Hinne M. Local approximation of centrality measures. The Netherlands: Radboud University Nijmegen; 2011.
Leskovec J, Sosič R. Snap: A general-purpose network analysis and graph-mining library. ACM Trans Intell Syst Technol. 2016;8(1):1.
Borwey J, Ahlert D, Obafemi-Ajayi T, Ercal G. A graph-theoretic clustering methodology based on vertex-attack tolerance. In: The twenty-eighth international flairs conference. 2015.
Barefoot C, Entringer R, Swart H. Vulnerability in graphs-a comparative survey. J Combinat Math Combinat Comput. 1987;1:12–22.
Liu YY, Slotine JJ, Barabási AL. Controllability of complex networks. Nature. 2011;473(7346):167.
Moschoyiannis S, Elia N, Penn A, Lloyd D, Knight C. A web-based tool for identifying strategic intervention points in complex systems. CASSTING. 2016.
Liu YY, Slotine JJ, Barabási AL. Control centrality and hierarchical structure in complex networks. PloS ONE. 2012;7(9):e44459.
Savvopoulos S, Moschoyiannis S. Impact of removing nodes on the controllability of complex networks. In: COMPLEX NETWORKS 2017: The 6th international conference on complex networks & their applications. 2017, p. 361–3.
Matta J, Ercal G, Borwey J. The vertex attack tolerance of complex networks. RAIRO Oper Res. 2017. https://doi.org/10.1051/ro/2017008.
Ercal G, Matta J. Resilience notions for scale-free networks. In: Complex adaptive systems. 2013. p. 510–15.
Ercal G. A note on the computational complexity of unsmoothened vertex attack tolerance. CoRR. 2016. http://arxiv.org/abs/1603.08430.
Cozzens M, Moazzami D, Stueckle S. The tenacity of a graph. In: Seventh international conference on the theory and applications of graphs. New York: Wiley; 1995. p. 1111–22.
Mann DE. The tenacity of trees. Ph.D. thesis. Boston: Northeastern University; 1993.
Chvatal V. Tough graphs and hamiltonian circuits. Discr Math. 2006;306(1011):910–7.
Broersma H, Fiala J, Golovach PA, Kaiser T, Paulusma D, Proskurowski A. Linear-time algorithms for scattering number and hamilton-connectivity of interval graphs. J Graph Theory. 2015;79(4):282–99.
Milli L, Rossetti G, Pedreschi D, Giannotti F. Information diffusion in complex networks: the active/passive conundrum. In: Milli L, Rossetti G, editors. International workshop on complex networks and their applications. Berlin: Springer; 2017. p. 305–13.
Yu H, Chen L, Cao X, Liu Z, Li Y. Identifying top-k important nodes based on probabilistic-jumping random walk in complex networks. International workshop on complex networks and their applications. Berlin: Springer; 2017. p. 326–38.
Joneydi S, Khansari M, Kaveh A. An opportunistic network approach towards disease spreading. International workshop on complex networks and their applications. Berlin: Springer; 2017. p. 314–25.
Condon A, Karp RM. Algorithms for graph partitioning on the planted partition model. Random Struct Algorith. 2001;18(2):116–40.
Brandes U, Gaertler M, Wagner D. Experiments on graph clustering algorithms. European symposium on algorithms. Berlin: Springer; 2003. p. 568–79.
Lancichinetti A, Fortunato S, Radicchi F. Benchmark graphs for testing community detection algorithms. Phys Rev E. 2008;78(4):046110.
Fortunato S. Community detection in graphs. Phys Rep. 2010;486(3):75–174.
Staudt CL, Sazonovs A, Meyerhenke H. Networkit: a tool suite for large-scale complex network analysis. Netw Sci. 2016;4(4):508–30.
JM designed and ran the experiments. GE contributed ideas on content and worked on the writing. KS contributed ideas on content. All authors read and approved the final manuscript.
The authors would like to thank Ziyad Al-Ghamdi for genuinely useful help, as well as the anonymous reviewers for the 6th International Conference on Complex Networks and Their Applications for their insightful suggestions.
Southern Illinois University Edwardsville, Edwardsville, IL, USA
John Matta & Gunes Ercal
Southern Illinois University Carbondale, Carbondale, IL, USA
Koushik Sinha
Correspondence to John Matta.
Matta, J., Ercal, G. & Sinha, K. Comparing the speed and accuracy of approaches to betweenness centrality approximation. Comput Soc Netw 6, 2 (2019). https://doi.org/10.1186/s40649-019-0062-5
Betweenness centrality
Approximation algorithms
GPU algorithms | CommonCrawl |
Kinetic & Related Models
December 2019 , Volume 12 , Issue 6
Sonic-supersonic solutions for the two-dimensional pseudo-steady full Euler equations
Yanbo Hu and Tong Li
2019, 12(6): 1197-1228. doi: 10.3934/krm.2019046
This paper is focused on the existence of classical sonic-supersonic solutions near sonic curves for the two-dimensional pseudo-steady full Euler equations in gas dynamics. By introducing a novel set of change variables and using the idea of characteristic decomposition, the Euler system is transformed into a new system which displays a transparent singularity-regularity structure. With a choice of weighted metric space, we establish the local existence of smooth solutions for the new system by the fixed-point method. Finally, we obtain a local classical solution for the pseudo-steady full Euler equations by converting the solution from the partial hodograph variables to the original variables.
Yanbo Hu, Tong Li. Sonic-supersonic solutions for the two-dimensional pseudo-steady full Euler equations. Kinetic & Related Models, 2019, 12(6): 1197-1228. doi: 10.3934/krm.2019046.
Memory effects in measure transport equations
Fabio Camilli and Raul De Maio
Transport equations with a nonlocal velocity field have been introduced as a continuum model for interacting particle systems arising in physics, chemistry and biology. Fractional time derivatives, given by convolution integrals of the time-derivative with power-law kernels, are typical for memory effects in complex systems. In this paper we consider a nonlinear transport equation with a fractional time-derivative. We provide a well-posedness theory for weak measure solutions of the problem and an integral formula which generalizes the classical push-forward representation formula to this setting.
Fabio Camilli, Raul De Maio. Memory effects in measure transport equations. Kinetic & Related Models, 2019, 12(6): 1229-1245. doi: 10.3934/krm.2019047.
Numerical comparison of mass-conservative schemes for the Gross-Pitaevskii equation
Patrick Henning and Johan Wärnegård
2019, 12(6): 1247-1271. doi: 10.3934/krm.2019048
In this paper we present a numerical comparison of various mass-conservative discretizations for the time-dependent Gross-Pitaevskii equation. We have three main objectives. First, we want to clarify how purely mass-conservative methods perform compared to methods that are additionally energy-conservative or symplectic. Second, we shall compare the accuracy of energy-conservative and symplectic methods among each other. Third, we will investigate if a linearized energy-conserving method suffers from a loss of accuracy compared to an approach which requires to solve a full nonlinear problem in each time-step. In order to obtain a representative comparison, our numerical experiments cover different physically relevant test cases, such as traveling solitons, stationary multi-solitons, Bose-Einstein condensates in an optical lattice and vortex pattern in a rapidly rotating superfluid. We shall also consider a computationally severe test case involving a pseudo Mott insulator. Our space discretization is based on finite elements throughout the paper. We will also give special attention to long time behavior and possible coupling conditions between time-step sizes and mesh sizes. The main observation of this paper is that mass conservation alone will not lead to a competitive method in complex settings. Furthermore, energy-conserving and symplectic methods are both reliable and accurate, yet, the energy-conservative schemes achieve a visibly higher accuracy in our test cases. Finally, the scheme that performs best throughout our experiments is an energy-conserving relaxation scheme with linear time-stepping proposed by C. Besse (SINUM, 42(3):934–952, 2004).
Patrick Henning, Johan Wärnegård. Numerical comparison of mass-conservative schemes for the Gross-Pitaevskii equation. Kinetic & Related Models, 2019, 12(6): 1247-1271. doi: 10.3934/krm.2019048.
A kinetic theory approach to model pedestrian dynamics in bounded domains with obstacles
Daewa Kim and Annalisa Quaini
2019, 12(6): 1273-1296. doi: 10.3934/krm.2019049
We consider a kinetic theory approach to model the evacuation of a crowd from bounded domains. The interactions of a person with other pedestrians and the environment, which includes walls, exits, and obstacles, are modeled by using tools of game theory and are transferred to the crowd dynamics. The model allows to weight between two competing behaviors: the search for less congested areas and the tendency to follow the stream unconsciously in a panic situation. For the numerical approximation of the solution to our model, we apply an operator splitting scheme which breaks the problem into two pure advection problems and a problem involving the interactions. We compare our numerical results against the data reported in a recent empirical study on evacuation from a room with two exits. For medium and medium-to-large groups of people we achieve good agreement between the computed average people density and flow rate and the respective measured quantities. Through a series of numerical tests we also show that our approach is capable of handling evacuation from a room with one or more exits with variable size, with and without obstacles, and can reproduce lane formation in bidirectional flow in a corridor.
Daewa Kim, Annalisa Quaini. A kinetic theory approach to model pedestrian dynamics in bounded domains with obstacles. Kinetic & Related Models, 2019, 12(6): 1273-1296. doi: 10.3934/krm.2019049.
Large amplitude stationary solutions of the Morrow model of gas ionization
Walter A. Strauss and Masahiro Suzuki
2019, 12(6): 1297-1312. doi: 10.3934/krm.2019050
We consider the steady states of a gas between two parallel plates that is ionized by a strong electric field so as to create a plasma. We use global bifurcation theory to prove that there is a curve \( \mathcal{K} \) of such states with the following property. The curve begins at the sparking voltage and either the particle density becomes unbounded or the curve ends at the anti-sparking voltage.
Walter A. Strauss, Masahiro Suzuki. Large amplitude stationary solutions of the Morrow model of gas ionization. Kinetic & Related Models, 2019, 12(6): 1297-1312. doi: 10.3934/krm.2019050.
Focusing solutions of the Vlasov-Poisson system
Katherine Zhiyuan Zhang
2019, 12(6): 1313-1327. doi: 10.3934/krm.2019051
We study smooth, spherically-symmetric solutions to the Vlasov-Poisson system and relativistic Vlasov-Poisson system in the plasma physical case. We construct solutions that initially possess arbitrarily small \( C^k \) norms (\( k \geq 1 \)) for the charge densities and the electric fields, but attain arbitrarily large \( L^\infty \) norms of them at some later time.
Katherine Zhiyuan Zhang. Focusing solutions of the Vlasov-Poisson system. Kinetic & Related Models, 2019, 12(6): 1313-1327. doi: 10.3934/krm.2019051.
Mean-field limit of a spatially-extended FitzHugh-Nagumo neural network
Joachim Crevat
We consider a spatially-extended model for a network of interacting FitzHugh-Nagumo neurons without noise, and rigorously establish its mean-field limit towards a nonlocal kinetic equation as the number of neurons goes to infinity. Our approach is based on deterministic methods, and namely on the stability of the solutions of the kinetic equation with respect to their initial data. The main difficulty lies in the adaptation in a deterministic framework of arguments previously introduced for the mean-field limit of stochastic systems of interacting particles with a certain class of locally Lipschitz continuous interaction kernels. This result establishes a rigorous link between the microscopic and mesoscopic scales of observation of the network, which can be further used as an intermediary step to derive macroscopic models. We also propose a numerical scheme for the discretization of the solutions of the kinetic model, based on a particle method, in order to study the dynamics of its solutions, and to compare it with the microscopic model.
Joachim Crevat. Mean-field limit of a spatially-extended FitzHugh-Nagumo neural network. Kinetic & Related Models, 2019, 12(6): 1329-1358. doi: 10.3934/krm.2019052.
A note on two species collisional plasma in bounded domains
Yunbai Cao
We construct a unique global-in-time solution to the two species Vlasov-Poisson-Boltzmann system in convex domains with the diffuse boundary condition, which can be viewed as one of the ideal scattering boundary models. The construction follows a new \( L^{2} \)-\( L^{\infty} \) framework in [3]. To our knowledge this result is the first construction of strong solutions for two species plasma models with self-consistent field in general bounded domains.
Yunbai Cao. A note on two species collisional plasma in bounded domains. Kinetic & Related Models, 2019, 12(6): 1359-1429. doi: 10.3934/krm.2019053.
| CommonCrawl
Erufosine, a novel alkylphosphocholine, in acute myeloid leukemia: single activity and combination with other antileukemic drugs
Michael Fiegl1,
Lars H. Lindner1,2,
Matthias Juergens1,
Hansjoerg Eibl4,
Wolfgang Hiddemann1,3 &
Jan Braess1
Cancer Chemotherapy and Pharmacology volume 62, pages 321–329 (2008)
Alkylphosphocholines represent a new class of cytostatic drugs with a novel mode of action. Erufosine (ErPC3), the first compound of this class that can be administered intravenously, has recently been shown to be active against human tumor and leukemic cell lines.
In order to evaluate the antileukemic potential of ErPC3 in acute myeloid leukemia (AML), the lethal concentration 50% (LC 50) was determined using the WST-1 assay. For analysis of cell death, staining for Annexin V and activated caspase 3 was performed. An interaction analysis was performed by calculation of the combination index and construction of isobolograms.
The LC 50 was 7.4 μg/ml after 24 h and 3.2 μg/ml after 72 h in HL 60 cells and 30.1 and 8.6 μg/ml, respectively, in 19 fresh samples from patients with AML. ErPC3 was found to be cytotoxic in HL60 cells with distinct activation of caspase 3. ErPC3 was not cross-resistant with cytarabine, idarubicine and etoposide, as shown by correlation analysis of the respective LC 50s. The latter agents, however, exerted an additive cytotoxicity in combination with ErPC3 as revealed by isobologram analysis and combination index, although results are uneven for idarubicine.
Based on these data ErPC3 appears as a novel antileukemic candidate drug, which needs to be explored further in the treatment of AML.
The combination of cytarabine with an anthracycline has remained the therapeutic standard in the treatment of acute myeloid leukemia (AML) for over three decades. The improvements in remission rates and long-term survival achieved during that time were due to intensification of therapy and improvements in supportive care. Clearly there is a need for new promising antileukemic agents.
Alkylphosphocholines represent a new class of lipophilic drugs that interact with the cell membrane [16, 18, 27], the so-called "membrane-active lipids". They also modulate intracellular signal transduction pathways. Their exact mechanism of action still remains to be elucidated in detail. To date they have been shown to induce apoptosis in many cancer models [9, 16] and also in leukemic cell lines [7, 14, 15]. Various interactions with intracellular signaling have been found, e.g., activation of c-myc [20] and inhibition of MAPK [23] and PI3K/Akt survival pathways [24]. As cytarabine in its active form as Ara-CDP-choline is supposed to interact with cellular membranes as well [2, 13], it could represent a very interesting combination partner for erufosine.
The class of alkylphosphocholines comprises various compounds, of which only one, hexadecylphosphocholine (miltefosine, HePC), has been introduced into clinical practice. HePC can be used topically for local control of skin metastases of breast cancer [17] and orally for eradication of visceral leishmaniasis [10]. The major disadvantage of this class of compounds is their poor gastrointestinal tolerability after oral application, leading to insufficient plasma levels for systemic cancer treatment. An intravenous application of micelle-forming HePC and analogous alkylphosphocholines is impossible due to intravascular hemolysis. In addition, the recent experience with alkylphosphocholines as drugs in the treatment of cancer and leishmaniasis clearly demonstrated that the use of saturated alkyl chains results in a narrow therapeutic index. More favored and superior are alkylphosphocholines with one cis-double bond in the alkyl chain, for instance oleoylphosphocholine and erucylphosphocholine. These compounds have better efficacy and a much larger safety margin [1, 25].
One prototype of the alkylphosphocholines with a cis-double bond is erucylphosphohomocholine (erufosine, ErPC3), which can be applied intravenously in animals and has shown high efficacy against animal tumor models and various tumor and leukemic cell lines in vitro [7, 9].
The present study investigated the antileukemic potential of erufosine in AML in vitro both in leukemic cells lines and fresh patient samples. In addition, the mode of action of erufosine in AML and possible synergisms with the most commonly used drugs cytarabine, idarubicine (as an example for anthracyclines) and etoposide (as topoisomerase II inhibitor) were investigated.
Patients and cell culture
Samples from adult patients (≥18 years) with a first diagnosis of AML were included in this study. Diagnosis was based on cell morphology according to FAB criteria, complemented by cytochemistry and immunophenotyping. A bone marrow infiltration of >70% and collection of a sufficient number of bone marrow cells (>10^8 cells) during the initial diagnostic bone marrow aspiration were required. The conduct of the study was performed in accordance with the Declaration of Helsinki and was approved by the local ethics committee. All patients were informed of the investigational nature of the study and gave their informed consent.
Leukemic blasts were collected by bone marrow aspiration and subsequently subjected to Ficoll hypaque centrifugation. Prior studies have shown that following Ficoll hypaque centrifugation a purity of >90% of leukemic blasts can be obtained. Cells were diluted to a concentration of 1 × 10^7 cells/ml in Iscove's Modified Dulbecco's Medium (IMDM), which was supplemented with 20 mM HEPES, 100 μg/ml streptomycin, 10 mM L-glutamine and 10% FCS. Cultures were kept at 37°C, 5% CO2 and 95% humidity. Before incubation, viability was checked by the trypan blue exclusion test and only samples with a viability exceeding 80% were accepted for subsequent experiments. Blasts from the HL 60 cell line were processed analogously for some of the experiments.
Drugs and media
Erufosine was kindly provided by H. Eibl and L. Lindner. Cytarabine (AraC) was obtained from CellPharm (Hannover, Germany), idarubicine was from Pharmacia (Karlsruhe, Germany) and etoposide was from Hexal (Holzkirchen, Germany). Cell culture medium (IMDM), PBS, HEPES, streptomycin, L-glutamine and fetal calf serum were purchased from Gibco Life Technologies (Eggenstein, Germany). WST-1 reagent was available from Roche Diagnostics (Mannheim, Germany). An OPTImax ELISA reader by Molecular Devices (Munich, Germany) was used.
Determination of LC 50
The overall viability of cell samples was measured by the WST-1 assay. Briefly, this assay is based on the cleavage of the tetrazolium salt WST-1 to a (colored) formazan dye by the mitochondrial dehydrogenases of viable cells. The signal can be detected spectrophotometrically in an ELISA reader and directly correlates with the number of viable and metabolically active cells in the sample. The assay was performed in 96-well plates with 100 μl of a suspension of 10^6 AML blasts/ml.
Following drug exposure with increasing concentrations (ErPC3: 0.5–100 μg/ml, AraC: 0.1–100 μg/ml, idarubicine: 0.1–1 μg/ml, etoposide: 0.1–50 μg/ml) for exposure times of 24 up to 96 h, 10 μl of WST-1 reagent was added to the 100 μl cell suspension for a further incubation of 4 h. The ELISA reader was set at a wavelength of 450 nm with a reference wavelength of 690 nm. The results following drug exposure were calculated as a percentage relative to untreated controls and plotted in semi-logarithmic dose-effect curves. Using these data points a curve was fitted (4-parameter fit: y = (A − D)/[1 + (x/C)^B] + D [19]), which was then used to interpolate the LC 50 (the hypothetical concentration that would reduce the viability of the cell population to 50% as compared to the untreated control).
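As an illustration of this fitting step, here is a minimal Python sketch using SciPy; the function names and starting values are our own choices, not the software actually used for the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_parameter_logistic(x, A, B, C, D):
    # y = (A - D) / (1 + (x / C)**B) + D
    return (A - D) / (1.0 + (x / C) ** B) + D

def fit_lc50(concentrations, viability_percent):
    # Fit the dose-effect curve and invert it at y = 50 to interpolate the
    # LC 50.  Starting values: top ~100%, bottom ~0%, C near the median dose.
    x = np.asarray(concentrations, dtype=float)
    y = np.asarray(viability_percent, dtype=float)
    p0 = [100.0, 1.0, float(np.median(x)), 0.0]
    (A, B, C, D), _ = curve_fit(four_parameter_logistic, x, y, p0=p0, maxfev=10000)
    lc50 = C * ((A - D) / (50.0 - D) - 1.0) ** (1.0 / B)
    return lc50, (A, B, C, D)
```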
Determination of concentration coefficient N
The concept of the concentration coefficient N is described in detail elsewhere [4]. In brief, it is based on the finding that the equation C × T = constant (C = concentration of drug, T = exposure time of the drug) is not sufficient to describe the pharmacodynamic relationship between both parameters for several cytotoxic agents (e.g., cytarabine) to achieve a certain cell kill. Therefore, the concentration is modified with an index N, resulting in the equation C^N × T = constant. When N = 1, C and T are equally important in achieving a certain effect, e.g., the LC 50. When N > 1, C is more important for a drug than T, i.e., halving C cannot be compensated by doubling T. N < 1 is found for drugs that depend on long exposure times rather than on high concentrations. The detailed method to determine N has been described by our group recently [4, 6]. In brief, double logarithmic transformation of a time/concentration isoeffective analysis results in a linear relationship from which the parameter N can be extracted as the negative gradient.
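A small sketch of how N can be extracted from isoeffective concentration/time pairs, following the description above (our own illustration, not the original code from [4, 6]):

```python
import numpy as np

def concentration_coefficient(concentrations, exposure_times):
    # C^N * T = constant  =>  log T = const - N * log C, so N is the negative
    # slope of the log(T)-versus-log(C) line through isoeffective (C, T) pairs.
    logC = np.log(np.asarray(concentrations, dtype=float))
    logT = np.log(np.asarray(exposure_times, dtype=float))
    slope, _ = np.polyfit(logC, logT, 1)
    return -slope

# Example: if halving the concentration must be compensated by doubling the
# exposure time, the pairs are isoeffective with N = 1.
print(concentration_coefficient([7.0, 3.5], [24, 48]))  # ~1.0
```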
FACS analysis for detection of apoptosis
The cells from the HL 60 cell line were incubated at a concentration of 10^6 cells/ml for 78 h and then exposed to 50 μg/ml of ErPC3 for 4 h. The cells were then washed with ice-cold PBS buffer. For Annexin V staining, a commercially available kit by Becton Dickinson was used. In brief, the cells were washed twice in ice-cold PBS buffer and diluted tenfold. The cells were then stained with Phycoerythrin (PE) conjugated antibodies against Annexin V and counterstained with 7-Amino-actinomycin D (7-AAD, a color dye enriched in dead cells). After incubation for 15 min in the dark at room temperature, cells were diluted with 300 μl buffer and then measured without gating within 1 h on a FACSCalibur with CellQuest software (Becton Dickinson, Heidelberg, Germany) on an Apple Macintosh computer.
In addition, the active caspase 3 kit from Becton Dickinson was used. In brief, after exposure to ErPC3 as mentioned above, the cells were fixed and permeabilized with the Cytofix/Perm buffer provided within the kit for 30 min on ice. The cells were then washed twice and diluted tenfold. Staining was performed with PE conjugated anti-caspase 3 antibodies at 4°C in the dark for 20 min. Afterwards, the cells were washed and measured as mentioned above. All experiments were performed in triplicate.
Interaction analysis
For the interaction analysis, fixed combination ratios were used, determined by the relationship of the respective LC 50s (in ng/ml) in HL60 cells: erufosine:AraC = 1:0.2 (molar ratio 1:0.42), erufosine:idarubicine = 1:0.0025 (molar ratio 1:0.003) and erufosine:etoposide = 1:0.2 (molar ratio 1:0.17). The combination index (CI) was calculated according to the following formula [12]:
$$ \mathrm{CI} = \frac{C_{\mathrm{c}}X}{C_{\mathrm{i}}X} + \frac{C_{\mathrm{c}}Y}{C_{\mathrm{i}}Y} + \alpha \frac{(C_{\mathrm{c}}X)(C_{\mathrm{c}}Y)}{(C_{\mathrm{i}}X)(C_{\mathrm{i}}Y)} $$
where \(C_{\mathrm{c}}X\) is the LC 50 of drug X in combination with drug Y, \(C_{\mathrm{i}}X\) is the individual LC 50 of drug X, \(C_{\mathrm{c}}Y\) is the LC 50 of drug Y in combination with drug X and \(C_{\mathrm{i}}Y\) is the individual LC 50 of drug Y; \(\alpha\) = 1 or 0 depending on whether both drugs are assumed to be mutually nonexclusive or mutually exclusive in their action, respectively. According to this method, synergy is assumed when the CI is less than 1, additivity when the CI equals 1 and antagonism when the CI is greater than 1.
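A direct transcription of this formula into code might look as follows; the function and argument names are ours, purely for illustration.

```python
def combination_index(CcX, CiX, CcY, CiY, mutually_exclusive=True):
    # CI = CcX/CiX + CcY/CiY + alpha * (CcX*CcY)/(CiX*CiY),
    # with alpha = 0 for mutually exclusive drugs and 1 for nonexclusive ones.
    alpha = 0.0 if mutually_exclusive else 1.0
    return CcX / CiX + CcY / CiY + alpha * (CcX * CcY) / (CiX * CiY)

# Interpretation used here: CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism.
```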
For isobologram analysis, we used the method of Steel and Peckham [26], modified by Kano et al. [11]. For detailed description and theoretical background, please refer to these publications. In brief, the isobologram shows the relationship of the dose of drugs X and Y to achieve a certain effect, in this case LC 50. From the two individual LC 50s derived by dose-response curves and the calculation of Mode I and Mode II lines (see below), the so-called "envelope of additivity" can be drawn (see Fig. 1). The measured LC 50s of the combination of both drugs falling in this area are assumed to be additive to each other (example point A), combinations falling below are assumed to be supra-additive (example point B) while combination values above the envelope are assumed to be sub-additive (example point C). The combination points above the square defined by both individual LC 50s are defined to be protective, i.e., antagonistic (example point D). Note that not absolute concentrations but relative values were used for the calculation.
Schema of isobologram analysis (for detailed information refer to text). The grey area enclosed by Mode I and Mode IIb line defines the envelope of additivity. Point A represents an additive combination of drug X and drug Y. Combinations (point B) falling under this area are supraadditive, combinations above subadditive (point C). Combinations beyond the square defined by LC 50s of individual drug X and drug Y values are considered to be protective (or antagonistic, point D)
Mode I line
When the dose of drug X is chosen, there remains an increment in effect to be produced by drug Y. If the two drugs act independently, the addition is performed by taking the dose increments, starting from 0, that give survivals which add up to the LC 50.
Mode IIa line
When the dose of drug X is chosen, an isoeffect curve can also be calculated by taking the dose increment of drug Y that gives the required contribution to the total effect up to the limit, in this case LC 50.
Mode IIb line
Analogous to Mode IIa, but for drug Y.
Patients' characteristics
Samples of 19 patients with AML were examined. Median age was 62.5 years (40–84 years) with 15 male and 4 female patients. Distribution of FAB subtypes was as follows: M0 three patients, M1 four patients, M2 seven patients, M4 four patients and M5 one patient. Unfavorable karyotypes were seen in eight patients and normal karyotypes and those with unknown significance were observed in five patients. In six patients, no karyotype was assessable. Classification in prognostic subgroups was performed as follows: favorable [inv(16), t(8;21), t(15;17)], unfavorable [−7, −5, 5q-, 7q-, inv(3), aberrations involving 11q23 and complex aberrations (three or more numerical or structural aberrations without involvement of prognostic favorable aberrations)], normal [46 (XY), 46 (XX)] and unknown significance [all others].
Determination of LC 50 and concentration coefficient in HL 60 and AML blasts
HL 60 cells
We determined the mean LC 50 of ErPC3 in HL 60 cells after 24 h of exposure as 7.0 μg/ml (median 6.6 μg/ml, range 4.0–11.0). The respective mean LC 50 after 72 h was 3.2 μg/ml (median 3.2 μg/ml, range 2.1–4.3). A typical example of one experiment is shown in Fig. 2. The mean LC 50 for cytarabine was 2.0 μg/ml after 24 h and 0.8 μg/ml after 72 h. For idarubicine, the LC 50 values were 0.02 and 0.004 ng/ml and for etoposide 2.8 and 0.4 ng/ml, respectively.
Example of WST-1 assay in HL 60 cells after 24 h exposure to ErPC3. Dots represent the relative survival according to an untreated control at a certain concentration of ErPC3. From the resulting line, the LC 50 can be extracted
According to the equation given in the section "Patients and methods" the concentration coefficient N was therefore calculated as a mean of 1.9 (median 1.4).
Patient samples
The median LC 50 of ErPC3 in freshly collected blasts from patients with AML was determined as 30.1 μg/ml (mean 29.2 μg/ml, range 10.7–60.9) after 24 h of exposure. The respective median LC 50 after 96 h was 8.6 μg/ml (mean 8.9 μg/ml, range 0.6–16.7). For cytarabine, the mean LC 50 was 20.7 μg/ml after 24 h and 0.6 μg/ml after 96 h. For idarubicine, the LC 50 values were 1.7 and 0.9 ng/ml and for etoposide 34.6 and 14.0 ng/ml, respectively.
The concentration coefficient N for ErPC3 was calculated individually for each sample, and the median value was then calculated from these data. N was therefore determined as 1.2 (mean 2.7, range 0.4–11.4), implying a similar relevance of both concentration C and exposure time T for the cytotoxic effect.
Analysis of cytotoxicity of ErPC3 in HL 60 cells
To characterize the mode of cell death inflicted by ErPC3, we used FACS analysis in HL 60 cells exposed to ErPC3. Only cells that stained positive for Annexin V and negative for 7-AAD were analyzed by 4-quadrant statistics. This fraction represents cells in the early stage of apoptosis. The endogenous apoptotic fraction in HL 60 cells, as shown by positive staining for Annexin V [mean 21.4% (median 20.0%)], was significantly raised by exposure to ErPC3. After 4 h of exposure to ErPC3, the apoptotic fraction was significantly (P < 0.05) increased 4.5-fold to a mean of 97.7% (median 97.6%). Figure 3a (endogenous apoptosis) and Fig. 3b (apoptosis after 4-h exposure to ErPC3) show representative examples.
Representative FACS analysis of apoptosis by ErPC3 in HL 60 cells. Upper line: Dot blots of Annexin V/7-AAD staining (a endogenous, b after exposure to ErPC3). Lower line: Histogram blots of active caspase 3 staining (c endogenous, d after exposure to ErPC3)
For further analysis of cell death, the number of cells with activated caspase 3 as one relevant inductor of classical apoptosis was measured. A mean of 10.9% (median 11.5%) of HL 60 cells stained positive for endogenous activated forms of caspase 3, and this fraction was significantly doubled (P = 0.02) by a 4-h exposure to ErPC3 (mean 21.8%, median 21.5%). Figure 3c (endogenous activation of caspase 3) and Fig. 3d (after 4-h exposure with ErPC3) are showing histogram plots of FACS analysis of one representative experiment.
Analysis of synergy of ErPC3 with drugs in HL 60 cells and AML samples
Cross-resistance
For detection of cross-resistance, the logarithmic values of the respective LC 50s (ErPC3 versus cytarabine, idarubicine and etoposide) obtained in each individual experiment were correlated (Pearson's test). We were unable to observe significant cross-resistance (which would be represented by a significant linear correlation) between ErPC3 and the established antileukemic agents in the AML samples after 96 h of incubation. The correlation coefficient r was −0.03 (P = 0.9) for cytarabine, r = 0.04 (P = 0.9) for idarubicine and r = 0.2 (P = 0.5) for etoposide.
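The cross-resistance test amounts to a correlation of log-transformed LC 50s; a minimal sketch, assuming SciPy is available and that the two input arrays hold paired per-sample LC 50s:

```python
import numpy as np
from scipy.stats import pearsonr

def cross_resistance(lc50_drug_a, lc50_drug_b):
    # Correlate log-transformed LC 50s of two drugs across the same samples;
    # a significant positive correlation would indicate cross-resistance.
    r, p = pearsonr(np.log10(lc50_drug_a), np.log10(lc50_drug_b))
    return r, p
```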
Interaction analysis in HL 60 cell line
As it is unknown whether ErPC3 and cytarabine, etoposide and idarubicine are mutually exclusive or nonexclusive in their mechanisms of action, we calculated the CI for both alternatives (α = 0 and α = 1). The CI was calculated individually for each experiment, and the mean was then taken over the resulting values. The results are given in Table 1. For cytarabine, additive interactions can be assumed, with a minimum mean CI of 1.0 (for α = 0 after 24 h) and a maximum mean value of 1.5 (for α = 1 after 72 h). For etoposide, mean values ranged between 1.2 (for α = 0 after 24 h) and 1.8 (for α = 1 after 72 h), indicating (sub-)additivity. However, the combination of idarubicine with ErPC3 resulted in much higher mean values of 1.6 (for α = 0 after 24 h) and 5.6 (for α = 1 after 72 h), respectively. These results imply antagonism for the combination of ErPC3 and idarubicine in HL60 cells.
Table 1 Results of interactions analysis in HL 60 cells and patient samples
For the construction of isobolograms, the LC 50s and Mode I, IIa and IIb lines were based on the mean results of all experiments (results after 72-h exposure were chosen for better accuracy). The data points (dots) for the combinations of each experiment are shown individually in Fig. 4a–c together with the mean value of all combinations (diamonds). The isobologram findings are consistent with the results of the respective combination index analysis. For cytarabine and etoposide (Fig. 4a, b), the individual outcomes (two supraadditive and one subadditive and antagonistic, respectively) accumulate in an additive mean, which projects right into the envelope of additivity in both series. For idarubicine (Fig. 4c), two additive and one antagonistic sample add up to an antagonistic mean value, which accordingly falls into the area of protection.
Isobologram analysis of ErPC3 and AraC, etoposide and idarubicine interaction in HL 60 (upper line) and patient samples (lower line). The grey area surrounded by Mode I, IIa and IIb represents the "envelope of additivity". Dots represent individual experiments. Dots that fall outside the depicted area are marked with their coordinates. Diamonds represent mean value (in HL 60) and median value (in patient samples). For detailed description please refer to text
Interaction analysis in patient samples
For calculation of the CI, the CI from each individual experiment was calculated and the median CI was assessed. The results of each combination of ErPC3 with cytarabine, etoposide and idarubicine are also given in Table 1. For the ErPC3–cytarabine combination, median values ranged between 0.7 (for α = 0 after 24 h) and 1.6 (for α = 1 after 96 h) and therefore predict an additive effect. A similar result can be assumed for the combination of ErPC3 and etoposide, with median values of 1.0 (for α = 0 after 24 h) and 1.3 (for α = 1 after 24 h). The results for the ErPC3–idarubicine combination, with median values of 1.1 (for α = 0 after 24 h) and 2.3 (for α = 1 after 24 h), predict a (sub-)additive outcome.
To construct an isobologram for analysis of the different patient samples, we chose the median values of the different LCs (LC 10–LC 50) obtained in the experiments after 96-h exposure to assess the LC 50s and Mode I, IIa and IIb lines. The individual data points of combinations for each individual experiment were then drawn separately into the graph as dots, with the median of all results depicted as a diamond. The respective isobologram analyses are given in Fig. 4d–f. Figure 4d shows the result for ErPC3 and cytarabine. Most of the data points, together with the median value, project under the envelope of additivity and therefore indicate supraadditivity. For etoposide (Fig. 4e), the median value falls into the envelope of additivity, although the coordinates of several data points also project into the area of protection. Results of both analyses are in line with the results of the combination indices. In contrast, for idarubicine and ErPC3, the result of the isobologram analysis implies mutual supraadditivity in patient samples, as the median value drops below the envelope of additivity (Fig. 4f), despite several data points projecting into the area of protection.
The presented data show that ErPC3 exerts a high antileukemic activity in an AML cell line and also in freshly collected samples from patients with AML. The LC 50 as a surrogate marker for cell-killing potential was 7.0 μg/ml (13.9 μM) and 3.2 μg/ml (6.4 μM) after 24 and 72 h of exposure in HL 60 cells, and 30.1 μg/ml (59.8 μM) after 24 h and 8.6 μg/ml (17.1 μM) after 96 h in patient samples. From these data, the concentration coefficient N for ErPC3 can be determined as 1.9 for HL 60 cells and 1.2 for patient samples, and is therefore close to 1 in both settings. Concluding from our experience of ex vivo drug measurement in AML [4, 6], exposure concentration as well as exposure time is (almost) equally important for this drug in AML, especially in patient samples. Taking these data into account, the LC 50 determined for HL 60 cells in our study is higher than in previous studies, which found the IC 50 for HL 60 cells to be 1.54 μM [7]. The finding of an even higher LC 50 for ErPC3 in patient samples, 59.8 μM, stresses the relevance of in vitro drug testing in fresh samples instead of model cell lines. A detailed pharmacokinetic analysis of ErPC3 is currently being performed in the context of an ongoing phase I clinical study; preliminary data, however, show that plasma concentrations of more than 30 μg/ml ErPC3 can be achieved in humans without toxicity (personal communication by L. Lindner). The in vitro analysis of ErPC3 performed in this study gives reasonable options for how the desired effect can be achieved by modulation of peak plasma concentration and exposure time.
The observed decrease of tetrazolium metabolism by ErPC3 in the WST-1 assay cannot be explained by growth inhibition alone and is therefore not termed IC 50 (inhibition of 50% cell growth). Rather, we were able to show that ErPC3 significantly increased the Annexin V positive fraction in HL 60 cells by 4.5-fold. The cell death seems to be executed by apoptosis via activation of caspase 3, whose activity was doubled after exposure to ErPC3. This finding is in line with other experiments that showed classical apoptotic features of leukemic cell lines such as DNA fragmentation, cell shrinkage and formation of apoptotic bodies after exposure to alkylphosphocholines [5, 14]. Interestingly, no cross resistance was seen in patient samples between ErPC3 and cytarabine, etoposide and idarubicin, which implicates a functionally different mode of action of the alkylphosphocholines. To test their interaction in vitro, we performed simultaneous exposure of ErPC3 and AraC, etoposide and idarubicin in HL 60 cells and patient samples and analyzed the results by CI and isobologram analysis. The CI was calculated for both α = 0 and α = 1; however, it seems unlikely that ErPC3 interferes with nucleoside metabolism or topoisomerase activity and that the drugs could therefore be considered mutually exclusive in their action. In HL 60 cells the combination of ErPC3 with AraC and etoposide was additive by CI and isobologram analysis. For idarubicin, the CI after 72 h (especially for α = 1) as well as the isobologram analysis suggested antagonism. In patient samples these results were consistent, except for idarubicin, which was found to be (sub-)additive by CI and supraadditive by isobologram analysis. The reason for these different results remains unclear; however, with respect to possible clinical applications, the outcome in fresh patient samples seems more relevant than that in AML model cell lines. With caution, it can therefore be concluded that these drugs augment each other in an additive way.
The question of "synergy" (i.e., supra-additivity) between ErPC3 and the other antileukemic agents may therefore be answered with "no"; however the definition of synergy as a larger effect of a combination of two (or more) drugs than the predicted effect of combined agents is far too simple. According to W. Greco, "the main rationale for combining anticancer agents (which are also effective individually) in the clinic is that by combining agents with non-overlapping toxic effects, more total tumor-killing poisons can be applied without increasing the overall toxicity to the host beyond acceptable limits" [8]. Especially in this context, alkylphosphocholines seem an attractive combination partner in the treatment of leukemias, as they are lacking myelosuppression as the major side effect of all other antileukemic agents. Rather, treatment with HePC results in an increase in leukocyte and thrombocyte count with a maximum effect of up to 20 G/l leucocytes and 700 G/l thrombocytes after 4 weeks [21, 22]. This clinical finding was confirmed by the observation of co-stimulatory effects of HePC on granulopoesis [28] and thrombopoesis by increased synthesis and excretion of interleukin-6, thrombopoetin and granulocyte-macrophage colony-stimulating factor in vitro [3]. ErPC3 may feature similar effects but this has to be confirmed in man; however whether it stimulates myelopoiesis or just not compromises it remains almost alike, as in both cases the agent would render an ideal combination partner in highly myelotoxic polychemotherapy regimens applied to patients with an already insufficient hematopoiesis.
In conclusion, erufosine possesses a high antileukemic activity in AML in vitro, in both cell lines and patient samples, by induction of apoptosis, and acts additively with antileukemic drugs used in induction and consolidation therapy. This, together with a probably attractive side-effect profile, warrants further clinical studies of this drug in AML. The results of the ongoing phase I clinical trial of erufosine should be interesting with respect to future phase II studies with potential combination partners for ErPC3, which have first been characterized in this study.
Berger MR, Sobottka SB, Konstantinov SM, Eibl H (1998) Erucylphosphocholine is the prototype of i.v. injectable alkylphosphocholines. Drugs Today 43:73–81
Berkovic D, Fleer EA, Breass J, et al (1997) The influence of 1-beta-D-arabinofuranosylcytosine on the metabolism of phosphatidylcholine in human leukemic HL 60 and Raji cells. Leukemia 11:2079–2086
Berkovic D, Bensch M, Bertram J, et al (2001) Effects of hexadecylphosphocholine on thrombocytopoiesis. Eur J Cancer 37:503–511
Braess J, Fiegl M, Lorenz I, Waxenberger K, Hiddemann W (2005) Modeling the pharmacodynamics of highly schedule-dependent agents: exemplified by cytarabine-based regimens in acute myeloid leukemia. Clin Cancer Res 11:7415–7425
Engelmann J, Henke J, Willker W, et al (1996) Early stage monitoring of miltefosine induced apoptosis in KB cells by multinuclear NMR spectroscopy. Anticancer Res 16:1429–1439
Fiegl M, Juergens M, Hiddemann W, Braess J (2007) Cytotoxic activity of the third-generation bisphosphonate zoledronic acid in acute myeloid leukemia. Leuk Res 31:531–539
Georgieva MC, Konstantinov SM, Topashka-Ancheva M, Berger MR (2002) Combination effects of alkylphosphocholines and gemcitabine in malignant and normal hematopoietic cells. Cancer Lett 182:163–174
Greco WR, Faessel H, Levasseur L (1996) The search for cytotoxic synergy between anticancer agents: a case of Dorothy and the ruby slippers? J Natl Cancer Inst 88:699–700
Jendrossek V, Erdlenbruch B, Hunold A, et al (1999) Erucylphosphocholine, a novel antineoplastic ether lipid, blocks growth and induces apoptosis in brain tumor cell lines in vitro. Int J Oncol 14:15–22
Jha TK, Sundar S, Thakur CP, et al (1999) Miltefosine, an oral agent, for the treatment of Indian visceral leishmaniasis. N Engl J Med 341:1795–1800
Kano Y, Ohnuma T, Okano T, Holland JF (1988) Effects of vincristine in combination with methotrexate and other antitumor agents in human acute lymphoblastic leukemia cells in culture. Cancer Res 48:351–356
Kaufmann SH, Peereboom D, Buckwalter CA, et al (1996) Cytotoxic effects of topotecan combined with various anticancer agents in human cancer cell lines. J Natl Cancer Inst 88:734–741
Koehler KA, Hines J, Mansour EG, et al (1985) Comparison of the membrane-related effects of cytarabine and other agents on model membranes. Biochem Pharmacol 34:4025–4031
Konstantinov SM, Eibl H, Berger MR (1998) Alkylphosphocholines induce apoptosis in HL-60 and U-937 leukemic cells. Cancer Chemother Pharmacol 41:210–216
Konstantinov SM, Topashka-Ancheva M, Benner A, Berger MR (1998) Alkylphosphocholines: effects on human leukemic cell lines and normal bone marrow cells. Int J Cancer 77:778–786
Langen P, Maurer HR, Brachwitz H, et al (1992) Cytostatic effects of various alkyl phospholipid analogues on different cells in vitro. Anticancer Res 12:2109–2112
Leonard R, Hardy J, van Tienhoven G, et al (2001) Randomized, double-blind, placebo-controlled, multicenter trial of 6% miltefosine solution, a topical chemotherapy in cutaneous metastases from breast cancer. J Clin Oncol 19:4150–4159
Modolell M, Andreesen R, Pahlke W, Brugger U, Munder PG (1979) Disturbance of phospholipid metabolism during the selective destruction of tumor cells induced by alkyl-lysophospholipids. Cancer Res 39:4681–4686
Molecular Devices Corporation (1998) SOFTmax PRO—user manual, 2.4 edn. Molecular Devices Corporation, Sunnyvale
Mollinedo F, Fernandez-Luna JL, Gajate C, et al (1997) Selective induction of apoptosis in cancer cells by the ether lipid ET-18-OCH3 (Edelfosine): molecular structure requirements, cellular uptake, and protection by Bcl-2 and Bcl-X(L). Cancer Res 57:1320–1328
Planting AS, Stoter G, Verweij J (1993) Phase II study of daily oral miltefosine (hexadecylphosphocholine) in advanced colorectal cancer. Eur J Cancer 29A:518–519
Pronk LC, Planting AS, Oosterom R, et al (1994) Increases in leucocyte and platelet counts induced by the alkyl phospholipid hexadecylphosphocholine. Eur J Cancer 30A:1019–1022
Ruiter GA, Zerp SF, Bartelink H, van Blitterswijk WJ, Verheij M (1999) Alkyl-lysophospholipids activate the SAPK/JNK pathway and enhance radiation-induced apoptosis. Cancer Res 59:2457–2463
Ruiter GA, Zerp SF, Bartelink H, van Blitterswijk WJ, Verheij M (2003) Anti-cancer alkyl-lysophospholipids inhibit the phosphatidylinositol 3-kinase-Akt/PKB survival pathway. Anticancer Drugs 14:167–173
Sobottka SB, Berger MR, Eibl H (1993) Structure-activity relationships of four anti-cancer alkylphosphocholine derivatives in vitro and in vivo. Int J Cancer 53:418–425
Steel GG, Peckham MJ (1979) Exploitable mechanisms in combined radiotherapy–chemotherapy: the concept of additivity. Int J Radiat Oncol Biol Phys 5:85–91
Unger C, Fleer EA, Kotting J, Neumuller W, Eibl H (1992) Antitumoral activity of alkylphosphocholines and analogues in human leukemia cell lines. Prog Exp Tumor Res 34:25–32
Vehmeyer K, Kim DJ, Nagel GA, Eibl H, Unger C (1989) Effect of ether lipids on mouse granulocyte-macrophage progenitor cells. Cancer Chemother Pharmacol 24:58–60
Part of this work was performed in the context of the doctoral thesis of Matthias Juergens.
Laboratory for Leukemia Diagnostics, Department of Internal Medicine III, University Hospital Grosshadern, Ludwig-Maximilians University, Marchioninistr. 15, 81377, Munich, Germany
Michael Fiegl, Lars H. Lindner, Matthias Juergens, Wolfgang Hiddemann & Jan Braess
Institute of Molecular Immunology, GSF-National Research Center for Environment and Health, Munich, Germany
Lars H. Lindner
Clinical Cooperative Group Acute Myeloid Leukemia, GSF-National Research Center for Environment and Health, Neuherberg, Germany
Wolfgang Hiddemann
Max Planck Institute for Biophysical Chemistry, Göttingen, Germany
Hansjoerg Eibl
Correspondence to Jan Braess.
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License ( https://creativecommons.org/licenses/by-nc/2.0 ), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Fiegl, M., Lindner, L.H., Juergens, M. et al. Erufosine, a novel alkylphosphocholine, in acute myeloid leukemia: single activity and combination with other antileukemic drugs. Cancer Chemother Pharmacol 62, 321–329 (2008). https://doi.org/10.1007/s00280-007-0612-7 | CommonCrawl |
Memory-Based Parameter Adaptation
The paper generalizes some approaches in language modelling that seek to overcome shortcomings of neural networks, including the phenomenon of catastrophic forgetting, using memory-based adaptation. Catastrophic forgetting occurs when a neural network performs poorly on old tasks after it has been trained to perform well on a new task. The paper also presents experimental results in which the model is applied to continual and incremental learning tasks.

This is a summary based on the paper, Memory-based Parameter Adaptation by Sprechmann et al.<sup>[[#References|[1]]]</sup>.

= Presented by =
*J.Walton
*J.Schneider
*Z.Abbas
*A.Na
= Introduction =
Model-based parameter adaptation (MbPA) is based on the theory of complementary learning systems which states that intelligent agents must possess two learning systems, one that allows the gradual acquisition of knowledge and another that allows rapid learning of the specifics of individual experiences<sup>[[#References|[2]]]</sup>. Similarly, MbPA consists of two components: a parametric component and a non-parametric component. The parametric component is the standard neural network which learns slowly (low learning rates) but generalizes well. The non-parametric component, on the other hand, is a neural network augmented with an episodic memory that allows storing of previous experiences and local adaptation of the weights of the parametric component. The parametric and non-parametric components therefore serve different purposes during the training and testing phases.
= Model Architecture =
[[File:MbPA_model_architecture.PNG|700px|thumb|center|Architecture for the MbPA model. Left: Training Usage. Right: Testing Setting.]]
== Training Phase ==
The model consists of three components: an embedding network <math>f_{\gamma}</math>, a memory <math>M</math> and an output network <math>g_{\theta}</math>. The embedding network and the output network can be thought of as the standard feedforward neural networks for our purposes, with parameters (weights) <math>\gamma</math> and <math>\theta</math>, respectively. The memory, denoted by <math>M</math>, stores "experiences" in the form of key and value pairs <math>\{(h_{i},v_{i})\}</math> where the keys <math>h_{i}</math> are the outputs of the embedding network <math>f_{\gamma}(x_{i})</math> and the values <math>v_{i}</math>, in the context of classification, are simply the true class labels <math>y_{i}</math>. Thus, for a given input <math>x_{j}</math>

<math>
f_{\gamma}(x_{j}) \rightarrow h_{j},
</math>

<math>
y_{j} \rightarrow v_{j}.
</math>

Note that the memory has a fixed size; thus when it is full, the oldest data is discarded first.

During training, the authors sample a set of <math>b</math> training examples randomly (i.e. mini-batch size <math>b</math>), say <math>\{(x_{b},y_{b})\}_{b}</math>, from the training data, which they input into the embedding network <math>f_{\gamma}</math>, followed by the output network <math>g_{\theta}</math>. The parameters of the embedding and output networks are updated by maximizing the likelihood function (equivalently, minimizing the loss function) of the target values

<math>
p(y|x,\gamma,\theta)=g_{\theta}(f_{\gamma}(x)).
</math>
The last layer of the output network <math>g_{\theta}</math> is a softmax layer, such that the output can be interpreted as a probability distribution. This process is also known as backpropagation with mini-batch gradient descent. Finally, the embedded samples <math>\{(f_{\gamma}(x_{b}),y_{b})\}_{b}</math> are stored into the memory. No local adaptation takes place during this phase.
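The following is a minimal sketch (in Python/NumPy, not the authors' code) of the training-phase memory write; <code>f_gamma</code>, <code>g_theta</code> and <code>optimizer_step</code> are hypothetical callables standing in for the embedding network, the output network and the usual mini-batch gradient update.

<pre>
# Minimal sketch of the MbPA training phase: ordinary supervised training plus a memory write.
import numpy as np
from collections import deque

class EpisodicMemory:
    def __init__(self, capacity=100000):
        # Fixed-size memory: when full, the oldest entries are discarded first.
        self.keys = deque(maxlen=capacity)    # embeddings h_i = f_gamma(x_i)
        self.values = deque(maxlen=capacity)  # class labels y_i

    def write(self, h_batch, y_batch):
        for h, y in zip(h_batch, y_batch):
            self.keys.append(np.asarray(h, dtype=np.float32))
            self.values.append(int(y))

def training_step(f_gamma, g_theta, optimizer_step, memory, x_batch, y_batch):
    h_batch = f_gamma(x_batch)        # embed the mini-batch
    y_pred = g_theta(h_batch)         # softmax class probabilities
    optimizer_step(y_pred, y_batch)   # usual cross-entropy backpropagation update
    memory.write(h_batch, y_batch)    # store (key, value) pairs; no local adaptation here
</pre>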
== Testing Phase ==
During the testing phase, the model will temporarily adapt the weights of the output network <math>g_{\theta}</math> based on the input <math>x</math> and the contents of the memory, <math>M</math>, according to
<math>
\theta^x = \theta + \Delta_M.
</math>
First, <math>x</math> is input into the embedding network, <math>q = f_{\gamma}(x)</math>. Based on the query <math>q</math>, a K-nearest neighbours search over the memory is conducted. The context, <math>C</math>, is the result of this search:

<math>
C = \{(h_k, v_k, w_k^{(x)})\}^K_{k=1}.
</math>
Each of the neighbours has a weighting <math>w_k^{(x)}</math> attached to it, based on how close it is to query <math>q</math>. This calculation is based on the kernel function,
<math>
kern(h,q) = \frac{1}{\epsilon + ||h-q||^2_2}.
</math>
The temporary updates during adaptation are based on maximizing the weighted average of the log likelihood over the neighbours in <math>C</math>, also known as the maximum a posteriori over the context <math>C</math>:

<math>
\max_{\theta^x} \log p(\theta^x | \theta) + \sum^K_{k=1}w_k^{(x)} \log p(v^{(x)}_k | h_k^{(x)}, \theta^x,x).
</math>
Note that the first term here acts as regularization that prevents over-fitting. Unfortunately, this objective does not have a closed-form solution. However, it can be maximized using gradient descent in a fixed number of steps. Each of these steps is calculated via <math>\Delta_M</math>,

<math>
\Delta_M (x, \theta) = - \alpha_M \nabla_\theta \sum^K_{k=1} w_k^{(x)} \log p(v^{(x)}_k | h_k^{(x)}, \theta^x,x)\bigg |_\theta - \beta(\theta - \theta^x),
</math>
where <math>\beta</math> is a hyper-parameter of gradient descent. After a series of gradient descent steps, the weights of the final output network <math>g_{\theta}</math> are temporarily adapted and a prediction is made, <math>\hat y</math>.
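A minimal sketch (not the authors' code) of this test-time adaptation is given below; <code>grad_log_lik</code> (gradient of the per-neighbour log-likelihood with respect to the output-network weights) and <code>g_theta_with</code> (the output network evaluated with a given weight vector) are hypothetical helpers, and the hyperparameter values are illustrative only.

<pre>
# Minimal sketch of MbPA test-time local adaptation (uses EpisodicMemory from the training sketch).
import numpy as np

def kern(h, q, eps=1e-3):
    return 1.0 / (eps + np.sum((h - q) ** 2))

def adapt_and_predict(x, f_gamma, g_theta_with, theta, memory,
                      K=32, alpha_M=0.1, beta=0.01, n_steps=5):
    q = f_gamma(x)                                   # query embedding
    keys = np.stack(list(memory.keys))
    dists = np.sum((keys - q) ** 2, axis=1)
    idx = np.argsort(dists)[:K]                      # K nearest neighbours in the memory
    C = [(keys[i], memory.values[i], kern(keys[i], q)) for i in idx]

    theta_x = theta.copy()                           # temporary copy of the output weights
    for _ in range(n_steps):                         # a few gradient steps on the context only
        grad = sum(w * grad_log_lik(theta_x, h, v) for h, v, w in C)
        # ascend the weighted log-likelihood while pulling theta_x back towards theta
        theta_x = theta_x + alpha_M * grad - beta * (theta_x - theta)
    return g_theta_with(theta_x, q)                  # prediction with the adapted weights
</pre>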
[[File:Figure2.PNG|400px|thumb|center|Local fitting on a regression task given a query (blue) and the context from memory (red).<sup>[[#References|[1]]]</sup>.]]
As can be seen in figure 2, the final prediction <math>\hat y</math> is similar to a weighted average of the values of the K-nearest neighbours.
= Examples =
== Continual Learning ==
Continual learning is the process of learning multiple tasks in a sequence without revisiting a task. The authors consider a permuted MNIST setup, similar to [[#References|[3]]], where each task was given by a different permutation of the pixels. The authors sequentially trained the MbPA on 20 different permutations and tested on previously trained tasks.
The model was trained on 10 000 examples per task, using a 2 layer multi-layer perceptron (MLP) with an ADAM optimizer. The elastic weight consolidation (EWC) method and regular gradient descent were used to estimate the parameters. A grid search was used to determine the EWC penalty cost and the local MbPA learning rate was set as <math>\beta\in(0.0,0.1)</math> and number of steps (n) was <math>n\in[1,20]</math>.
[[File:ContinualLearning.PNG|400px|thumb|center|Results on baseline comparisons on permuted MNIST with MbPA using different memory sizes.]]
The authors used the pixels as the embedding, i.e. <math>f_{\gamma}</math> is the identity function, and looked at regions where episodic memory was small. The authors found that through MbPA only a few gradient steps on carefully selected data from memory is enough to recover performance. They found that MbPA outperformed MLP and worked better than EWC in most cases and found that the performance of MbPA grew with the number of examples stored. They note that the memory requirements were lower than EWC. The lower memory requirements are attributed to the fact that EWC stores all task identifiers, whereas MbPA only stores a few examples. The figure above also shows the results of MbPA combined with other methods. It is noted that MbPA combined with EWC gives the best results.
== Incremental Learning ==
Incremental learning has two steps. First, the model is trained on a subset of the classes found in the training data. The second step is to give it the entire training set and see how long it takes for the model to perform well on the entire set. The purpose of this is to see how quickly the model learns information about new classes and how likely it is to lose information about the old ones. The authors used the ImageNet dataset from [[#References|[4]]], and the initial training set contained 500 out of the 1000 classes.
For the first step, they used three models for comparison: a parametric model, a mixture model, and MbPA. The parametric model they used was Resnet V1 from [[#References|[5]]]. It was used both as the parametric component of MbPA and as a separate model for testing. The non-parametric model used was the memory as described earlier. The memory was created by taking the keys from the second last layer of the parametric model. The mixture model was a convex combination of the outputs of the parametric and non-parametric models, as shown below:
<math>
p(y|q) = \lambda p_{param}(y|q) + (1-\lambda)p_{mem}(y|q).
</math>
<math>\lambda</math> was tuned as a hyperparameter. Finally, MbPA itself used the Resnet V1 parametric model, with its non-parametric component identical to the memory described above. The models were evaluated using their "Top 1" accuracy; that is, the class with the highest output value was taken to be the model's prediction for a given data point in the test set.
[[File:Figure4.PNG|400px|thumb|center|All three models perform similarly on the data they were pre-trained on. On the new classes, the mixture and parametric models perform similarly and MbPA performs much better<sup>[[#References|[1]]]</sup>.]]
There was also a test of how well the models perform on unbalanced datasets. In addition to the previous three, they included a non-parametric model, which was just the memory running without the rest of the network. Since most real-world datasets have different amounts of data in each class, a model that could use unbalanced datasets without becoming biased would have more information available to it for training. The testing here was done similarly to the other incremental learning experiment. The models were trained on 500 of the 1000 classes until they performed well. They were then given a dataset containing all of the data from the first 500 classes and only 10% of the data from the other 500 classes. Accuracy was evaluated using both Top 1 accuracy and AUC (area under the curve). It was found that after 0.1 epochs, MbPA and the non-parametric model performed similarly and much better than the other two by both accuracy metrics. After 1 or 3 epochs, the non-parametric model began to perform worse than the others, while MbPA continued to perform better.
= Conclusion =
The MbPA model can successfully overcome several shortcomings associated with neural networks through its non-parametric, episodic memory. In fact, many other works in the context of classification and language modelling have successfully used variants of this architecture, where traditional neural network systems are augmented with memories. Likewise, the experiments in incremental and continual learning presented in this paper use a memory architecture similar to the Differential Neural Dictionary (DND) used in Neural Episodic Control (NEC) found in [[#References|[6]]], though the gradients from the memory in the MbPA model are not used during training. In conclusion, MbPA presents a natural way to improve the performance of standard deep networks.
= References =
* <sup>[1]</sup> Sprechmann, Pablo; Jayakumar, Siddhant; Rae, Jack; Pritzel, Alexander; Badia, Adria; Uria, Benigno; Vinyals, Oriol; Hassabis, Demis; Pascanu, Razvan; and Blundell, Charles. Memory-based parameter adaptation. ICLR, 2018.
* <sup>[2]</sup> Kumaran, Dharshan; Hassabis, Demis; and McClelland, James. What learning systems do intelligent agents need? Trends in Cognitive Sciences, 2016.
* <sup>[3]</sup> Goodfellow, Ian; Warde-Farley, David; Mirza, Mehdi; Courville, Aaron; and Bengio, Yoshua. Maxout networks. arXiv preprint, 2013.
* <sup>[4]</sup> Russakovsky, Olga; Deng, Jia; Su, Hao; Krause, Jonathan; Satheesh, Sanjeev; Ma, Sean; Huang, Zhiheng; Karpathy, Andrej; Khosla, Aditya; and Bernstein, Michael. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015.
* <sup>[5]</sup> He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; and Sun, Jian. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2016.
* <sup>[6]</sup> Pritzel, Alexander; Uria, Benigno; Srinivasan, Sriram; Puigdomenech, Adria; Vinyals, Oriol; Hassabis, Demis; Wierstra, Daan; and Blundell, Charles. Neural episodic control. ICML, 2017.
Prediction of programmed cell death protein 1 in hepatocellular carcinoma patients using radiomics analysis with radiofrequency-based ultrasound multifeature maps
Qingmin Wang1,
Yi Dong2,
Tianlei Xiao1,
Shiquan Zhang3,
Jinhua Yu1,
Leyin Li1,
Qi Zhang2,
Yuanyuan Wang1,
Yang Xiao3 &
Wenping Wang2
BioMedical Engineering OnLine volume 21, Article number: 24 (2022)
This study explored the feasibility of radiofrequency (RF)-based radiomics analysis techniques for the preoperative prediction of programmed cell death protein 1 (PD-1) in patients with hepatocellular carcinoma (HCC).
The RF-based radiomics analysis method used ultrasound multifeature maps calculated from the RF signals of HCC patients, including direct energy attenuation (DEA) feature map, skewness of spectrum difference (SSD) feature map, and noncentrality parameter S of the Rician distribution (NRD) feature map. From each of the above ultrasound maps, 345 high-throughput radiomics features were extracted. Then, the useful radiomics features were selected by the sparse representation method and input into support vector machine (SVM) classifier for PD-1 prediction.
Results and conclusion
Among all the RF-based prediction models and the ultrasound grayscale comparative model, the RF-based model using all of the three ultrasound feature maps had the highest prediction accuracy (ACC) and area under the curve (AUC), which were 92.5% and 94.23%, respectively. The method proposed in this paper is effective for the meaningful feature extraction of RF signals and can effectively predict PD-1 in patients with HCC.
We proposed an RF-based radiomics analysis method by introducing three ultrasound features, direct energy attenuation (DEA), skewness of spectrum difference (SSD), and the noncentrality parameter S of the Rician distribution (NRD), as the feature extraction method for RF signals; investigated the effectiveness of the RF-based radiomics analysis method in the prediction of the immune checkpoint programmed cell death protein 1 (PD-1); and validated the results against a grayscale-based radiomics analysis method. We also demonstrate a trend in prediction performance and its correlation with the number of ultrasound features.
The results demonstrated that there were significant differences (p < 0.05) in radiomics scores between HCC patients with PD-1 and HCC patients without PD-1. RF-based radiomics analysis method performed well in PD-1 noninvasive preoperative prediction of HCC patients.
In this study, the performance of RF-based radiomics analysis method was better than that of grayscale-based radiomics analysis method in the preoperative prediction of PD-1 in HCC patients. The AUC of DSNM, which was the RF-based radiomics analysis model with three ultrasound feature maps, reached 94.23% in the prediction of PD-1 cell protein in HCC patients.
Hepatocellular carcinoma (HCC) is the second most common cause of cancer-related death [1]. To date, there is no clinical evidence that most of the adjuvant agents studied can improve survival in any stage of HCC [2]. In addition, the prognosis of HCC patients is generally poor. As the most common option for HCC patients, resection with curative intent or ablation is associated with 5-year recurrence rates as high as 75% [3]. Programmed cell death protein 1 (PD-1) could act as an indicative marker for the prognosis of HCC patients after surgical resection and may have a positive impact on the choice of treatment for HCC patients [4]. However, the current detection of PD-1 mainly depends on the immunohistochemical method with pathological tissue obtained by resection or puncture. There is a clear need to find a noninvasive, accurate preoperative PD-1 prediction technique for patients with HCC.
At present, anti-PD-1 antibody has been approved by the Food and Drug Administration (FDA) for the treatment of malignant melanoma and nonsmall cell lung cancer [5, 6]. A recent study found that PD-1 was highly expressed in the peripheral and intratumoral areas of HCC and could predict progression and postoperative recurrence [4]. However, PD-1 detection currently mainly depends on immunohistochemical staining of puncture or resected specimens. There are few studies on PD-1 prediction using other technologies. Although the trauma of puncture is small, the heterogeneity of the tumor and other reasons may cause inaccurate puncture results. Moreover, the puncture channel left in the liver tissue may damage the tumor microenvironment and stimulate the development and spread of the tumor. Therefore, it is urgent to develop a noninvasive and accurate method to predict PD-1 in HCC.
Ultrasonography is the first-line investigative technique for the surveillance of most diseases, as it is relatively low cost, noninvasive, and widely available [7]. Surveillance of HCC with ultrasonography at 6-month intervals is recommended by the current guidelines [2, 8]. Radiomics is a new research method developed in the past decade [9, 10] that uses the computing power of computers to mine medical data in depth and to extract abundant tumor pathophysiology information that cannot be effectively found by the human eye [11, 12]. Therefore, applying radiomics analysis technology in ultrasound to deeply mine pathological information has great clinical development value. To date, it has performed well in breast cancer, HCC, liver fibrosis in chronic hepatitis B, nonsmall cell lung cancer, and so on [13,14,15,16,17,18]. This makes it worthwhile to investigate the performance of radiomics analysis techniques combined with ultrasound data from HCC patients for preoperative PD-1 prediction.
The radiofrequency (RF) signal is the original ultrasonic signal without signal postprocessing (brightness compensation, envelope detection, depth compensation, dynamic range adjustment, etc.). It contains most of the acoustic information, including attenuation, scattering, sound velocity, phase, and so on. In essence, it can provide more information than ultrasonic images [19]. In particular, the large amount of high-frequency information available before envelope detection is well suited to algorithmic processing. However, the large amount of information in the RF signal is also accompanied by excessive noise interference. Even an existing deep learning network with good performance is usually unable to build a successful model directly from RF signals because of the large amount of noise and the abstract labels. At the same time, there is a lack of an effective feature extraction method for establishing a diagnostic model directly from RF signals. Therefore, the signal can be simplified by calculating as many physical parameters as possible, and an intelligent model can then be built on these physical parameter spectra, which is more likely to realize effective deep data mining of RF signals.
The attenuation [20], skewness [21], and Rician distribution [22] are the traditional characteristic parameters in ultrasound. In this study, direct energy attenuation (DEA), skewness of spectrum difference (SSD), and noncentrality parameter S of the Rician distribution (NRD) were used to compose three feature maps. We established an RF-based radiomics analysis method to extract radiomics features from ultrasound feature maps obtained by RF and realized the noninvasive prediction of PD-1 in HCC patients. Our aim was to investigate the value of the RF-based radiomics analysis algorithm in the preoperative prediction of PD-1 in patients with HCC. In summary, the contributions of this paper are as follows:
We proposed the RF-based radiomics analysis method by introducing the three ultrasound features of DEA, SSD, and NRD as the feature extraction method from RF signals, investigated the effectiveness of the RF-based radiomics analysis method in the immune checkpoint prediction of PD-1, and validated the results with contrast testing of the grayscale-based radiomics analysis method in this study. We also demonstrate a trend in prediction performance changes and its correlation with the number of ultrasound features.
The results demonstrated that there were significant differences (p < 0.05) in radiomics scores between HCC patients with PD-1 and HCC patients without PD-1. RF-based radiomics analysis method can realize the noninvasive preoperative prediction of PD-1 in HCC patients.
In this study, the performance of the RF-based radiomics analysis method was better than that of the grayscale-based radiomics analysis method in the preoperative prediction of PD-1 in HCC patients. The AUC of DSNM, which was the RF-based radiomics analysis model with three ultrasound feature maps, reached 94.23% in the prediction of PD-1 in HCC patients (Additional file 1).
Ultrasound features results
In this study, we extracted multiple ultrasound parameters from RF signals, including DEA, SSD, and NRD. These three ultrasound parameters contributed, to varying degrees, to the preoperative prediction of PD-1. They were the basis of the ultrasound radiomics analysis method in this study. We compared the differences in the DEA, SSD, and NRD ultrasound feature parameters between patients with and without PD-1. ANOVA showed no significant differences at the p < 0.05 level between patients with PD-1 and patients without PD-1 in DEA, SSD, and NRD.
Radiomics features results
Through feature extraction, 345 radiomics features were obtained from every ultrasound grayscale image and the DEA, SSD, and NRD ultrasound feature maps. A total of 345 radiomics features were extracted from every patient in the PD-1 prediction model based on ultrasound grayscale image (GM), 345 radiomics features were extracted from every patient in PD-1 prediction model based on ultrasound DEA feature map (DM), 690 radiomics features were extracted from every patient in PD-1 prediction model based on DEA and SSD feature maps (DSM), and 1035 radiomics features were extracted from every patient in PD-1 prediction model based on DEA, SSD, and NRD feature maps (DSNM).
The SRC coefficient represented the importance of each feature for prediction. According to this parameter, the high-throughput features of every model were sorted and the useful features were preliminarily selected. SRC feature selection was the first step of feature dimension reduction. After SRC feature selection, the feature numbers of the GM, DM, DSM, and DSNM PD-1 prediction models were 241, 260, 432, and 564, respectively.
At this time, the feature number of every prediction model was still large. The preliminarily selected features were put into the SVM classifier to realize further feature dimension reduction. When each model used the SVM classifier for training, the first feature of its preliminarily selected features was put into the classifier for training, and the evaluation parameters of ACC, AUC, SPEC, and SENS after training were calculated. Then, the first two features of its preliminarily selected features were selected and put into the classifier for training, and the ACC and other evaluation parameters were also calculated. In this way, different numbers of features were extracted in turn to train the SVM classifier, and the corresponding evaluation parameters were recorded. Finally, the best result was selected through the saved evaluation parameters. The number of features put into the classifier corresponding to the best result was the final feature dimension of this model after the second dimension reduction. These features were also the final features of every PD-1 classification prediction model. In this way, the final feature dimensions of the PD-1 prediction models of GM, DM, DSM, and DSNM were 33, 13, 13, and 10, respectively.
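The following is a minimal sketch (not the code used in this study) of this forward feature-inclusion procedure in Python with scikit-learn; it assumes the feature columns are already sorted by their SRC coefficients, uses leave-one-out cross-validation as in this study, and reports only ACC and AUC for brevity. Function and variable names are illustrative.

# Minimal sketch: add SRC-ranked features one at a time, train an SVM with
# leave-one-out cross-validation, and keep the feature count with the best accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score

def select_feature_count(X_sorted, y):
    # X_sorted: (n_patients, n_features), columns ordered by SRC importance; y: 0/1 labels.
    best = (0, 0.0, 0.0)  # (n_features, ACC, AUC)
    for k in range(1, X_sorted.shape[1] + 1):
        clf = SVC(kernel="rbf", probability=True)
        prob = cross_val_predict(clf, X_sorted[:, :k], y, cv=LeaveOneOut(),
                                 method="predict_proba")[:, 1]
        acc = accuracy_score(y, (prob > 0.5).astype(int))
        auc = roc_auc_score(y, prob)
        if acc > best[1]:
            best = (k, acc, auc)
    return best  # feature count giving the best cross-validated accuracy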
Prediction model results
The performances of the GM PD-1 prediction model and the other RF-based PD-1 prediction models are shown in Table 1. As a contrast experiment, GM used grayscale images, and its AUC and ACC reached 80.77% and 80.00%, respectively. However, the SENS of this model was low, at only 57.14%. The performance of the RF-based DM, DSM, and DSNM models was better than that of GM.
Table 1 Diagnostic performance of GM, DM, DSM, and DSNM for PD-1 classification
AUC, area under the receiver operating characteristic curve; ACC, accuracy; SENS, sensitivity; SPEC, specificity; GM, PD-1 prediction model based on ultrasound grayscale image; DM, PD-1 prediction model based on DEA feature map of RF signals; DSM, PD-1 prediction model based on DEA and SSD feature maps; DSNM, PD-1 prediction model based on DEA, SSD, and NRD feature maps.
The ACC, AUC, and SENS of DSNM were the largest among the three RF-based prediction models. The AUC of DSNM was 94.23% (95% confidence interval [CI] 0.820 to 0.991). The AUCs of DSM and DM reached 88.46% (CI 0.744 to 0.964) and 83.52% (CI 0.684 to 0.933), respectively. With the increase in the number of RF-based ultrasound feature maps, the performance of the PD-1 prediction models for HCC patients gradually improved.
The SVM classifier in the PD-1 prediction model was used to calculate the radiomics score of each HCC patient. The model can predict the presence of PD-1 in HCC patients based on the radiomics score. Figure 1 shows a boxplot of the radiomics scores of the DSNM PD-1 prediction model for HCC patients with and without PD-1. ANOVA showed that there was a significant difference (p < 0.05) between patients with PD-1 and patients without PD-1.
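As an illustration (not the authors' code), the radiomics score can be taken as the SVM decision value for each patient, and the two groups can be compared with a one-way ANOVA using scipy; the names below are illustrative.

# Minimal sketch: compare radiomics scores between PD-1-positive and PD-1-negative groups.
import numpy as np
from scipy.stats import f_oneway

def compare_radiomics_scores(fitted_svm, X, y):
    # fitted_svm: trained SVM classifier; its decision value is used as the radiomics score here.
    scores = fitted_svm.decision_function(X)
    y = np.asarray(y)
    f_stat, p_value = f_oneway(scores[y == 1], scores[y == 0])
    return f_stat, p_value  # p < 0.05 indicates a significant difference between groups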
Boxplot of the radiomics scores of DSNM PD-1 prediction model for HCC patients with and without PD-1
The ROC curves of the grayscale-based and RF-based PD-1 prediction models are shown in Fig. 2. The total areas under the ROC curve can also represent the AUCs of the PD-1 prediction models. The area under the ROC curve of DSNM was 0.94 ± 0.04, which was the largest of the four PD-1 prediction models of HCC patients.
Comparison of the ROC curves of DSNM, DSM, DM, and GM PD-1 prediction models
The PRCs of DSNM, DSM, DM, and GM are shown in Fig. 3. The break-even point (BEP) in the PRC is the basis for judging the performance of the PD-1 prediction models. The BEP is the value when the precision of the model is equal to the recall. The BEP of the DSNM model in Fig. 3 is the intersection of the red curve and the diagonal. Its value is larger than that at the intersections of the other three models and the diagonal. This indicates that the DSNM based on three ultrasound feature maps calculated from RF signals has the best performance of all four models in predicting the PD-1 in HCC patients.
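For reference, the break-even point can be read off a precision-recall curve as in the following minimal sketch (assuming scikit-learn; illustrative only):

# Minimal sketch: break-even point (BEP) of a precision-recall curve,
# i.e., the point where precision and recall are (approximately) equal.
import numpy as np
from sklearn.metrics import precision_recall_curve

def break_even_point(y_true, y_score):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    i = np.argmin(np.abs(precision - recall))  # point closest to the diagonal
    return (precision[i] + recall[i]) / 2.0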
Precision-recall curves of the GM, DM, DSM, and DSNM PD-1 prediction models
This study demonstrated that the combination of ultrasound multifeature maps of RF signals and radiomics analysis was highly effective in predicting PD-1 in HCC patients. This is the first report of the preoperative prediction of PD-1 in HCC patients using radiomics technology based on ultrasound multifeature maps of RF data. Three prediction models based on RF signals and one prediction model based on grayscale images were established in this study. The results showed that the three RF-based prediction models all performed better than the model based on grayscale images in predicting PD-1 in HCC. Among the above three RF-based prediction models, the DSNM model using DEA, SSD, and NRD, three kinds of ultrasound feature maps, performed best, with an AUC of 0.94 ± 0.04. The texture features and wavelet texture features extracted from the three ultrasound feature maps in the DSNM model had the best accuracy for preoperative PD-1 prediction. Thus, RF-based ultrasound multifeature map radiomics analysis had a positive effect on the prediction of this immune checkpoint molecule. In this study, the grayscale-based radiomics analysis contrast test and LOOCV ensured the correctness of the DSNM prediction method.
In radiomics, changes in tumor microproteins, molecules, and genes are closely related to changes in macroscopic medical imaging. Many studies on radiomics have confirmed this phenomenon in CT [9], PET [23, 24], and MRI [25,26,27,28] images. From the above images, researchers extracted features related to the proteins and genes studied. At the same time, radiomics technology based on the above imaging techniques provides assistance for disease diagnosis, clinical decision-making, and prognosis. At present, the development of radiomics in ultrasound is at an initial stage. Since ultrasound is noninvasive, radiation-free, inexpensive, and widely used, the development space of radiomics in ultrasound is very large. Qiao et al. used B-mode ultrasound images to identify benign and malignant breast tumors [14]. Zhang et al. [13] extracted high-throughput radiomics features from ultrasound elastic images for the diagnosis of breast tumors. These are examples of ultrasound image-based radiomics methods. In this study, we used a similar method to establish the GM model to predict the immunosuppressive molecule PD-1 in HCC patients. The AUC of the grayscale image-based GM model was 80.77% in the prediction of the cell surface receptor PD-1. This grayscale image-based model was established for comparison with the RF-based models.
Grayscale images are the most widely used input in ultrasound radiomics. A recent study by Biermann et al. showed that a grayscale image-based radiomics model slightly outperformed ACR scoring by less experienced radiologists in the classification of thyroid nodules [29]. The combination of multiple ultrasound images is one of the development trends of ultrasound radiomics. Xue et al. [30] confirmed that the combination of grayscale images and elasticity images in a transfer learning radiomics model was the most accurate prediction model for liver fibrosis grading (AUCs of 0.950, 0.932, and 0.930 for classifying S4, ≥ S3, and ≥ S2, respectively). Both grayscale images and elasticity images are calculated from RF signals, and the RF signal itself contains more information than these images. However, an effective feature extraction method for establishing a diagnostic model directly from RF is lacking. The study of Wei et al. reported that, using a single modality of RF signals for liver fibrosis staging, the highest accuracy on the verification set was only 0.77 [31]. How to extract as much useful information as possible and improve the utilization efficiency of RF signals is worth exploring. Wei et al. chose to combine multiparametric features, including ultrasound grayscale images, RF signals, and contrast-enhanced micro-flow images, and the highest classification accuracy was 0.12 higher than that of the single RF modality for liver fibrosis staging [31]. In our study, we proposed to calculate three kinds of ultrasound features from RF signals, including DEA, SSD, and NRD, and then extract effective features a second time with the radiomics method, so as to simplify the RF signals and obtain as much useful information from them as possible. The results showed that the performance of the PD-1 prediction models using the DEA, SSD, and NRD ultrasound feature maps was generally better than that of the model using only ultrasound grayscale images. Moreover, the more ultrasound feature maps used, the better the PD-1 prediction performance.
As shown in Figs. 2 and 3, the model using the DEA, SSD, and NRD ultrasound feature maps simultaneously showed the best performance of all the RF-based models using ultrasound feature maps, with an AUC of 94.23%. The attenuation coefficient of biological tissue is closely related to its tissue properties and structural characteristics. High PD-1 expression in diseased tissue may cause differences in acoustic properties such as sound velocity, acoustic impedance, and the acoustic attenuation coefficient. In this study, the DEA coefficient was used as an ultrasound feature to establish the DM preoperative PD-1 prediction model. The AUC of this model reached 83.52%. Studies on the frequency domain analysis of HCC mainly focus on the internal blood flow spectrum. The SSD feature map is a spectral feature and reflects the frequency domain characteristics of the RF signals of the whole ROI of HCC lesions. Indeed, the addition of the SSD ultrasound feature map to the DM model to form the DSM model improved the PD-1 prediction results. Statistical distribution models, such as the Rician distribution, are still lacking in the detection of the PD-1 receptor. High-throughput feature extraction in radiomics technology helps to distinguish the ultrasound imaging changes caused by the accumulation of the tumor PD-1 receptor. The combination of radiomics analysis technology and ultrasound feature maps composed of the ultrasound feature parameters DEA, SSD, and NRD achieved 92.5% ACC in the DSNM model for PD-1 prediction in HCC patients. With the increase in the types of ultrasound feature maps, the predictive performance of the model increased. These results suggest that more ultrasound features should be extracted from RF signals and combined with high-throughput imaging features to fully develop the application of ultrasound at the molecular and protein levels.
ANOVA showed no significant differences in DEA, SSD, and NRD at the p < 0.05 level between patients with PD-1 and patients without PD-1. However, the radiomics scores of HCC patients based on the DSNM PD-1 prediction model in the boxplot of Fig. 1 show that there were significant differences between HCC patients with and without PD-1. This explains why the numerical values of the ultrasound feature maps calculated from RF in this research cannot directly and effectively predict the presence of PD-1 in HCC patients. After radiomics processing, the extracted texture features and wavelet-based texture features achieved a prediction accuracy of more than 85% in all three prediction models, DM, DSM, and DSNM. In addition to the traditional numerical values of the ultrasound feature parameters, the numerical distribution characteristics and texture features of the ultrasound feature parameters themselves also provide useful clues worthy of study. It is suggested that the combination with radiomics processing methods can further mine the information contained in ultrasound RF signals and expand the application scope of ultrasound diagnosis. In this paper, an effective feature extraction method from RF signals for PD-1 prediction in HCC patients was established.
Texture features are among the most widely utilized features in radiomics and perform well in benign and malignant identification [32], protein [26] and gene prediction [33, 34], and molecular typing [35]. Pham et al. [32] extracted two kinds of texture features from CT images of lung cancer patients for the differentiation of mediastinal lymph nodes. Dang et al. [26] extracted texture features from MRI images to predict the tumor suppressor protein p53 in head and neck squamous cell carcinoma, with an accuracy of 0.813. Yang et al. extracted 97 texture features from MRI images and combined them with a random forest classifier to carry out molecular typing classification and survival prediction of glioma [35]. In this study, we extracted 345, 345, 690, and 1035 texture features and wavelet-based texture features for the GM, DM, DSM, and DSNM models, respectively. These texture features were simplified by SRC feature selection and a second dimension reduction in SVM classification, which supported the successful prediction of PD-1. The preoperative prediction ACCs for the cell receptor PD-1 in HCC patients with the GM, DM, DSM, and DSNM models were 80%, 85%, 87.5%, and 92.5%, respectively. Texture features, as important visual features that are difficult for physicians to describe in detail, still have significant advantages in effective feature extraction from RF-based ultrasound feature maps.
However, some limitations of this study should be noted. First, only 40 eligible patients were screened from 129 patients, and independent testing is difficult with such a small sample size. For this reason, this study designed a grayscale image-based comparison test to verify the prediction accuracy of the RF signal-based model while using the LOOCV method. Second, a single-center study cannot verify the generalization ability of the model; the next step will be to extend this work to multiple centers. Third, the current models can only predict whether an HCC patient is PD-1 negative or positive. Tumor heterogeneity makes it difficult for a single sample to represent the nature of the whole pathological tissue, which makes the results of immunohistochemistry and tumor molecular typing less reliable for developing the follow-up treatment plan and increases the uncertainty of the prognosis. Studying how to predict the distribution, proportion, and area of PD-1 within the ROI may provide strong technical support for the selection of puncture sites, and the treatment plan and prognosis may be more accurate with this information.
In conclusion, we propose an RF-based radiomics analysis method for predicting PD-1 in HCC patients. The DSNM model, which uses RF-based ultrasound multifeature maps together with the radiomics analysis method, has the best performance of the four models and is expected to become a robust method for the noninvasive and fast preoperative prediction of PD-1 in HCC patients. Although the RF signal contains more information than the traditional grayscale image, an effective feature extraction method for building a diagnostic model directly from it has been lacking. The combination of the ultrasound multifeature map extraction method and radiomics feature extraction effectively improves the utilization value of ultrasound RF signals and provides deeper diagnostic and treatment information from ultrasound. The proposed method can provide a valuable reference for combining ultrasound with radiomics analysis and can facilitate the development of more accurate algorithms and clinical diagnostic aids.
From January 2018 to December 2018, we enrolled 129 liver cancer patients preoperatively diagnosed with HCC in a designated institution. Finally, 40 eligible patients (33 men and 7 women; age range: 23–80 years; mean: 55 ± 12 years) were selected for this study. The inclusion criteria were (1) patients with HCC confirmed by pathological examination and operation; (2) patients with a solitary tumor; (3) patients who underwent preoperative grayscale ultrasound examinations within 1 week before surgery and had useful RF data; and (4) patients with confirmation by histopathological examination and PD-1 evaluation.
The exclusion criteria included (1) patients without HCC confirmed by pathological examination; (2) patients with preoperative biopsy or adjuvant therapy; (3) patients with an incomplete or not clearly visible HCC lesion area reconstructed by RF data; and (4) patients without histopathological examination and PD-1 evaluation results.
PD-1 evaluation was performed by two pathologists with at least 10 years of experience in hepatopathology reviewing all the specimen slices. Both investigators were blinded to the clinical and imaging information of the patients.
Ultrasound data acquisition
All examinations, including conventional ultrasound and RF ultrasound, were performed on an EPIQ-7 ultrasound system (Philips Medical Systems, Amsterdam, Holland). A C5-1 curved transducer with frequencies of 1–5 MHz (Philips Medical Systems, Amsterdam, Holland) was used for data acquisition, including ultrasound grayscale images and corresponding RF data.
All patients fasted for at least 8 h before ultrasound examinations. Then, the grayscale ultrasound features of the hepatic lesions were assessed according to a standardized protocol: number of lesions (solitary or multiple), size of the lesion (mm), and echogenicity (hyperechoic, isoechoic, hypoechoic, or mixed compared to surrounding liver tissue). Ultrasound grayscale examinations were performed by a single experienced radiologist (with more than 18 years of experience in ultrasound of the liver).
RF data processing
For the RF data obtained in this study, the specific RF data processing flow, which we call the RF-based radiomics analysis method, is shown in Fig. 4. We first conducted RF analysis to extract ultrasound multifeature maps. Then, combined with the widely used radiomics analysis method, high-throughput radiomics features were extracted, selected, and used to build effective PD-1 classification prediction models.
Experimental flow diagram of the RF-based radiomics analysis method
Ultrasound multifeature map extraction
The extraction of ultrasound multifeature maps was a unique design of this study and effectively improved the performance of ultrasound RF data for classification and prediction of the PD-1 protein level, as discussed in the analysis of the experimental results. To improve the calculation efficiency of ultrasound multifeature map extraction, we extracted the RF data of the region of interest (ROI) and used these data, instead of the whole echo RF dataset, to calculate the multifeature maps. Then, smoothing, Hilbert transformation, logarithmic compression, sector transformation, and other processing steps were applied to the RF data to achieve B-mode reconstruction, as shown in Fig. 5a. By referring to the lesion location marked with a white dotted circle by the doctor in the corresponding grayscale image saved during data acquisition, as shown in Fig. 5b, we determined and segmented the ROI, shown in Fig. 2a with a red circle, to obtain the RF data of the ROI.
a B-mode image of a patient reconstructed by RF data. b B-mode image saved during data acquisition in the hospital with a white dotted circle marked by the doctor during diagnosis
The RF data of the ROI were used to calculate the three feature parameters of direct energy attenuation (DEA), skewness of spectrum difference (SSD), and noncentrality parameter S of the Rician distribution (NRD), which compose the corresponding DEA, SSD, and NRD ultrasound feature maps, as shown in Fig. 6. Many kinds of ultrasound features can characterize RF signals, including time domain features, frequency domain features, geometric features, and statistical features. When PD-1 protein is present in the liver of HCC patients, the microscopic scattering differences may manifest in the time domain, the frequency domain, and so on. The DEA, SSD, and NRD ultrasound features chosen in this study are a time domain feature, a frequency domain feature, and a statistical feature, respectively.
Schematic diagram of the extraction method of the a 1-D RF data block and b 2-D RF data block of the ROI in ultrasound feature map calculation. c Direct energy attenuation (DEA) ultrasound feature map. d Skewness of spectrum difference (SSD) ultrasound feature map. e Noncentrality parameter S of the Rician distribution (NRD) ultrasound feature map
DEA refers to the direct energy attenuation of ultrasound propagation in the medium. As shown in Fig. 3a, two 1-D RF data blocks with a gated window length of 64 and window interval of 16 were extracted by moving down one data point each time. Fourier transforms were applied to obtain the average energy \({\mathrm{E}}_{0}\) of the 1-D RF data block directly above and \({\mathrm{E}}_{1}\) of the 1-D RF data block directly below. Formula (1) was used to calculate the DEA coefficient in this study:
$${\text{DEA}} = \frac{10\log \left( {E_{0} /E_{1} } \right)}{D \cdot C/\left( {2f_{S} } \right)},$$
where D represents the gated window length, which is 64 data points; C equals 1540 m/s, which represents the sound speed of the ultrasound wave in tissue; and \({f}_{S}\) represents the sample rate, which is 32 MHz. The calculated DEA value is assigned to the point at (window length + window interval)/2 of the upper 1-D RF data block.
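As a concrete illustration, the sliding-window DEA calculation can be sketched as follows (in Python rather than the MATLAB used in the study; placing the 16-sample interval between the two gated windows and using the base-10 logarithm are assumptions where the text is not explicit):

```python
import numpy as np

D, GAP = 64, 16        # gated window length and window interval (data points)
C, FS = 1540.0, 32e6   # sound speed (m/s) and sample rate (Hz) stated in the text

def dea_profile(rf_line):
    """DEA value at each valid depth along one 1-D RF scan line."""
    values = []
    for top in range(len(rf_line) - (2 * D + GAP)):   # slide down one point at a time
        upper = rf_line[top : top + D]
        lower = rf_line[top + D + GAP : top + 2 * D + GAP]
        # average spectral energy of each gated window via the Fourier transform
        e0 = np.mean(np.abs(np.fft.fft(upper)) ** 2)
        e1 = np.mean(np.abs(np.fft.fft(lower)) ** 2)
        # formula (1): attenuation normalized by the depth step D*C/fs/2
        values.append(10 * np.log10(e0 / e1) / (D * C / FS / 2))
    return np.array(values)
```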
SSD represents the skewness of the spectrum difference of the ultrasound RF echo signal. Skewness is calculated by the following formula (2):
$$S = \frac{{\frac{1}{n}\mathop \sum \nolimits_{i = 1}^{n} \left( {x_{i} - \overline{x}} \right)^{3} }}{{\left( {\frac{1}{n}\mathop \sum \nolimits_{i = 1}^{n} \left( {x_{i} - \overline{x}} \right)^{2} } \right)^{\frac{3}{2}} }},$$
where S is the skewness (dimensionless), n is the number of samples, \({x}_{i}\) is the i-th sample value, and \(\overline{x}\) is the mean of the samples.
A schematic diagram of how the 1-D RF data block of the ROI was extracted during the SSD calculation is shown in Fig. 3a. Two 1-D RF data blocks with window lengths of 64 were extracted by moving down one data point each time, with a window interval of 40 between the two blocks. A Fourier transform was applied to each of the two 1-D RF data blocks, the two resulting spectra were subtracted, and the skewness of the difference was calculated using the 'skewness' function in the MATLAB toolbox. The calculated SSD value is assigned to the point at (window length + window interval)/2 of the upper 1-D RF data block.
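A corresponding sketch of the SSD calculation is shown below (again in Python; whether the magnitude or the complex spectrum is subtracted is not stated, so magnitude spectra are assumed here):

```python
import numpy as np
from scipy.stats import skew

D, GAP = 64, 40   # window length and window interval used for SSD

def ssd_profile(rf_line):
    """SSD value at each valid depth: skewness of the spectrum difference."""
    values = []
    for top in range(len(rf_line) - (2 * D + GAP)):
        upper = rf_line[top : top + D]
        lower = rf_line[top + D + GAP : top + 2 * D + GAP]
        s0 = np.abs(np.fft.fft(upper))     # magnitude spectrum of the upper block
        s1 = np.abs(np.fft.fft(lower))     # magnitude spectrum of the lower block
        values.append(skew(s0 - s1))       # counterpart of MATLAB's 'skewness'
    return np.array(values)
```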
NRD is the noncentrality parameter S of the Rician distribution. The Rician distribution has the following density formula (3):
$$p\left( x \right) = \frac{x}{{\sigma^{2} }}\exp \left( { - \frac{{x^{2} + s^{2} }}{{2\sigma^{2} }}} \right)I_{0} \left( {\frac{xs}{{\sigma^{2} }}} \right),$$
with noncentrality parameter s \(\ge\) 0 and scale parameter \(\sigma\) > 0, where x > 0 is the sample value and \({\mathrm{I}}_{0}\) is the zero-order modified Bessel function of the first kind. In this study, the noncentrality parameter S of the midpoint of each 2-D RF data block was calculated using the 'fitdist' function in the MATLAB toolbox. The 2-D data block selection method is shown in Fig. 3b. The size of each selected 2-D data block was 37 × 8. The S value of the Rician distribution fitted to the selected data block was taken as the NRD value of the midpoint of the current 2-D data block. The gated window was then moved down one data point at a time, so that the NRD value of every point in the ROI could be calculated.
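A rough Python equivalent of this Rician fit, using SciPy's `rice` distribution in place of MATLAB's 'fitdist' and assuming the fit is applied to the non-negative envelope of the ROI RF data, might look like this:

```python
import numpy as np
from scipy.stats import rice

H, W = 37, 8   # 2-D data block size used for NRD

def nrd_at(roi_env, row, col):
    """Noncentrality parameter s of a Rician fit on one 2-D block of the ROI envelope."""
    block = roi_env[row : row + H, col : col + W].ravel()
    # SciPy's Rician uses the shape parameter b = s / sigma, so s = b * scale
    b, loc, scale = rice.fit(block, floc=0)
    return b * scale   # assigned to the midpoint of the current block
```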
The three ultrasound feature parameters above are widely used and representative, covering the main classes of features of RF signals. These parameters compose the three ultrasound feature maps of DEA, SSD, and NRD, which are shown in Fig. 3c–e.
Radiomics feature extraction and selection
Radiomics feature extraction was used to extract texture features and wavelet-based texture features from the three ultrasound feature maps of DEA, SSD, and NRD. Texture is ubiquitous in medical images and is an important visual clue for imaging doctors in diagnosis. From each ultrasound feature map, 69 texture features were extracted, including histogram features, gray-level co-occurrence matrix (GLCM) features, gray-level run-length matrix (GLRLM) features, gray-level size-zone matrix (GLSZM) features, and neighborhood gray-tone difference matrix (NGTDM) features.
In addition, texture features often show multiscale characteristics. The wavelet transform, as a multiscale analysis tool, can adaptively separate the effective signals of different frequency components of the original image. We used the wavelet transform to obtain 4 different frequency components of each ultrasound feature map and extracted the above 69 texture features from each component, yielding another 276 wavelet-based texture features per ultrasound feature map.
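The four frequency components can be obtained with a single-level 2-D discrete wavelet transform. The sketch below uses PyWavelets; the wavelet family ('db1') is only a placeholder, since the paper does not specify it:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_components(feature_map, wavelet="db1"):
    """Single-level 2-D DWT of an ultrasound feature map: 4 frequency components."""
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(feature_map, dtype=float), wavelet)
    return {"LL": cA, "LH": cH, "HL": cV, "HH": cD}

# each of the 4 components is then passed to the same 69-feature texture
# extractor (histogram, GLCM, GLRLM, GLSZM, NGTDM) as the original map
```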
The detailed radiomics features are shown in Table 2. A total of 345 radiomics features were extracted from each ultrasound feature map and its 4 frequency components.
Table 2 Detailed radiomics features extracted from each ultrasound feature map and its 4 frequency components
GLCM (gray-level co-occurrence matrix), GLRLM (gray-level run-length matrix), GLSZM (grey-level size-zone matrix), and NGTDM (neighborhood gray-tone difference matrix)
The method of feature selection was based on the sparse representation coefficient (SRC), which was proposed by Li [36]. The hypothesis of sparse representation is that signals are sparse and can be expressed sparsely, that is, as a linear combination of a small number of basis elements. The basic principle of sparse representation can be described by the following formula (4):
$$\mathrm{s}=\Phi \beta .$$
Suppose \({\varphi }_{i}\in {R}^{N}, i=1,2,\dots ,M\) are the base signals (atoms) of dimension N*1, and \(\Phi =[{\varphi }_{1},{\varphi }_{2},\dots ,{\varphi }_{M}]\) is the matrix composed of the M base signals, called the dictionary, where M > N; β is the coefficient vector of dimension M*1, and s is the target signal of dimension N*1. The goal is to select as few atoms as possible in \(\Phi\) to make \(\Phi \beta =\mathrm{s}\) hold, that is, to find a β that satisfies \(\Phi \beta =\mathrm{s}\) while the number of nonzero elements in \(\upbeta\) is as small as possible. \(\upbeta\) can be solved by the OMP algorithm [37]. The elements in β are called the SRCs, and their values reflect the importance of the corresponding features. All the extracted radiomics features were ranked in descending order of SRC according to their importance to the label, and the required number of useful features was selected.
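A minimal sketch of this SRC-based ranking, using scikit-learn's orthogonal matching pursuit as the OMP solver (the number of retained features shown is illustrative, not a value taken from the study):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.preprocessing import StandardScaler

def src_feature_ranking(X, y, n_keep=20):
    """Rank radiomics features by the magnitude of their sparse representation coefficients.

    X: (n_patients, n_features) radiomics matrix; y: PD-1 labels (0/1).
    Each standardized feature column is treated as an atom of the dictionary,
    and the label vector as the target signal s = Phi * beta, solved by OMP.
    """
    Xs = StandardScaler().fit_transform(X)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_keep).fit(Xs, y)
    order = np.argsort(-np.abs(omp.coef_))           # descending |SRC|
    return order[:n_keep], omp.coef_[order[:n_keep]]
```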
Establishment and evaluation of the prediction model
The widely used SVM classifier was adopted to build the PD-1 prediction model. This classifier can deal with the classification problem by the kernel method when the relationship between class labels and radiomics features is nonlinear. The kernel function maps the linearly inseparable features of the low-dimensional space to a high-dimensional space through feature transformation to obtain the optimal separating hyperplane and realize linear classification. The Gaussian radial basis function was the preferred kernel function in this experiment; it can map the original feature vector to an infinite-dimensional space to find the optimal hyperplane. There are few adjustable parameters with the Gaussian radial basis function, and only \(\upgamma\) and the penalty parameter \(\mathrm{C}\) can be changed. By using cross-validation to find appropriate parameters \(\mathrm{C}\) and \(\upgamma\), the classifier can correctly predict the test set data. In this experiment, \(\mathrm{C}\) was 0.8 and \(\upgamma\) was 1. The SVM software package used in this experiment was the LIBSVM toolkit developed by Chih-Jen Lin at National Taiwan University.
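For reference, the same classifier can be sketched with scikit-learn (which wraps LIBSVM) under leave-one-out cross-validation, using the stated C = 0.8 and γ = 1; the feature matrix X is assumed to hold the selected radiomics features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score

def loocv_accuracy(X, y):
    """Leave-one-out accuracy of the RBF-kernel SVM with C = 0.8 and gamma = 1."""
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel="rbf", C=0.8, gamma=1.0)
        clf.fit(X[train], y[train])
        preds[test] = clf.predict(X[test])
    return accuracy_score(y, preds)
```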
Three PD-1 radiomics prediction models based on RF were established using SVM, including a PD-1 prediction model that used the DEA feature map (DM), a PD-1 prediction model that used the DEA and SSD feature maps (DSM) and a PD-1 prediction model that used the DEA, SSD, and NRD feature maps (DSNM), which are shown in Fig. 7.
Three PD-1 radiomics prediction models based on RF included a PD-1 prediction model that used the DEA feature map (DM), a PD-1 prediction model that used the DEA and SSD feature maps (DSM), and a PD-1 prediction model that used the DEA, SSD, and NRD feature maps (DSNM)
Grayscale image comparison test
Ultrasound grayscale image-based radiomics analysis is a relatively traditional method and was compared with the RF-based radiomics analysis method in terms of PD-1 prediction performance in HCC patients. After segmenting the ROI of the grayscale image, the same radiomics analysis processing steps as those applied to the RF data were carried out on the grayscale image. The grayscale image-based comparison test directly extracted 345 radiomics features from the ROIs of the grayscale images. These features were also selected by SRC and were used to build the PD-1 prediction model based on grayscale images, which was called GM.
The performance of the prediction models was evaluated by the LOOCV statistical analysis method. Tukey's test, in conjunction with analysis of variance (ANOVA), was used to test the significance of differences between each pair of the three ultrasound features. Receiver operating characteristic (ROC) curves and precision-recall curves (PRCs) were employed to show the overall performance of the models. Other assessment indicators included the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SENS), and specificity (SPEC). Descriptive statistics are summarized as mean ± SD.
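When the per-patient LOOCV predictions are pooled, these indicators can be computed as in the sketch below (Python with scikit-learn; thresholding the SVM decision scores at zero is an assumption, since the paper does not describe this step explicitly):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def summarize(y_true, y_score, threshold=0.0):
    """AUC, ACC, SENS, and SPEC from pooled LOOCV decision scores (illustrative)."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "SENS": tp / (tp + fn),
        "SPEC": tn / (tn + fp),
    }
```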
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
RF: Radiofrequency
ROI: Region of interest
SRC: Sparse representation coefficient
SVM: Support vector machine
LOOCV: Leave-one-out cross-validation
DEA: Direct energy attenuation
NRD: Noncentrality parameter S of Rician distribution
SSD: Skewness of spectrum difference
GM: Programmed cell death protein 1 prediction model based on ultrasound grayscale image
DM: Programmed cell death protein 1 prediction model based on direct energy attenuation
DSM: Programmed cell death protein 1 prediction model based on direct energy attenuation and skewness of spectrum difference
DSNM: Programmed cell death protein 1 prediction model based on direct energy attenuation, noncentrality parameter S of Rician distribution, and skewness of spectrum difference
AUC: Area under the ROC curve
ACC: Accuracy
SENS: Sensitivity
FFT: Fast Fourier transform
GLCM: Gray-level co-occurrence matrix
GLRLM: Gray-level run-length matrix
GLSZM: Gray-level size-zone matrix
NGTDM: Neighborhood gray-tone difference matrix
ROC: Receiver operating characteristic curve
PRC: Precision-recall curve
BEP:
Ringelhan M, Reisinger F, Yuan D, Weber A, Heikenwalder M. Modeling human liver cancer heterogeneity: virally induced transgenic models and mouse genetic models of chronic liver inflammation. Curr Protoc Pharmacol. 2014;67:14–31.
Heimbach JK, Kulik LM, Finn RS, Sirlin CB, Abecassis MM, Roberts LR, et al. AASLD guidelines for the treatment of hepatocellular carcinoma. Hepatology. 2018;67:358–80.
Maurizio P, Antonio S, Nicoletta DM, Alessandro C, Francesco A, Bruno F, et al. Long-term effectiveness of resection and radiofrequency ablation for single hepatocellular carcinoma ≤ 3cm. Results of a multicenter Italian survey. J Hepatol. 2013;59(1):89–97.
Feng S, Ming S, Zhen Z, Rui-Zhao Q, Zhen-Wen L, Ji-Yuan Z, et al. PD-1 and PD-L1 upregulation promotes CD8+ T-cell apoptosis and postoperative recurrence in hepatocellular carcinoma patients. Int J Cancer. 2011;128(4):887–96.
Lu YY, Guo XL. Advance development of immunotherapy on malignant melanoma with targeting inhibition of PD-L1/ PD-1. Chin Pharm J. 2015;50:1931–5.
Feld E, Horn L. Targeting PD-L1 for non-small-cell lung cancer. Immunotherapy-UK. 2016;8(6):747–58.
Prapruttam D, Suksai J, Kitiyakara T, Phongkitkarun S. Ultrasound surveillance for hepatocellular carcinoma of at-risk patients in Ramathibodi Hospital. J Med Assoc Thailand. 2014;97(11):1199–208.
Manini MA, Sangiovanni A, Fornari F, Piscaglia F, Biolato M, Fanigliulo L, et al. Clinical and economical impact of 2010 AASLD guidelines for the diagnosis of hepatocellular carcinoma. J HEPATOL. 2014;60(5):995–1001.
Segal E, Sirlin CB, Ooi C, Adler AS, Gollub J, Chen X, et al. Decoding global gene expression programs in liver cancer by noninvasive imaging. Nat Biotechnol. 2007;25(6):675–80.
Depeursinge A, Foncubierta-Rodriguez A, Ville DVD, Müller H. Three-dimensional solid texture analysis in biomedical imaging: review and opportunities. Med Image Anal. 2014;18(1):176–96.
Aerts HJ, Velazquez ER, Leijenaar RT, Parmar C, Grossmann P, Carvalho S, Bussink J, Monshouwer R, Haibe-Kains B, Rietveld D, Hoebers F. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014;5:1–9.
Choi E, Lee HY, Jeong JY, Choi Y, Kim J, Bae J, et al. Quantitative image variables reflect the intratumoral pathologic heterogeneity of lung adenocarcinoma. Oncotarget. 2016;7(41):67302–13.
Zhang Q, Xiao Y, Suo J, Shi J, Yu J, Guo Y, et al. Sonoelastomics for breast tumor classification: a radiomics approach with clustering-based feature selection on sonoelastography. Ultrasound Med Biol. 2017;43(5):1058–69.
Qiao M, Hu Y, Guo Y, Wang Y, Yu J. Breast tumor classification based on a computerized breast imaging reporting and data system feature system. J Ultrasound Med Offl J Am Instit Ultrasound Med. 2017;37(2):403–15.
Yao Z, Dong Y, Wu G, Zhang Q, Yang D, Yu J, et al. Preoperative diagnosis and prediction of hepatocellular carcinoma: radiomics analysis based on multi-modal ultrasound images. BMC Cancer. 2018. https://doi.org/10.1186/s12885-018-5003-4.
Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, et al. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut. 2019;68(4):729–41.
Zhou Y, He L, Huang Y, Chen S, Wu P, Ye W, et al. CT-based radiomics signature: a potential biomarker for preoperative prediction of early recurrence in hepatocellular carcinoma. Abdom Radiol. 2017;42(6):1695–704.
Liang J-Y, Wang Z, Huang X-W, Zhang C-Q, Ruan S-M, Xie X-Y, et al. Multiparametric ultrasomics of significant liver fibrosis: a machine learning-based analysis. Eur Radiol. 2019;29(3):1496–506.
Chunrui L, Linzhou X, Wentao K, Xiaoling L, Dong Z, Min W, et al. Prediction of suspicious thyroid nodule using artificial neural network based on radiofrequency ultrasound and conventional ultrasound: a preliminary study. Ultrasonics. 2019;99:105951.
Barrere V, Sanchez M, Cambronero S, Dupré A, Rivoire M, Melodelima D. Evaluation of ultrasonic attenuation in primary and secondary human liver tumors and its potential effect on high-intensity focused ultrasound treatment. Ultrasound Med Biol. 2021;47:1761–74.
Mahmoud AM, Mukdadi OM, Teng B, Mustafa SJ. High-resolution quantitative ultrasound imaging for soft tissue classification. Biomedical Engineering; 2011. https://doi.org/10.1109/MECBME.2011.5752071.
Eltoft T. The Rician inverse Gaussian distribution: a new model for non-Rayleigh signal amplitude statistics. IEEE Trans Image Process. 2005;14(11):1722–35.
Yoon HJ, Sohn I, Cho JH, Lee HY, Kim J, Choi Y, et al. Decoding tumor phenotypes for ALK, ROS1, and RET fusions in lung adenocarcinoma using a radiomics approach. Medicine. 2015;94(41):e1753.
Gevaert O, Echegaray S, Khuong A, Hoang CD, Shrager JB, Jensen KC, et al. Predictive radiogenomics modeling of EGFR mutation status in lung cancer. Sci Rep. 2017;7:41674.
Yu J, Shi Z, Lian Y, Li Z, Liu T, Gao Y, et al. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma. Eur Radiol. 2016;27(8):3509–22.
Dang M, Lysack JT, Wu T, Matthews TW, Chandarana SP, Brockton NT, et al. MRI Texture analysis predicts p53 status in head and neck squamous cell carcinoma. AJNR Am J Neuroradiol. 2014;36(1):166–70.
Zhu Y, Li H, Guo W, Drukker K, Ji Y. TU-CD-BRB-06: deciphering genomic underpinnings of quantitative MRI-based radiomic phenotypes of invasive breast carcinoma. Sci Rep-UK. 2015;42(6):3603.
Li H, Zhu Y, Burnside ES, Huang E, Drukker K, Hoadley KA, et al. Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set. Npj Breast Cancer. 2016;2:16012.
Biermann M, Reisæter LR. Automated analysis of gray-scale ultrasound images of thyroid nodules ("radiomics") may outperform image interpretation by less experienced thyroid radiologists. Clin Thyroidol. 2018;30(7):332–6.
Xue LY, Jiang ZY, Fu TT, Wang QM, Zhu YL, Dai M, et al. Transfer learning radiomics based on multimodal ultrasound imaging for staging liver fibrosis. Eur Radiol. 2020;30(5):2973–83.
Wei LI, Huang Y, Zhuang BW, Liu G, Wang W. Multiparametric ultrasomics of significant liver fibrosis: a machine learning-based analysis. Eur Radiol. 2018;29(3):1469–506.
Pham TD, Watanabe Y, Higuchi M, Suzuki H. Texture analysis and synthesis of malignant and benign mediastinal lymph nodes in patients with lung cancer on computed tomography. Sci Rep-UK. 2017;7:43209.
Wan T, Bloch BN, Plecha D, Thompson CL, Gilmore H, Jaffe C, et al. A radio-genomics approach for identifying high risk estrogen receptor-positive breast cancers on DCE-MRI: preliminary results in predicting OncotypeDX risk scores. Sci Rep. 2016;6:21394.
Li H, Zhu Y, Burnside ES, Drukker K, Hoadley KA, Fan C, et al. MR imaging radiomics signatures for predicting the risk of breast cancer recurrence as given by research versions of mammaprint, Oncotype DX, and PAM50 gene assays. Radiology. 2016;281(2):152110.
Yang D, Rao G, Martinez J, Veeraraghavan A, Rao A. Evaluation of tumor-derived MRI-texture features for discrimination of molecular subtypes and prediction of 12-month survival status in glioblastoma. Med Phys. 2015;42(11):6725–35.
Li Y, Namburi P, Yu Z, Guan C, Feng J, Gu Z. Voxel selection in fMRI data analysis based on sparse representation. IEEE Trans Biomed Eng. 2009;56(10):2439–51.
Mallat S, Zhang Z. Matching pursuit with time-frequency dictionary. IEEE Trans Signal Process. 1993;41:3397–415.
The authors are grateful to all study participants.
This research was supported by Shanghai science and technology action innovation plan (19441903100) and National Natural Science Foundation of China (No. 81571676, No. 81501471).
Qing-Min Wang and Yi Dong have contributed equally to this work
Department of Electronic Engineering, Fudan University, Shanghai, 200433, China
Qingmin Wang, Tianlei Xiao, Jinhua Yu, Leyin Li & Yuanyuan Wang
Department of Ultrasound, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
Yi Dong, Qi Zhang & Wenping Wang
Institute of Biomedical and Health Engineering Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Ave., Shenzhen, University Town, Shenzhen, 518055, China
Shiquan Zhang & Yang Xiao
YX and JY conceived and designed this study; YD, QZ, TX, and SZ collected experimental data and provided clinical guidance; QW and LL implemented the algorithm; QW analyzed the experimental results and wrote the paper; YW and WW reviewed the manuscript. All the authors read and approved the final manuscript.
Correspondence to Yang Xiao or Wenping Wang.
This prospective study was approved by the Research Ethics Committee of our institution. Informed consent was waived before CEUS examination. The procedure followed was in accordance with the Declaration of Helsinki.
Agreed by the authors.
Radiomics scores of the patients in all models.
Wang, Q., Dong, Y., Xiao, T. et al. Prediction of programmed cell death protein 1 in hepatocellular carcinoma patients using radiomics analysis with radiofrequency-based ultrasound multifeature maps. BioMed Eng OnLine 21, 24 (2022). https://doi.org/10.1186/s12938-021-00927-y
What is an inductor, what is inductance, and how does it store energy
Michal | August 10, 2018 | Electronics Engineering
An inductor is a passive circuit element which stores energy in the form of a magnetic field. Inductors are made of a coil of wound conducting wire; to enhance the effectiveness of an inductor, the number of turns is increased. An ideal inductor does not dissipate energy; it only stores energy and then delivers it to the circuit when required.
Various symbols of inductors
What is inductance?
In the early 1800s, Oersted showed that a current-carrying conductor produces a magnetic field around it. While conducting his experiments, he noticed that a compass placed near the current-carrying conductor deflected.
A few years later, Ampere showed through careful experiments that the magnitude of the magnetic flux is directly related to the amount of current flowing through the conductor.
Michael Faraday and Joseph Henry discovered that a changing magnetic field can produce a voltage across neighboring circuits. They also showed that the magnitude of the voltage depends on the rate of change of the flux.
$v \propto \frac{d\Phi }{dt}$, and since the flux is itself proportional to the current producing it,

$v= L\frac{di}{dt} \quad \dots(1)$
The letter "L" is called Inductance.
Inductance is a measure of an inductor's opposition to a change in the current flowing through it.
When current flows through an inductor, the current changes gradually rather than abruptly, and the voltage across the inductor is proportional to the rate of change of that current. This can be represented mathematically as
$v=L\frac{di}{dt} \quad \dots(2)$
From equation (2) it follows that if a constant current flows through the inductor, the voltage across the inductor will be zero. This means that an inductor acts as a short circuit for a DC supply.
An inductor acts like a short circuit to a dc source.
According to equation (2), a discontinuous change in inductor current would require an infinite voltage across it, which is practically impossible. So an inductor opposes any change in current, whether increasing or decreasing, as explained by Lenz's law.
Lenz's Law:
Lenz's law states that the direction of the induced voltage is always such that it opposes the cause which produces it. As discussed earlier, the magnetic flux is directly related to the current flowing through the coil. When the current increases, the flux also increases, and this change in flux induces a voltage in a direction that opposes the increase in current. Similarly, when the current and flux decrease, a voltage is induced in a direction that opposes the decrease. This behavior is called "choking," and that is why inductors are also called chokes.
Inductance of an inductor:
The inductance of a coil can be found using the following formula:
$L=\frac{N^{2}\mu A}{l} \quad \dots (3)$
Where N is the number of turns in the coil, A is the cross-sectional area of the coil, l is its length, and μ is the permeability of the core through which the magnetic flux passes. The permeability depends upon the material used and varies from material to material. Just like capacitors and resistors, inductors are available in the market from a few microhenries to tens of henries, and they may be fixed or variable.
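As a quick numerical check of formula (3), the sketch below computes the inductance of a small coil (the dimensions are made up for the example):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, in H/m

def coil_inductance(turns, area, length, mu_r=1.0):
    """L = N^2 * mu * A / l, with mu = mu_r * MU0 (formula 3)."""
    return turns**2 * mu_r * MU0 * area / length

# example: 100-turn air-core coil, 1 cm^2 cross-section, 5 cm long
print(coil_inductance(100, 1e-4, 0.05))   # about 2.5e-5 H, i.e. roughly 25 microhenries
```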
Energy stored in an inductor:
Like a capacitor, an ideal inductor does not dissipate energy. It stores electric energy in the form of a magnetic field during the charging phase and releases the same energy to the circuit during the decay phase. The power delivered to the inductor is the product of the current through it and the voltage across it, and the stored energy is obtained by integrating this power over time.
The power absorbed by the inductor is
$p(t)=v(t)i(t)$
From equation (2) we know that
$v(t)=L\frac{di}{dt} $
Substituting this into the above equation gives
$p(t)=(L\frac{di}{dt}) i(t)$
The stored energy can be found by integrating both sides over the charging time:

$\int_{t_{0}}^{t}{p(t')\,dt'}=L\int_{t_{0}}^{t}{i(t') \frac{di}{dt'}}\, dt'$

$W_{Stored}=L\int_{i(t_{0})}^{i(t)}{i'}\, di'$

$W_{Stored}=\frac{1}{2}L\lbrack i(t)^{2}-i(t_{0})^{2}\rbrack $
If the inductor is initially uncharged or has been left unconnected for a long time, the initial current $i(t_{0})$ is zero. In that case the following formula gives the stored energy.
$W_{Stored}=\frac{1}{2}Li^{2}$
The energy stored is represented by the graph below.
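A small worked example of this formula (the values are chosen only for illustration):

```python
def inductor_energy(L, i):
    """W = 0.5 * L * i^2, in joules, for inductance L in henries and current i in amperes."""
    return 0.5 * L * i**2

# example: a 10 mH inductor carrying 2 A stores 0.5 * 0.01 * 4 = 0.02 J
print(inductor_energy(10e-3, 2.0))
```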
Inductor construction:
The basic construction of an inductor is a winding of insulated (enameled) wire. The winding may or may not be supported by a core. If it is not supported by a core, it is called an air-core inductor. If the winding is supported by an iron core, it is called an iron-core inductor. Iron is a ferromagnetic, highly permeable material which provides a low-reluctance path for the magnetic flux; it also confines the magnetic flux near the winding and increases the flux linkage. At low frequencies, the core is made of thin plates called laminations to reduce eddy current losses. For high frequencies, the core is made of soft ferrite, which does not produce high eddy current losses.
Eddy current losses:
The core inside the inductor is subjected to the inductor's magnetic flux. When the flux varies, the iron core behaves like a conductor and a voltage is induced inside the core. The induced voltage drives a current inside the core, which is called an eddy current. This current increases losses by producing heat.
To counter eddy current losses, the core is made of laminations, thin iron plates insulated from each other. The flow of current is thus limited by increasing the resistance in the current path.
Q Factor of an inductor:
A practical inductor has some resistance, which absorbs part of the apparent power and reduces the efficiency of the inductor. The Q factor is a measure of the efficiency of the inductor at a given frequency and is equal to the ratio of the inductive reactance to the resistance. A higher Q factor means the inductor is closer to ideal and gives a narrower bandwidth in a resonant circuit. Radios use high-Q inductors with capacitors to make resonant circuits.
$Q=\frac{\omega L}{R}=\frac{2\pi fL}{R}$
Inductors in series:
Just like resistors and capacitors, inductors can be combined in series and/or parallel, and we are interested in finding the equivalent inductance of the combination. If there is only one path for the current, the combination of inductors is called a series combination, as shown in the diagram below.
Applying the KVL to the circuit above
$v=v_{1}+v_{2}+\ldots +v_{n} \ldots (4)$
The voltage across each inductor depends upon the rate of change of the current:
$v=L\frac{di}{dt}$

$v=L_{1}\frac{di}{dt}+L_{2}\frac{di}{dt}+\ldots +L_{n}\frac{di}{dt} \quad \ldots (5) $
We know that there is only one path for the current, so the current through all elements is the same:
$I=I_{1}=I_{2}=\ldots =I_{n}$
Substituting this into equation (5) above gives
$v=(L_{1}+L_{2}+\ldots +L_{n})\frac{di}{dt}$
So equivalent inductance in series is
$L_{eq}=L_{1}+L_{2}+\ldots +L_{n}$
Inductors in parallel:
If multiple inductors are connected such that there are multiple paths for the current to flow, the combination is called a parallel combination. Now consider the parallel combination of inductors shown in the diagram below.
Now applying KCL to the above diagram will give us the following equation
$i_{T}=i_{1}+i_{2}+\ldots +i_{n}$
We know that the voltage is the same across every element in a parallel combination, and for each inductor $i=\frac{1}{L}\int{v\,dt}$, so
$i_{T}=\frac{1}{L_{1}}\int{vdt}+\frac{1}{L_{2}}\int{vdt}+\ldots +\frac{1}{L_{n}}\int{vdt}$
$i_{T}=(\frac{1}{L_{1}}+\frac{1}{L_{2}}+\ldots +\frac{1}{L_{n}}) \int{vdt}$
So the equivalent inductance of the parallel combination is

$\frac{1}{L_{eq}}=\frac{1}{L_{1}}+\frac{1}{L_{2}}+\ldots +\frac{1}{L_{n}}$
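Both equivalent-inductance formulas are easy to wrap in small helper functions (mutual coupling between the inductors is ignored, as in the derivations above):

```python
def series_inductance(*inductors):
    """L_eq = L1 + L2 + ... + Ln."""
    return sum(inductors)

def parallel_inductance(*inductors):
    """1/L_eq = 1/L1 + 1/L2 + ... + 1/Ln."""
    return 1.0 / sum(1.0 / L for L in inductors)

# example: three 6 mH inductors
print(series_inductance(6e-3, 6e-3, 6e-3))    # 0.018 H (18 mH)
print(parallel_inductance(6e-3, 6e-3, 6e-3))  # 0.002 H (2 mH)
```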
Inductors are passive electrical components which store energy in the form of a magnetic field. Inductors resist changes in current, and they are commonly used in electronic circuits to suppress current surges.
43 Diffusion
43–1 Collisions between molecules
We have considered so far only the molecular motions in a gas which is in thermal equilibrium. We want now to discuss what happens when things are near, but not exactly in, equilibrium. In a situation far from equilibrium, things are extremely complicated, but in a situation very close to equilibrium we can easily work out what happens. To see what happens, we must, however, return to the kinetic theory. Statistical mechanics and thermodynamics deal with the equilibrium situation, but away from equilibrium we can only analyze what occurs atom by atom, so to speak.
As a simple example of a nonequilibrium circumstance, we shall consider the diffusion of ions in a gas. Suppose that in a gas there is a relatively small concentration of ions—electrically charged molecules. If we put an electric field on the gas, then each ion will have a force on it which is different from the forces on the neutral molecules of the gas. If there were no other molecules present, an ion would have a constant acceleration until it reached the wall of the container. But because of the presence of the other molecules, it cannot do that; its velocity increases only until it collides with a molecule and loses its momentum. It starts again to pick up more speed, but then it loses its momentum again. The net effect is that an ion works its way along an erratic path, but with a net motion in the direction of the electric force. We shall see that the ion has an average "drift" with a mean speed which is proportional to the electric field—the stronger the field, the faster it goes. While the field is on, and while the ion is moving along, it is, of course, not in thermal equilibrium, it is trying to get to equilibrium, which is to be sitting at the end of the container. By means of the kinetic theory we can compute the drift velocity.
It turns out that with our present mathematical abilities we cannot really compute precisely what will happen, but we can obtain approximate results which exhibit all the essential features. We can find out how things will vary with pressure, with temperature, and so on, but it will not be possible to get precisely the correct numerical factors in front of all the terms. We shall, therefore, in our derivations, not worry about the precise value of numerical factors. They can be obtained only by a very much more sophisticated mathematical treatment.
Before we consider what happens in nonequilibrium situations, we shall need to look a little closer at what goes on in a gas in thermal equilibrium. We shall need to know, for example, what the average time between successive collisions of a molecule is.
Any molecule experiences a sequence of collisions with other molecules—in a random way, of course. A particular molecule will, in a long period of time $T$, have a certain number, $N$, of hits. If we double the length of time, there will be twice as many hits. So the number of collisions is proportional to the time $T$. We would like to write it this way: \begin{equation} \label{Eq:I:43:1} N = T/\tau. \end{equation} We have written the constant of proportionality as $1/\tau$, where $\tau$ will have the dimensions of a time. The constant $\tau$ is the average time between collisions. Suppose, for example, that in an hour there are $60$ collisions; then $\tau$ is one minute. We would say that $\tau$ (one minute) is the average time between the collisions.
We may often wish to ask the following question: "What is the chance that a molecule will experience a collision during the next small interval of time $dt$?" The answer, we may intuitively understand, is $dt/\tau$. But let us try to make a more convincing argument. Suppose that there were a very large number $N$ of molecules. How many will have collisions in the next interval of time $dt$? If there is equilibrium, nothing is changing on the average with time. So $N$ molecules waiting the time $dt$ will have the same number of collisions as one molecule waiting for the time $N\,dt$. That number we know is $N\,dt/\tau$. So the number of hits of $N$ molecules is $N\,dt/\tau$ in a time $dt$, and the chance, or probability, of a hit for any one molecule is just $1/N$ as large, or $(1/N)(N\,dt/\tau) = dt/\tau$, as we guessed above. That is to say, the fraction of the molecules which will suffer a collision in the time $dt$ is $dt/\tau$. To take an example, if $\tau$ is one minute, then in one second the fraction of particles which will suffer collisions is $1/60$. What this means, of course, is that $1/60$ of the molecules happen to be close enough to what they are going to hit next that their collisions will occur in the next second.
When we say that $\tau$, the mean time between collisions, is one minute, we do not mean that all the collisions will occur at times separated by exactly one minute. A particular particle does not have a collision, wait one minute, and then have another collision. The times between successive collisions are quite variable. We will not need it for our later work here, but we may make a small diversion to answer the question: "What are the times between collisions?" We know that for the case above, the average time is one minute, but we might like to know, for example, what is the chance that we get no collision for two minutes?
We shall find the answer to the general question: "What is the probability that a molecule will go for a time $t$ without having a collision?" At some arbitrary instant—that we call $t = 0$—we begin to watch a particular molecule. What is the chance that it gets by until $t$ without colliding with another molecule? To compute the probability, we observe what is happening to all $N_0$ molecules in a container. After we have waited a time $t$, some of them will have had collisions. We let $N(t)$ be the number that have not had collisions up to the time $t$. $N(t)$ is, of course, less than $N_0$. We can find $N(t)$ because we know how it changes with time. If we know that $N(t)$ molecules have got by until $t$, then $N(t + dt)$, the number which get by until $t + dt$, is less than $N(t)$ by the number that have collisions in $dt$. The number that collide in $dt$ we have written above in terms of the mean time $\tau$ as $dN = N(t)\,dt/\tau$. We have the equation \begin{equation} \label{Eq:I:43:2} N(t + dt) = N(t) - N(t)\,\frac{dt}{\tau}. \end{equation} The quantity on the left-hand side, $N(t + dt)$, can be written, according to the definitions of calculus, as $N(t) + (dN/dt)\,dt$. Making this substitution, Eq. (43.2) yields \begin{equation} \label{Eq:I:43:3} \ddt{N(t)}{t} = -\frac{N(t)}{\tau}. \end{equation} The number that are being lost in the interval $dt$ is proportional to the number that are present, and inversely proportional to the mean life $\tau$. Equation (43.3) is easily integrated if we rewrite it as \begin{equation} \label{Eq:I:43:4} \frac{dN(t)}{N(t)} = -\frac{dt}{\tau}. \end{equation} Each side is a perfect differential, so the integral is \begin{equation} \label{Eq:I:43:5} \ln N(t) = -t/\tau + (\text{a constant}), \end{equation} which says the same thing as \begin{equation} \label{Eq:I:43:6} N(t) = (\text{constant})e^{-t/\tau}. \end{equation} We know that the constant must be just $N_0$, the total number of molecules present, since all of them start at $t = 0$ to wait for their "next" collision. We can write our result as \begin{equation} \label{Eq:I:43:7} N(t) = N_0e^{-t/\tau}. \end{equation} If we wish the probability of no collision, $P(t)$, we can get it by dividing $N(t)$ by $N_0$, so \begin{equation} \label{Eq:I:43:8} P(t) = e^{-t/\tau}. \end{equation} Our result is: the probability that a particular molecule survives a time $t$ without a collision is $e^{-t/\tau}$, where $\tau$ is the mean time between collisions. The probability starts out at $1$ (or certainty) for $t = 0$, and gets less as $t$ gets bigger and bigger. The probability that the molecule avoids a collision for a time equal to $\tau$ is $e^{-1} \approx 0.37$. The chance is less than one-half that it will have a greater than average time between collisions. That is all right, because there are enough molecules which go collision-free for times much longer than the mean time before colliding, so that the average time can still be $\tau$.
We originally defined $\tau$ as the average time between collisions. The result we have obtained in Eq. (43.7) also says that the mean time from an arbitrary starting instant to the next collision is also $\tau$. We can demonstrate this somewhat surprising fact in the following way. The number of molecules which experience their next collision in the interval $dt$ at the time $t$ after an arbitrarily chosen starting time is $N(t)\,dt/\tau$. Their "time until the next collision" is, of course, just $t$. The "average time until the next collision" is obtained in the usual way: \begin{equation*} \text{Average time until the next collision} = \frac{1}{N_0}\int_0^\infty t\,\frac{N(t)\,dt}{\tau}. \end{equation*} Using $N(t)$ obtained in (43.7) and evaluating the integral, we find indeed that $\tau$ is the average time from any instant until the next collision.
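These statements are easy to check with a short numerical experiment. The sketch below (the sample sizes are arbitrary) draws a long sequence of collision-free intervals distributed according to $e^{-t/\tau}$ and confirms both the fraction $e^{-1} \approx 0.37$ and the result just derived, that the mean wait from an arbitrarily chosen instant to the next collision is also $\tau$:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                                   # mean time between collisions

# times between successive collisions of one molecule, P(t) = e^(-t/tau)
gaps = rng.exponential(tau, size=1_000_000)
collisions = np.cumsum(gaps)                # absolute times of the collisions

# start watching the molecule at arbitrary instants and record how long
# it takes until its next collision
starts = rng.uniform(0, collisions[-1] - 10 * tau, size=100_000)
wait = collisions[np.searchsorted(collisions, starts)] - starts

print(gaps.mean())          # ~ tau: the average time between collisions
print(np.mean(gaps > tau))  # ~ e^-1 = 0.37: chance of no collision for a time tau
print(wait.mean())          # also ~ tau: average time from any instant to the next collision
```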
43–2 The mean free path
Another way of describing the molecular collisions is to talk not about the time between collisions, but about how far the particle moves between collisions. If we say that the average time between collisions is $\tau$, and that the molecules have a mean velocity $v$, we can expect that the average distance between collisions, which we shall call $l$, is just the product of $\tau$ and $v$. This distance between collisions is usually called the mean free path: \begin{equation} \label{Eq:I:43:9} \text{Mean free path $l$} = \tau v. \end{equation}
In this chapter we shall be a little careless about what kind of average we mean in any particular case. The various possible averages—the mean, the root-mean-square, etc.—are all nearly equal and differ by factors which are near to one. Since a detailed analysis is required to obtain the correct numerical factors anyway, we need not worry about which average is required at any particular point. We may also warn the reader that the algebraic symbols we are using for some of the physical quantities (e.g., $l$ for the mean free path) do not follow a generally accepted convention, mainly because there is no general agreement.
Just as the chance that a molecule will have a collision in a short time $dt$ is equal to $dt/\tau$, the chance that it will have a collision in going a distance $dx$ is $dx/l$. Following the same line of argument used above, the reader can show that the probability that a molecule will go at least the distance $x$ before having its next collision is $e^{-x/l}$.
The average distance a molecule goes before colliding with another molecule—the mean free path $l$—will depend on how many molecules there are around and on the "size" of the molecules, i.e., how big a target they represent. The effective "size" of a target in a collision we usually describe by a "collision cross section," the same idea that is used in nuclear physics, or in light-scattering problems.
Fig. 43–1. Collision cross section.
Consider a moving particle which travels a distance $dx$ through a gas which has $n_0$ scatterers (molecules) per unit volume (Fig. 43–1). If we look at each unit of area perpendicular to the direction of motion of our selected particle, we will find there $n_0\,dx$ molecules. If each one presents an effective collision area or, as it is usually called, "collision cross section," $\sigma_c$, then the total area covered by the scatterers is $\sigma_cn_0\,dx$.
By "collision cross section" we mean the area within which the center of our particle must be located if it is to collide with a particular molecule. If molecules were little spheres (a classical picture) we would expect that $\sigma_c = \pi(r_1 + r_2)^2$, where $r_1$ and $r_2$ are the radii of the two colliding objects. The chance that our particle will have a collision is the ratio of the area covered by scattering molecules to the total area, which we have taken to be one. So the probability of a collision in going a distance $dx$ is just $\sigma_cn_0\,dx$: \begin{equation} \label{Eq:I:43:10} \text{Chance of a collision in $dx$} = \sigma_cn_0\,dx. \end{equation}
We have seen above that the chance of a collision in $dx$ can also be written in terms of the mean free path $l$ as $dx/l$. Comparing this with (43.10), we can relate the mean free path to the collision cross section: \begin{equation} \label{Eq:I:43:11} \frac{1}{l} = \sigma_cn_0, \end{equation} which is easier to remember if we write it as \begin{equation} \label{Eq:I:43:12} \sigma_cn_0l = 1. \end{equation}
This formula can be thought of as saying that there should be one collision, on the average, when the particle goes through a distance $l$ in which the scattering molecules could just cover the total area. In a cylindrical volume of length $l$ and a base of unit area, there are $n_0l$ scatterers; if each one has an area $\sigma_c$ the total area covered is $n_0l\sigma_c$, which is just one unit of area. The whole area is not covered, of course, because some molecules are partly hidden behind others. That is why some molecules go farther than $l$ before having a collision. It is only on the average that the molecules have a collision by the time they go the distance $l$. From measurements of the mean free path $l$ we can determine the scattering cross section $\sigma_c$, and compare the result with calculations based on a detailed theory of atomic structure. But that is a different subject! So we return to the problem of nonequilibrium states.
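To get a feeling for the orders of magnitude involved, we can put in rough numbers for an ordinary gas at room conditions (the particular values below are illustrative; they do not come from the lecture):

```python
import math

n0 = 2.5e25          # molecules per cubic meter (a gas at room conditions)
d = 3.7e-10          # effective molecular diameter, meters
sigma_c = math.pi * d**2      # pi*(r1 + r2)^2 with r1 = r2 = d/2

l = 1.0 / (sigma_c * n0)      # mean free path, from sigma_c * n0 * l = 1
v = 500.0                     # typical molecular speed, m/s
tau = l / v                   # mean time between collisions, from l = tau * v

print(l, tau)    # roughly 1e-7 m and a few times 1e-10 s
```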
43–3 The drift speed
We want to describe what happens to a molecule, or several molecules, which are different in some way from the large majority of the molecules in a gas. We shall refer to the "majority" molecules as the "background" molecules, and we shall call the molecules which are different from the background molecules "special" molecules or, for short, the $S$-molecules. A molecule could be special for any number of reasons: It might be heavier than the background molecules. It might be a different chemical. It might have an electric charge—i.e., be an ion in a background of uncharged molecules. Because of their different masses or charges the $S$-molecules may have forces on them which are different from the forces on the background molecules. By considering what happens to these $S$-molecules we can understand the basic effects which come into play in a similar way in many different phenomena. To list a few: the diffusion of gases, electric currents in batteries, sedimentation, centrifugal separation, etc.
We begin by concentrating on the basic process: an $S$-molecule in a background gas is acted on by some specific force $\FLPF$ (which might be, e.g., gravitational or electrical) and in addition by the not-so-specific forces due to collisions with the background molecules. We would like to describe the general behavior of the $S$-molecule. What happens to it, in detail, is that it darts around hither and yon as it collides over and over again with other molecules. But if we watch it carefully we see that it does make some net progress in the direction of the force $\FLPF$. We say that there is a drift, superposed on its random motion. We would like to know what the speed of its drift is—its drift velocity—due to the force $\FLPF$.
If we start to observe an $S$-molecule at some instant we may expect that it is somewhere between two collisions. In addition to the velocity it was left with after its last collision it is picking up some velocity component due to the force $\FLPF$. In a short time (on the average, in a time $\tau$) it will experience a collision and start out on a new piece of its trajectory. It will have a new starting velocity, but the same acceleration from $\FLPF$.
To keep things simple for the moment, we shall suppose that after each collision our $S$-molecule gets a completely "fresh" start. That is, that it keeps no remembrance of its past acceleration by $\FLPF$. This might be a reasonable assumption if our $S$-molecule were much lighter than the background molecules, but it is certainly not valid in general. We shall discuss later an improved assumption.
For the moment, then, our assumption is that the $S$-molecule leaves each collision with a velocity which may be in any direction with equal likelihood. The starting velocity will take it equally in all directions and will not contribute to any net motion, so we shall not worry further about its initial velocity after a collision. In addition to its random motion, each $S$-molecule will have, at any moment, an additional velocity in the direction of the force $\FLPF$, which it has picked up since its last collision. What is the average value of this part of the velocity? It is just the acceleration $\FLPF/m$ (where $m$ is the mass of the $S$-molecule) times the average time since the last collision. Now the average time since the last collision must be the same as the average time until the next collision, which we have called $\tau$, above. The average velocity from $\FLPF$, of course, is just what is called the drift velocity, so we have the relation \begin{equation} \label{Eq:I:43:13} v_{\text{drift}} = \frac{F\tau}{m}. \end{equation} This basic relation is the heart of our subject. There may be some complication in determining what $\tau$ is, but the basic process is defined by Eq. (43.13).
You will notice that the drift velocity is proportional to the force. There is, unfortunately, no generally used name for the constant of proportionality. Different names have been used for each different kind of force. If in an electrical problem the force is written as the charge times the electric field, $\FLPF = q\FLPE$, then the constant of proportionality between the velocity and the electric field $\FLPE$ is usually called the "mobility." In spite of the possibility of some confusion, we shall use the term mobility for the ratio of the drift velocity to the force for any force. We write \begin{equation} \label{Eq:I:43:14} v_{\text{drift}} = \mu F \end{equation} in general, and we shall call $\mu$ the mobility. We have from Eq. (43.13) that \begin{equation} \label{Eq:I:43:15} \mu = \tau/m. \end{equation} The mobility is proportional to the mean time between collisions (there are fewer collisions to slow it down) and inversely proportional to the mass (more inertia means less speed picked up between collisions).
To get the correct numerical coefficient in Eq. (43.13), which is correct as given, takes some care. Without intending to confuse, we should still point out that the arguments have a subtlety which can be appreciated only by a careful and detailed study. To illustrate that there are difficulties, in spite of appearances, we shall make over again the argument which led to Eq. (43.13) in a reasonable but erroneous way (and the way one will find in many textbooks!).
We might have said: The mean time between collisions is $\tau$. After a collision the particle starts out with a random velocity, but it picks up an additional velocity between collisions, which is equal to the acceleration times the time. Since it takes the time $\tau$ to arrive at the next collision it gets there with the velocity $(F/m)\tau$. At the beginning of the collision it had zero velocity. So between the two collisions it has, on the average, a velocity one-half of the final velocity, so the mean drift velocity is $\tfrac{1}{2}F\tau/m$. (Wrong!) This result is wrong and the result in Eq. (43.13) is right, although the arguments may sound equally satisfactory. The reason the second result is wrong is somewhat subtle, and has to do with the following: The argument is made as though all collisions were separated by the mean time $\tau$. The fact is that some times are shorter and others are longer than the mean. Short times occur more often but make less contribution to the drift velocity because they have less chance "to really get going." If one takes proper account of the distribution of free times between collisions, one can show that there should not be the factor $\tfrac{1}{2}$ that was obtained from the second argument. The error was made in trying to relate by a simple argument the average final velocity to the average velocity itself. This relationship is not simple, so it is best to concentrate on what is wanted: the average velocity itself. The first argument we gave determines the average velocity directly—and correctly! But we can perhaps see now why we shall not in general try to get all of the correct numerical coefficients in our elementary derivations!
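The point can also be checked by a direct simulation. If the free times are drawn from the exponential distribution found in Section 43–1, the time-averaged velocity picked up from the force comes out $F\tau/m$, with no factor $\tfrac{1}{2}$. Here is a minimal sketch (the units are arbitrary; the random starting velocities after each collision average to zero and are therefore omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0          # mean time between collisions
a = 1.0            # acceleration F/m
n = 1_000_000      # number of free flights simulated

# free-flight durations are exponentially distributed with mean tau
t = rng.exponential(tau, size=n)

# distance gained from the force during each flight (the starting velocity is
# omitted, since its component along F averages to zero and adds no drift)
displacement = 0.5 * a * t**2

v_drift = displacement.sum() / t.sum()
print(v_drift)     # ~ a * tau, that is F*tau/m, not a*tau/2
```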
We return now to our simplifying assumption that each collision knocks out all memory of the past motion—that a fresh start is made after each collision. Suppose our $S$-molecule is a heavy object in a background of lighter molecules. Then our $S$-molecule will not lose its "forward" momentum in each collision. It would take several collisions before its motion was "randomized" again. We should assume, instead, that at each collision—in each time $\tau$ on the average—it loses a certain fraction of its momentum. We shall not work out the details, but just state that the result is equivalent to replacing $\tau$, the average collision time, by a new—and longer—$\tau$ which corresponds to the average "forgetting time," i.e., the average time to forget its forward momentum. With such an interpretation of $\tau$ we can use our formula (43.15) for situations which are not quite as simple as we first assumed.
43–4 Ionic conductivity
We now apply our results to a special case. Suppose we have a gas in a vessel in which there are also some ions—atoms or molecules with a net electric charge. We show the situation schematically in Fig. 43–2. If two opposite walls of the container are metallic plates, we can connect them to the terminals of a battery and thereby produce an electric field in the gas. The electric field will result in a force on the ions, so they will begin to drift toward one or the other of the plates. An electric current will be induced, and the gas with its ions will behave like a resistor. By computing the ion flow from the drift velocity we can compute the resistance. We ask, specifically: How does the flow of electric current depend on the voltage difference $V$ that we apply across the two plates?
Fig. 43–2. Electric current from an ionized gas.
We consider the case that our container is a rectangular box of length $b$ and cross-sectional area $A$ (Fig. 43–2). If the potential difference, or voltage, from one plate to the other is $V$, the electric field $E$ between the plates is $V/b$. (The electric potential is the work done in carrying a unit charge from one plate to the other. The force on a unit charge is $\FLPE$. If $\FLPE$ is the same everywhere between the plates, which is a good enough approximation for now, the work done on a unit charge is just $Eb$, so $V = Eb$.) The special force on an ion of the gas is $q\FLPE$, where $q$ is the charge on the ion. The drift velocity of the ion is then $\mu$ times this force, or \begin{equation} \label{Eq:I:43:16} v_{\text{drift}} = \mu F = \mu qE = \mu q\,\frac{V}{b}. \end{equation} An electric current $I$ is the flow of charge in a unit time. The electric current to one of the plates is given by the total charge of the ions which arrive at the plate in a unit of time. If the ions drift toward the plate with the velocity $v_{\text{drift}}$, then those which are within a distance ($v_{\text{drift}}\cdot T$) will arrive at the plate in the time $T$. If there are $n_i$ ions per unit volume, the number which reach the plate in the time $T$ is ($n_i\cdot A\cdot v_{\text{drift}}\cdot T$). Each ion carries the charge $q$, so we have that \begin{equation} \label{Eq:I:43:17} \text{Charge collected in $T$} = qn_iAv_{\text{drift}}T. \end{equation} The current $I$ is the charge collected in $T$ divided by $T$, so \begin{equation} \label{Eq:I:43:18} I = qn_iAv_{\text{drift}}. \end{equation} Substituting $v_{\text{drift}}$ from (43.16), we have \begin{equation} \label{Eq:I:43:19} I = \mu q^2n_i\,\frac{A}{b}\,V. \end{equation} We find that the current is proportional to the voltage, which is just the form of Ohm's law, and the resistance $R$ is the inverse of the proportionality constant: \begin{equation} \label{Eq:I:43:20} \frac{1}{R} = \mu q^2n_i\,\frac{A}{b}. \end{equation} We have a relation between the resistance and the molecular properties $n_i$, $q$, and $\mu$, which depends in turn on $m$ and $\tau$. If we know $n_i$ and $q$ from atomic measurements, a measurement of $R$ could be used to determine $\mu$, and from $\mu$ also $\tau$.
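As a rough numerical illustration (an aside with assumed, round values, not figures from the text): for a singly charged ion $q \approx 1.6\times10^{-19}$ coulomb, and a typical gas-ion mobility of about $2\times10^{-4}$ m$^2$/volt$\cdot$sec in the conventional (velocity-per-field) units corresponds to $\mu = (v_{\text{drift}}/E)/q \approx 1.3\times10^{15}$ sec/kg in the force-based definition used here. If we further assume $n_i = 10^{16}$ ions/m$^3$, $A = 10^{-2}$ m$^2$, and $b = 0.1$ m, Eq. (43.20) gives $$\frac{1}{R} = \mu q^2 n_i\,\frac{A}{b} \approx (1.3\times10^{15})(1.6\times10^{-19})^2(10^{16})(0.1) \approx 3\times10^{-8}\ \text{ohm}^{-1},$$ so $R$ is of the order of $3\times10^{7}$ ohms; such a weakly ionized gas is a very poor conductor.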
43–5 Molecular diffusion
We turn now to a different kind of problem, and a different kind of analysis: the theory of diffusion. Suppose that we have a container of gas in thermal equilibrium, and that we introduce a small amount of a different kind of gas at some place in the container. We shall call the original gas the "background" gas and the new one the "special" gas. The special gas will start to spread out through the whole container, but it will spread slowly because of the presence of the background gas. This slow spreading-out process is called diffusion. The diffusion is controlled mainly by the molecules of the special gas getting knocked about by the molecules of the background gas. After a large number of collisions, the special molecules end up spread out more or less evenly throughout the whole volume. We must be careful not to confuse diffusion of a gas with the gross transport that may occur due to convection currents. Most commonly, the mixing of two gases occurs by a combination of convection and diffusion. We are interested now only in the case that there are no "wind" currents. The gas is spreading only by molecular motions, by diffusion. We wish to compute how fast diffusion takes place.
We now compute the net flow of molecules of the "special" gas due to the molecular motions. There will be a net flow only when there is some nonuniform distribution of the molecules, otherwise all of the molecular motions would average to give no net flow. Let us consider first the flow in the $x$-direction. To find the flow, we consider an imaginary plane surface perpendicular to the $x$-axis and count the number of special molecules that cross this plane. To obtain the net flow, we must count as positive those molecules which cross in the direction of positive $x$ and subtract from this number the number which cross in the negative $x$-direction. As we have seen many times, the number which cross a surface area in a time $\Delta T$ is given by the number which start the interval $\Delta T$ in a volume which extends the distance $v\,\Delta T$ from the plane. (Note that $v$, here, is the actual molecular velocity, not the drift velocity.)
We shall simplify our algebra by giving our surface one unit of area. Then the number of special molecules which pass from left to right (taking the $+x$-direction to the right) is $n_-v\,\Delta T$, where $n_-$ is the number of special molecules per unit volume to the left (within a factor of $2$ or so, but we are ignoring such factors!). The number which cross from right to left is, similarly, $n_+v\,\Delta T$, where $n_+$ is the number density of special molecules on the right-hand side of the plane. If we call the molecular current $J$, by which we mean the net flow of molecules per unit area per unit time, we have \begin{equation} \label{Eq:I:43:21} J = \frac{n_-v\,\Delta T - n_+v\,\Delta T}{\Delta T}, \end{equation} or \begin{equation} \label{Eq:I:43:22} J = (n_- - n_+)v. \end{equation}
What shall we use for $n_-$ and $n_+$? When we say "the density on the left," how far to the left do we mean? We should choose the density at the place from which the molecules started their "flight," because the number which start such trips is determined by the number present at that place. So by $n_-$ we should mean the density a distance to the left equal to the mean free path $l$, and by $n_+$, the density at the distance $l$ to the right of our imaginary surface.
It is convenient to consider that the distribution of our special molecules in space is described by a continuous function of $x$, $y$, and $z$ which we shall call $n_a$. By $n_a(x,y,z)$ we mean the number density of special molecules in a small volume element centered on $(x,y,z)$. In terms of $n_a$ we can express the difference $(n_+ - n_-)$ as \begin{equation} \label{Eq:I:43:23} (n_+ - n_-) = \ddt{n_a}{x}\,\Delta x = \ddt{n_a}{x}\cdot 2l. \end{equation} Substituting this result in Eq. (43.22) and neglecting the factor of $2$, we get \begin{equation} \label{Eq:I:43:24} J_x = -lv\,\ddt{n_a}{x}. \end{equation} We have found that the flow of special molecules is proportional to the derivative of the density, or to what is sometimes called the "gradient" of the density.
It is clear that we have made several rough approximations. Besides various factors of two we have left out, we have used $v$ where we should have used $v_x$, and we have assumed that $n_+$ and $n_-$ refer to places at the perpendicular distance $l$ from our surface, whereas for those molecules which do not travel perpendicular to the surface element, $l$ should correspond to the slant distance from the surface. All of these refinements can be made; the result of a more careful analysis shows that the right-hand side of Eq. (43.24) should be multiplied by $1/3$. So a better answer is \begin{equation} \label{Eq:I:43:25} J_x = -\frac{lv}{3}\,\ddt{n_a}{x}. \end{equation} Similar equations can be written for the currents in the $y$- and $z$-directions.
The current $J_x$ and the density gradient $dn_a/dx$ can be measured by macroscopic observations. Their experimentally determined ratio is called the "diffusion coefficient," $D$. That is, \begin{equation} \label{Eq:I:43:26} J_x = -D\,\ddt{n_a}{x}. \end{equation} We have been able to show that for a gas we expect \begin{equation} \label{Eq:I:43:27} D = \tfrac{1}{3}lv. \end{equation}
So far in this chapter we have considered two distinct processes: mobility, the drift of molecules due to "outside" forces; and diffusion, the spreading determined only by the internal forces, the random collisions. There is, however, a relation between them, since they both depend basically on the thermal motions, and the mean free path $l$ appears in both calculations.
If, in Eq. (43.25), we substitute $l = v\tau$ and $\tau = \mu m$, we have \begin{equation} \label{Eq:I:43:28} J_x = -\tfrac{1}{3}mv^2\mu\,\ddt{n_a}{x}. \end{equation} But $mv^2$ depends only on the temperature. We recall that \begin{equation} \label{Eq:I:43:29} \tfrac{1}{2}mv^2 = \tfrac{3}{2}kT, \end{equation} so \begin{equation} \label{Eq:I:43:30} J_x = -\mu kT\,\ddt{n_a}{x}. \end{equation} We find that $D$, the diffusion coefficient, is just $kT$ times $\mu$, the mobility coefficient: \begin{equation} \label{Eq:I:43:31} D = \mu kT. \end{equation} And it turns out that the numerical coefficient in (43.31) is exactly right—no extra factors have to be thrown in to adjust for our rough assumptions. We can show, in fact, that (43.31) must always be correct—even in complicated situations (for example, the case of a suspension in a liquid) where the details of our simple calculations would not apply at all.
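As a quick order-of-magnitude check (again with assumed, illustrative numbers): at room temperature $kT \approx 4\times10^{-21}$ joule, and taking the same illustrative ion mobility $\mu \approx 1.3\times10^{15}$ sec/kg used above, $$D = \mu kT \approx (1.3\times10^{15})(4\times10^{-21}) \approx 5\times10^{-6}\ \text{m}^2/\text{sec} \approx 0.05\ \text{cm}^2/\text{sec},$$ which is indeed the order of magnitude of measured diffusion coefficients of ions in gases at atmospheric pressure.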
To show that (43.31) must be correct in general, we shall derive it in a different way, using only our basic principles of statistical mechanics. Imagine a situation in which there is a gradient of "special" molecules, and we have a diffusion current proportional to the density gradient, according to Eq. (43.26). We now apply a force field in the $x$-direction, so that each special molecule feels the force $F$. According to the definition of the mobility $\mu$ there will be a drift velocity given by \begin{equation} \label{Eq:I:43:32} v_{\text{drift}} = \mu F. \end{equation} By our usual arguments, the drift current (the net number of molecules which pass a unit of area in a unit of time) will be \begin{equation} \label{Eq:I:43:33} J_{\text{drift}} = n_av_{\text{drift}}, \end{equation} or \begin{equation} \label{Eq:I:43:34} J_{\text{drift}} = n_a\mu F. \end{equation} We now adjust the force $F$ so that the drift current due to $F$ just balances the diffusion, so that there is no net flow of our special molecules. We have $J_x + J_{\text{drift}} = 0$, or \begin{equation} \label{Eq:I:43:35} D\,\ddt{n_a}{x} = n_a\mu F. \end{equation}
Under the "balance" conditions we find a steady (with time) gradient of density given by \begin{equation} \label{Eq:I:43:36} \ddt{n_a}{x} = \frac{n_a\mu F}{D}. \end{equation}
But notice! We are describing an equilibrium condition, so our equilibrium laws of statistical mechanics apply. According to these laws the probability of finding a molecule at the coordinate $x$ is proportional to $e^{-U/kT}$, where $U$ is the potential energy. In terms of the number density $n_a$, this means that \begin{equation} \label{Eq:I:43:37} n_a = n_0e^{-U/kT}. \end{equation} If we differentiate (43.37) with respect to $x$, we find \begin{equation} \label{Eq:I:43:38} \ddt{n_a}{x} = -n_0e^{-U/kT}\cdot\frac{1}{kT}\,\ddt{U}{x}, \end{equation} or \begin{equation} \label{Eq:I:43:39} \ddt{n_a}{x} = -\frac{n_a}{kT}\,\ddt{U}{x}. \end{equation} In our situation, since the force $F$ is in the $x$-direction, the potential energy $U$ is just $-Fx$, and $-dU/dx = F$. Equation (43.39) then gives \begin{equation} \label{Eq:I:43:40} \ddt{n_a}{x} = \frac{n_aF}{kT}. \end{equation} [This is just exactly Eq. (40.2), from which we deduced $e^{-U/kT}$ in the first place, so we have come in a circle]. Comparing (43.40) with (43.36), we get exactly Eq. (43.31). We have shown that Eq. (43.31), which gives the diffusion current in terms of the mobility, has the correct coefficient and is very generally true. Mobility and diffusion are intimately connected. This relation was first deduced by Einstein.
43–6 Thermal conductivity
The methods of the kinetic theory that we have been using above can be used also to compute the thermal conductivity of a gas. If the gas at the top of a container is hotter than the gas at the bottom, heat will flow from the top to the bottom. (We think of the top being hotter because otherwise convection currents would be set up and the problem would no longer be one of heat conduction.) The transfer of heat from the hotter gas to the colder gas is by the diffusion of the "hot" molecules—those with more energy—downward and the diffusion of the "cold" molecules upward. To compute the flow of thermal energy we can ask about the energy carried downward across an element of area by the downward-moving molecules, and about the energy carried upward across the surface by the upward-moving molecules. The difference will give us the net downward flow of energy.
The thermal conductivity $\kappa$ is defined as the ratio of the rate at which thermal energy is carried across a unit surface area, to the temperature gradient: \begin{equation} \label{Eq:I:43:41} \frac{1}{A}\,\ddt{Q}{t} = -\kappa\,\ddt{T}{z}. \end{equation} Since the details of the calculations are quite similar to those we have done above in considering molecular diffusion, we shall leave it as an exercise for the reader to show that \begin{equation} \label{Eq:I:43:42} \kappa = \frac{knlv}{\gamma - 1}, \end{equation} where $kT/(\gamma - 1)$ is the average energy of a molecule at the temperature $T$.
If we use our relation $nl\sigma_c = 1$, the heat conductivity can be written as \begin{equation} \label{Eq:I:43:43} \kappa = \frac{1}{\gamma - 1}\,\frac{kv}{\sigma_c}. \end{equation}
We have a rather surprising result. We know that the average velocity of gas molecules depends on the temperature but not on the density. We expect $\sigma_c$ to depend only on the size of the molecules. So our simple result says that the thermal conductivity $\kappa$ (and therefore the rate of flow of heat in any particular circumstance) is independent of the density of the gas! The change in the number of "carriers" of energy with a change in density is just compensated by the larger distance the "carriers" can go between collisions.
One may ask: "Is the heat flow independent of the gas density in the limit as the density goes to zero? When there is no gas at all?" Certainly not! The formula (43.43) was derived, as were all the others in this chapter, under the assumption that the mean free path between collisions is much smaller than any of the dimensions of the container. Whenever the gas density is so low that a molecule has a fair chance of crossing from one wall of its container to the other without having a collision, none of the calculations of this chapter apply. We must in such cases go back to kinetic theory and calculate again the details of what will occur.
Copyright © 1963, 2006, 2013 by the California Institute of Technology, Michael A. Gottlieb and Rudolf Pfeiffer
Constructive and Non-Constructive Proofs in Agda (Part 1): Logical Background
Article by Danya Rogozin
Hi! I'm Danya Rogozin, and I work at Serokell on a blockchain framework called Snowdrop.
I would like to tell you about constructive and non-constructive proofs in a proof assistant and functional programming language called Agda. I'm currently working on a paper on a generalized model of data processing in a blockchain-system together with my teammates George Agapov and Kirill Briantsev. We use Agda to prove interesting properties of read-only state access computation, transaction validation, and block processing. Also, I'm writing a PhD thesis at Moscow State University on modalities in constructive and linear logic and their connection with computer science and mathematical linguistics.
We'll split this article into several parts:
Logical background;
Brief introduction to Agda;
Constructive and non-constructive proofs in Agda.
The first two parts are needed to introduce the reader to the necessary background. In the first part, we will give some theoretical background in mathematical logic to know what formal proof and related concepts are, regardless of functional programming context. After that, we will talk about constructive and non-constructive proofs in mathematical logic and discuss a difference between them. Mathematical logic is the theoretical foundation of formal verification in dependently typed languages, so we need to discuss these basic technical and philosophical aspects to see the close connection between proof theory and dependently typed programming.
In the second part, we will introduce the reader to programming and theorem proving in Agda. We will compare the syntax and basic concepts of Agda and Haskell, and discuss the difference.
In the third part, we will prove some theorems in Agda in two ways: constructively and non-constructively. We will see the difference between constructive and non-constructive proofs in Agda through those examples.
As we will see later, Agda is a constructive formal system, i.e. we have no way to write non-constructive proofs without some special postulates. We will discuss what exactly we obtain if we make Agda's formal system classical by adding non-constructive postulates.
Logical background
Classical logic
Classical logic is the oldest branch of mathematical logic that has its roots in Aristotle's works [1]. Classical logic took its shape in G. Boole's [2] and G. Frege's [3] works in the second half of the 19th century. The main motivation is the need to define whether some statement is true or false regardless of its content. In other words, we would like to establish the truth of a given statement from its form. By form, we mean the result of forming complex statements from atomic statements via special parts of speech, abstracting from specific meanings that can vary from context to context. In classical logic, atomic statements are propositional variables that can take a value from the two-element set $\{\text{false}, \text{true}\}$, and the special parts of speech are logical connectives: conjunction (a counterpart of "and"), disjunction (a counterpart of "or"), implication (a counterpart of "if … then …"), and negation (a counterpart of "not").
Thus a classical proof is a syntactical way to establish the truth of a proposition from this two-valued point of view. The syntactical procedure is set up as follows. We define axioms (or rather axiom schemas: not only primitive formulas but all their special cases too) and inference rules that allow obtaining new theorems from formulas that have already been proved. We use a single inference rule called Modus Ponens. Modus Ponens claims that if the implication ($A \to B$) and the assumption ($A$) are true, then the conclusion ($B$) is true.
We define the language of a classical propositional calculus:
Definition 1 Let $V = \{ p_0, p_1, \dots \}$ be a set of propositional variables. Thus:
Any propositional variable is a formula;
If $A$ is a formula, then $\neg A$ is a formula;
If $A, B$ are formulas, then $(A \lor B)$, $(A \land B)$ and $(A \to B)$ are formulas.
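As an aside (not part of the original article), the inductive clauses of Definition 1 translate directly into an algebraic datatype in Haskell, the language this article later uses for its illustrations; the constructor names below are ours and purely illustrative:

```haskell
-- A sketch of the propositional language from Definition 1.
data Formula
  = Var Int               -- propositional variables p0, p1, ...
  | Neg Formula           -- negation  ¬A
  | Formula :/\: Formula  -- conjunction  A ∧ B
  | Formula :\/: Formula  -- disjunction  A ∨ B
  | Formula :->: Formula  -- implication  A → B
  deriving (Eq, Show)
```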
Definition 2 (Classical propositional calculus) The classical propositional calculus (CPC) is defined by the following list of axiom schemes and inference rules:
$(A \to (B \to C)) \to ((A \to B) \to (A \to C))$;
$A \to (B \to A)$;
$A \to (B \to (A \land B))$;
$(A \land B) \to A$;
$(A \land B) \to B$;
$(A \to C) \to ((B \to C) \to ((A \lor B) \to C))$;
$A \to (A \lor B)$;
$B \to (A \lor B)$;
$(A \to B) \to ((A \to \neg B) \to \neg A)$;
$\neg \neg A \to A$
Inference rule, Modus Ponens: $\frac{A \quad A \to B}{B}$.
Equivalently, we may define classical propositional logic as the smallest set $\mathfrak{L}$ that consists of all special cases of these schemas and is closed under the Modus Ponens rule, i.e. if $A \in \mathfrak{L}$ and $A \to B \in \mathfrak{L}$, then $B \in \mathfrak{L}$.
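To make the "closed under Modus Ponens" phrasing concrete, here is a small sketch reusing the illustrative Formula type above: a single closure step that collects every conclusion obtainable by Modus Ponens from formulas already derived.

```haskell
-- One closure step under Modus Ponens: for every derived implication a :->: b
-- whose antecedent a has also been derived, the conclusion b becomes derived.
modusPonensStep :: [Formula] -> [Formula]
modusPonensStep derived = [ b | (a :->: b) <- derived, a `elem` derived ]
```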
Definition 3 (Formal proof) A (classical propositional) proof is a finite sequence of formulas, each of which is an axiom, or follows from the previous formulas by Modus Ponens rule.
Let us prove the formula $A \to A$ as an example:
$$\begin{array}{ll}
(1) & (A \to ((A \to A) \to A)) \to ((A \to (A \to A)) \to (A \to A)) \\
    & \quad \text{Axiom schema} \\
(2) & A \to ((A \to A) \to A) \\
    & \quad \text{Axiom schema} \\
(3) & (A \to (A \to A)) \to (A \to A) \\
    & \quad \text{1, 2, Modus Ponens} \\
(4) & A \to (A \to A) \\
    & \quad \text{Axiom schema} \\
(5) & A \to A \\
    & \quad \text{3, 4, Modus Ponens}
\end{array}$$
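Looking ahead to the propositions-as-types reading used later in this series (this aside is ours, not part of the original text): under that reading, axiom schema 1 corresponds to the S combinator, axiom schema 2 to the K combinator, and the five-step proof above is literally the term S K K.

```haskell
-- Axiom schema 1 read as a program: the S combinator.
s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
s f g x = f x (g x)

-- Axiom schema 2 read as a program: the K combinator.
k :: a -> (b -> a)
k x _ = x

-- The proof of A -> A assembled exactly as in steps (1)-(5) above.
identityProof :: a -> a
identityProof = s k k
```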
The law of excluded middle (in Latin, tertium non datur) is a law of classical logic initially formulated in Aristotle's works [1]. This law claims that only one statement from $A$ and $\neg A$ is necessarily true and the second one is necessarily false. In other words, a statement may be either true or false, and there is no third option. Formally, $A \vee \neg A$. We leave it as an exercise to prove the law of excluded middle in CPC.
The equivalent formulation of the law of excluded middle is the law of double negation elimination, which says that any statement is equivalent to its double negation. That is, if we know that it's false that $A$ is false, then $A$ is true.
First-order logic
We have talked above about classical propositional logic, which has quite weak expressive possibilities. The language of classical propositional logic doesn't reflect the structure of propositions, but the structure of a statement often plays a huge role in establishing the truth of this statement. For example,
"A sequence of real numbers $x_1, x_2, \dots $ is a Cauchy sequence, if for all ε>0\varepsilon > 0ε>0 there exists a natural number NNN, such that for all i,j>Ni, j > Ni,j>N, ∣xi−xj∣<ε| x_i - x_j | < \varepsilon∣xi−xj∣<ε".
We have the sentence "for all ε>0\varepsilon > 0ε>0 …", but we have no way to establish whether this sentence is true or false only looking on connectives in this statement because we have to pay particular attention to the internal structure.
First-order logic (FOL) is a much richer formal system than (classical) propositional logic that allows expressing the internal structure of basic propositions in more detail.
The language of FOL extends the language of classical propositional logic. In addition to logical connectives, we have:
variables, the infinite set of letters $x, y, z, \dots$;
constants, $a, b, c, \dots$;
relation symbols, the set of letters $P, Q, R, \dots$. Generally, we have an infinite collection of $n$-ary relation symbols for every $n \in \mathbb{N}$;
function symbols, the set of letters $f, g, h, \dots$. Similarly, we have an infinite collection of $n$-ary function symbols for any natural number $n$;
quantifiers: "for all" $\forall$, "there exists" $\exists$.
Variables range over some domain. Constants denote special elements of the considered domain. Predicate symbols are symbols that represent relations. Function symbols are signs that denote operations. Note that any propositional variable is a $0$-ary relation symbol and any constant is a $0$-ary function symbol. In other words, we don't need to define propositional variables and constants in the first-order language explicitly.
We build the first-order formulas as follows:
A first-order signature is a pair $\Omega = \langle Fn, Pr \rangle$, where $Fn$ is a set of function symbols and $Pr$ is a set of relation symbols.
Definition 4 (Terms)
Any variable is a term;
Any constant is a term;
If $x_1, \dots, x_n$ are terms and $f \in Fn$ is a function symbol of valence $n$, then $f(x_1, \dots, x_n)$ is a term.
Definition 5 (Formulas)
If $x_1, \dots, x_n$ are terms and $P \in Pr$ is a relation symbol of valence $n$, then $P(x_1, \dots, x_n)$ is a formula;
If $A, B$ are formulas, then $(A \alpha B)$ is a formula, where $\alpha \in \{ \to, \land, \lor \}$.
If $A$ is a formula and $x$ is a variable, then $\forall x \: A$ (for all $x$, $A$ holds) and $\exists x \: A$ (there exists $x$ such that $A$ holds) are formulas.
Let us write down the definition of a Cauchy sequence as a first-order formula: $$\forall \varepsilon \: \exists N \: \forall i \: \forall j \: ((i > N \land j > N) \to (| x_i - x_j | < \varepsilon))$$ where $>$ ("greater than") is a binary relation symbol, $-$ ("subtraction") is a binary function symbol, and $|\cdot|$ ("absolute value") is a unary function symbol.
More briefly: $\forall \varepsilon \: \exists N \: \forall i, j > N \: (| x_i - x_j | < \varepsilon)$, where $\forall i, j > N$ is a short form for $\forall i \: \forall j \: ((i > N \land j > N) \to \dots)$.
We define first-order logic axiomatically as first-order predicate calculus:
Definition 6 (Classical first-order predicate calculus)
CPC axioms;
$\forall x \: A(x) \to A(a)$;
$A(a) \to \exists x \: A(x)$;
Modus Ponens;
The first Bernays' rule: $\frac{A \to B}{A \to \forall x \: B}$, provided $x$ is not free in $A$;
The second Bernays' rule: $\frac{A \to B}{\exists x \: A \to B}$, provided $x$ is not free in $B$, where $A$, $B$ are metavariables ranging over formulas.
Here, a proof is a finite sequence of formulas, each of which is an axiom, or follows from the previous formulas by inference rules (Modus Ponens and Bernays' rules).
Note that $\exists x \: A$ is equivalent to $\neg (\forall x \: \neg A)$ and $\forall x \: A$ is equivalent to $\neg (\exists x \: \neg A)$. Thus, quantifiers are mutually expressible in classical first-order logic.
Constructive logic
Constructive (or intuitionistic) mathematics is a field of mathematical logic that arose at the beginning of the 20th century. This direction was founded by Dutch mathematician L. E. J. Brouwer to provide an answer to the paradoxes of naive set theory such as Russell's paradox [4]. Brouwer claimed that the obtained paradoxes are evidence that classical mathematics and its foundations are unsafe.
Brouwer and his followers expressed misgivings about ways of reasoning on mathematical objects and their introduction [5]. For instance, intuitionists widely discussed the nature of existence [6]. Mathematics is full of examples of so-called pure existence theorems, i.e. theorems claiming the existence of an object with some desired property but proved without any explicit presentation of this object. We consider the simplest example:
Theorem 1
There exist irrational numbers $a$ and $b$ such that $a^b$ is a rational number.
Proof Let us consider the number $\sqrt{2}^{\sqrt{2}}$. If $\sqrt{2}^{\sqrt{2}}$ is rational, then $a = b = \sqrt{2}$. If $\sqrt{2}^{\sqrt{2}}$ is an irrational number, let $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$. Thus $a^b = (\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = \sqrt{2}^{\sqrt{2} \cdot \sqrt{2}} = \sqrt{2}^2 = 2$, which is a rational number. $\Box$
Such reasoning is unacceptable from an intuitionistic point of view because we did not find those numbers. We just established two alternatives and had no reason to choose one of them. This proof is a proof of existence without any provision of clear evidence for the specific property. Such proofs are often based on the law of excluded middle ($A \vee \neg A$) as in the example above. But this proof is classically valid since any statement is either true or false.
This critique has led to the rejection of the law of excluded middle. Moreover, logical connectives and quantifiers came to be understood quite differently. A statement is proved if we have some explicit construction that solves the desired mathematical problem. Logical connectives are understood as follows. We will use Haskell notation:
A proof of (a, b) is an ordered pair (x, y), where x :: a and y :: b. In other words, if we need to prove both statements, then we need to prove each of them;
A proof of Either a b is either Left x or Right y, where x :: a and y :: b. If we are looking for a proof of some disjunction, it means that we must prove at least one of the members of this disjunction;
A proof of a -> b is a function f such that for all x :: a, f x :: b. A proof of this implication means that we have a general method that reduces any proof of b to the proof of a;
A proof ¬ a is a proof of a -> Void, where Void is an empty type.
Logically, Void denotes absurdity, a statement that has no proof, for instance, $0 = 1$. In other words, if we need to prove the negation of a, it means that we should show that any proof of a leads to a contradiction. For example, $4 = 5 \to 0 = 1$ is equivalent to $\neg (4 = 5)$.
Type-theoretically, Void is a type of contradiction and has no values as far as the contradiction is not provable (if our system is consistent). Thus, a -> Void may be considered as a type of function with an empty range of values.
In Haskell, non-termination and exceptions inhabit all types including Void (e.g. loop = loop :: Void or exc = undefined :: Void), but we'll be working in a subset of Haskell without these problematic constructs.
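To make the negation clauses concrete, here is a small Haskell sketch (the names are ours, not from the article) using the standard Data.Void module:

```haskell
import Data.Void (Void, absurd)

-- Constructive negation: a refutation of a is a function from a into the empty type.
type Not a = a -> Void

-- Double-negation introduction is constructively provable:
-- a proof of a refutes any claimed refutation of a.
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro x notA = notA x

-- Ex falso quodlibet: from the absurd, anything follows.
exFalso :: Void -> b
exFalso = absurd

-- A proof of a -> (¬a -> b), which will reappear as an axiom in Definition 7 below.
explosion :: a -> Not a -> b
explosion x notA = absurd (notA x)
```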
This way of interpreting the logical constants is called Brouwer–Heyting–Kolmogorov semantics (BHK-semantics) [7].
As you can see, the proof in the example above is not valid within the context of BHK-semantics. Firstly, we did not propose any concrete irrational numbers $a$ and $b$ such that $a^b$ is rational. Secondly, we have used the law of excluded middle $A \vee \neg A$. By definition, a proof of $A \vee \neg A$ is either a proof of $A$ or a proof of $\neg A$, but classically $A \vee \neg A$ is true without any proof of $A$ or of $\neg A$ (it is easy to check that $A \vee \neg A$ is a classical tautology via truth tables). The law of excluded middle was rejected by intuitionists for this reason.
We define intuitionistic propositional logic axiomatically as follows. Propositional language and formal proof are defined similarly as above:
Definition 7 (Intuitionistic propositional logic)
$A \to (\neg A \to B)$.
In other words, we replaced the last axiom of classical propositional logic, $\neg \neg A \to A$, with the weaker axiom $A \to (\neg A \to B)$ and obtained intuitionistic propositional logic.
Moreover, there is the following theorem:
Theorem 2 (Disjunction property, Gödel [1932], Gentzen [1934], Kleene [1945]) [8] [9] [10]
$A \vee B$ is provable intuitionistically if and only if either $A$ is provable intuitionistically or $B$ is provable intuitionistically.
By the way, the unprovability of the law of excluded middle may be considered a consequence of the disjunction property: $A \vee \neg A$ cannot be provable in general, because we have no way to establish the provability of this disjunction knowing nothing about $A$. Note that the disjunction property doesn't hold in classical logic, where $A \vee \neg A$ is provable and true regardless of what $A$ is.
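A Haskell sketch of this point (again our illustration, not from the article): while the excluded middle itself, Either a (Not a), has no general constructive proof, its double negation does.

```haskell
import Data.Void (Void)

type Not a = a -> Void

-- The double negation of the excluded middle is constructively provable:
-- any claimed refutation of (A ∨ ¬A) refutes itself.
lemDoubleNegation :: Not (Not (Either a (Not a)))
lemDoubleNegation k = k (Right (\x -> k (Left x)))
```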
Intuitionistic propositional logic may be extended to intuitionistic first-order logic as follows:
Definition 8 (Intuitionistic first-order predicate calculus)
IPC axioms
Bernays' rules.
Note that, in contrast to classical first-order logic, quantifiers are not mutually expressible here.
There is the following theorem about intuitionistic first-order logic which is wrong for classical first-order logic:
Theorem 3 (Existence property [9])
If $\exists x \: A(x)$ is provable in intuitionistic first-order logic with signature $\Omega$, then there exists some term $t$ such that $A(t)$ is provable.
The existence property theorem is closely connected with the philosophical motivation of intuitionism, since, as we said before, existence should be proved explicitly from an intuitionistic point of view.
See [11] to read about the philosophical distinctions between classical and intuitionistic logic in more detail.
Also, we note that the statement formulated in Theorem 1 has a constructive proof. Firstly, we propose some definitions and formulate the following theorem that solves Hilbert's seventh problem:
Definition 9 An algebraic number is a real number (more generally, a complex number) that is a root of some non-zero polynomial with rational coefficients. Simple example: $\pm \sqrt{2}$ are roots of the polynomial $f(x) = x^2 - 2$, because $f(\sqrt{2}) = f(-\sqrt{2}) = 0$.
Definition 10 A transcendental number is a real number $a$ that is not a root of such a polynomial with rational coefficients. The standard examples of transcendental numbers are $\pi$ and $e$.
Theorem 4 (Gelfond–Schneider theorem [1934]) [12] Let $a, b$ be algebraic numbers such that $a \neq 1$, $a \neq 0$, and $b$ is an irrational number. Then $a^b$ is a transcendental number.
Thus we may easily prove the previous theorem without any non-constructive steps in the reasoning:
Proof By the Gelfond–Schneider theorem, $\sqrt{2}^{\sqrt{2}}$ is a transcendental number, since $\sqrt{2}$ is an algebraic irrational number. Hence $\sqrt{2}^{\sqrt{2}}$ is an irrational number. But $(\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = \sqrt{2}^{\sqrt{2} \cdot \sqrt{2}} = \sqrt{2}^2 = 2$ is a rational number. Then we take $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$. $\Box$
In this post, we have made a brief introduction to the logical background to better understand the concepts that will be described in the next parts. We have seen the difference between constructive and non-constructive proofs considered mathematically, within the context of mathematical logic.
In the next post, we will introduce the reader to Agda and compare its concepts and syntax with Haskell. Moreover, we will study theorem proving in Agda and understand the implementation of mathematical reasoning in dependently typed programming languages. If you want to stay updated, follow Serokell on Twitter and Facebook!
[1] Smith, R. (tr. & comm.). Aristotle's Prior Analytics, Indianapolis: Hackett, 1989.
[2] Boole, G. An Investigation of the Laws of Thought. London: Walton & Maberly, 1854.
[3] Frege, G. Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle, 1879.
[4] Russell, B. The principles of mathematics. London, 1903.
[5] Brouwer, L.E.J. Collected works I, A. Heyting (ed.). Amsterdam: North-Holland, 1975.
[6] Heyting, A. Intuitionism, an introduction. Amsterdam: North-Holland, 1956.
[7] Troelstra, A.S. Constructivism and Proof Theory. Illc, University of Amsterdam, 2003.
[8] Gödel, K. Zum intuitionistischen Aussagenkalkül, Anzeiger der Akademie der Wissenschaftischen in Wien, v. 69, 1932.
[9] Gentzen, G. Untersuchungen über das logische Schließen. I, Mathematische Zeitschrift v. 39 n. 2, 1934.
[10] Kleene S.C. On the interpretation of intuitionistic number theory, Journal of Symbolic Logic, vol. 10, 1945.
[11] Dummett, M. Elements of Intuitionism. Oxford University Press, 1977.
[12] Gelfond, A. Sur le septième Problème de Hilbert, Bulletin de l'Académie des Sciences de l'URSS. Classe des sciences mathématiques et na. VII (4), 1934.
Alternaria alternata strain VLH1: a potential entomopathogenic fungus native to North Western Indian Himalayas
Amit Umesh Paschapur ORCID: orcid.org/0000-0002-3152-06021,
A. R. N. S. Subbanna2,
Ashish Kumar Singh1,
B. Jeevan1,
Johnson Stanley3,
H. Rajashekara1,
Krishna Kant Mishra1,
Prasanna S. Koti4,
Lakshmi Kant1 &
Arunava Pattanayak1
Egyptian Journal of Biological Pest Control volume 32, Article number: 138 (2022)
The inadvertent observation of a substantial population reduction of greenhouse whiteflies infesting Salvia divinorum plants grown in a polyhouse sparked a flurry of inquiries into the cause of the population decline. The entomopathogenic fungus (EPF) Alternaria alternata strain VLH1, infecting the greenhouse whitefly on S. divinorum plants, was isolated and characterised morphologically and molecularly using multilocus sequence typing.
The fungus was found to be highly virulent against sucking pests, with LC₅₀ values ranging from 1.7 × 10⁴ to 2.5 × 10⁶ spores per ml for the mustard aphid (Lipaphis erysimi Kaltenbach) and the soybean sucking bug (Chauliops choprai Sweet and Schaeffer), respectively. In the lepidopteran larvae treated with a concentration of 3 × 10⁵ spores per ml, the fungus induced developmental abnormalities such as aberrant larval to pupal moulting, defective pupae, and deformed adults. Pathogenicity studies on the two beneficial insects (Coccinella septempunctata (Linn.) and Apis mellifera L.) and 11 host plants revealed no disease signs, indicating that it is safe for use in pest management in hill agriculture. The chitinolytic activity of the fungus and its crude protein extracts was reported in studies conducted against target insect pests, with the highest chitinase enzyme production (117.7 U/ml) on the fourth day of inoculation. Furthermore, over a 96-h period, third instar Helicoverpa armigera (Hubn.) larvae fed on a protein fraction-amended artificial diet showed a significant decrease in nutritional physiology indices such as relative growth rate, relative consumption rate, efficiency of conversion of ingested food, efficiency of conversion of digested food, and approximate digestibility. Moreover, the polyhouse and open-field studies against two sucking pests, Myzus persicae (Sulz.) infesting capsicum in the polyhouse and L. erysimi infesting Indian rapeseed under open-field conditions, showed 81.14% and 63.14% mortality rates, respectively, at a concentration of 3 × 10⁷ spores/ml.
The entomopathogenic fungus (EPF) was found to be an effective biocontrol agent, causing effects ranging from direct mortality in sucking pests to developmental abnormalities in lepidopteran insects. Despite positive findings in in vitro and in vivo bioassay investigations against various insect pests, the fungus still has to be evaluated further before it can be used on a broad scale for biological pest management.
The traditional form of crop-livestock agriculture and organic farming have been the defining features of hill agriculture in the Indian Himalayas from time immemorial (Behera et al. 2012). This type of traditional agriculture has not only increased pest population density over time, but has also resulted in significant economic losses in hill agriculture during pest outbreaks (Negi and Palni 2010). The growing demand for organic products grown in the hills among metropolitan centres has paved the way for Himalayan farmers to use environmentally friendly pest management measures that are compatible with organic farming systems (Stanley et al. 2022). It has become critical for researchers to develop fresh environmentally friendly pest management methods that are both socioeconomically and culturally acceptable to Himalayan marginal and small farmers (Stanley et al. 2022). In the Himalayas, there is currently no recommended, effective, and widely accepted biological pest management solution. As a result, the hunt for native, highly virulent, and self-sustaining biological control agents still persists (Paschapur et al. 2019).
Fungi, for example, are microbial biocontrol agents with the potential to be effective pest control agents as well as one of the best alternatives to traditional, hazardous insecticides (Quesada-Moraga et al. 2006). Beauveria bassiana (Bals.-Criv.) (Hypocreales: Clavicipitaceae), Metarhizium anisopliae (Metchnikoff) Sorokin (Hypocreales: Clavicipitaceae), Nomuraea rileyi (Farlow) Kepler (Hypocreales: Clavicipitaceae), and others are some of the most well-known EPFs in the world (McKinnon et al. 2018). Apart from these, a few fungi, such as Alternaria, are known to be saprophytic, plant pathogenic, and in some circumstances, entomopathogenic (Poitevin et al. 2018).
Alternaria alternata (Fr.) Keissl (Pleosporales: Pleosporaceae) is a saprophytic and well-known plant pathogenic fungus that causes serious diseases in fruits and vegetables during their post-harvest shelf life. However, several authors have reported A. alternata's EPF activity against a variety of insect pests. Christias et al. (2001) isolated a new pathotype of A. alternata that is highly virulent against a large number of aphids. McKinnon et al. (2018) isolated the EPF A. alternata from the rhizosphere zone of maize crops, and the isolate was capable of infecting insect pests of maize under laboratory conditions. Mehrmoradi et al. (2020) found the EPF A. alternata in organic fruit orchard soils in Switzerland and Iran and proved its toxicity against a range of insect pests infesting fruit crops.
Previous studies carried out by several authors on the isolation of A. alternata from dead insect cadavers, qualitative assessment of the toxicity of the fungus, laboratory bioassays, and ecological studies made it clear that the fungus is a potential biocontrol agent against a range of insect pests belonging to the hemipteran, coleopteran, and lepidopteran orders. Furthermore, the fungal isolates were very specific to a particular group of insects, with very little cross-infectivity (Christias et al. 2001). However, in recent years, the attention has shifted to utilising A. alternata's endophytic nature (Kaur et al. 2019), as well as the proteinaceous toxins produced by the fungus for infecting and killing insects (Yang et al. 2012). However, little emphasis has been placed on isolating the fungus' protein component, testing its toxicity against insect pests, elucidating its mode of action, and exploiting the fungus' biocontrol potential in open-field conditions. Therefore, the present study was designed to morphologically and molecularly characterise the EPF A. alternata VLH1, study its ecological factors, test the fungus' cross-infectivity on a variety of insect pests, assess its non-target effects on a variety of host plants and beneficial insects, test the fungus' chitinolytic activity and protein extracts to characterise its mode of action against insect pests, as well as examine the potential of A. alternata VLH1 for exploitation under polyhouse and open-field conditions.
Isolation of the fungus and proving Koch's postulate
The greenhouse whitefly, Trialeurodes vaporariorum Westw. (Hemiptera: Aleyrodidae), infesting Salvia divinorum Epling & Jativa, 1962 (Lamiaceae: Lamiales) plants was found to be infected with an EPF at the Experimental farm, Hawalbagh, ICAR-VPKAS (Vivekananda Parvatiya Krishi Anusandhan Sansthan), Almora, Uttarakhand, India (29.63° N and 79.63° E, 1250 m amsl). Dead cadavers were collected from the field and transported to the laboratory under aseptic conditions. To avoid bacterial contamination, the dead cadavers were surface sterilised with 70% ethyl alcohol and inoculated on potato dextrose agar (PDA) medium (Himedia labs Ltd. India) supplemented with 0.30 mg/l chloramphenicol. To obtain a pure single-spore colony, a spore concentration of 1 spore/µl of double-distilled water was prepared by serial dilution (10⁻⁶) and the spore suspension was inoculated on PDA media. After 24 h, a single actively growing conidium was transferred to new PDA media and incubated at 27 ± 2 °C for 8–10 days. Furthermore, the pure conidia collected from the dead cadaver were analysed morphologically and molecularly (multilocus sequence typing). To test Koch's postulate, the pure culture was multiplied on potato dextrose broth and the conidia obtained were filtered through a double-layered sterile muslin cloth. A conidial concentration of 1 × 10⁸ conidia per ml was prepared using a Neubauer's haemocytometer and sprayed topically on tomato plants (VL tamatar-4 variety) infested with T. vaporariorum nymphs and adults under polyhouse conditions. The dead cadavers of T. vaporariorum infected with the fungus were collected with sterile forceps, surface sterilised with 70% ethyl alcohol, and inoculated on PDA medium. To confirm the species and prove Koch's postulate, the re-isolated fungal culture was compared morphologically and molecularly with the mother culture (the re-isolate, accession number MN704637, showed 100% similarity with the mother culture of A. alternata VLH1, MN704636).
Morphological and molecular characterisation of the fungus
The spore and colony characters were morphologically characterised with the help of several taxonomic references (Goettel and Inglis 1997; Tzean 1997; Woudenberg et al. 2015). For molecular characterisation, the DNA was isolated from both reproductive conidia and vegetative mycelium by the modified CTAB method developed by Subbanna et al. (2019). The three gene fragments: ITS (internal transcribed spacer region), GAPDH (glyceraldehyde 3 phosphate dehydrogenase), and LSU (large subunit ribosomal RNA gene), were amplified using primer pairs ITS1/ITS4 (White et al. 1990), gpd1/gpd2 (Berbee et al. 1999), and LR5/LROR (Schoch et al. 2009), respectively, in a thermo-cycler (Eppendorf Mastercycler V Pro™). The PCR amplicon thus obtained was purified and sequenced with an automated DNA sequencer (ABI 377), using the Big Dye terminator kit (Applied Biosystems) as per the manufacturer's instructions. BLASTN was used to compare the acquired sequences to the ITS, LSU, and GAPDH sequences in the NCBI GenBank database. Using the Molecular Evolutionary Genetics Analysis (MEGA X) sequence alignment software, the available sequences were assembled and aligned. The aligned DNA sequences of A. alternata VLH1 (ITS, LSU, and GAPDH) have been deposited in NCBI GenBank (http://www.ncbi.nlm.nih.gov). With 1,000 bootstrap repetitions, MEGA X software was utilised to perform phylogenetic analysis for concatenated genes (Kumar et al. 2018). The reference sequences for the ITS, GAPDH, and LSU genes were retrieved from NCBI GenBank and included in the tree. The node support and the confidence level of each branch were estimated using 1000 bootstrap pseudo-replicates generated with a random seed.
Laboratory bioassays and estimation of median lethal concentrations
A leaf dip bioassay was carried out with seven spore concentrations of A. alternata VLH1 (10², 10³, 10⁴, 10⁵, 10⁶, 10⁷, 10⁸ spores/ml) and a control. Ten individuals of each insect species (adult stage for hemipterans and larval stage for lepidopterans) were released in separate Petri dishes, with the experiment being replicated three times. The following insects were used for the study: cabbage aphid (Brevicoryne brassicae (Linnaeus, 1758) (Hemiptera: Aphididae)), nymphs and adults of greenhouse whitefly (T. vaporariorum), green peach aphid (Myzus persicae (Sulzer) (Hemiptera: Aphididae)), soybean seed bug (Chauliops choprai Banks (Hemiptera: Malcidae)), mustard aphid (Lipaphis erysimi (Kaltenbach, 1843) (Hemiptera: Aphididae)), wheat and barley aphid (Sitobion avenae (Fabricius, 1794) (Hemiptera: Aphididae)), soybean aphid (Aphis craccivora Koch, 1854 (Hemiptera: Aphididae)), greater wax moth (Galleria mellonella Linnaeus, 1758 (Lepidoptera: Pyralidae)) (diet surface contamination technique), Bihar hairy caterpillar (Spilarctia obliqua (Walker, 1855) (Lepidoptera: Arctiidae)), tomato fruit borer (Helicoverpa armigera (Hübner, 1808) (Lepidoptera: Noctuidae)), and the tobacco caterpillar (Spodoptera litura (Fabricius, 1775) (Lepidoptera: Noctuidae)). The non-target effect of the fungus was tested on the seven-spotted ladybird beetle (Coccinella septempunctata (Linnaeus, 1758) (Coleoptera: Coccinellidae)) and the European honey bee (Apis mellifera Linnaeus, 1758 (Hymenoptera: Apidae)). The treated insects' mortality was recorded every 24 h up to 96 h and the mortality values were corrected using Abbott's (1925) formula. Hemipteran insects with brownish to blackish coloration and abundant mycelial development on deceased cadavers were deemed dead, and the fungal mycelia and conidia were observed under a compound microscope and compared to the mother culture of A. alternata VLH1. The collected data were subjected to probit analysis (Finney 1971) using the PoloPlus software package. Furthermore, the changes in the growth patterns and days required for subsequent moulting of four lepidopteran pests throughout their life cycle (larvae, pupae, and adults) were also recorded and the data were subjected to a Tukey's B test at the 1% level of significance.
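For reference, Abbott's (1925) correction, in its standard form, expresses the corrected mortality in terms of the mortality observed in the treatment and in the control:
$$\text{Corrected mortality } (\%) = \frac{\%\ \text{mortality in treatment} - \%\ \text{mortality in control}}{100 - \%\ \text{mortality in control}} \times 100$$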
Testing chitinolytic activity of the fungus
Colloidal chitin was prepared from commercial chitin flakes using the procedure developed by Berger and Reynolds (1988). PDA media was prepared at half strength (50%) and supplemented with 1% colloidal chitin. A. alternata VLH1 pure fungal culture was inoculated on the plates and cultured for 4–5 days at 27 ± 2 °C and 65 ± 5% relative humidity. Following the incubation period, a fungal colony with a visible halo was observed on the plates, indicating that the fungus was capable of degrading chitin.
Infectivity of A. alternata VLH1 to host plants
The procedure developed by Sharma et al. (2012) was used in laboratory studies, while the procedure developed by Christias et al. (2001) was used for field level study for examining the infectivity of A. alternata VLH1 to various host plants (cabbage and cauliflower, capsicum, tomato, French bean, soybean, Indian rapeseed, maize, paddy, wheat, and Salvia were used as positive control). In laboratory assays, a total of 10 ml spore suspension of 3 × 10⁵ spores per ml of water was prepared with Neubauer's haemocytometer and four spots of 10 µl each of the suspension were inoculated equidistantly on individual leaf of 25 cm² (5 × 5 cm) area and the experiment was replicated five times. However, in field assays a total of 100 ml fungal spore suspension of 3 × 10⁵ spores per ml water was sprayed over 20 host plants of 30 days old, grown in pots under polyhouse condition. Further, the plants were covered by polythene bags to create 100% relative humidity for 48 h. In both the studies, the treated leaves and plants were incubated at 25 ± 2 °C temperature, 75 ± 5% RH, and photoperiod of 16 h light and 8 h dark. In control pots, double-distilled water was sprayed up to runoff. In laboratory investigations, observations for disease development were made every 24 h for up to 7 days, while in field studies, observations were made after every 24 h for up to 21 days.
Isolation of crude proteins from Alternaria cultures
The crude protein extract was isolated from one-, two-, three-, four-, and five-day-old Alternaria cultures using the ammonium sulphate precipitation method established by Wallet and Provost Laboratory (DragonTech, a biotechnology services company). The fungal culture was inoculated with 100 µl of 3 × 105 spore suspension of A. alternata VLH1 on five consecutive days and incubated at 25 ± 2 °C on 100 ml potato dextrose broth (PDB) medium and at relative humidity of 65 ± 5% and photoperiod of 16 h light and 8 h dark. The protein was precipitated by stirring at room temperature for 15 min after adding solid ammonium sulphate to a final concentration of 85% (56.8 g of ammonium sulphate in 100 ml culture broth) estimated from the table. For 15 min, the precipitated protein was centrifuged at the highest speed. The supernatant was discarded, and the crude protein containing pellet was re-suspended in a 25 mM Tris–HCl buffer and dialyzed extensively at 4 °C against the same buffer. The protein content of the dialyzed crude extract was determined using the Bradford method and stored at − 80 °C for further investigation.
Assay for ascertaining chitinase activity of the crude protein extracts
The chitinase enzyme activity of dialyzed crude protein of A. alternata VLH1 was determined using colloidal chitin as a natural substrate at pH 6 in a 50 mM acetate buffer. Equal amounts (250 µl each) of adequately diluted crude protein extract and buffer containing 1% colloidal chitin made up the reaction mixture. After 30 min of incubation at 37 °C, the reaction was stopped by boiling for 10 min in a water bath. The residual colloidal chitin was precipitated by centrifugation at 10,000 rpm for 7 min, and the amount of liberated reducing sugars in the supernatant was calculated using a modified Schales reagent (0.5 g potassium ferricyanide in one litre of 0.5 M sodium carbonate) (Imoto and Yagishita 1971). In brief, 450 µl of supernatant was combined with 600 µl of Schales reagent and boiled for 15 min in a water bath. After cooling, absorbance was measured at 420 nm, and the reducing sugar was calculated using N-acetyl-glucosamine reference curve. The amount of enzyme that released 1 µmol of reducing sugar per min was defined as one unit of enzyme activity.
Identification of chitinolytic activity through protein electrophoresis
To verify the chitinolytic activity of A. alternata VLH1 crude protein extract, electrophoresis of crude protein supernatant on a native PAGE gel at 4 °C was performed. In a Bio-Rad mini-protean II cell assembly, 10 g of protein was electrophoresed through a 5% stacking gel and a 10% separating gel at a constant voltage of 100 V for 3–4 h. To visualise the protein bands, the gel was stained for one hour with Coomassie brilliant blue R-250 dye and then de-stained for two hours using a 3% NaCl buffer. The gel was superimposed on a substrate gel composed of 2% agarose and 0.2% colloidal chitin after the clear and evident bands were noticed in the gel. To allow for enzymatic activity, the assemblage was incubated overnight at 37 °C. After incubation, the agarose gel was stained for 15 min with chitin binding fluorescent dye solution (0.01% Calcofluor white M2R in 50 mM Tri HCl (pH 8)) and then de-stained with distilled water for two hours. Under a UV illuminator, the zones of chitinolytic activity were seen and compared to duplicate protein banding patterns that were run simultaneously.
Effect of crude protein extracts on nutritional physiology of H. armigera
The influence of protein extracts from A. alternata VLH1 on dietary physiology of H. armigera third instar larvae (8 days old) was studied using Waldbauer's (1968) gravimetric approach. The experiment was replicated four times with five treatments (four with protein extract amended diets of 5, 10, 20, and 40 ppm, and a control with an un-amended diet). A total of 200 larvae were employed in the investigation, with 10 third instar larvae per treatment were pre-starved for 2–3 h and fed with a known amount of artificial diet (5 g). Larvae were kept in plastic Petri plates (5 cm diameter) and incubated in a BOD incubator for 96 h at 25 ± 2° C and 65 ± 5% relative humidity. The weight of larvae, faeces, and leftover feed were recorded after every 24 h till 96 h and the final values were recorded once after the experiment was completed by drying the materials at 65 °C for 3 days. The nutritional indicators were calculated using Wheeler and Isman's (2001) formula.
$$\begin{aligned} & \text{RGR} = \frac{\text{Change in larval dry weight/day}}{\text{Initial larval dry weight}} \\ & \text{RCR} = \frac{\text{Change in diet dry weight/day}}{\text{Initial larval dry weight}} \\ & \text{ECI} = \frac{\text{Dry weight gain of insect} \times 100}{\text{Dry weight of food ingested}} \\ & \text{ECD} = \frac{\text{Dry weight gain of insect} \times 100}{\text{Dry weight of food ingested} - \text{Dry weight of frass}} \\ & \text{AD} = \frac{(\text{Dry weight of food ingested} - \text{Dry weight of frass}) \times 100}{\text{Dry weight of food ingested}} \end{aligned}$$
RGR—Relative growth rate, RCR—relative consumption rate, ECI—efficiency of conversion of ingested food, ECD—efficiency of conversion of digested food, AD—approximate digestibility.
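As a quick aid to applying the formulae above, the following sketch computes the five indices from dry-weight measurements. Variable names and example values are illustrative only; they are not data from this study.

```python
# Minimal sketch of the nutritional indices defined above (Wheeler and Isman 2001).
# All inputs are dry weights (mg); "days" is the feeding period. Example values are made up.

def nutritional_indices(initial_larva, final_larva, food_ingested, frass, days):
    gain = final_larva - initial_larva
    rgr = (gain / days) / initial_larva                   # relative growth rate
    rcr = (food_ingested / days) / initial_larva          # relative consumption rate
    eci = gain * 100 / food_ingested                      # % conversion of ingested food
    ecd = gain * 100 / (food_ingested - frass)            # % conversion of digested food
    ad = (food_ingested - frass) * 100 / food_ingested    # approximate digestibility (%)
    return {"RGR": rgr, "RCR": rcr, "ECI": eci, "ECD": ecd, "AD": ad}

print(nutritional_indices(initial_larva=10.0, final_larva=26.0,
                          food_ingested=180.0, frass=60.0, days=4))
```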
Laboratory bioassay of fungal crude protein extracts against aphids
The dialyzed protein obtained from a four-day-old Alternaria culture was tested at concentrations of 25, 50, 100, 150, and 200 ppm, along with a control (double-distilled water), against adults of two aphid pests, the mustard aphid (L. erysimi) and the wheat aphid (S. avenae). Mortality of the treated insects was recorded every 24 h and corrected using Abbott's (1925) formula. Aphid cadavers with brownish to blackish coloration were deemed dead.
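Abbott's (1925) correction referred to above adjusts treatment mortality for the natural mortality observed in the control; a minimal sketch with made-up percentages is shown below.

```python
# Abbott's (1925) correction for natural (control) mortality.
# Inputs are percentages; the example numbers are illustrative only.

def abbott_corrected_mortality(treated_pct, control_pct):
    if control_pct >= 100:
        raise ValueError("control mortality must be below 100%")
    return (treated_pct - control_pct) * 100.0 / (100.0 - control_pct)

# Example: 68% mortality in a treated group, 5% in the control.
print(round(abbott_corrected_mortality(68.0, 5.0), 2))  # -> 66.32
```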
Testing virulence of the fungus under polyhouse conditions
The green peach aphid (M. persicae) infesting capsicum plants was chosen as the target insect pest to investigate the infectivity and virulence of A. alternata VLH1 under polyhouse conditions. Capsicum (VL Shimla Mirch-3) plants were grown in a fully automated polyhouse, the plants were artificially inoculated with M. persicae, and the infestation was allowed to grow by maintaining favourable environmental conditions in the polyhouse (temperature 25–28 °C, RH > 80%, and a photoperiod of 16 h of light and 8 h of darkness). Once the pest population crossed the ETL (an average of > 50 aphids on one top, one middle, and one lower leaf), four spore suspensions of five litres each of A. alternata VLH1, at 3 × 10⁴, 3 × 10⁵, 3 × 10⁶, and 3 × 10⁷ spores per ml, were prepared for foliar spray. The area was divided into 50 plots of 1 m² each, containing 4 plants per plot, and five treatments (four with 3 × 10⁴, 3 × 10⁵, 3 × 10⁶, and 3 × 10⁷ spores per ml of A. alternata VLH1 and a control with Triton-X-100 at 0.02% concentration) were imposed with 10 replications (Triton-X-100 at 0.02% was used as a surfactant with every fungal conidial spray). The number of aphids infesting every plant in each plot was counted before and after spray (96 h after spray), and aphid mortality was determined by counting the aphids on one top, one middle, and one bottom leaf to obtain data on per cent pest reduction. SPSS software for Windows version 16.0 (SPSS Inc., Chicago) was used to calculate the average per cent mortality and SE(m) values for the various treatments.
Virulence of the fungus under open-field conditions
The mustard aphid (L. erysimi) was chosen as the target insect pest to investigate the infectivity and virulence of A. alternata VLH1 under open-field conditions. The Indian rapeseed variety VLT-3 was grown and allowed to become naturally infested with L. erysimi. Twenty-five plots of 1 m² each, containing 82.5 ± 3.87 plants per plot, were demarcated in the field. Four spore suspensions of A. alternata VLH1, at concentrations of 3 × 10⁴, 3 × 10⁵, 3 × 10⁶, and 3 × 10⁷ spores per ml, were prepared, along with one control treatment with Triton-X-100 at 0.02% concentration. The treatments were replicated five times, and a total of 5 l of spore suspension of each concentration was prepared for foliar spray. Once the pest population density crossed the ETL (> 50 aphids in the top 5 cm of the plant), the treatments were imposed. To calculate the per cent pest reduction, ten plants were selected randomly from each plot and the number of aphids in the top 5 cm of each plant was counted before and after spray (96 h after spray).
Data on aphid mortality were subjected to probit analysis (Finney 1971) using the PoloPlus software package (LeOra Software 2013). SPSS software for Windows version 16.0 (SPSS Inc., Chicago) was used to calculate the average per cent mortality for the various treatments in the fungus virulence trials.
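The study used PoloPlus for the probit analysis (Finney 1971); purely as an illustration of the underlying idea, the sketch below fits a probit regression of mortality against log10(dose) with NumPy/SciPy and reads off the LC50. The doses and mortalities here are invented, and a production analysis would also handle control mortality, heterogeneity, and fiducial limits as PoloPlus does.

```python
# Illustrative probit estimation of LC50 (not the PoloPlus procedure itself).
# Dose-mortality data below are invented for demonstration.
import numpy as np
from scipy.stats import norm

doses = np.array([3e4, 3e5, 3e6, 3e7])          # spores per ml
mortality = np.array([0.22, 0.41, 0.63, 0.86])  # proportion dead (already corrected)

x = np.log10(doses)
probits = norm.ppf(mortality)                   # empirical probits (without the +5 offset)

slope, intercept = np.polyfit(x, probits, 1)    # simple least-squares fit
log_lc50 = -intercept / slope                   # probit = 0 corresponds to 50% mortality
print(f"LC50 ~ {10 ** log_lc50:.2e} spores per ml")
```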
Morphological characterisations of A. alternata VLH1
Preliminary stereomicroscope examination clearly indicated that the fungus belonged to the genus Alternaria. Conidial measurements, including conidial length and breadth, the number of transverse and longitudinal septa, and the presence or absence of a beak, were recorded (Table 1). All measurements were taken on freshly isolated conidia with a fluorescence microscope (Olympus BX61) at 40× magnification. The colony characteristics and sporulation pattern of A. alternata VLH1 were observed (Fig. 1a, b). The pure fungal culture first formed buff-coloured colonies, which gradually turned brownish and then blackish in colour. The colony generated fluffy aerial mycelia with many branches and brownish septate mycelium. It had somewhat long, branched conidiophores with ellipsoid to ovoid conidia that tapered to a very short beak at one end. The length and width of the conidia ranged from 11.8 to 31.6 µm, with 1–3 transverse septa and 0–2 longitudinal septa. The fungal culture showed high sporulation capacity but a moderate ability to multiply on artificial culture medium (9 cm in 18 days on PDA). On the basis of morphological characteristics and comparison with the available studies of Goettel and Inglis (1997), Tzean (1997) and Woudenberg et al. (2015), the fungus was identified as Alternaria alternata (Fr.).
Table 1 Spore characters of Alternaria alternata VLH1 (Average of minimum 30 spores)
a Spores of Alternaria alternata VLH1, b fungal colony on PDA media
Molecular characterisations
The fungus was identified to the species level using molecular markers, which supported the morphological findings. Multilocus sequence typing (MLST), which integrates several loci of conserved genes to improve the detection of phylogenetic signals among taxa and to construct a high-resolution phylogenetic tree, was employed. For the MLST-based phylogenetic analysis, a sequence dataset comprising ITS, GAPDH, and LSU markers comparable to A. alternata VLH1 (MN704636) was retrieved from the NCBI database. The homologous sequence dataset was aligned with ClustalW, and the tree was then reconstructed with MEGA X using the widely used maximum likelihood technique (Saitou and Nei 1987). Using ten sequences, the ITS marker-based MLST identified A. alternata isolate E20 (MT524319.1) as the closest relative, forming a monophyletic group (Fig. 2). This close relative was found in the rhizosphere zone of maize crops and was shown to be capable of infecting insects (Mckinnon et al. 2018). In this analysis, the isolate Beauveria bassiana JALBB1 (MF187104.1) was used as an out-group. The LSU gene-based phylogeny inferred monophyly with A. alternata KUC21222 (KT207688.1), a strain capable of hyper-parasitizing Puccinia striiformis f. sp. tritici (Pucciniales: Pucciniaceae), the cause of wheat stripe rust (Zheng et al. 2017). The GAPDH marker-based analysis, owing to a different diverging branch, indicated an independent origin of A. alternata VLH1. The categorised phylogenetic reconstruction using MLST confirmed the similarity of the A. alternata VLH1 isolate to other known entomopathogenic A. alternata isolates. Sequences of nine entomopathogenic and 21 plant pathogenic strains of A. alternata available in the NCBI database were included in the alignment and phylogenetic tree inference; our evaluation of the phylogenetic tree suggests the existence in nature of A. alternata strains with diversified traits, infecting both plants and insects. Identification of a fungal bio-agent at the species level is required to conduct numerous downstream analyses, contextualise phenotypic differences, and identify possible metabolites. The major obstacles in species delineation are evolutionary events such as horizontal gene transfer, recombination, and independent assortment. DNA sequencing of a taxon's highly conserved regions provides a dependable means of overcoming these difficulties.
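The alignment and tree building described above were done with ClustalW and MEGA X; for readers who prefer a scriptable route, the sketch below builds a simple distance-based (neighbour-joining) tree from an already aligned marker file using Biopython. The file name and out-group label are placeholders, and this is not a substitute for the model-based analysis used in the study.

```python
# Illustrative neighbour-joining tree from a pre-aligned FASTA file (Biopython).
# "markers_aligned.fasta" and the out-group id are placeholders; the study itself
# used ClustalW for alignment and MEGA X for tree inference.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("markers_aligned.fasta", "fasta")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

# Root on the out-group (Beauveria bassiana JALBB1 in the study) and print.
tree.root_with_outgroup("Beauveria_bassiana_JALBB1")
Phylo.draw_ascii(tree)
```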
Phylogenetic analysis of Alternaria alternata VLH1 along with 33 closely associated Alternaria species sequences (Beauveria bassiana isolate JALBB1 is taken as out-group). Filled circle: A. alternata VLH1 ITS region, filled square: A. alternata VLH1 LSU region, filled triangle: A. alternata VLH1 GAPDH region
Proving Koch's postulate
Koch's postulates were verified for seven insect pests: T. vaporariorum nymphs and adults, C. choprai adults, B. brassicae adults, A. craccivora adults, M. persicae adults, L. erysimi adults, and larvae of Pieris brassicae (Linnaeus, 1758) (Lepidoptera: Pieridae) (Additional file 1: Fig. S1). The colony and spore characteristics of the fungus re-isolated from the dead cadavers of treated insects were comparable to those of the mother culture.
Testing the infectivity of A. alternata VLH1 against target insect pests
The EPF A. alternata VLH1 had a variety of impacts on the insect pests it was tested on. For the important sucking pests, the median lethal concentration (LC50) was determined, and the results are presented in Table 2 along with the upper and lower fiducial limits and the p and χ² values at the 1% level of significance. L. erysimi had the lowest LC50 value of 1.7 × 10⁴ spores per ml, followed by M. persicae (4.8 × 10⁴ spores per ml), A. craccivora (8.5 × 10⁴ spores per ml), and S. avenae and B. brassicae (both 1.2 × 10⁵ spores per ml). Although the fungus was originally identified on T. vaporariorum, the LC50 values against its nymphs (4.3 × 10⁵ spores per ml) and adults (2.0 × 10⁵ spores per ml) were higher than those of the aphid pests. C. choprai adults, although susceptible to the fungus, showed the highest LC50 value of 2.5 × 10⁶ spores per ml. Except for C. choprai, which demonstrated mortality only after 96 h, the other insect species showed mortality after 72 h.
Table 2 Lethal concentrations of A. alternata VLH1 against major sucking pests of hill agriculture
With respect to the effect of A. alternata VLH1 on the larval, pupal, and adult periods of H. armigera, S. litura, S. obliqua, and G. mellonella, the data are presented in Table 3 (Additional file 1: Fig. S2). A paired-sample Student's t test was used to compare the means of Alternaria-treated and untreated insects at p < 0.01 (1% level of significance). When H. armigera, S. litura, and G. mellonella were treated with a spore suspension of A. alternata VLH1, there was a substantial change in larval, pupal, and adult lifespan periods, with reduced larval duration, an increased pupation period, and lowered adult longevity. The fungus had the least effect on S. obliqua larvae, pupae, and adults, in which no significant alteration of the life cycle was observed.
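The comparison described above (treated vs untreated developmental durations) can be reproduced with a paired-sample t test; a minimal SciPy sketch with invented durations is given below.

```python
# Paired-sample t test on developmental durations (days); values are invented.
from scipy.stats import ttest_rel

treated = [12.1, 11.4, 11.9, 12.6, 11.2, 12.0]    # e.g. larval duration after treatment
untreated = [14.0, 13.5, 13.8, 14.4, 13.1, 13.9]  # paired untreated controls

t_stat, p_value = ttest_rel(treated, untreated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Difference significant at the 1% level")
```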
Table 3 Changes in growth pattern of larvae, pupae and adults of four Lepidopteran pests (data presented as number of days required to transform to next instar)
Non-target effect of A. alternata VLH1 on host plants
In laboratory trials, treatment with A. alternata VLH1 spore suspension had no effect on detached leaves of 11 host plants. In field trials, where plants were sprayed with the fungal spore suspension until run-off, no impacts or symptoms were observed on any of the 11 host plants. The results indicate that the native EPF A. alternata VLH1 was non-phytotoxic and showed no pathogenicity toward important field crops grown under hill agriculture in the Indian Himalayas.
Ascertaining chitinase activity and characterisation of chitinolytic protein of A. alternata VLH1
The rates of protein production by the fungus A. alternata VLH1 (Additional file 1: Fig. S3) revealed that the most protein was produced on day 4 (7.90 µg/100 µl), followed by days 3, 2, and 1 (7.27, 5.42, and 4.94 µg/100 µl, respectively). On day 5, however, the amount of protein dropped markedly, with a mean value of 4.71 µg/100 µl (OD 0.157). Moreover, assessment of the chitinase enzyme activity in the protein extract revealed a gradual rise in enzymatic activity over time (Fig. 3). The greatest chitinase activity, 117.78 U/ml, was recorded four days after inoculation, followed by 104.72 U/ml and 91.83 U/ml on the third and fifth days, respectively. The protein extract thus obtained was electrophoresed, and the resulting band on a PAGE gel was compared to a molecular weight marker (Fig. 4). The band confirmed the existence of an enzyme of 75–90 kDa. To confirm the band's chitinolytic activity, 0.2% colloidal chitin was used as a substrate, and superimposition of the PAGE gel on a 2% agarose gel generated a distinct chitin degradation zone, indicating that the protein fraction isolated from A. alternata VLH1 had chitinolytic activity. This crude protein extract showing chitinolytic activity was tested against two aphid pests, L. erysimi and S. avenae, and Table 4 shows the median lethal concentrations. L. erysimi had the lower LC50 value of 74.47 ppm, whereas S. avenae had a higher LC50 value of 127.06 ppm, indicating that it was less susceptible to the crude proteins. These findings suggest that proteinaceous substances are involved in the entomopathogenic activity of A. alternata VLH1 against insect pests. Additionally, the effect of protein extracts on the nutritional physiology of H. armigera third instar larvae revealed a considerable decrease in larval food utilisation efficiency (Table 5). Comparing the control with the 40 ppm protein concentration at 96 HAT, RGR fell from 7.09 to 2.18 mg/mg/day (F = 32.65, p < 0.001), RCR decreased from 18.56 to 3.81 mg/mg/day (F = 136.96, p < 0.001), per cent ECI decreased from 9.47 to 3.19 (F = 41.94, p < 0.001), ECD decreased from 8.02 to 1.96 (F = 39.79, p < 0.001), and AD decreased from 99.65 to 96.70% (F = 6.72, p < 0.001). The highest concentration of protein extract (40 ppm) resulted in abnormal pupal moulting and the formation of deformed adults (Fig. 5). RGR, RCR, ECI, ECD, and AD decreased 3.25, 4.87, 2.96, 4.09, and 1.03 times, respectively, relative to the control at the highest protein concentration. Furthermore, a substantial reduction in all nutritional indices was recorded at the 1% level of significance.
Kinetics of chitinase production by protein extracts of Alternaria alternata VLH1
Characterisation of chitinolytic activity of protein extract of Alternaria alternata VLH1 (Lane 1. (E1) chitinolytic activity on 2% agarose gel with 0.2% colloidal chitin as substrate, Lane 2. Chitinase enzyme band isolated from crude protein extracts of A. alternata VLH1, Lane 3. Molecular weight marker)
Table 4 Laboratory bioassay with crude protein extracts of Alternaria alternata VLH1 against major aphid pests of Rabi crops
Table 5 Effect of chitinolytic protein extracts of Alternaria alternata VLH1 on nutritional physiology of third instar larvae of Helicoverpa armigera (96 HAT)
Abnormal pupal moulting and formation of deformed adults in Helicoverpa armigera treated with 40 ppm protein concentration of Alternaria alternata VLH1
Polyhouse conditions
Figure 6 shows the per cent mortality of the green peach aphid (M. persicae) infesting capsicum following treatment with A. alternata VLH1 spore suspension, 96 HAT. Among all the treatments, T4 (3 × 10⁷ spores per ml) recorded the highest mortality of 81.14%, followed by 3 × 10⁶ (68.64%), 3 × 10⁵ (54.89%), and 3 × 10⁴ (37.80%) spores per ml. Based on these results, it can be concluded that an increase in spore concentration resulted in a concurrent increase in per cent pest mortality. In contrast, the aphid population in control plots sprayed with Triton-X-100 at 0.02% remained practically unaltered, with a decrease of only 2.86% following treatment. The polyhouse bioassays open the way for the safe and effective application of A. alternata VLH1 to control important sucking pests that infest a variety of crops under polyhouse conditions.
Per cent mortality of aphids in controlled polyhouse conditions after treatment with spore suspension of Alternaria alternata VLH1
Open-field conditions
Figure 7 shows the per cent mortality of the mustard aphid (L. erysimi) infesting Indian rapeseed, 96 HAT with A. alternata VLH1 spore suspension. Among all the treatments, T4 (3 × 10⁷ spores per ml) recorded the highest mortality of 63.14%, followed by 3 × 10⁶ (48.64%), 3 × 10⁵ (38.89%), and 3 × 10⁴ (23.80%) spores per ml. However, more than 50% pest mortality was recorded only at the spore concentration of 3 × 10⁷ spores per ml, indicating the need for a high concentration of the biocontrol agent for managing pests under open-field conditions. The aphid population in control plots, on the other hand, was essentially unaltered, with a mere reduction of 1.46%. Field circumstances differ from one site to the next, and the findings may differ as well. Before proposing A. alternata VLH1 for large-scale open-field application, more testing against other sucking pests in diverse locations and climate regimes is required.
Per cent mortality of aphids in open fields after treatment with spore suspension of Alternaria alternata VLH1
The inadvertent observation of a substantial population reduction of greenhouse whiteflies infesting S. divinorum plants grown in a polyhouse sparked a flurry of inquiries into the cause of the population decline. Dead whitefly cadavers with abundant fungal development were collected from the host plants and dissected to examine the conidia/spores under a stereomicroscope. The fungus was verified as belonging to Alternaria through visual observation. Furthermore, a literature search showed that Alternaria fungi are saprophytic and might be used as biocontrol agents against a variety of insect pests (Mehrmoradi et al. 2020).
Koch's postulates were tested by infecting T. vaporariorum nymphs and adults with a pure culture of the fungus, isolated using the single-spore colony method, before the native Alternaria species was determined to be an EPF infecting T. vaporariorum. The mycelia obtained had colony and spore characteristics identical to those of the mother culture.
Alternaria alternata VLH1 was found to be monophyletic with A. alternata isolate E20 (MT524319.1) based on phylogenetic analysis and multilocus sequence typing. This close relative was found in the rhizosphere zone of maize crops and had the ability to infect insects (Mckinnon et al. 2018). Furthermore, our findings showed the presence of diverse characteristics in A. alternata strains found in nature, infecting both plants and insects. Based on a phylogenetic study of sequences from nine entomopathogenic and 21 plant pathogenic strains of A. alternata in the NCBI database, a small mutation or transformation in SNPs may have produced the entomopathogenic behaviour of A. alternata VLH1. However, it is worth noting that using the ITS marker to trace the phylogeny of A. alternata reveals only the tip of the iceberg. It was difficult to infer the exact picture of the evolutionary events that led to distinct strains of A. alternata featuring entomopathogenic activity, owing to the close similarity in ITS sequences across insect and plant pathogenic A. alternata strains. However, if additional molecular data become available, such as genome sequences of strains from which single-copy orthologues can be identified, a phylogenetic tree based on effector genes may provide a clearer picture of the evolutionary steps involved in achieving adequate virulence on plants and insects. Furthermore, a gene tree employing effectors and single-copy orthologues would provide a better understanding of the specific/key genetic alterations that favour A. alternata strains that parasitize insects.
According to the laboratory assays done to examine the infectivity and pathogenicity of A. alternata VLH1 against insect pests, sucking insects were the first-priority targets of the fungus. L. erysimi and M. persicae had the lowest LC50 values of the seven sucking pests studied. Except for C. choprai, which showed mortality only at 96 HAT, most insects died within 72 h of exposure. These findings were similar to those of Christias et al. (2001), who found that the fungus A. alternata completed its life cycle in 48–72 h and that the insects became sluggish, stopped feeding, and turned brownish to blackish in colour with abundant mycelial development. In the present research, similar changes in body colour and mycelial development on the carcasses of dead insects were noticed.
Furthermore, lepidopteran pests are among the most prevalent insect pests in the Indian Himalayas, making biocontrol approaches challenging. To investigate the virulence and infectivity of A. alternata VLH1, four lepidopteran insects (H. armigera, S. litura, S. obliqua, and G. mellonella) were chosen. The treated insects were compared with untreated ones for the time required to complete the larval, pupal, and adult stages. In three insects, H. armigera, S. litura, and G. mellonella, the pupal stage was extended; in S. obliqua, however, no significant difference in larval, pupal, or adult periods was observed. Furthermore, some H. armigera, S. litura, and G. mellonella individuals treated with A. alternata VLH1 were unable to complete their life cycle: pupae showed developmental abnormalities, adults emerged prematurely, and the emerged adults died within a short time of emergence. The findings obtained were similar to those of Kaur et al. (2019), who found that when an ethyl acetate extract of A. alternata strain NL23 was tested against S. litura larvae, it resulted in a 28–81% reduction in relative growth rate (RGR) and a 47–55% reduction in relative consumption rate (RCR) compared to the control, lowering the larvae's growth efficiency and increasing premature mortality. Sharma et al. (2012), on the other hand, found results that were diametrically opposed to ours: the larval, pupal, and adult developmental periods of H. vigintioctopunctata increased after feeding on the fungal pathogen A. alternata, which infects Withania somnifera (L.) Dunal (Solanales: Solanaceae) plants, compared to the control.
In a bioassay trial with two non-target organisms, C. septempunctata and A. mellifera, treated with a concentration of 3 × 10⁶ spores/ml, no significant effect on the insects was observed. These findings demonstrated the efficacy of the local EPF against a variety of insect pests of hill agriculture, as well as its safety against non-target insects.
Despite the fact that the fungus was effective at controlling insect pests, data on its non-target impacts on host plants were crucial before embarking on any open-field experiments. The findings of this study showed that A. alternata VLH1 did not develop any disease symptoms on the treated host plants.
Alternaria alternata is an entomopathogenic fungus that produces appressoria, which allow it to penetrate plant surfaces or insect cuticles. The fungus is known to produce more than 70 types of toxins (EFSA 2011) that may have entomopathogenic activity on insects, in addition to exerting mechanical forces to infect the target insect or plant. A laboratory trial was done to investigate the entomopathogenic activity of crude protein extracts of the fungus against two aphid pests, yielding LC50 values of 74.47 ppm and 127.06 ppm for the mustard and wheat aphids, respectively. Samuels and Paterson (1995) reported that the combined action of the mechanical force exerted by infection pegs formed from appressoria and the enzymatic digestion of the insect epicuticle through the production of proteinaceous toxins is required for fungal pathogen penetration of insect hosts. Furthermore, Green et al. (2001) considered that entomopathogenic Alternaria species produce destruxins (A to E) and the chitinase enzyme, which are primary features related to the fungi's virulence against insects. As a consequence, the methods for isolating proteinaceous toxins and identifying fungal toxins, in order to examine their insecticidal properties and non-target effects, were standardised. The fungus was cultivated on a substrate containing 1% colloidal chitin, and the presence of a distinct halo around the colony validated the chitinolytic activity of strain VLH1. Further work quantifying the chitinase enzyme produced by A. alternata VLH1 at various time intervals revealed that the highest enzymatic activity, 117.78 U/ml, occurred on the fourth day after inoculation. Our findings were similar to those of Christias et al. (2001), who found that chitinolytic activity was the predominant mode of action by which A. alternata caused mortality in the oleander aphid, Anuraphis nerii Kaltenbach, 1843 (Homoptera: Aphididae).
The nutritional physiology of H. armigera third instar larvae given a protein extract-amended artificial diet showed significant alterations. Relative to the control, RGR, RCR, ECI, ECD, and AD decreased 3.25, 4.87, 2.96, 4.09, and 1.03 times, respectively. When S. litura larvae were fed on protein fractions from Alternaria destruens AKL-3 (Fr.) Keissl (Pleosporales: Pleosporaceae), Kaur et al. (2019) reported a 31.96–53.94% decrease in RGR, a 19.24–72.93% decrease in RCR, and adverse effects on ECI and ECD. Furthermore, Quesada-Moraga et al. (2006) found that protein isolates from multiple EPFs (M. anisopliae, B. bassiana, Beauveria brongniartii (Bals.-Criv.) (Hypocreales: Clavicipitaceae), and Scopulariopsis brevicaulis Saccardo (1881) (Microascales: Microascaceae)) induced significant growth abnormalities and adult mortality in Spodoptera littoralis (Boisduval) (Lepidoptera: Noctuidae). The degradation of the chitin layer in the gut of insects, together with concomitant toxicities produced by other insecticidal protein fractions, could be the source of the growth deformities and mortality caused in insects by A. alternata VLH1. The EPF potential of A. alternata VLH1 may also be conferred by degradation of the chitinous cuticular layer during appressorium production and fungal penetration.
After obtaining successful bioassay results against a variety of insect pests in vitro, the next stage was to investigate the activity of the fungus in both polyhouse and open-field environments. M. persicae, a major polyphagous aphid that infests vegetable crops in polyhouses, was treated with A. alternata VLH1 spore suspension. The average per cent mortality in the different treatments was compared using one-way analysis of variance (ANOVA) at a 5% level of significance. Mortality ranged from 37.80% at the lowest conidial concentration (3 × 10⁴ spores/ml) to 81.14% at the highest conidial concentration (3 × 10⁷ spores/ml); the difference in per cent mortality between the Alternaria-treated and control plots was highly significant, and the per cent mortality values among the treated plots also differed significantly from each other. The findings of field sprays of A. alternata VLH1 spore suspension against L. erysimi infesting Indian rapeseed were almost identical to those of the polyhouse investigations. In the four treatments, per cent mortality ranged from 23.80% at the lowest conidial concentration (3 × 10⁴ spores/ml) to 63.14% at the highest conidial concentration (3 × 10⁷ spores/ml). The ANOVA comparison of means revealed that the differences in per cent mortality between the treated and control plots were statistically significant, and the differences among the treated plots were also highly significant. In the control plots, the aphid population declined by only 2.86% and 1.46% in the polyhouse and open-field environments, respectively. Based on these results, it can be concluded that an increase in spore concentration resulted in a concurrent increase in per cent pest mortality. To our knowledge, this is the first study to examine A. alternata as a possible biocontrol agent for insect pests in polyhouses and open fields under Indian Himalayan conditions.
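The one-way ANOVA comparison of per cent mortality across treatments mentioned above can be sketched with SciPy as follows; the replicate values are invented placeholders, not data from the trials.

```python
# One-way ANOVA across spore-concentration treatments; replicate values are invented.
from scipy.stats import f_oneway

t1 = [36.2, 39.0, 37.5, 38.1]   # e.g. 3e4 spores/ml
t2 = [53.8, 56.0, 54.9, 55.2]   # 3e5 spores/ml
t3 = [67.9, 69.5, 68.2, 68.9]   # 3e6 spores/ml
t4 = [80.4, 82.0, 81.1, 81.3]   # 3e7 spores/ml
control = [2.5, 3.1, 2.8, 3.0]

f_stat, p_value = f_oneway(t1, t2, t3, t4, control)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")  # p < 0.05 -> treatments differ
```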
The entomopathogenic fungus (EPF) A. alternata VLH1 was found to be an effective biocontrol agent against a wide range of insect pests that infest hill crops. From direct mortality in sucking pests to developmental abnormalities in lepidopteran insects, the insects treated with the fungal strain showed a range of responses. Despite positive findings in in vitro and in vivo bioassay investigations against various insect pests, the fungus still has to be evaluated further before it can be used on a broad scale for biological pest management. According to ecological research, insect infection requires a temperature of 25–30 °C, whereas the average annual temperature in the Himalayan region is well below 25 °C. This limitation must be addressed by conducting comprehensive field surveys in the Indian Himalayas to isolate different strains of Alternaria species and other EPF. The methods used to isolate proteinaceous toxins from the fungal strain resulted in very poor toxin recovery; consequently, processes for isolating toxins and characterising them down to the molecular level must be standardised by culturing on various artificial culture media. Other biosafety aspects, such as the toxicity of the proteinaceous toxins to humans, animals, fish, and birds, must also be investigated before A. alternata VLH1 is recommended for field use.
The data supporting the findings of this study are available within the article and/or its supplementary materials.
AD:
Approximate digestibility
ANOVA:
Analysis of variance
BLASTN:
Basic local alignment search tool-nucleotide
BOD:
Biochemical oxygen demand
CTAB:
Cetyltrimethylammonium bromide
DNA:
Deoxy-ribose nucleic acid
ECD:
Efficiency of digested food conversion
ECI:
Efficiency of ingested food conversion
EFSA:
European Food Safety Authority
ETL:
Economic threshold level
GAPDH:
Glyceraldehyde 3 phosphate dehydrogenase
HAT:
Hours after treatment
ITS:
Internal transcribed spacer
kDa:
Kilo Daltons
LC50:
Lethal concentration 50
LSU:
Large subunit ribosomal RNA gene
MEGA:
Molecular evolutionary genetics analysis
MLST:
Multilocus sequence typing
NCBI:
National centre for biotechnology information
OD:
Optical density
PAGE:
Polyacrylamide gel electrophoresis
PDB:
Potato dextrose broth
RCR:
Relative consumption rate
RGR:
Relative growth rate
SE(m):
Standard error of mean
SPSS:
Statistical package for social sciences
UV:
Ultraviolet radiation
VL:
Vivekananda Laboratory
VLH1:
Vivekananda Laboratory Hawalbagh 1
Abbott WS (1925) Abbott's formula. J Econ Entomol 18:267–268
Behera KK, Alam A, Vats S, Sharma HP, Sharma V (2012) Organic farming history and techniques. In: Lichtfouse E (ed) Agro-ecology and strategies for climate change. Springer, Dordrecht, pp 287–328
Berbee ML, Pirseyedi M, Hubbard S (1999) Cochliobolus phylogenetics and the origin of known, highly virulent pathogens, inferred from ITS and glyceraldehyde-3-phosphate dehydrogenase gene sequences. Mycologia 91:964–977
Berger LR, Reynolds DM (1988) Colloidal chitin preparation. Methods Enzymol 161:430
Christias CH, Hatzipapas P, Dara A, Kaliafas A, Chrysanthis G (2001) Alternaria alternata, a new pathotype pathogenic to aphids. BioControl 46(1):105–124
European Food Safety Authority (2011) Use of the EFSA comprehensive European food consumption database in exposure assessment. EFSA J 9(3):2097
Finney DJ (1971) Probit analysis. Cambridge University Press, Cambridge
Goettel MS, Inglis GD (1997) Fungi, hyphomycetes. In: Lacey L (ed) Manual of techniques in insect pathology. Academic Press, London, pp 213–249
Green S, Bailey KL, Tewari JP (2001) The infection process of Alternaria cirsinoxia on Canada thistle (Cirsium arvense) and host structural defense responses. Mycol Res 105(3):344–351
Imoto T, Yagishita K (1971) A simple activity measurement of lysozyme. Agric Biol Chem 35:1154–1156
Kaur J, Sharma A, Sharma M, Manhas RK, Kaur S, Kaur A (2019) Effect of α-glycosidase inhibitors from endophytic fungus Alternaria destruens on survival and development of insect pest Spodoptera litura Fab. and fungal phytopathogens. Sci Rep 9(1):1–13
Kumar S, Stecher G, Li M, Knyaz C, Tamura K (2018) MEGA X, molecular evolutionary genetics analysis across computing platforms. Mol Biol Evol 35(6):1547–1549
McKinnon AC, Glare TR, Ridgway HJ, Mendoza-Mendoza A, Holyoake A, Godsoe WK, Bufford JL (2018) Detection of the entomopathogenic fungus Beauveria bassiana in the rhizosphere of wound-stressed Zea mays plants. Front Microbiol 9(1161):1–16
Mehrmoradi H, Jamali S, Pourian HR (2020) Isolation and identification of entomopathogenic fungi from cultivated and natural soils in Kermanshah province (West of Iran). Rostaniha 21(1):49–64
Negi GCS, Palni LMS (2010) Responding to the challenges of climate change: mountain specific issues. Climate change, biodiversity and ecological security in the South Asian Region. Mac-Millan Publishers India Ltd., New Delhi, pp 293–307
Paschapur A, Stanley J, Jeevan B, Subbanna ARNS, Singh AK, Mishra KK (2019) Report of novel entomopathogenic fungi Alternaria alternata infecting greenhouse whitefly (Trialeurodes vaporariorum Westwood) in North Western Himalayan region. IPM2 P18. pp 292. IPPC 2019
Poitevin CG, Porsani MV, Poltronieri AS, Zawadneak MAC, Pimentel IC (2018) Fungi isolated from insects in strawberry crops act as potential biological control agents of Duponchelia fovealis (Lepidoptera: Crambidae). Appl Entomol Zool 53(3):323–331
Quesada-Moraga E, Ruiz-García A, Santiago-Alvarez C (2006) Laboratory evaluation of entomopathogenic fungi Beauveria bassiana and Metarhizium anisopliae against puparia and adults of Ceratitis capitata (Diptera: Tephritidae). J Econ Entomol 99(6):1955–1966
Saitou N, Nei M (1987) The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol 4(4):406–425
Samuels RI, Paterson IC (1995) Cuticle degrading proteases from insect moulting fluid and culture filtrates of entomopathogenic fungi. Comp Biochem Physiol B Biochem Mol Biol 110(4):661–669
Schoch CL, Crous PW, Groenewald JZ, Boehm E, Burgess TI, de Gruyter J et al (2009) A class-wide phylogenetic assessment of Dothideomycetes. Stud Mycol 64:1–15. https://doi.org/10.3114/sim.2009.64.01
Sharma A, Thakur A, Kaur S, Pati PK (2012) Effect of Alternaria alternata on the coccinellid pest Henosepilachna vigintioctopunctata and its implications for biological pest management. J Pest Sci 85(4):513–518
Stanley J, Subbanna A, Mahanta D, Paschapur AU, Mishra KK, Varghese E (2022) Organic pest management of hill crops through locally available plant extracts in the mid-Himalayas. Ann Appl Biol. https://doi.org/10.1111/aab.12791
Subbanna ARNS, Chandrashekara C, Stanley J, Mishra KK, Mishra PK, Pattanayak A (2019) Bio-efficacy of chitinolytic Bacillus thuringiensis isolates native to north-western Indian Himalayas and their synergistic toxicity with selected insecticides. Pestic Biochem Physiol 158:166–174
Tzean SS (1997) Atlas of entomopathogenic fungi from Taiwan. National Taiwan University, Department of Plant Pathology and Entomology, Taipei
Waldbauer GP (1968) The consumption and utilization of food by insects. In: Waldbauer GP (ed) Advances in insect physiology, vol 5. Academic Press, London, pp 229–288
Wheeler DA, Isman MB (2001) Antifeedant and toxic activity of Trichilia americana extract against the larvae of Spodoptera litura. Entomol Exp Appl 98(1):9–16
White TJ, Bruns T, Lee S, Taylor J (1990) Amplification and direct sequencing of fungal ribosomal RNA genes for phylogenetics. In: Innis MA, Gelfand DH, Sninsky JJ, White TJ (eds) PCR protocols. A guide to methods and applications, vol 18. Academic Press, London, pp 315–322
Woudenberg JHC, Seidl MF, Groenewald JZ, De Vries M, Stielow JB, Thomma BPHJ, Crous PW (2015) Alternaria section Alternaria: Species, formae speciales or pathotypes? Stud Mycol 82:1–21
Yang FZ, Li L, Yang B (2012) Alternaria toxin-induced resistance against rose aphids and olfactory response of aphids to toxin-induced volatiles of rose plants. J Zhejiang Univ Sci B Biomed Biotechnol 13(2):126–135
Zheng L, Zhao J, Liang X, Zhan G, Jiang S, Kang Z (2017) Identification of a novel Alternaria alternata strain able to hyperparasitize Puccinia striiformis f. sp. tritici, the causal agent of wheat stripe rust. Front Microbiol 8:71
The authors are thankful to Indian Council of Agricultural Research (ICAR) and ICAR-VPKAS, Almora, for the financial assistance and providing facilities for conducting experiments successfully. We also thank our Senior Technical Assistant, Mr. J. P. Gupta for his untiring support during both laboratory and field studies.
The study has been funded by Indian Council of Agriculture Research, New Delhi.
Crop Protection Section, ICAR-Vivekananda Parvatiya Krishi Anusandhan Sansthan (VPKAS), Almora, Uttarakhand, 263601, India
Amit Umesh Paschapur, Ashish Kumar Singh, B. Jeevan, H. Rajashekara, Krishna Kant Mishra, Lakshmi Kant & Arunava Pattanayak
ICAR-Indian Institute of Oil Palm Research, Pedavegi, East Godavari, Andhra Pradesh, 534450, India
A. R. N. S. Subbanna
ICAR-Indian Institute of Millets Research, Rajendranagar, Hyderabad, Telangana State, 500030, India
Johnson Stanley
The University of Trans-Disciplinary Health Sciences and Technology, Bengaluru, Karnataka, 560064, India
Prasanna S. Koti
Amit Umesh Paschapur
Ashish Kumar Singh
B. Jeevan
H. Rajashekara
Krishna Kant Mishra
Lakshmi Kant
Arunava Pattanayak
AUP carried out the laboratory and field studies and drafted and designed the experiment; ARNSS was involved in scientific guidance and support during laboratory and field studies; AKS helped in molecular data analysis; JB contributed to morphological characterisation and mass multiplication of the fungi; JS was involved in technical guidance and support for conducting laboratory experiments; RH helped in data analysis and revising the manuscript; KKM contributed to language editing and revision of the manuscript; PSK was involved in language editing and English improvement; LK provided facilities and scientific guidance; AP provided facilities and guidance and carried out the final reviewing and revision of the manuscript. All authors read and approved the final manuscript.
Correspondence to Amit Umesh Paschapur.
This article does not contain any studies with human participants or animals performed by any of the authors.
All the authors have received the funding from ICAR, New Delhi, and therefore have no conflict of interest for the submission and publication of the article to "Egyptian Journal of Biological Pest control".
Additional file 1.
Supplementary Fig 1. Insect cadavers infected with A. alternata strain VLH1, four days after treatment, a T. vaporariorum nymph, b T. vaporariorum deformed adult, c Chauliops choprai adult, d Brevicoryne brassicae, e Aphis craccivora, f Myzus persicae, g Lipaphis erysimi, h Pieris brassicae larvae. Supplementary Fig 2. Changes in growth pattern of larvae, pupae and adults of four Lepidopteran pests (data presented in number of days required to transform to next instar). Supplementary Fig 3. Kinetics of proteinaceous toxin production by A. alternata strain VLH1 Laboratory bioassay of crude proteins against aphid pests.
Paschapur, A.U., Subbanna, A.R.N.S., Singh, A.K. et al. Alternaria alternata strain VLH1: a potential entomopathogenic fungus native to North Western Indian Himalayas. Egypt J Biol Pest Control 32, 138 (2022). https://doi.org/10.1186/s41938-022-00637-0
Hemipteran insects
Chitinolytic activity
Nutritional physiology indices
Indian Himalayas
On the structure of the global attractor for non-autonomous dynamical systems with weak convergence
Tomás Caraballo 1 and David Cheban 2
Dpto. Ecuaciones Diferenciales y Análisis Numérico, Facultad de Matemáticas, Universidad de Sevilla, Campus Reina Mercedes, Apdo. de Correos 1160, 41080 Sevilla
State University of Moldova, Department of Mathematics and Informatics, A. Mateevich Street 60, MD–2009 Chişinău
Received: January 2011. Revised: January 2011. Published: October 2011.
The aim of this paper is to describe the structure of global attractors for non-autonomous dynamical systems with recurrent coefficients (with both continuous and discrete time). We consider a special class of this type of systems (the so-called weak convergent systems). It is shown that, for weak convergent systems, the answer to Seifert's question (Does an almost periodic dissipative equation possess an almost periodic solution?) is affirmative, although, in general, even for scalar equations, the response is negative. We study this problem in the framework of general non-autonomous dynamical systems (cocycles). We apply the general results obtained in our paper to the study of almost periodic (almost automorphic, recurrent, pseudo recurrent) and asymptotically almost periodic (asymptotically almost automorphic, asymptotically recurrent, asymptotically pseudo recurrent) solutions of different classes of differential equations.
Keywords: dissipative systems, convergent systems, skew-product systems, almost periodic, global attractor, non-autonomous dynamical systems, cocycles, almost automorphic, quasi-periodic, asymptotically almost periodic solutions, recurrent solutions.
Mathematics Subject Classification: Primary: 34C11, 34C27, 34D05, 34D23, 34D45, 34K14, 37B20, 37B55, 37C55, 37C60, 37C65, 37C70, 37C7.
Citation: Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809
Shutdown Point
In the short run, a firm should shut down immediately if the market price of its product is lower than its average variable cost at the profit-maximizing output level. In the long run, it should shut down if the price of its product is less than its average total cost.
We have defined two different shutdown conditions for a single firm because the shutdown decision depends on which of its costs the firm can avoid by shutting down.
The short run is defined as a time period in which at least one of a firm's inputs, say capital, is fixed. This means that at least some of the firm's costs are fixed and will be incurred even if it shuts down.
The long run, on the other hand, is a period in which a firm can change all of its inputs. In other words, the firm has no fixed costs to worry about in the long run: if it shuts down in the long run, all of its costs go away too.
Short-run Shutdown Decision
Because fixed costs are costs which a firm continues to incur even if production falls to zero, a firm should continue production as long as its revenue covers its variable cost. When its revenue is higher than its variable cost, there is at least something left over to cover a part of the fixed costs, which will be incurred anyway. This is why the short-run shutdown point occurs when the price P is less than or equal to the average variable cost at the profit-maximizing output. This can be expressed mathematically as follows:
$$ \text{P}\le \text{AVC} $$
The following graph shows a firm's shutdown point in the short run.
The profit-maximizing (optimal) output level occurs where marginal revenue equals marginal cost. You can see that even at this optimal level, the price line is below the average variable cost (AVC) curve. This means that the firm is incurring losses at this price and should shut down.
Shut-down Price
The price at or below which a firm should shut down even in the short run is called the firm's shutdown price. The shutdown price equals the firm's minimum possible average variable cost, because the firm can never achieve an average variable cost lower than this; if the market price is below even the lowest possible average variable cost, there is no output level at which the firm earns a positive contribution margin. It doesn't make sense for a firm to keep producing if sales revenue will not cover even the variable costs at the firm's optimal output.
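To make the rule concrete, here is a small sketch that applies the short-run test at the profit-maximizing output; the numbers are purely illustrative.

```python
# Short-run shutdown test: continue producing only if price covers average
# variable cost at the profit-maximizing (MR = MC) output. Numbers are illustrative.

def should_shut_down_short_run(price, avg_variable_cost):
    return price <= avg_variable_cost

price = 8.0              # market price per unit
avc_at_optimum = 9.5     # average variable cost at the MR = MC output
print(should_shut_down_short_run(price, avc_at_optimum))  # True -> shut down now
```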
Long-run Shutdown Decision
The shutdown decision in the long run is different because all costs can be avoided in the long run. In the long run, a firm should shut down if its revenues do not cover its total costs.
Let's derive the firm's long-run shutdown point.
We know a firm should shut down in the long run if its profit is zero or negative. If π is profit, TR is total revenue and TC is total cost, then π = TR − TC, and the shutdown condition π ≤ 0 can be written as follows:
$$ \text{TR}-\text{TC}\le 0 $$
Dividing both sides by Q:
$$ \frac{\text{TR}}{\text{Q}}-\frac{\text{TC}}{\text{Q}}\le 0 $$
TR/Q equals the price P and TC/Q equals the average total cost (ATC), so the condition becomes:
$$ \text{P}-\text{ATC}\le 0 $$
A bit of rearrangement (note that we are dealing with an inequality) gives:
$$ \text{P}\le \text{ATC} $$
It shows that a firm should shut down in the long run if the price is less than or equal to its average total cost.
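The long-run test works the same way but against average total cost; a standalone sketch with made-up numbers is below.

```python
# Long-run shutdown test: price must cover average total cost at the optimum.
# Illustrative numbers only.

def should_shut_down_long_run(price, avg_total_cost):
    return price <= avg_total_cost

price = 12.0             # covers an AVC of 9.5, so the firm keeps producing in the short run...
atc_at_optimum = 13.0    # ...but not ATC, so it should exit in the long run
print(should_shut_down_long_run(price, atc_at_optimum))  # True
```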
by Obaidullah Jan, ACA, CFA and last modified on Feb 13, 2019
Strange metal behaviour from charge density fluctuations in cuprates
Götz Seibold1,
Riccardo Arpaia (ORCID: 0000-0003-4687-2376)2,3,
Ying Ying Peng (ORCID: 0000-0002-2657-3590)2,
Roberto Fumagalli2,
Lucio Braicovich (ORCID: 0000-0001-6548-9140)2,4,
Carlo Di Castro5,
Marco Grilli (ORCID: 0000-0001-5607-7996)5,6,
Giacomo Claudio Ghiringhelli (ORCID: 0000-0003-0867-7748)2,7 &
Sergio Caprara (ORCID: 0000-0001-8041-3232)5,6
Communications Physics volume 4, Article number: 7 (2021) Cite this article
Electronic properties and materials
Superconducting properties and materials
Besides the mechanism responsible for high critical temperature superconductivity, the grand unresolved issue of the cuprates is the occurrence of a strange metallic state above the so-called pseudogap temperature T*. Even though such a state has been successfully described within a phenomenological scheme, the so-called Marginal Fermi-Liquid theory, a microscopic explanation is still missing. However, recent resonant X-ray scattering experiments identified a new class of charge density fluctuations characterized by low characteristic energies and short correlation lengths, which are related to the well-known charge density waves. These fluctuations are present over a wide region of the temperature-vs-doping phase diagram and extend well above T*. Here we investigate the consequences of charge density fluctuations on the electronic and transport properties and find that they can explain the strange metal phenomenology. Therefore, charge density fluctuations are likely the long-sought microscopic mechanism underlying the peculiarities of the metallic state of cuprates.
Among the different phases and orders populating the phase diagram of superconducting cuprates, the region where the strange metal occurs has a preeminent role for this class of compounds over a rather wide doping range pivoting around optimal doping (see Fig. 1). Experimentally, the most evident benchmark of this region is represented by the linear behaviour of the electrical resistivity ρ(T) as a function of the temperature T, from above a doping-dependent pseudogap crossover temperature T* up to the highest attained temperatures. Such occurrence is less evident in the underdoped regime, where T* is almost as high as room temperature (e.g. at doping p ≈ 0.11, see Fig. 1), while it dominates the transport properties of the metallic state in its entirety above optimal doping (p ≈ 0.17–0.20, see Fig. 1), where T* decreases and eventually merges with the superconducting critical temperature Tc. Beyond such occurrence, the main deviations from the paradigmatic behaviour dictated by the Landau Fermi-liquid theory of standard metals are the optical conductivity, following a non-Drude-like frequency dependence σ(ω) ~ 1/ω, and the Raman scattering intensity, starting linearly in frequency and then saturating into a flat electron continuum, as expressed by the dependence of the susceptibility of the scattering mediator, \({\rm{Im}}\ P(\omega ) \sim \omega /\max \ (T,| \omega | )\). It was shown long ago1 that the phenomenological assumption of this form for \({\rm{Im}}\ P(\omega )\) accounts for the above anomalous properties. In particular, the related low-energy excitations, mediating a momentum-independent electron–electron effective interaction, give rise to a linear dependence of the imaginary part of the electron self-energy both in frequency and temperature
$${\rm{Im}}\ {{\Sigma }}({\bf{k}},\omega ) \sim \max \ (T,| \omega | ).$$
Although there are theories that do not rely on a specific mediator2, a huge effort has been devoted along the years to identify the excitations mediating this scattering, mostly based on the idea of proximity to some form of order: circulating currents3, spin4,5, charge order6,7,8,9,10 or the phenomenological coupling to incoherent fermions11.
Fig. 1: Temperature-vs-doping phase diagram of the superconducting cuprates.
In the red region encompassed between the pseudogap temperature T* and the upturn temperature Tup of the resistance, above the superconducting critical temperature Tc, in particular close to the optimally doped regime (e.g. at hole doping p ≈ 0.17), these compounds display a strange-metal behaviour. This is revealed in the experimental resistance R data by the presence of a linear temperature dependence, displayed as a red thick solid line in the R(T) curves above the phase diagram. In the underdoped regime (e.g. at p ≈ 0.11), below T* (blue region) a downturn from the linear-in-T resistance is observed, since additional mechanisms lead to deviations from the strange-metal regime. In the overdoped regime (e.g. at p ≈ 0.21), below Tup (yellow region) the upturn from the linear-in-T resistance is due to the setting in of the Fermi-liquid regime. Recent Resonant X-Ray Scattering experiments12 showed that also the charge order phenomenon is widespread in the phase diagram. In particular, short-ranged dynamical charge density fluctuations (sketched by red waves highlighted in the red circle, and observed in the striped area) populate the strange-metal region, while in the underdoped region, below the onset temperature TCDW, they coexist with the usual longer-ranged charge density waves (sketched by blue waves in the blue circle, and observed in the wavy area). TN is the Néel temperature. The data of the R(T) curves are taken from Refs. 12,40.
A step forward in the identification of low energy excitations that might be responsible for the strange-metal behaviour was recently taken by means of resonant X-ray scattering (RXS), performed on Nd1+xBa2−xCu3O7−δ (NBCO) and YBa2Cu3O7−δ (YBCO) thin films12. After the first experimental evidence, these excitations have been demonstrated to be a common feature of different families of cuprates, namely HgBa2CuO4+δ13, La2−xSrxCuO414,15,16, La2−xBaxCuO417,18, and La1.675Eu0.2Sr0.125CuO419, thereby indicating that these excitations may well provide a generic scattering mechanism in all cuprates.
In the following we will focus on NBCO or YBCO investigated in the precursor experiment. These experiments not only confirmed the occurrence of incommensurate charge density waves (CDWs), correlated over several lattice spacings, in the underdoped region and below T*20,21,22,23,24,25,26,27,28, but, quite remarkably, also identified a much larger amount of very short-ranged (≈3 lattice spacings) dynamical charge density fluctuations (CDFs, see Fig. 1), with a characteristic energy scale ω0 ≈ 10–15 meV. These CDFs are peaked at a wave vector, along the (1,0) and (0,1) directions, which is very close to that of the intermediate-range CDWs12, arising below a given temperature TCDW(p) for each measured doping p. We also notice that, when the temperature is raised towards TCDW(p), the CDWs correlation length decreases down to values close to those of the CDFs. These facts suggest that the two charge fluctuations have a common origin. One possibility is that they develop differently in different regions, with CDFs remaining noncritical, whereas CDWs evolve towards order. This is also supported by the possibility that the narrow peak (NP) of the RXS response function, customarily associated to the CDWs, arises at the expense of the broad peak (BP) due to CDFs. However, differently from CDWs, CDFs are quite robust both in temperature (they survive essentially unaltered up to the highest explored temperatures, T ≈ 270 K) and doping. These excitations are at low energy (≈15 meV in an optimally doped sample with Tc = 90 K) and so short ranged that in reciprocal space they produce the BP observed in the RXS scans. CDFs not only provide a strong scattering channel for the electrons, but also overcome the difficulty of the CDWs, which, being so peaked, give rise to anisotropic scattering dominated by the hot spots on the Fermi surface. CDFs, instead, being so broad, affect all states on the Fermi surface nearly equally, resulting in an essentially isotropic scattering rate. This isotropy is a distinguished feature of the strange-metal state and we show below that it can account for the peculiar behaviour of the electronic spectra and for the linear-in-T resistivity.
Strange-metal behaviour of the electron self-energy
Figure 2a shows a qualitative explanation of the inherent isotropy of the scattering by CDFs. RXS experiments directly access the frequency and momentum-dependent charge susceptibility (see Methods) and find the above mentioned BP at a well-defined incommensurate wave vector Qc, but the large width of this peak means that a wealth of low-energy CDFs are present over a broad range of momenta. Therefore, an electron quasiparticle on a branch of the Fermi surface can always find a CDF that scatters it onto another region of the Fermi surface [see Fig. 2a)]. Thus the whole Fermi surface is hot in the sense that no regions exist over the Fermi surface that can avoid this scattering. This is visualized in Fig. 2a, where the overlap of the Fermi surface with its translated and broadened replicas (due to the scattered quasiparticles) is almost uniform, and no particular nesting condition is needed. Quite remarkably, the CDF-mediated scattering stays isotropic even in an energy window of ≈20 meV around the Fermi surface (see Supplementary note 1 and Supplementary Fig. 2).
Fig. 2: Strange-metal self-energy.
a Sketch of the charge density fluctuation (CDF) and charge density wave (CDW) mediated quasiparticle scattering on the Fermi surfaces. Points on the Fermi surface are identified by the angle ϕ. Owing to the broadness of CDFs in momentum space, all the states along the Fermi surface (thick black line) can be scattered by low-energy CDFs over other portions of the Fermi surface, and no particular nesting condition is needed. The involvement of only one branch of the Fermi surface in the Brillouin zone is displayed for clarity: The scattered portions of the Fermi surface (broad reddish areas) essentially cover the whole branch. Therefore the whole Fermi surface is affected in a nearly isotropic way. On the contrary, the CDWs are peaked in momentum space and scatter the Fermi surface states in rather restricted regions of other Fermi surface branches (hot spots). These occur where the bluish lines cross the thick black line. b Scattering rate [i.e. the imaginary part of the self-energy at zero frequency \({{\Gamma }}(\phi )=-{\rm{Im}}\ {{\Sigma }}(\phi ,T,\omega =0)\)] at a given temperature T = 80 K, as a function of the position on the Fermi surface, as identified by the angle ϕ defined in panel a. The nearly isotropic red line corresponds to the case when all the scattering would be due to CDFs, while the blue dashed line represents the scattering due to CDWs only. c Imaginary part of the electron self-energy as a function of the (negative) electron binding energy, at different temperatures above TCDW, below which the CDWs emerge to produce the narrow peak in resonant X-ray scattering. The coupling between fermion quasiparticles and CDFs is g = 0.166 eV. d Same as c, but with both frequency and self-energy axes rescaled by the temperature (kB is the Boltzmann constant), to highlight the approximate scaling behaviour at low frequency.
On the contrary, since CDWs are quite peaked, only a few of them around Qc are available to scatter quasiparticles at low energy: Only quasiparticles at the hot spot are then significantly scattered by CDWs [see Fig. 2a]. In a quantitative way, this is shown in Fig. 2b, where the actual scattering rate along the Fermi surface has been separately computed for CDFs (solid red line) and CDWs (dashed blue line) with parameters suitable to describe a slightly underdoped NBCO sample (p ≈ 0.15), where CDF and CDW coexist (see Supplementary note 2 and Supplementary Fig. 5). This feature makes these CDFs an appealing candidate to mediate the isotropic scattering required by the original marginal Fermi-liquid theory. We therefore test this expectation by explicitly calculating how the CDFs dress the electron quasiparticles modifying their spectrum. In many-body theory, this effect is customarily described by the electron self-energy. In particular, the imaginary part of the electron self-energy, \({\rm{Im}}\ {{\Sigma }}\), provides the broadening of the electron dispersion as measured, e.g. in angle-resolved photoemission experiments. We adopt the following strategy: (a) we extract from the experimental inelastic RXS spectra the information on the dynamics of the CDFs (see supplementary note 2) evaluated within the linear response theory; (b) we borrow from photoemission experiments the electron dispersion in the form of a tight-binding band structure29; (c) we calculate the electron self-energy resulting from the coupling between CDFs and the electron quasiparticles, as discussed in Supplementary note 1 and represented as a diagram in Supplementary Fig. 1.
With the extracted parameters, using the coupling between quasiparticles and CDFs obtained from the resistivity fit (see below) and taking the frequency derivative of the real part of the self-energy, we also calculated the dimensionless coupling λ at T ≈ T* finding λ ≈ 0.35–0.5 (see Supplementary note 1).
Of course, this perturbative approach, although supported by the low-moderate value of λ, is based on the Fermi-liquid as a starting point in the overdoped region. Its applicability can be safely extended to lower doping at high temperatures, in the metallic state and above T*, where the phenomenology is only marginally different from that of a Fermi-Liquid.
The result of our calculation for an optimally doped NBCO sample with Tc = 90 K is reported in Fig. 2c, d. After an initial quadratic behaviour, the scale of which is set by the energy scale ω0 of the CDFs30 (see Supplementary note 1), \({\rm{Im}}\ {{\Sigma }}\) displays an extended linear frequency dependence up to 0.10–0.15 eV (comparable to the one reported in the photoemission experiments of refs. 31,32). The overall value of this self-energy is comparable to, but it always stays smaller than, the Fermi energy scale of order 0.3–0.4 eV. This is an intrinsic manifestation of a strange-metal state, where the width of the quasiparticle peak must be of the same order of its typical energy. At low frequencies \({\rm{Im}}\ {{\Sigma }}\) saturates at a constant value that increases linearly with increasing T. This is precisely the behaviour expected from the strange-metal expression of Eq. (1). This self-energy is reported along a specific (1,1) direction, but it is crucial to recognize that it is also highly isotropic in momentum space. Figure 2b indeed reports the scattering rate (i.e. the imaginary part of the self-energy at zero frequency) Γ(ϕ) ≡ Γ0 + ΓΣ(ϕ). An isotropic scattering rate Γ0 representing the effect of quenched impurities has also been included. Our results in Fig. 2c, not only share with the data of ref. 31 a similar form, but also display a scaling behaviour, as reported in Fig. 2d. As mentioned in ref. 1, the isotropic linear-in-frequency self-energy behaviour, stemming from CDFs, is sufficient to produce a strange-metal behaviour in physical quantities like optical conductivity and Raman scattering.
Below TCDW = 150 K, an additional scattering due to the CDWs is present. This additional scattering has a significant anisotropic component, which is confined in a small region of momentum space, as shown by the dashed blue curve of Fig. 2b. This anisotropic character eventually leads to the departure from the strange-metal behaviour33 below temperatures comparable with T*.
CDFs produce linear resistivity
Once the dynamics of the CDFs is identified by exploiting RXS experiments, one can investigate their effects on transport properties. The calculation of the electron resistivity is carried out within a standard Boltzmann-equation approach along the lines of ref. 34 (see Supplementary note 3). An analogous calculation within the Kubo formalism gives very similar results (see Supplementary note 4 and Supplementary Fig. 7). From the electron self-energy we obtain the zero frequency quasiparticle scattering rate along the Fermi surface Γ(ϕ) defined above, and we use Γ0 as a fitting parameter, obtaining values (≈20-60 meV) that are reasonable for impurity scattering. We also use the anisotropic Fermi wave vector along the Fermi surface, as obtained from the same band structure in tight-binding approximation29 used for the self-energy calculation. Figure 3a displays the comparison between the ρ(T) curve of the optimally doped NBCO film (Tc = 90 K), studied in ref. 12 (yellow line) and the theoretical results (black line). At high temperatures, the famous linear-in-T behaviour of the resistivity is found and the data are quantitatively matched. This behaviour stems from the very isotropic scattering rate produced by the CDFs [red solid line in Fig. 2b], which, for this sample and in this temperature range, are the only observed charge excitations. At lower temperatures, below T*, a discrepancy emerges between the theoretical expectation and the experimental evidence, since the expected saturation, due to the onset of a Fermi-liquid regime and to (isotropic) impurity scattering Γ0, is experimentally replaced by a downturn of the resistivity. Such discrepancy occurs gradually in T when, entering the pseudogap state, the pseudogap itself and other intertwined incipient orders (CDWs, Cooper pairing,...) play their role. These effects, which are outside our present scope, obviously lead to deviations from our theory, which only considers the effect of CDFs. On the other hand, in the overdoped YBCO sample (Tc = 83 K), the pseudogap and the intertwined orders are absent, while the CDFs are the only surviving charge excitations, even down to Tc12. Here, our theoretical resistivity, related to the scattering rate produced by CDFs, matches very well the experimental data, in the whole range from room temperature almost down to Tc [see Fig. 3b]. In particular, the agreement is rather good even at the lowest temperatures above Tc. The data display an upward saturation due to the onset of a Fermi-liquid regime that is well described by our calculation: At temperatures lower than the characteristic energy of CDFs their scattering effect is suppressed and the strange-metal behaviour ceases. We find remarkable that our theory not only describes the linear-in-T regime, but also captures the temperature scale of upward deviation from it, without additional adjustments.
Fig. 3: Linear-in-T resistivity.
a Experimental resistivity for an optimally doped (Tc = 90 K) Nd1+xBa2−xCu3O7−δ sample (yellow thick curve) compared to the theoretical result as obtained from the charge density fluctuations (CDFs) only (black solid line). The dashed part demonstrates the continuation of linear-in-T behaviour up to temperatures > 500 K. The scattering rate includes an elastic scattering Γ0 due to quenched impurities, \({{\Gamma }}(\phi )={{{\Gamma }}}_{0}+{\rm{Im}}\ {{\Sigma }}(\phi ,T,\omega =0)\). Here, Γ0 = 52 meV, and the coupling g = 0.166 eV between quasiparticles and CDFs is the same as for the self-energy of Fig. 2. b Same as a for an overdoped YBa2Cu3O7−δ sample (Tc = 83 K). Here, Γ0 = 25.5 meV, g = 0.179 eV.
The above results clearly show that the main features for the CDFs to account for the strange-metal behaviour are a) a short coherence length of 1–2 wavelengths to scatter the low-energy electrons in a nearly isotropic way and b) a rather low energy (ω0 ≈ 10–15 meV) to produce a linear scattering rate down to 100–120 K. We emphasize here that ω0 is only a characteristic minimal scale of CDFs, but these are broad overdamped excitations from ω = 0 (due to damping) up to about 0.1 eV, because they have a dispersion \(\sim \overline{\nu }{({\bf{q}}-{{\bf{Q}}}_{c})}^{2}\) with a stiffness energy scale \(\overline{\nu }\approx 1.0-1.5\) eV(r.l.u.)−2 [see Eq. (3), the discussion in supplementary note 2, and Supplementary Figs. 5 and 6]. Moreover, our approach (extract information about CDFs from RXS experiments, and determine their effect on electron spectra and transport), not only captures the high-temperature linear behaviour of resistivity, but also the deviation from it in the overdoped case, where no other perturbing mechanisms, like CDWs, pairing, spin fluctuations, pseudogap, are present.
The question also arises whether CDFs can account for the so-called Planckian behaviour35: at some specific doping, when a strong magnetic field (several tens of Teslas) destroys superconductivity, the linear-in-T resistivity extends down to low temperatures of a few K. In order for our theory to account for this behaviour as well, we should find CDFs with a lower characteristic energy of order 0.5–1.0 meV, while maintaining a short correlation length, to keep the scattering isotropic. Unfortunately, no RXS experiments in the presence of such large magnetic fields are currently viable, and we therefore cannot test this expectation. Nevertheless, we feel that it is not accidental that our theory accounts so well for the experiments done so far in the absence of a magnetic field, which show linearity up to very high temperature, well above T*, far from the quantum region. It would not be surprising if, on lowering the temperature at special values of doping, other effects came in to modify our parameter values.
One interesting question is why CDFs, even in the absence of the specific Planckian conditions, have rather low characteristic energies ≈10–15 meV. In this regard, we notice that CDFs and CDWs have nearly the same characteristic wave vectors, indicating a close relationship. Since CDWs have a nearly critical character (which was theoretically predicted long ago6,36), it is likely that CDFs are aborted CDWs, which for several possible reasons (competition with superconductivity, low dimensionality, disorder, charge density inhomogeneity, ...) do not succeed in establishing longer-range correlations. Still, this tight affinity with CDWs, which are nearly critical and therefore at very low energy, implies that CDFs too may have a broad dynamical range extending down to a rather low-energy scale ω0. In this scenario, where CDWs and CDFs coexist in the system, one and the same theoretical scheme accounts for both excitations.
In conclusion, although some issues are still open, like the effects of magnetic field on CDFs to possibly account for Planckian transport, or the origin of the pseudogap features in transport, we were able to show that CDFs account for the anomalous metallic state of cuprates above T*. Indeed, once the dynamics of the CDFs is extracted from RXS experiments, we can well explain, with the same parameter set, both the strange-metal behaviour of the electron self-energy (therefore all the related anomalous spectral properties observed, e.g. in optical conductivity and Raman spectroscopy, are also explained) and the famous linear-in-T resistivity in the metallic state of high-temperature superconducting cuprates. We thus believe that our results provide a very sound step forward in the long-sought explanation of the violation of the normal Fermi-liquid behaviour in cuprates.
Fitting procedure to extract the CDW and CDF dynamics
The CDW and CDF contributions to the RXS spectra are captured by a density response-function diagram as reported in Supplementary Fig. 1. In this framework, we carry out a twofold task: on the one hand, we show that dynamical CDFs and nearly critical CDWs account both for the RXS high-resolution, frequency-dependent, spectra, and for the quasi-elastic momentum-dependent spectra. On the other hand, from the fitting of these experimental quantities, we extract the dynamical structure of these excitations needed to calculate the physical quantities discussed above.
According to this scheme, the CDW or CDF contribution to the low-energy RXS spectra is
$$I({\bf{q}},\omega )=A\ {\rm{Im}}\ D({\bf{q}},\omega )\ b(\omega )$$
where \(b(\omega )\equiv {[{{\rm{e}}}^{\omega /{k}_{B}T}-1]}^{-1}\) is the Bose distribution ruling the thermal excitation of CDFs and CDWs, and A is a constant effectively representing the intricate photon-conduction electron scattering processes27,37. In Eq. (2), \({\rm{Im}}\ D({\bf{q}},\omega )\) is the imaginary (i.e. absorptive) part of the (retarded) dynamical density fluctuation propagator, which can describe either CDWs or CDFs. For both we adopt the standard Ginzburg–Landau form of the dynamical density fluctuation propagator, typical of overdamped quantum critical Gaussian fluctuations6,7,36,
$$D({\bf{q}},\omega )\equiv {\left[{\omega }_{0}+\nu ({\bf{q}})-{\rm{i}}\omega -\frac{{\omega }^{2}}{\overline{{{\Omega }}}}\right]}^{-1},$$
where \({\omega }_{0}=\bar{\nu }\ {\xi }^{-2}\) is the characteristic energy of the fluctuations, \(\nu ({\bf{q}})\approx \bar{\nu }\ | {\bf{q}}-{{\bf{Q}}}_{{\rm{c}}}{| }^{2}\), \(\bar{\nu }\) determines the dispersion of the density fluctuations, Qc ≈ (0.3, 0), (0, 0.3) is the characteristic critical wave vector (we work with dimensionless wave vectors, measured in reciprocal lattice units, r.l.u.) and \(\overline{{{\Omega }}}\) is a frequency cut-off. This form of the charge collective mode propagator is typical of metallic systems where the collective modes have a marked overdamped character at low energy, where they can decay into particle-hole pairs (Landau damping). At larger energies, above \(\overline{{{\Omega }}}\), they acquire a more propagating character. In both regimes, however, the maximum of their spectral weight is dispersive, with a definite relation between ω and momentum, as it should be for well-defined collective modes. This is valid for both CDFs and CDWs, although the coherence length of the former is weakly varying in doping and temperature and is generically very short (of the order of the wavelength itself). The sharper CDWs have a nearly critical character, with a marked temperature dependence of the square correlation length, \({\xi }_{{\rm{NP}}}^{2}(T)\). In particular, if these fluctuations had a standard quantum critical character around optimal doping6,7,36,38, one would expect \({\xi }_{{\rm{NP}}}^{2}(T) \sim 1/T\). The CDFs have a similar Qc; the main difference lies in the behaviour of the correlation length, which, according to RXS experiments, increases significantly with decreasing temperature and reaches up to 8–10 lattice spacings for the nearly critical CDWs, while the correlation length of the CDFs stays in the range of 2–3 lattice spacings, independently of the temperature.
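As an illustration, the following Python sketch (ours, not the authors' analysis code) evaluates the absorptive part of the propagator of Eq. (3), Im D(q, ω) = ω / {[ω0 + ν(q) − ω²/Ω̄]² + ω²}, with ω0 and ν̄ taken from the ranges quoted in the text for the broad peak; the value of the cut-off Ω̄ and the frequency grid are placeholder choices of ours.

```python
import numpy as np

# Parameters of the CDF propagator in Eq. (3); energies in meV, momenta in
# reciprocal lattice units (r.l.u.) along the (1,0) direction.
omega0 = 15.0        # characteristic CDF energy (text: ~10-15 meV)
nu_bar = 1400.0      # dispersion stiffness (text: ~1400 meV (r.l.u.)^-2 for the broad peak)
Qc = 0.3             # characteristic wave vector (text: ~0.3 r.l.u.)
Omega_bar = 100.0    # high-energy cut-off; placeholder value, not quoted in the text

def im_D(q, omega):
    """Absorptive part of the propagator of Eq. (3):
    Im D = omega / {[omega0 + nu(q) - omega^2/Omega_bar]^2 + omega^2}."""
    a = omega0 + nu_bar * (q - Qc) ** 2 - omega ** 2 / Omega_bar
    return omega / (a ** 2 + omega ** 2)

# At q = Qc the spectrum is a broad, overdamped hump peaked at an energy
# of the order of omega0.
omegas = np.linspace(0.1, 80.0, 400)
spectrum = im_D(Qc, omegas)
print("Im D peaks at omega ~ %.1f meV for q = Qc" % omegas[int(np.argmax(spectrum))])
```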
Although high-resolution spectra provide a wealth of information, they are experimentally very demanding, so that RXS data are more often available in the form of quasi-elastic spectra corresponding to the frequency integration of the inelastic spectra, Eq. (2),
$$I({\bf{q}})=\mathop{\int}\nolimits_{-\infty }^{+\infty }\frac{A\ \omega }{{\left({\omega }_{0}+\nu ({\bf{q}})-\frac{{\omega }^{2}}{\overline{{{\Omega }}}}\right)}^{2}+{\omega }^{2}}\ b(\omega )\ {\rm{d}}\omega$$
Our first goal is to extract from the experiments all the parameters entering the CDW and CDF correlators, \({\omega }_{0},\bar{\nu },{{\bf{Q}}}_{{\rm{c}}}\) and \(\overline{{{\Omega }}}\).
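For orientation, the quasi-elastic profile of Eq. (4) can be obtained by straightforward numerical quadrature. The sketch below (in Python, with an arbitrary overall scale A, a placeholder cut-off Ω̄, and a finite integration window chosen by us) produces a broad peak centred at Qc whose width in momentum is controlled by the ratio ω0/ν̄ = ξ⁻².

```python
import numpy as np

def quasi_elastic_intensity(q, T, A=1.0, omega0=15.0, nu_bar=1400.0,
                            Qc=0.3, Omega_bar=100.0, k_B=0.0862):
    """Frequency-integrated RXS intensity I(q) of Eq. (4).
    Energies in meV, temperature in K (k_B in meV/K), momenta in r.l.u.
    A is an arbitrary overall scale and Omega_bar a placeholder cut-off."""
    # Finite frequency window and simple rectangle-rule quadrature;
    # the grid is chosen so that omega = 0 is never hit exactly.
    omega = np.linspace(-150.0, 150.0, 2000)
    d_omega = omega[1] - omega[0]
    nu_q = nu_bar * (q - Qc) ** 2
    bose = 1.0 / np.expm1(omega / (k_B * T))                      # b(omega)
    denom = (omega0 + nu_q - omega ** 2 / Omega_bar) ** 2 + omega ** 2
    return float(np.sum(A * omega * bose / denom) * d_omega)

# Momentum scan along (1,0): a broad peak centred at Qc, whose width in q
# is set by omega0 / nu_bar = xi^-2.
qs = np.linspace(0.15, 0.45, 61)
profile = [quasi_elastic_intensity(q, T=250.0) for q in qs]
print("peak position (r.l.u.):", qs[int(np.argmax(profile))])
```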
Since high-resolution and quasi-elastic spectra provide different complementary information, we adopted a bootstrap strategy in which we first estimated the dynamical scale ω0 from high resolution at the largest temperatures, where the NP due to CDWs is absent and all collective charge excitations are CDFs. Then, we used this information to fit the quasi-elastic peaks to extract the relative weight (intensity) of the narrow and broad contributions. Once this information is obtained, we go back to high-resolution spectra, since we now know the relative weight of the CDFs and CDWs contribution at all temperatures.
More specifically, the quasi-elastic peak has a composite character and, once the (essentially linear) background measured along the (1, 1) direction is subtracted (see, e.g. Fig. 2 A–D in ref. 12), the peak may be decomposed into two approximately Lorentzian curves, corresponding to a narrow, strongly temperature dependent, peak due to the standard nearly critical CDWs arising below T ≈ 200 K and to a BP due to the CDFs. This is the main outcome of the RXS experiments reported in ref. 12. We thus fitted each of the two peaks with Eq. (4). From the fits, one can extract the overall intensity parameter A and the ratio \({\omega }_{0}/\bar{\nu }={\xi }^{-2}\). Since only this ratio determines the width of the quasi-elastic spectra, we need a separate measure to disentangle ω0 and \(\bar{\nu }\), so we used the high-resolution information on ω0 for the BP at T = 150 K and T = 250 K to extract \({\bar{\nu }}_{{\rm{BP}}}\approx 1400\) meV(r.l.u.)−2 at these temperatures. The same procedure cannot be adopted for the narrow CDWs peaks, which always appear on top of (and are hardly unambiguously separated from) the broad CDFs contribution. Nevertheless, to obtain a rough estimate, we investigated the high-resolution spectra at low temperature (see Supplementary note 2), where the maximum intensity should mostly involve the NP to extract the characteristic energy of the quasi-critical CDWs obtaining, as expected, much lower values \({\omega }_{0}^{{\rm{NP}}}\approx 1-3\) meV (although these low values are less reliable, due to the relatively low resolution of the frequency-dependent spectra). These estimates allow to extract values of \({\bar{\nu }}_{{\rm{NP}}}\approx 800\) meV(r.l.u.)−2 for the CDWs, comparable with those of the CDFs, suggesting a common electronic origin of the two types of charge fluctuations. To reduce the fitting parameters to a minimum, although subleading temperature dependencies of the high-energy parameters \(\bar{\nu }\) and \(\overline{{{\Omega }}}\) over a broad temperature range can be expected, we kept those parameters constant. We also assumed a constant ω0 for the CDFs, to highlight the noncritical nature of these fluctuations.
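The decomposition step can be illustrated with a small least-squares example. In the sketch below the narrow and broad components are approximated by two Lorentzians sharing the same centre, and the "scan" is synthetic noisy data generated for the example; the published analysis instead fits each component with Eq. (4) to the background-subtracted RXS profiles, so the function and parameter names here are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, amp, q0, hwhm):
    """Single Lorentzian; its half width at half maximum plays the role of
    the inverse correlation length xi^-1 = (omega0/nu_bar)^(1/2)."""
    return amp * hwhm ** 2 / ((q - q0) ** 2 + hwhm ** 2)

def two_peaks(q, amp_bp, hwhm_bp, amp_np, hwhm_np, q0):
    """Broad CDF peak (BP) plus narrow CDW peak (NP) sharing the centre q0."""
    return lorentzian(q, amp_bp, q0, hwhm_bp) + lorentzian(q, amp_np, q0, hwhm_np)

# Synthetic, background-subtracted scan standing in for a real RXS profile.
rng = np.random.default_rng(0)
q = np.linspace(0.15, 0.45, 121)
data = two_peaks(q, 1.0, 0.08, 0.6, 0.02, 0.30) + 0.02 * rng.normal(size=q.size)

p0 = [1.0, 0.1, 0.5, 0.03, 0.31]            # rough starting guesses
popt, _ = curve_fit(two_peaks, q, data, p0=p0)
amp_bp, hwhm_bp, amp_np, hwhm_np, q0 = popt
print("fitted widths (r.l.u.): broad = %.3f, narrow = %.3f, centre = %.3f"
      % (hwhm_bp, hwhm_np, q0))
```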
The experimental resistivity and RXS data (see Fig. 3 of the main text and supplementary figures 4–7) have already been published in ref. 12 and are therefore available in the related data repository39. They are also available from one of the corresponding authors [M.G.] on reasonable request. The datasets (resistivity curves, fitted RXS spectra and electron self-energy) generated during the current study are available from one of the corresponding authors [M.G.] on reasonable request.
The theoretical analysis was carried out with FORTRAN codes to implement various required numerical integrations [Eq. (4) in Methods to fit the RXS data, supplementary Eq. (1) for the self-energy, in supplementary note 1, and supplementary Eq. (6) for the resistivity, in supplementary note 3]. Although the same task could easily be performed with Mathematica or other standard software, the FORTRAN codes we used are available from one of the corresponding authors [M.G.] on reasonable request.
Varma, C. M., Littlewood, P. B., Schmitt-Rink, S., Abrahams, E. & Ruckenstein, A. E. Phenomenology of the normal state of Cu-O high-temperature superconductors. Phys. Rev. Lett. 63, 1996 (1989).
Kastrinakis, G. A Fermi liquid model for the overdoped and optimally doped cuprate superconductors: scattering rate, susceptibility, spin resonance peak and superconducting transition. Physica C 340, 119 (2000).
Aji, V. & Varma, C. M. Theory of the quantum critical fluctuations in cuprate superconductors. Phys. Rev. Lett. 99, 067003 (2007).
Abanov, A., Chubukov, A. & Schmalian, J. Quantum-critical theory of the spin-fermion model and its application to cuprates: normal state analysis. Adv. Phys. 52, 119 (2003).
Norman, M. R. & Chubukov, A. V. High-frequency behavior of the infrared conductivity of cuprates. Phys. Rev. B 73, 140501R (2006).
Castellani, C., DiCastro, C. & Grilli, M. Singular quasiparticle scattering in the proximity of charge instabilities. Phys. Rev. Lett. 75, 4650 (1995).
Castellani, C., DiCastro, C. & Grilli, M. Non-Fermi-liquid behavior and d-wave superconductivity near the charge-density-wave quantum critical point. Z. Phys. B 103, 137 (1996).
Kivelson, S. A. et al. How to detect fluctuating stripes in the high-temperature superconductors. Rev. Mod. Phys. 75, 1201 (2003).
Caprara, S., Grilli, M., Di Castro, C. & Seibold, G. Pseudogap and (An)isotropic scattering in the fluctuating charge-density wave phase of cuprates. J. Supercond. Nov. Magn. 30, 25–30 (2017).
Caprara, S., Di Castro, C., Fratini, S. & Grilli, M. Anomalous optical absorption in the normal state of overdoped cuprates near the charge-ordering instability. Phys. Rev. Lett. 88, 147001 (2002).
Patel, A. A., McGreevy, J., Arovas, D. P. & Sachdev, S. Magnetotransport in a model of a disordered strange metal. Phys. Rev. X 8, 021049 (2018).
Arpaia, R. et al. Dynamical charge density fluctuations pervading the phase diagram of a Cu-based high-Tc superconductor. Science 365, 906 (2019).
Yu, B. et al. Unusual dynamic charge-density-wave correlations in HgBa2 CuO4+δ. Phys. Rev. X 10, 021059 (2020).
Miao, H. et al. Discovery of charge density waves in cuprate superconductors up to the critical doping and beyond. arxiv:2001.10294; https://arxiv.org/abs/2001.10294.
Lin, J. Q. et al. Nature of the charge-density wave excitations in cuprates. arxiv:2001.10312; https://arxiv.org/abs/2001.10312.
Wen, J.-J. et al. Observation of two types of charge-density-wave orders in superconducting La2−x Srx CuO4. Nat. Commun. 10, 3269 (2019).
Miao, H. et al. High-temperature charge density wave correlations in La1.875 Ba0.125 CuO4 without spin-charge locking. PNAS 114, 12430 (2017).
Miao, H. et al. Formation of incommensurate charge density waves in cuprates. Phys. Rev. X 9, 031042 (2019).
Chang, J. et al. High-temperature charge-stripe correlations in La1.675 Eu0.2 Sr0.125 CuO4. Phys. Rev. Lett. 124, 187002 (2020).
Ghiringhelli, G. et al. Long-range incommensurate charge fluctuations in (Y,Nd)Ba2 Cu3 O6+x. Science 337, 821–825 (2012).
Achkar, A. J. et al. Distinct charge orders in the planes and chains of ortho-III-ordered YBa2 Cu3 O6+δ superconductors identified by resonant elastic x-ray scattering. Phys. Rev. Lett. 109, 167001 (2012).
Tabis, W. et al. Charge order and its connection with Fermi-liquid charge transport in a pristine high-Tc cuprate. Nat. Commun. 5, 5875 (2014).
Comin, R. et al. Charge order driven by Fermi-arc instability in Bi2 Sr2−x Lax CuO6+δ. Science 343, 390–392 (2014).
Blanco-Canosa, S. et al. Resonant X-ray scattering study of charge-density wave correlations in YBa2 Cu3 O6+δ. Phys. Rev. B 90, 054513 (2014).
Keimer, B., Kivelson, S. A., Norman, M. R., Uchida, S. & Zaanen, J. From quantum matter to high-temperature superconductivity in copper oxides. Nature 518, 179 (2015).
Gerber, S. et al. Three-dimensional charge density wave order in YBa2 Cu3 O6.67 at high magnetic fields. Science 350, 949–952 (2015).
Comin, R. & Damascelli, A. Resonant X-ray scattering studies of charge order in cuprates. Annu. Rev. Condens. Matter Phys. 7, 369–405 (2016).
Peng, Y. Y. et al. Re-entrant charge order in overdoped (Bi, Pb)2.12 Sr1.88 CuO6+δ outside the pseudogap regime. Nat. Mater. 17, 697 (2018).
Meevasana, W. et al. Hierarchy of multiple many-body interaction scales in high-temperature superconductors. Phys. Rev. B 75, 174506 (2007).
Caprara, S., Sulpizi, M., Bianconi, A., Di Castro, C. & Grilli, M. Single-particle properties of a model for coexisting charge and spin quasicritical fluctuations coupled to electrons. Phys. Rev. B 59, 14980 (1999).
Valla, T. et al. Evidence for quantum critical behavior in the optimally doped cuprate Bi2 Sr2 CaCu2 O8+δ. Science 285, 2110 (1999).
Bok, J. M. et al. Momentum dependence of the single-particle self-energy and fluctuation spectrum of slightly underdoped Bi2 Sr2 CaCu2 O8+δ from high-resolution laser angle-resolved photoemission. Phys. Rev. B 81, 174516 (2010).
Hlubina, R. & Rice, T. M. Resistivity as a function of temperature for models with hot spots on the Fermi surface. Phys. Rev. B 51, 9253 (1995).
Hussey, N. E. The normal state scattering rate in high-Tc cuprates. Eur. Phys. J. B 31, 495 (2003).
Legros, A. et al. Universal T -linear resistivity and Planckian dissipation in overdoped cuprates. Nat. Phys. 15, 142 (2019).
Andergassen, S. et al. Anomalous isotopic effect near the charge-ordering quantum criticality. Phys. Rev. Lett. 87, 056401 (2001).
Ament, L. J. P. et al. Resonant inelastic x-ray scattering studies of elementary excitations. Rev. Mod. Phys. 83, 705 (2011).
Caprara, S., Di Castro, C., Seibold, G. & Grilli, M. Dynamical charge density waves rule the phase diagram of cuprates. Phys. Rev. B 95, 224511 (2017).
Arpaia, R. et al. Raw data for 'Dynamical charge density fluctuations pervading the phase diagram of a Cu-based high-Tc superconductor'; https://doi.org/10.5281/zenodo.2641214 (2019).
Arpaia, R., Andersson, E., Trabaldo, E., Bauch, T. & Lombardi, F. Probing the phase diagram of cuprates with YBa2 Cu3 O7−δ thin films and nanowires. Phys. Rev. Mater. 2, 024804 (2018).
We thank C. Castellani, S. Kivelson, M. Le Tacon, M. Moretti Sala and T. P. Devereaux for stimulating discussions. We acknowledge financial support from the University of Rome Sapienza, through the projects Ateneo 2017 (Grant No. RM11715C642E8370), Ateneo 2018 (Grant No. RM11816431DBA5AF), Ateneo 2019 (Grant No. RM11916B56802AFE), from the Italian Ministero dell'Università e della Ricerca, through the Project No. PRIN 2017Z8TS5B, and from the Fondazione CARIPLO and Regione Lombardia, through the ERC-P-ReXS project (2016-0790). R.A. is supported by the Swedish Research Council (VR) under the project "Evolution of nanoscale charge order in superconducting YBCO nanostructures". G.S. acknowledges support from the Deutsche Forschungsgemeinschaft.
Ying Ying Peng
Present address: International Center for Quantum Materials, School of Physics, Peking University, CN-100871, Beijing, China
These authors jointly supervised this work: Marco Grilli, Sergio Caprara.
Institut für Physik, BTU Cottbus-Senftenberg - PBox 101344, D-03013, Cottbus, Germany
Götz Seibold
Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133, Milano, Italy
Riccardo Arpaia, Ying Ying Peng, Roberto Fumagalli, Lucio Braicovich & Giacomo Claudio Ghiringhelli
Quantum Device Physics Laboratory, Department of Microtechnology and Nanoscience, Chalmers University of Technology, SE-41296, Göteborg, Sweden
Riccardo Arpaia
ESRF, The European Synchrotron, 71 Avenue des Martyrs, F-38043, Grenoble, France
Lucio Braicovich
Dipartimento di Fisica, Università di Roma Sapienza, P.le Aldo Moro 5, I-00185, Roma, Italy
Carlo Di Castro, Marco Grilli & Sergio Caprara
CNR-ISC, via dei Taurini 19, I-00185, Roma, Italy
Marco Grilli & Sergio Caprara
CNR-SPIN, Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133, Milano, Italy
Giacomo Claudio Ghiringhelli
S.C., C.D.C. and M.G. conceived the project. G.S. performed the theoretical calculations of the self-energy and resistivity, with contributions from S.C., C.D.C. and M.G. R.A., R.F., Y.Y.P., L.B., M.G. and G.G. provided the RXS experimental data. M.G., S.C., R.A., L.B. and G.G. performed the fitting of the RXS data. The manuscript was written by S.C., C.D.C., M.G., G.S., R.A. and G.G., with contributions and suggestions from all coauthors.
Correspondence to Götz Seibold, Marco Grilli or Sergio Caprara.
Seibold, G., Arpaia, R., Peng, Y.Y. et al. Strange metal behaviour from charge density fluctuations in cuprates. Commun Phys 4, 7 (2021). https://doi.org/10.1038/s42005-020-00505-z
A dimension splitting and characteristic projection method for three-dimensional incompressible flow
Hao Chen 1,2, , Kaitai Li 3, , Yuchuan Chu 4,5,, , Zhiqiang Chen 2, and Yiren Yang 1,
School of Mechanics and Engineering, Southwest Jiaotong University, Chengdu 610031, China
Department of Civil and Mechanical Engineering, University of Missouri-Kansas City, Kansas City, MO, 64110, USA
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
School of Mechanical Engineering, Dongguan University of Technology, Dongguan 523000, China
Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, MO, 64509, USA
* Corresponding author: Yuchuan Chu([email protected])
Received: May 2017. Revised: September 2017. Early access: March 2018. Published: January 2019.
Fund Project: The first author is supported by the Fundamental Research Funds for the Central Universities of China, grant 2682015CX044.
A dimension splitting and characteristic projection method is proposed for three-dimensional incompressible flow. First, the method of characteristics is adopted to obtain a temporal semi-discretization scheme. For the remaining Stokes equations, we present a projection method to deal with the incompressibility constraint. As a result, only independent linear elliptic equations need to be solved at each step. Furthermore, owing to the splitting property of the dimension splitting method, all computations are carried out on two-dimensional manifolds, which greatly reduces the difficulty and the computational cost of mesh generation. A coarse-grained parallel algorithm can also be constructed, in which the two-dimensional manifold is treated as the computational unit.
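To make the role of the projection step concrete, the following Python sketch applies a generic Fourier-space (Leray) projection to a two-dimensional periodic velocity field. It is not the scheme of the paper, which uses finite elements on two-dimensional manifolds together with a characteristics treatment of advection; it only illustrates how a pressure-like Poisson solve enforces the incompressibility constraint, and all names are ours.

```python
import numpy as np

def project_divergence_free(u, v, L=1.0):
    """Remove the gradient part of (u, v) on a doubly periodic grid by solving a
    pressure-like Poisson equation in Fourier space; the corrected field is
    divergence-free. This shows only the projection sub-step, not a full solver."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                          # avoid dividing by zero for the mean mode
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat
    phi_hat = -div_hat / k2                 # solve  -k^2 * phi_hat = div_hat
    u_hat -= 1j * kx * phi_hat              # u <- u - grad(phi)
    v_hat -= 1j * ky * phi_hat
    return np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real

def max_divergence(u, v, L=1.0):
    """Maximum of |div(u, v)| computed spectrally, used as a quick check."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    div_hat = 1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)
    return np.abs(np.fft.ifft2(div_hat).real).max()

# A deliberately non-solenoidal velocity field on a 64 x 64 periodic grid.
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = np.sin(2 * np.pi * Y)
u_p, v_p = project_divergence_free(u, v)
print("max |div| before:", max_divergence(u, v))     # clearly nonzero
print("max |div| after: ", max_divergence(u_p, v_p)) # ~ machine precision
```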
Keywords: Dimension splitting method, characteristics method, projection method, finite element method, three-dimensional incompressible flow.
Mathematics Subject Classification: Primary: 76D05; Secondary: 65M60.
Citation: Hao Chen, Kaitai Li, Yuchuan Chu, Zhiqiang Chen, Yiren Yang. A dimension splitting and characteristic projection method for three-dimensional incompressible flow. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 127-147. doi: 10.3934/dcdsb.2018111
Figure 1. Splitting the flow domain $\Omega$
Figure 2. Grid structure of two-dimensional manifold $D$
Figure 3. Sketch of three-dimensional lid-driven cavity flow
Figure 4. Velocity profiles on middle plane z = 0.5 for Re = 100
Figure 6. Velocity profiles on middle plane z = 0.5 for Re = 1000
Figure 7. Streamline profile for various Reynolds numbers: Re = 100(A, B, C); Re = 400(D, E, F); x = 0.5(A, D); z = 0.5(B, E); y = 0.5(C, F)
Figure 8. Streamline profile for various Reynolds numbers: Re = 1000(A, B, C); Re = 2000(D, E, F); x = 0.5(A, D); z = 0.5(B, E); y = 0.5(C, F)
Figure 9. Three dimensional streamline for different Reynolds numbers: Re = 100(A, B, C); Re = 400(D, E, F)
Figure 10. Three dimensional streamline for different Reynolds numbers: Re = 1000(A, B, C); Re = 2000(D, E, F)
Table 1. Error of numerical solution with different mesh sizes
$\frac{1}{h}$ $\|\vec u-\vec u_h\|_{L^2}$ $\alpha$ $\|p-p_h\|_{L^2}$ $\alpha$ $\kappa_{div}$
$4$ 1.149E-002 - 2.931E-001 - 4.741E-002
$8$ 3.513E-003 1.710 1.394E-001 1.072 6.953E-003
$16$ 8.765E-004 1.856 6.172E-002 1.124 2.304E-003
Table 2. Convergence rate with different mesh sizes
$\frac{1}{h}$ | DSM-C: $U_{L^2}$ rate | DSM-C: $P_{L^2}$ rate | DSM-C: CPU(s) | DSM-D: $U_{L^2}$ rate | DSM-D: $P_{L^2}$ rate | DSM-D: CPU(s)
4 | - | - | 44.5 | - | - | 43.7
8 | 1.710 | 1.072 | 138.7 | 1.42 | 0.876 | 162.4
16 | 1.856 | 1.124 | 206.3 | 1.49 | 1.075 | 383.2
32 | 1.966 | 1.301 | 957.6 | 1.53 | 0.971 | 1996.3
Table 3. Parallel performance of DSM-C at $1/h = 8, 16$
$p$ | $T_p$ ($1/h=8$) | $S_{p}$ ($1/h=8$) | $E_{p}$ ($1/h=8$) | $T_p$ ($1/h=16$) | $S_{p}$ ($1/h=16$) | $E_{p}$ ($1/h=16$)
1 | 52.35 | - | - | 451.69 | - | -
2 | 37.93 | 1.38 | 0.69 | 303.14 | 1.49 | 0.75
10 | 15.31 | 3.42 | 0.34 | 91.81 | 4.92 | 0.49
Table 4. Parallel performance of DSM-C at $1/h = 32, 64$
$p$ | $T_p$ ($1/h=32$) | $S_{p}$ ($1/h=32$) | $E_{p}$ ($1/h=32$) | $T_p$ ($1/h=64$) | $S_{p}$ ($1/h=64$) | $E_{p}$ ($1/h=64$)
1 | 3847.32 | - | - | 32861.04 | - | -
2 | 2171.17 | 1.77 | 0.89 | 17077.32 | 1.92 | 0.96
4 | 1183.79 | 3.25 | 0.81 | 9225.06 | 3.56 | 0.89
6 | 875.91 | 4.39 | 0.73 | 6744.55 | 4.87 | 0.81
10 | 648.78 | 5.93 | 0.59 | 4627.37 | 7.10 | 0.71
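The speedup and efficiency columns in Tables 3 and 4 follow from the wall-clock times as $S_p = T_1/T_p$ and $E_p = S_p/p$; the short Python sketch below reproduces the $1/h = 32$ columns of Table 4.

```python
# Speedup S_p = T_1 / T_p and parallel efficiency E_p = S_p / p, reproduced
# here from the 1/h = 32 wall-clock times of Table 4.
timings = {1: 3847.32, 2: 2171.17, 4: 1183.79, 6: 875.91, 10: 648.78}
t1 = timings[1]
for p, tp in timings.items():
    s_p = t1 / tp
    print("p = %2d:  S_p = %.2f,  E_p = %.2f" % (p, s_p, s_p / p))
```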
D-Dimensional f(R,φ) Gravity
Mostafa Bousder
Subject: Physical Sciences, Acoustics Keywords: f(R) gravity; gravity; Gauss-Bonnet gravity
In this work, we explore the different forms of a new type of modified gravity, namely f(φ) gravity. We construct the Big Rip type for the energy density and the curvature of the universe. We show that dark energy is a result of the transformation of the field φ mass (dark matter) into energy. In addition, we find that Ω_{m}≈0.050, Ω_{DM}≈0.2, Ω_{DE}≈0.746, in excellent agreement with observational data. We explore a generalized formalism of braneworld modified gravity. We also construct new field equations, which generalize the Einstein field equations. We provide a relation between the extra dimension in the 3-brane and the vacuum energy density. We show that the energy density of matter depends directly on the number of dimensions. We find the value of the Gauss-Bonnet coupling α=1/4, which is in good agreement with results in the literature; this correspondence creates a passage between f(R) gravity and Gauss-Bonnet gravity, and it leads to a number of bulk dimensions equal to D=10¹²¹+4.
The impact of IPP on GTI from the perspective of independent R&D
Zijie Yang, Dong Huang, Yanzhen Wang, Yuqing Zhao
Subject: Social Sciences, Accounting Keywords: Intellectual property protection; independent R&D investment; green technology innovation; masking effect; threshold effect
Due to the continuous trade friction between China and the United States, the cost of importing foreign technologies is increasing for domestic enterprises in China. Thus, independent research and development (R&D) becomes particularly important for the realization of green technology innovation (GTI). This paper establishes a non-linear mediating effect model based on data from various regions of China from 2012 to 2018. The main results are as follows. Firstly, there is an inverted U-shaped relationship between the intensity of intellectual property protection (IPP) and the level of GTI. Furthermore, independent R&D investment has a masking effect between them. Secondly, by taking independent R&D investment as a threshold variable, we confirm these findings. Considering that the intensity of IPP is at a high level in most regions of China, these results mean that enterprises need to continuously increase their investment in R&D in order to further improve regional capacity for GTI. Meanwhile, local governments should also stimulate enterprises' willingness to expand their R&D by issuing incentive policies, such as R&D tax incentives and government subsidies.
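As an illustration of the inverted-U test only (not of the authors' full mediating-effect or threshold specification), the Python sketch below regresses a synthetic GTI proxy on IPP and its square; the data and variable names are invented, and the evidence for an inverted U is a positive linear coefficient together with a negative quadratic coefficient.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration of the inverted-U test: regress a green technology
# innovation proxy (gti) on intellectual property protection (ipp) and its
# square. A positive coefficient on ipp together with a negative coefficient
# on ipp**2 is the usual evidence of an inverted-U relationship.
rng = np.random.default_rng(1)
ipp = rng.uniform(0.0, 10.0, size=300)
gti = 2.0 + 1.5 * ipp - 0.12 * ipp ** 2 + rng.normal(scale=1.0, size=300)

X = sm.add_constant(np.column_stack([ipp, ipp ** 2]))
fit = sm.OLS(gti, X).fit()
b0, b1, b2 = fit.params
print("coefficients:", fit.params)               # roughly [2.0, 1.5, -0.12]
print("turning point of the inverted U:", -b1 / (2.0 * b2))
```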
The Role of Non-R&D Expenditures in Promoting Innovation in Europe
Angelo Leogrande, Albertoc Costantiello, Lucio Laureti
Subject: Social Sciences, Economics Keywords: innovation and invention: processes and incentives; management of technological innovation and R&D; diffusion orocesses; open innovation
In this article we estimate the value of "Non-R&D Innovation Expenditures" in Europe. We use data from the European Innovation Scoreboard-EIS of the European Commission from the period 2010-2019. We test data with the following econometric models i.e.: Pooled OLS, Dynamic Panel, Panel Data with Fixed Effects, Panel Data with Random Effects, WLS. We found that "Non-R&D Innovation Expenditures" is positively associated among others to "Innovation Index" and "Firm Investments" and negatively associated among others to "Human Resources" and "Government Procurement of Advanced Technology Products". We use the k-Means algorithm with either the Silhouette Coefficient and the Elbow Method in a confrontation with the network analysis optimized with the Distance of Manhattan and we find that the optimal number of clusters is four. Furthermore, we propose a confrontation among eight machine learning algorithms to predict the level of "Non-R&D Innovation Expenditures" either with Original Data-OD either with Augmented Data-AD. We found that Gradient Boost Trees Regression is the best predictor for OD while Tree Ensemble Regression is the best Predictor for AD. Finally, we verify that the prediction with AD is more efficient of that with OD with a reduction in the average value of statistical errors equal to 40,50%.
Linking Financial Resources Ability and R & D Resources on New Product Performance. The Mediating role of Innovation Orientation and Competitive Position
Hersugondo Hersugondo, Sugeng Wahyudi, Nuryakin Nuryakin, Rio Dhani Laksana
Subject: Social Sciences, Accounting Keywords: financial resources ability; R & D; innovation orientation; competitive position; new product performance (NPP)
This study aims to test empirical research on the effect of financial resource ability, research and development (R & D) on innovation orientation and competitive position. This study also examines the critical mediating role of innovation orientation and competitive position to achieving new products performance (NPP). This study used a quantitative research approach by comparing data from service industry and manufacture industry in Indonesia included in Indonesian-State-Ownership companies. The analysis unit in this study used middle managers and top managers who responsible for managing divisions within the Indonesian-State-Ownership companies. The number of respondents studied in this study was 287 sample. The purposive sampling technique was used in taking the research sample. This study indicated that financial resources abilities, research and development (R & D) abilities positive effect on innovation orientation and competitive position. This study also testing the importance role of innovation orientation and a competitive position to enhancing new products performance (NPP).
phylotaR: An automated pipeline for retrieving orthologous DNA sequences from GenBank in R
Dominic J. Bennett, Hannes Hettling, Daniele Silvestro, Alexander Zizka, Christine D. Bacon, Søren Faurby, Rutger A. Vos, Alexandre Antonelli
Subject: Biology, Other Keywords: BLAST; DNA, open source; phylogenetics; R; sequence orthology.
The exceptional increase in molecular DNA sequence data in open repositories is mirrored by an ever-growing interest among evolutionary biologists to harvest and use those data for phylogenetic inference. Many quality issues, however, are known and the sheer amount and complexity of data available can pose considerable barriers to their usefulness. A key issue in this domain is the high frequency of sequence mislabelling encountered when searching for suitable sequences for phylogenetic analysis. These issues include the incorrect identification of sequenced species, non-standardised and ambiguous sequence annotation, and the inadvertent addition of paralogous sequences by users, among others. Taken together, these issues likely add considerable noise, error or bias to phylogenetic inference, a risk that is likely to increase with the size of phylogenies or the molecular datasets used to generate them. Here we present a software package, phylotaR, that bypasses the above issues by using instead an alignment search tool to identify orthologous sequences. Our package builds on the framework of its predecessor, PhyLoTa, by providing a modular pipeline for identifying overlapping sequence clusters using up-to-date GenBank data and providing new features, improvements and tools. We demonstrate our pipeline's effectiveness by presenting trees generated from phylotaR clusters for two large taxonomic clades: palms and primates. Given the versatility of this package, we hope that it will become a standard tool for any research aiming to use GenBank data for phylogenetic analysis.
Exploring R&D Influences on Financial Performance for Business Sustainability Considering Dual Profitability Objectives
Kao-Yi Shen, Min-Ren Yan, Gwo-Hshiung Tzeng
Subject: Social Sciences, Business And Administrative Sciences Keywords: business sustainability; research and development (R&D); multiple criteria decision-making (MCDM); financial objective; variable-consistency dominance-based rough set approach (VC-DRSA); internetwork relationship map (INRM); directional flow graph (DFG)
The influence and importance of research and development (R&D) for business sustainability have gained increasing interests, especially in the high-tech sector. However, the efforts of R&D might cause complex and mixed impacts on the financial results considering the associated expenses. Thus, this study aims to examine how R&D efforts may influence business to improve its financial performance considering the dual objectives: the gross and the net profitability. This research integrated a rough-set-based soft computing technique and multiple criteria decision-making (MCDM) methods to explore this complex and yet valuable issue. A group of public listed companies from Taiwan, all in the semiconductor sector, was analyzed as a case study. Initially, more than 30 variables were considered, and the adopted soft computing technique retrieved 14 core attributes—for the dual profitability objectives—to form the evaluation model. The importance of R&D for pursuing superior financial prospects is confirmed, and the empirical case demonstrates how to guide an individual company to plan for improvements to achieve its long-term sustainability by this hybrid approach.
AE-RTISNet: Aeronautics Engine Radiographic Testing Inspection System Net with Improved Fast Region-based Convolutional Neural Networks Framework
Zhi-Hao Chen, Jyh-Ching Juang
Subject: Engineering, Automotive Engineering Keywords: Fast R-CNN; R-CNN; NDT; X-ray; transfer learning
To ensure the safety in aircraft flying, we aim use of the deep learning methods of nondestructive examination with multiple defect detection paradigms for X-ray image detection posed. The use of the Fast Region-based Convolutional Neural Networks (Fast R-CNN) driven model seeks to augment and improve existing automated Non-Destructive Testing (NDT) diagnosis. Within the context of X-ray screening, limited numbers insufficient types of X-ray aeronautics engine defect data samples can thus pose another problem in training model tackling multiple detections perform accuracy. To overcome this issue, we employ a deep learning paradigm of transfer learning tackling both single and multiple detection. Overall the achieve result get more then 90% accuracy based on the AE-RTISNet retrained with 8 types of defect detection. Caffe structure software to make networks tracking detection over multiples Fast R-CNN. We consider the AE-RTISNet provide best results to the more traditional multiple Fast R-CNN approaches simpler translate to C++ code and installed in the Jetson™ TX2 embedded computer. With the use of LMDB format, all images using input images of size 640 × 480 pixel. The results scope achieves 0.9 mean average precision (mAP) on 8 types of material defect classifiers problem and requires approximately 100 microseconds.
GEO-GEO Stereo-Tracking of Atmospheric Motion Vectors (AMVs) from the Geostationary Ring
James L. Carr, Dong L. Wu, Jaime Daniels, Mariel D. Friberg, Wayne Bresky, Houria Madani
Subject: Earth Sciences, Atmospheric Science Keywords: 3D-winds; atmospheric motion vectors (AMVs); GOES-R; ABI; Himawari; AHI; planetary boundary layer (PBL); stereo imaging; parallax; Image Navigation and Registration (INR)
Height assignment is an important problem for satellite measurements of Atmospheric Motion Vectors (AMVs) that are interpreted as winds by forecast and assimilation systems. Stereo methods assign heights to AMVs from the parallax observed between observations from different vantage points in orbit while tracking cloud or moisture features. In this paper, we fully develop the stereo method to jointly retrieve wind vectors with their geometric heights from geostationary satellite pairs. Synchronization of observations between observing systems is not required. NASA and NOAA stereo-winds codes have implemented this method and we have processed large datasets from GOES-16, -17, and Himawari-8. Our retrievals are validated against rawinsonde observations and demonstrate the potential to improve forecast skill. Stereo winds also offer an important mitigation for the loop heat pipe anomaly on GOES-17 during times when warm focal plane temperatures cause infra-red channels that are needed for operational height assignments to fail. We also examine several application areas, including deep convection in tropical cyclones, planetary boundary layer dynamics, and fire smoke plumes, where stereo methods provide insights into atmospheric processes. The stereo method is broadly applicable across the geostationary ring where systems offering similar Image Navigation and Registration (INR) performance as GOES-R are deployed.
Chemophenetic Approach to Selected Senecioneae Species, Combining Morphometric and UHPLC-HRMS Analyses
Yulian Voynikov, Vessela Balabanova, Reneta Gevrenova, Dimitrina Zheleva-Dimitrova
Subject: Life Sciences, Other Keywords: Senecio; Jacoboea; Orbitrap; chemophenetic; clustering; R
Herein, a chemophenetic significance, based on phenolic metabolite profiling of three Senecio (S. hercynicus, S. ovatus and S. rupestris) and two Jacobaea species (J. pancicii and J. maritima) coupled to morphometric data, is presented. A set of twelve morphometric characters were recorded from each plant species and used as predictor variables in a Linear Discriminant Analysis (LDA) model. From a total 75 observations (15 from each of the five species), the model correctly assumed their species' membership, except 2 observations. Among the studied species, S. hercynicus and S. ovatus presented the greatest morphological similarity. A phytochemical profiling of phenolic specialized metabolites by UHPLC-Orbitrap-HRMS revealed 46 hydroxybenzoic, hydroxycinnamic, acylquinic acids and their derivatives, 1 coumarin, and 21 flavonoids. Hierarchical and PCA clustering applied to the phytochemical data corroborated the similarity of S. hercynicus and S. ovatus, observed in the morphometric analysis. This study contributes to the phylogenetic relationships between the tribe Senecioneae taxa and highlights the chemophenetic similarity/dissmilarity of the studied species belonging to Senecio and Jacobaea genera.
Daily Low Dose of Erythropoietin in Neuroinflammation; EPO Might Be Hazardous in COVID-19
Reza Nejat, Ahmad Shahir Sadr, Alireza Ebrahimi, Alireza Nabati, Elham Eshaghi
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: Ang II; COVID19; erythropoietin; EPO-R; neuroinflammation: AT1R; SARS-CoV2; angiotensin(1-7); COVID-19 encephalopathy
Neuroinflammation, defined as inflammatory reactions mediated by cytokines, chemokines, reactive oxygen species, and secondary messengers in the central nervous system (CNS) including the brain and spinal cord is the basis of many neurological disorders. Recently, erythropoietin (EPO) has been considered and studied as a modulator of neuroinflammation. On this article minireview of pathophysiology of neuroinflammation and the neuroprotective effects of EPO is discussed and a case of subacute huge subdural hematoma with double mydriasis operated urgently, treated with low daily dose (vs high dose once or twice a month in the literature) of EPO and recovered fully and discharged home with good consciousness is reported. In addition, the probable unfavorable outcome of erythropoietin administration in patients with neuroinflammation in COVID-19 is considered.
The Influence of fWHR Male CEO on Research & Development
Nur Fadjrih ASYIK, Muchlis Muchlis, Ikhsan Budi Riharjo, Rusdiyanto Rusdiyanto
Subject: Social Sciences, Accounting Keywords: CEO Male; fWHR; Masculinity; R&D; Stata
This research aims to obtain empirical evidence of the influence of the masculinity of male CEOs on the of research & development, this research seeks to identify the influence of the face of mas-culinity of male CEO's on the of research & development. The study used a quantitative approach with population and research samples using companies listed on the Indonesia Stock Exchange from 2016 to 2021. The study collected images of faces identified as male CEOs from data from the Indonesia Stock Exchange website and company as utilizing Google searches. The data analysis method in this study used Regression Ordinary Least Square (OLS) with Stata Software. Stata software is one of the regression completion procedures that has a high degree of flexibility in re-search that connects theories, concepts and data that can be done on variables in research. The findings empirically explain that the higher the face value of male CEO masculinity has an impact on reducing research & development costs, and vice versa the lower the face value of male CEO masculinity has an impact on increasing research & development. The practical implications of the research results can help the Indonesian Association of Accountants in developing Financial Accounting Standard No. 19 in Indonesia. The theoretical implications of the research results can explain the Theory of Agency and the Theory of Consistency of Behavior. The policy implications of the results of the study can provide empirical evidence that the higher the facial value of male CEO masculinity has an impact on reducing research & development, while also the lower the face value of male CEO masculinity has an impact on increasing research & development.
Vision-Based Deep Learning Algorithm for Detecting Potholes
Kanushka Gajjar, Theo van Niekerk, Thomas Wilm, Paolo Mercorelli
Subject: Engineering, Automotive Engineering Keywords: CNN; Faster R-CNN; SSD and YOLOv3
Potholes on roads pose a major threat to motorists and autonomous vehicles. Driving over a pothole has the potential to cause serious damage to a vehicle, which in turn may result in fatal accidents. Currently, many pothole detection methods exist. However, these methods do not utilize deep learning techniques to detect a pothole in real-time, determine the location thereof and display its location on a map. The success of determining an effective pothole detection method, which includes the aforementioned deep learning techniques, is dependent on acquiring a large amount of data, including images of potholes. Once adequate data had been gathered, the images were processed and annotated. The next step was to determine which deep learning algorithms could be utilized. Three different models, including Faster R-CNN, SSD and YOLOv3 were trained on the custom dataset containing images of potholes to determine which network produces the best results for real-time detection. It was revealed that YOLOv3 produced the most accurate results and performed the best in real-time, with an average detection time of only 0.836s per image. The final results revealed that a real-time pothole detection system, integrated with a cloud and maps service, can be created to allow drivers to avoid potholes.
An Empirical Study of R Applications for Data Analysis in Marine Geology
Polina LEMENKOVA
Subject: Earth Sciences, Geoinformatics Keywords: R, Programming, Statistical Analysis, Mariana Trench, Bathymetry
The study focuses on the application of R programming language towards marine geological research with a case study of Mariana Trench. Due to its logical and straightforward syntax, multi-functional standard libraries, R is especially attractive to the geologists for the scientific computing. Using R libraries, the unevenness of various factors affecting Mariana Trench geomorphic structure has been studied. These include sediment thickness, slope steepness, angle aspect, depth at the basement and magmatism of the nearby areas. Methods includes using following R libraries: {ggplot2} for regression analysis, Kernel density curves, compositional charts; {ggalt} for Dumbbell charts for data comparison by tectonic plates, ranking dot plots for correlation analysis; {vcd} for mosaic plots, silhouette plots for compositional similarities among the bathymetric profiles, association plots; {car} for ANOVA. Bathymetric GIS data processing was dome in QGIS and LaTeX. The innovativeness of the work consists in the multi-disciplinary approach combining GIS analysis and statistical methods of R which contributes towards studies of ocean trenches, aimed at geospatial analysis of big data.
Development, Validation for Determination of Verteporfin by HPLC-DAD and Application to Real samples
Ahmet Ozgun DOGRAMACIOGLU, Cemile ÖZCAN
Subject: Chemistry, Analytical Chemistry Keywords: HPLC-DAD-UV; Verteporfin; ICH Q2 R; Validation
The aim of this study, for determination of verteporfin in real samples (simulated body fluid, and simulated tears, 0.9% isotonic sodium chloride solution, Lactated Ringer IV solution for infusion, 5% dextrose IV solution for infusion, lemon juice and drinking water) was the method validation and to examined by HPLC-DAD-UV. Metod validation parameters such as specificity, linearity, precision, accuracy, robustness, limit of detection (LOD) and limit of quantitation (LOQ) for verteporfin were validated and developed according to the International Conference on Harmonization (ICH) Q2 R1 guidelines. The LOD and LOQ for verteporfin were found 0.06 µg/L and 0.2 µg/L, respectively. The recovery values of the optimization and validation for verteporfin were found in the range of 97.5-100.7%. The relative standard deviations (RSD) for vertepofin were <1%. The developed method was successfully applied to real samples with high accuracy and the recoveries (%) from real samples were 99.9, 100, 98.2, 99.2, 99.4, 98.8 and 99.4, respectively.
Crocodilepox Virus Protein 157 is an Independently Evolved Inhibitor of Protein Kinase R
M. Julhasur Rahman, Loubna Tazi, Sherry L. Haller, Stefan Rothenburg
Subject: Life Sciences, Virology Keywords: poxviruses; protein kinase R; evolution; translational regulation; eIF2
Crocodilepox virus (CRV) belongs to the Poxviridae family and mainly infects hatchling and juvenile Nile crocodiles. Most poxviruses encode inhibitors of the host antiviral protein kinase R (PKR), which is activated by viral double-stranded (ds) RNA formed during virus replication, resulting in the phosphorylation of eIF2 and subsequent shutdown of general mRNA translation. Because CRV lacks orthologs of known poxviral PKR inhibitors, we experimentally characterized one candidate (CRV157), which contains a predicted dsRNA-binding domain. Bioinformatic analyses indicated that CRV157 evolved independently from other poxvirus PKR inhibitors. CRV157 bound to dsRNA, co-localized with PKR in the cytosol, and inhibited PKR from various species. To analyze whether CRV157 could inhibit PKR in the context of a poxvirus infection, we constructed recombinant vaccinia virus strains that contain either CRV157 or a mutant CRV157 deficient in dsRNA binding in a strain that lacks PKR inhibitors. The presence of wild type CRV157 rescued vaccinia virus replication, while the CRV157 mutant did not. The ability of CRV157 to inhibit PKR correlated with virus replication and eIF2alpha phosphorylation. The independent evolution of CRV157 demonstrates that poxvirus PKR inhibitors evolved from a diverse set of ancestral genes in an example of convergent evolution.
Preprint CONCEPT PAPER | doi:10.20944/preprints202105.0664.v1
Understanding the Dynamics of Pathogenic Infection in a Population
Sangam Banerjee
Subject: Life Sciences, Biochemistry Keywords: Pathogen; Herd Immunity Threshold; R-naught; Infection dynamics
In this article we have presented a new perception of herd immunity threshold (HIT) which considers that only a "band of population" are susceptible to any pathogenic infection. This is termed as the "effective herd immunity threshold" (EHIT) and the progression of the disease (caused by this pathogenic infection) is mainly determined by this EHIT value. We have argued here that this EHIT value (considering the immunity band picture in the population) will be substantially lower than the estimated canonical HIT values obtained from various existing models. We propose that the actual prediction of the disease progression should now be calculated using the EHIT values.
Reconstructed f(R) Gravity and Its Csmological Consequences in Chameleon Scalar Field with A Scale Factor Describing The Pre-Bounce Ekpyrotic Contraction
Soumyodipta Karmakar, Kairat Myrzakulov, Surajit Chattopadhyay, Ratbay Myrzakulov
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: pre-bounce ekpyrotic contraction; f(R) gravity; reconstruction
Inspired by the work of S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 92, 024016 (2015) [1], the present study reports a reconstruction scheme for f (R) gravity with the scale factor a(t) µ (t * - t) c22describing the pre-bounce ekpyrotic contraction, where t is the big crunch time. The reconstructed f (R) is used to derive expressions for density and pressure contributions and the equation of state parameter resulting from this reconstruction is found to behave like "quintom". It has also been observed that the reconstructed f (R) has satisfied a sufficient condition for a realistic model. In the subsequent phase the reconstructed f (R) is applied to the model of chameleon scalar field and the scalar field f and the potential V(f) are tested for quasi-exponential ex pansion. It has been observed that although the reconstructed f (R) satisfies one of the sufficient conditions for realistic model, the quasi-exponential expansion is not available due to this reconstruction. Finally, the consequences pre-bounce ekpyrotic inflation i n f (R) gravity are compared to the background solution for f (R) matter bounce.
Analyzing the Current Status of India in Global Scenario with Reference to COVID-19 Pandemic
Dharmendra Kumar Yadav, Sharvari Shukla, S.K. Yadav
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: COVID-19; Disease Modelling; SIR Model; R software
The crux of the paper is to present a detailed analysis of COVID-19 data which is available on global basis. This analysis is performed using some specific package of R software. It provides various insights from the data and help to understand the current status of this pandemic in India so that effective measures can be formulated by policymakers. These insights include global summary of this disease, growth rate of this pandemic and performance of SIR model for the given global data. The analysis has been presented in different tables and graphs to understand the outputs of the problem in a more detailed point of view.
Quantemol Electron Collisions (QEC): An Enhanced Expert System for Performing Electron Molecule Collision Calculations Using the R-matrix Method
Bridgette Cooper, Maria Tudorovskaya, Sebastian Mohr, Aran O'Hare, Martin Hanicinec, Anna Dzarasova, Jimena Gorfinkiel, Jakub Benda, Zdenek Masin, Ahmed Al-Refaie, Peter Knowles, Jonathan Tennyson
Subject: Physical Sciences, Applied Physics Keywords: electron molecular scattering; R-matrix; expert system; Molpro
Collisions of low energy electrons with molecules are important for understanding many aspects of the environment and technologies. Understanding the processes that occur in these types of collisions can give insights into plasma etching processes, edge effects in fusion plasmas, radiation damage to biological tissues and more. A radical update of the previous expert system for computing observables relevant to these processes, Quantemol-N, is presented. The new Quantemol Electron Collision (QEC) expert system simplifyies the user experience, improving reliability and implements new features. The QEC GUI interfaces the Molpro quantum chemistry package for molecular target setups and to the sophisticated UKRmol+ codes to generate accurate and reliable cross-sections. These include elastic cross-sections, super elastic cross-sections between excited states, electron impact dissociation, scattering reaction rates, dissociative electron attachment, differential cross-sections, momentum transfer cross-sections, ionization cross sections and high energy electron scattering cross-sections. With this new interface we will be implementing dissociative recombination estimations, vibrational excitations for neutrals and ions, and effective core potentials in the near future.
Statistical Analysis of the Mariana Trench Geomorphology Using R Programming Language
Subject: Earth Sciences, Geology Keywords: R, statistical analysis, programming, Mariana trench, oceanography, geomorphology
This paper introduces an application of R programming language for geostatistical data processing with a case study of the Mariana Trench, Pacific Ocean. The formation of the Mariana Trench, the deepest among all hadal oceanic depth trenches, is caused by complex and diverse geomorphic factors affecting its development. Mariana Trench crosses four tectonic plates: Mariana, Caroline, Pacific and Philippine. The impact of the geographic location and geological factors on its geomorphology has been studied by methods of statistical analysis and data visualization using R libraries. The methodology includes following steps. Firstly, vector thematic data were processed in QGIS: tectonics, bathymetry, geomorphology and geology. Secondly, 25 cross-section profiles were drawn across the trench. The length of each profile is 1000-km. The attribute information has been derived from each profile and stored in a table containing coordinates, depths and thematic information. Finally, this table was processed by methods of the statistical analysis on R. The programming codes and graphical results are presented. The results include geospatial comparative analysis and estimated effects of the data distribution by tectonic plates: slope angle, igneous volcanic areas and depths. The innovativeness of this paper consists in a cross-disciplinary approach combining GIS, statistical analysis and R programming.
Using the R Language to Manage and Show Statistical Information in the Cloud
Pau Fonseca i Casas, Raül Tormos
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: R; Open data; API; Statistics; DSS; Web service.
We present a methodology to enable users to interact with statistical information owned by an institution and stored in a cloud infrastructure. Mainly based on R, this approach was developed following the open-data philosophy. Also, since we use R, the implementation is mainly based on open-source software. R gives several advantages from the point of view of data management and acquisition, as it becomes a common framework that can be used to structure the processes involved in any statistical operation. This simplifies the access to the data and enable to use all the power of R in the cloud information. This methodology was applied successfully to develop a tool to manage the data of the Centre d'Estudis d'Opinió, but it can be applied to other institutions to enable open access to its data. The infrastructure also was deployed to a cloud infrastructure, to assure the scalability and a 24/7 access.
Feasibility of GNSS-R Ice Sheet Altimetry in Greenland Using TDS-1
Antonio Rius, Estel Cardellach, Fran Fabra, Weiqiang Li, Serni Ribó, Manuel Hernández-Pajares
Subject: Earth Sciences, Environmental Sciences Keywords: GNSS-R; ice sheet; TDS-1; greenland; altimetry
Radar altimetry provides valuable measurements to characterize the state and the evolution of the Antartica and Greenland ice sheet cover. Global Navigation Satellite System Reflectometry (GNSS-R) has the potential capacity of complementing the dedicated radar altimeters incrementing the temporal and spatial resolution of the surface height measurements. In this work we perform an study of the Greenland ice sheet using data obtained by the GNSS-R instrument aboard the British TechDemoSat-1 (TDS-1) satellite mission, designed primarily to provide sea state information, like sea surface roughness or wind, but not altimetric products. The data has been analyzed with altimetric methodologies, already proved in aircraft based experiments, to extract signal delay observables to be used to infer the topography of the Greenland cover. The penetration depth of the GNSS signals into ice has also considered. The topographic signal obtained is consistent with those obtained with other passive or active microwave sensors. The main conclusion derived from this work is that GNSS-R also provides valuable measurements of the ice sheet cover and, as taken at a variety of geometries and at least two frequency bands, they prospect different depths into the ice. They have thus potential to complement our understanding of the ice firn and its evolution.
Glycan Epitopes on 201B7 Human Induced Pluripotent Stem Cells using R-10G and R-17F Marker Antibodies
Yuko Nagai, Hiromi Nakao, Aya Kojima, Yuka Komatsubara, Yuki Ohta, Nana Kawasaki, Nobuko Kawasaki, Hidenao Toyoda, Toshisuke Kawasaki
Subject: Life Sciences, Biochemistry Keywords: Human induced pluripotent stem cells (hiPSCs); monoclonal antibodies; R-10G; R-17F; keratan sulfate; podocalyxin; keratanase II; endo-β-galactosidase
We developed two human induced pluripotent stem cell (hiPSC)/human embryonic stem cell-specific glycan-recognizing mouse antibodies, R-10G and R-17F, using the Tic (JCRB1331) hiPSC line as an antigen. R-10G recognizes a low-sulfate keratan sulfate, and R-17F recognizes lacto-N-fucopentaose-1. To evaluate the general characteristics of stem cell glycans, we used the hiPSC line 201B7 (HPS0063), a prototype iPSC line. Using an R-10G affinity column, an R-10G-binding protein was isolated. The protein yielded a single but very broad band from 480 to 1,236 kDa by blue native gel electrophoresis. After trypsin digestion, the protein was identified as podocalyxin by liquid chromatography/mass spectrometry. According to Western blotting, the protein reacted with R-10G and R-17F. The R-10G positive band was resistant to digestion with glycan-degrading enzymes, including peptide N-glycanase, but the intensity of the band was decreased significantly by digestion with keratanase, keratanase II, and endo-β-galactosidase, suggesting the R-10G epitope to be a keratan sulfate. These results suggest that keratan sulfate-type epitopes are shared by hiPSCs. However, the keratan sulfate from 201B7 cells contained a polylactosamine disaccharide unit (Galβ1-4GlcNAc) at a significant frequency, whereas that from Tic cells consisted mostly of keratan sulfate disaccharide units (Galβ1-4GlcNAc(6S)). In addition, the abundance of the R-10G epitope was significantly lower in 201B7 cells than in Tic cells.
Tolman VI fluid sphere in f(R,T) gravity
Monimala Mondal, Farook Rahaman
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Tolman VI spacetime, Compact stars; $f(R,T)$ gravity
We analyze the behavior of relativistic spherical objects within the context of modified $f(R,T)$ gravity considering Tolman VI spacetime, where gravitational lagrangian is a function of Ricci scalar(R) and trace of energy momentum tensor(T) i.e,$ f(R,T)= R+ 2\beta T$, for some arbitrary constant $\beta$. For developing our model, we have chosen $\pounds_{m} = -p$, where $ \pounds_{m}$ represents matter lagrangian. For this investigation, we have chosen three compact stars namely PSR J1614-2230 [Mass=(1.97$\pm$ 0.4)M$_\odot$; Radius= 9.69$_{-0.02} ^{+0.02}$ Km] ,Vela X-1 [Mass=(1.77$\pm$ 0.08)M$_\odot$; Radius= 9.560$_{-0.08} ^{+0.08}$ Km] and 4U 1538-52 [Mass=(9.69)M$_\odot$; Radius= 1.97 Km]. In this theory the equation of pressure isotropy is identical to standard Einstein's theory. So all known metric potential solving Einstein's equations are also valid here. In this paper, we have investigated the effort of coupling parameter ($\beta$) on the local matter distribution. Sound of speed and adiabatic index are higher with grater values of $\beta$ while on contrary mass function and gravitational redshift are lower with higher values of $\beta$ . For supporting the theoretical results, graphical representation are also employed to analyze the physical viability of the compact stars.
RePlant Alfa: Integrating Google Earth Engine and R coding to Support the Identification of Priority Areas for Ecological Restoration
Narkis S Morales, Ignacio C Fernández, Leonardo P Durán, Waldo A Pérez
Subject: Earth Sciences, Environmental Sciences Keywords: Google Earth Engine; R coding; GIS, Restoration, Decision-Making
Land degradation and climate change are among the main threats to the sustainability of ecosystems worldwide. Therefore, the restoration of degraded landscapes is essential to maintain the functionality of ecosystems, especially those with greater social, economic and environmental vulnerability. Nevertheless, policy-makers are frequently challenged by deciding on where to prioritize restoration actions, which usually includes to deal with multiple and complex needs under an always short budget. If these decisions are not taken based on proper data and processes, restoration implementation can easily fail. To help decision-makers taking informed decisions on where to implement restoration activities, we have developed a semiautomatic geospatial platform to prioritize areas for restoration activities based on ecological, social and economic variables. This platform takes advantage of the potential to integrate R coding, Google Earth Engine cloud computing and GIS visualization services to generate an interactive geospatial decision-maker tool for restoration. Here, we present a prototype version called "RePlant alpha" which was tested with data from the Central Zone of Chile. This exercise proved that integrating R and GEE was feasible, and that the analysis, with at least six indicators and for a specific region was also feasible to implement even from a personal computer. Therefore, the use of a virtual machine in the cloud with a large number of indicators over large areas is both possible and practical.
Global Powerful Alliance in Strong Neutrosophic Graphs
Henry Garrett
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Modified Neutrosophic Number; Global Powerful Alliance; R-Regular-Strong
New setting is introduced to study the global powerful alliance. Global powerful alliance is about a set of vertices which are applied into the setting of neutrosophic graphs. Neighborhood has the key role to define this notion. Also, neighborhood is defined based on strong edges. Strong edge gets a framework as neighborhood and after that, too close vertices have key role to define global powerful alliance based on strong edges. The structure of set is studied and general results are obtained. Also, some classes of neutrosophic graphs excluding empty, path, star, and wheel and containing complete, cycle and r-regular-strong are investigated in the terms of set, minimal set, number, and neutrosophic number. Neutrosophic number is used in this way. It's applied to use the type of neutrosophic number in the way that, three values of a vertex are used and they've same share to construct this number. It's called "modified neutrosophic number". Summation of three values of vertex makes one number and applying it to a set makes neutrosophic number of set. This approach facilitates identifying minimal set and optimal set which forms minimal-global-powerful-alliance number and minimal-global-powerful-alliance-neutrosophic number. Two different types of sets namely global-powerful alliance and minimal-global-powerful alliance are defined. Global-powerful alliance identifies the sets in general vision but minimal-global-powerful alliance takes focus on the sets which deleting a vertex is impossible. Minimal-global-powerful-alliance number is about minimum cardinality amid the cardinalities of all minimal-global-powerful alliances in a given neutrosophic graph. New notions are applied in the settings both individual and family. Family of neutrosophic graphs has an open avenue, in the way that, the family only contains same classes of neutrosophic graphs. The results are about minimal-global-powerful alliance, minimal-global-powerful-alliance number and its corresponded sets, minimal-global-powerful-alliance-neutrosophic number and its corresponded sets, and characterizing all minimal-global-powerful alliances, minimal-t-powerful alliance, minimal-t-powerful-alliance number and its corresponded sets, minimal-t-powerful-alliance-neutrosophic number and its corresponded sets, and characterizing all minimal-t-powerful alliances. The connections amid t-powerful-alliances are obtained. The number of connected components has some relations with this new concept and it gets some results. Some classes of neutrosophic graphs behave differently when the parity of vertices are different and in this case, cycle, and complete illustrate these behaviors. Two applications concerning complete model as individual and family, under the titles of time table and scheduling conclude the results and they give more clarifications and closing remarks. In this study, there's an open way to extend these results into the family of these classes of neutrosophic graphs. The family of neutrosophic graphs aren't study deeply and with more results but it seems that analogous results are determined. Slight progress is obtained in the family of these models but there are open avenues to study family of other models as same models and different models. There's a question. How can be related to each other, two sets partitioning the vertex set of a graph? The ideas of neighborhood and neighbors based on strong edges illustrate open way to get results. 
A set is global powerful alliance when two sets partitioning vertex set have uniform structure. All members of set have more amount of neighbors in the set than out of set and reversely for non-members of set with less members in the way that the set is simultaneously t-offensive and(t-2)-defensive. A set is global if t=0. It leads us to the notion of global powerful alliance. Different edges make different neighborhoods but it's used one style edge titled strong edge. These notions are applied into neutrosophic graphs as individuals and family of them. Independent set as an alliance is a special set which has no neighbor inside and it implies some drawbacks for these notions. Finding special sets which are well-known, is an open way to purse this study. Special set which its members have only one neighbor inside, characterize the connected components where the cardinality of its complement is the number of connected components. Some problems are proposed to pursue this study. Basic familiarities with graph theory and neutrosophic graph theory are proposed for this article.
On the Performance of Garch Family Models in the Presence of Additive Outliers
Monday Osagie Adenomon, Ngozi G. Emenogu, Nweze Nwaze Obinna
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Additive Outliers, Models, Simulation, Time Series length, R Software
It is a common practice to detect outliers in a financial time series in order to avoid the adverse effect of additive outliers. This paper investigated the performance of GARCH family models (sGARCH; gjrGARCH; iGARCH; TGARCH and NGARCH) in the presence of different sizes of outliers (small, medium and large) for different time series lengths (250, 500, 750, 1000, 1250 and 1500) using root mean square error (RMSE) and mean absolute error (MAE) to adjudge the models. In a simulation iteration of 1000 times in R environment using rugarch package, results revealed that for small size of outliers, irrespective of the length of time series, iGARCH dominated, for medium size of outliers, it was sGARCH and gjrGARCH that dominated irrespective of time series length, while for large size of outliers, irrespective of time series length, gjrGARCH dominated. The study further leveled that in the presence of additive outliers on time series analysis, both RMSE and MAE increased as the time series length increased.
A Forward GPS Multipath Simulator Based on the Vegetation Radiative Transfer Equation Model
Xuerui Wu, Shuanggen Jin
Subject: Earth Sciences, Environmental Sciences Keywords: GNSS-R; multipath; radiative transfer equation model; vegetation; simulation
GNSS have been widely used in navigation, positioning and timing. Nowadays, the multipath errors previously considered detrimental may be re-utilized for the remote sensing of geophysical parameters (soil moisture, vegetation and snow depth), e.g. GPS- Multipath Reflectometry (GPS-MR). In this paper, a new element describing bistatic scattering properties of vegetation is incorporated into the traditional GPS-MR model. This new element is the first-order radiative transfer equation model. The new forward GPS multipath simulator is able to explicitly link the vegetation parameters with GPS multipath observables (signal-to-noise-ratio (SNR), code pseudorange and carrier phase observables). The trunk layer and its corresponding scattering mechanisms are ignored since GPS-MR is not suitable for high forest monitoring due to the coherence of direct and reflected signals. Based on this new model linking the GPS observables (SNR, phase and pseudorange) with detailed vegetation parameters, the developed simulator can present how the GPS signals (L1 and L2 carrier frequencies, C/A, P(Y) and L2C modulations) are transmitted (scattered and absorbed) through vegetation medium and received by GPS receivers. Simulation results show that wheat will decrease the amplitudes of GPS multipath observables, if we increase the vegetation moisture contents or the scatters sizes (stem or leaf), the amplitudes of GPS multipath observables (SNR, phase and code) decrease. Although the Specular-Ground component dominates the total specular scattering, vegetation covered ground soil moisture has almost no effects on the final multipath signatures. Our simulated results are consistent with published results for environmental parameter detections with GPS-MR.
Generalizations of the R-Matrix Method to the Treatment of the Interaction of Short Pulse Electromagnetic Radiation with Atoms
Barry Schneider, Kathryn R Hamilton, Klaus Bartschat
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: B-spline R-matrix; R-matrix with time dependence; intense short-pulse extreme ultra14 violet radiation; time-dependent Schrdinger equation; Arnoldi-Lanczos propagation
Since its initial development in the 1970's by Phil Burke and his collaborators, the R-matrix theory and associated computer codes have become the de facto approach for the calculation of accurate data for general electron-atom/ion/molecule collision and photoionization processes. The use of a non-orthonormal set of orbitals based on B-splines, now called the B-spline R-matrix (BSR) approach, was pioneered by Zatsarinny. It has considerably extended the flexibility of the approach and improved particularly the treatment of complex many-electron atomic and ionic targets, for which accurate data are needed in many modelling applications for processes involving low-temperature plasmas. Both the original R-matrix approach and the BSR method have been extended to the interaction of short, intense electromagnetic (EM) radiation with atoms and molecules. Here we provide an overview of the theoretical tools that were required to facilitate the extension of the theory to the time domain. As an example of a practical application, we show results for two-photon ionization of argon by intense short-pulse extreme ultraviolet radiation
The Emotion of Disgust among Medical and Psychology Students
Artemios Pehlivanidis, Niki Pehlivanidi, Katerina Papanikolaou, Vassileios Mantas, Elpida Bertou, Theodoros Chalimourdas, Vana Sypsa, Charalambos Papageorgiou
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: disgust; DS-R; medical students; psychology students; academic orientation; specialization
Disgust evolved as a way to protect one's self from illness. DS-R measures disgust propensity of three kinds of disgust (Core, Animal Reminder and Contamination). Although the DS-R scale was refined mainly with young and largely female student population its impact on educational orientation has not been assessed. In the present study we examined the DS-R scoring and the choice of postgraduate studies in medical (n= 94) and psychology (n= 97) students. They responded to an anonymous web-based survey and completed the DS-R and a questionnaire on their demographics and plans for postgraduate studies. Female students outnumbered males (3:1) and scored higher in Total DS-R score (median: 59 vs. 50, p<0.05). Psychology students scored higher in all three kinds of disgust (p<0.05), indicating a higher level of disease avoidance. Medical students willing to follow Internal Medicine scored higher in Core Disgust (p<0.05) while psychology students willing to study Experimental Psychology scored lower in Animal Reminder subscale (p<0.001). Also, the higher the psychology students scored in Core Disgust scale the higher was the probability to choose Experimental Psychology. In conclusion, disgust propensity as rated by DS-R differentiates medical from psychology students and is also related to orientation preferences in postgraduate studies.
Respiratory Rate Estimation Based on Spectrum Decomposition
SeungJae Lee, Soo-Yong Kim
Subject: Engineering, Biomedical & Chemical Engineering Keywords: respiratory sinus arrhythmia (RSA); R-peak amplitude (RPA); QRS amplitude
We propose an electrocardiogram (ECG) signal-based algorithm to estimate the respiratory rate is a significant informative indicator of physiological state of a patient. The consecutive ECG signals reflect the information about the respiration because inhalation and exhalation make transthoracic impedance vary. The proposed algorithm extracts the respiration-related signal by finding out the commonality between the frequency and amplitude features in the ECG pulse train. The respiration rate can be calculated from the principle components after the procedure of the singular spectrum analysis. We achieved 1.7569 breaths per min of root-mean-squared error and 1.7517 of standard deviation with a 32-seconds signal window of the Capnobase dataset, which gives notable improvement compared with the conventional Autoregressive model based estimation methods.
Trajectory of Massive Particles around a Static Black Hole in f(R) Gravity
Surajit Mandal
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Massive particle; Static black hole; f(R) gravity; Pseudo-Newtonian Potential
In this paper, we investigated the trajectory of the massive particle in the vicinity of a general spherical symmetric black hole. Also, in the framework of general sphericalily symmetric black hole, Pseudo-Newtonian potential (PNP) and effective po tentials has been investigated. As an example, static spherically symmetric black hole in f(R) gravity is considered and presented the brief discussion on the structure of spacetime and horizons. We calculated energy and angular momentum in the framework of gen eral relativity as well as in Pseudo-Newtonian theory. A graphical comparison of angular momentum in this both framework has been studied.
Early Detection of Wildfires with GOES-R Time Series and Deep GRU-Network
Yu Zhao, Yifang Ban
Subject: Earth Sciences, Geoinformatics Keywords: GOES-R; GRU; Deep Learning; Wildfires; Active Fires; Early Detection; Monitoring
Early detection of wildfires has been limited using the sun-synchronous orbit satellites due to their low temporal resolution and wildfires' fast spread in the early stage. NOAA's geostationary weather satellites GOES-R can acquire images every 15 minutes at 2km spatial resolution, and have been used for early fire detection. However, advanced processing algorithms are needed to provide timely and reliable detection of wildfires. In this research, a deep learning framework, based on Gated Recurrent Units (GRU), is proposed to detect wildfires at early stage using GOES-R dense time series data. GRU model maintains good performance on temporal modelling while keep a simple architecture, makes it suitable to efficiently process time-series data. 36 different wildfires in North and South America under the coverage of GOES-R satellites are selected to assess the effectiveness of the GRU method. The detection times based on GOES-R are compared with VIIRS active fire products at 375m resolution in NASA's Fire Information for Resource Management System (FIRMS). The results show that GRU-based GOES-R detections of the wildfires are earlier than that of the VIIRS active fire products in most of the study areas. Also, results from proposed method offer more precise location on the active fire at early stage than GOES-R Active Fire Product in mid-latitude and low-latitude regions.
Antiviral phytochemicals identified in Rhododendron arboreum petals exhibited strong binding to SARS-CoV-2 MPro and Human ACE2 receptor
Maneesh Lingwan, Shagun Shagun, Yogesh Pant, Bandna Kumari, Ranjan Nanda, Shyam K Masakapalli
Subject: Life Sciences, Biochemistry Keywords: R. arboreum, Antiviral phytochemicals, SARS-CoV-2, MPro, ACE2, COVID-19
Background: Severe Acute Respiratory Syndrome Corona Virus 2 (SARS-CoV-2) affects human respiratory function causing COVID-19 disease. Safe natural products with potential antiviral phytochemicals with benefits to control high-altitude sickness could be adopted as adjunct therapy for COVID-19. The red petals of Rhododendron arboreum, commonly available and consumed in the Himalayan region may have phytochemicals with potential antiviral properties against COVID-19 targets.Purpose: This study was aimed to profile the secondary metabolites of R. arboreum petals, to assess their absorption, distribution, metabolism and elimination (ADME) properties and evaluate their antiviral potential by docking against COVID-19 targets such as SARS-CoV-2 main protease (Mpro PDB ID: 6LU7) and Human Angiotensin Converting Enzyme 2 (ACE2) receptor (PDB ID: 1R4L) that mediates the viral replication and entry into the host respectively.Methods: The phytochemicals of R. arboreum petals were mainly profiled using Gas Chromatography-Mass Spectroscopy (GC-MS) and 1H-NMR. In addition, the phytochemicals reported from the literature were tabulated. The ADME properties of the phytochemicals were predicted using SwissADME tool. Molecular docking simulation of the phytochemicals against SARS-CoV-2 main protease (Mpro PDB ID: 6LU7) and Human Angiotensin converting enzyme 2 (ACE2) receptor (PDB ID: 1R4L) were carried out using PyRx.Results: R. arboreum petals were found to be rich in appreciable proportions of secondary metabolites such as Quinic acid, 3-Caffeoyl-quinic acid, 5-O-Coumaroyl-D-quinic acid, 5-O-Feruloylquinic acid, 2,4-Quinolinediamine, Coumaric acid, Caffeic acid, Epicatechin, Catechin, 3-Hydroxybenzoic acid, Shikimic acid, Protocatechuic acid, Epicatechin gallate, Quercetin, Quercetin-O-pentoside, Quercetin-O-rhamnoside, Kaempferol-O-pentoside and Kaempferol. Several of these phytochemicals were reported to exhibit inhibitory activities against a range of viruses. From the molecular docking studies, 5-O-Feruloylquinic acid, 3-Caffeoyl-quinic acid, 5-O-Coumaroyl-D-quinic acid, Epicatechin and Catechin showed strong binding affinity with SARS-CoV-2 Mpro and human ACE2 receptor.Conclusion: This report showed that R. arboreum petals are rich in several antiviral phytochemicals that also docked against SARS-CoV-2 MPro and Human ACE2 receptor. This is the first report highlighting R. arboreum petals as a reservoir of antiviral phytochemicals with potential for synergetic activities. The outcomes merit further in vitro, in vivo and clinical studies on R. arboreum phytochemicals to develop natural formulations against COVID-19 disease for therapeutic benefits.
Nonsingular phantom cosmology in Five Dimensional f(R,T) Gravity
Rakesh Ranjan Sahoo, Kamal Lochan Mahanta, Saibal Ray
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Phantom energy; LRS Bianchi type-I; $f(R,T)$ theory; $5d$ spacetimes
We obtain exact solutions to the field equations for 5 dimensional LRS Bianchi type-I spacetime in $f(R,T)$ theory of gravity where specifically the following three cases are considered: (i) $f(R,T)=\mu(R+T)$, (ii) $f(R,T)=R \mu + R T \mu^2$ and (iii) $f(R,T)=R+\mu R^2+\mu T$ where $R$ and $T$ respectively the Ricci scalar and trace of the energy-momentum tensor. It is found that the equation of state (EOS) parameter $w$ is governed by the parameter $\mu$ involved in the $f(R,T)$ expressions. We fine-tune the parameter $\mu$ to obtain effect of phantom energy in the model, however we also restrict this parameter to obtain a stable model of the universe. It is noted that the model isotropizes at finite cosmic time.
The Analysis of Fractional-Order Helmholtz Equations via a Novel Approach
Pongsakorn Sunthrayuth, Zeyad Al-Zhour, Yu-Ming Chu
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: New Iterative method; r-Laplace transform; Fractional-order Helmholtz equations; Caputo operator
This paper is related to the fractional view analysis of Helmholtz equations, using innovative analytical techniques. The fractional analysis of the proposed problems has been done in terms of Caputo-operator sense. In the current methodology, first, we applied the r-Laplace transform to the targeted problem. The iterative method is then implemented to obtain the series form solution. After using the inverse transform of the r-Laplace, the desire analytical solution is achieved. The suggested procedure is verified through specific examples of the fractional Helmholtz equations. The present method is found to be an effective technique having a closed resemblance with the actual solutions. The proposed technique has less computational cost and a higher rate of convergence. The suggested methods are therefore very useful to solve other systems of fractional order problems.
Structure and Function of Multimeric G-quadruplexes
Sofia Kolesnikova, Edward A. Curtis
Subject: Biology, Other Keywords: G-quadruplex; dimer; tetramer; multimer; oligomer; telomere; promoter; R-loop; DNA:RNA hybrid
G-quadruplexes are noncanonical nucleic acid structures formed from stacked guanine tetrads. They are frequently used as building blocks and functional elements in fields such as synthetic biology and also thought to play widespread biological roles. G-quadruplexes are often studied as monomers but can also form a variety of higher-order structures. This increases the structural and functional diversity of G-quadruplexes, and recent evidence suggests that it could also be biologically important. In this review we describe the types of multimeric topologies adopted by G-quadruplexes and highlight what is known about their sequence requirements. We also summarize the limited information available about potential biological roles of multimeric G-quadruplexes and suggest new approaches that could facilitate future studies of these structures.
Respiratory Variations in Electrocardiographic R-Wave Amplitude during Acute Hypovolemia Induced by Inferior Vena Cava Clamping in Patients Undergoing Liver Transplantation
Hee-Sun Park, Sun-Hoon Kim, Yong-Seok Park, Robert H Thiele, Won-Jung Shin, Gyu-Sam Hwang
Subject: Medicine & Pharmacology, Anesthesiology Keywords: Brody effect; electrocardiographic variation; R-wave amplitude; hemodynamic monitoring; pulse pressure variation
The aim of this study was to analyze whether the respiratory variation in ECG standard lead II R-wave amplitude (ΔRDII) could be used to assess intravascular volume status following inferior vena cava (IVC) clamping, which causes an acute decrease in cardiac output during liver transplantation (LT). We retrospectively compared ΔRDII and related variables before and after IVC clamping in 34 recipients. Receiver operating characteristic (ROC) curve and area under the curve (AUC) analyses were used to derive a cutoff value of ΔRDII for predicting pulse pressure variation (PPV). After IVC clamping, cardiac output significantly decreased while ΔRDII significantly increased (P = 0.002). The cutoff value of ΔRDII for predicting a PPV >13% was 16.9% (AUC: 0.685), with a sensitivity of 57.9% and a specificity of 77.6% (95% confidence interval 0.561–0.793, P = 0.015). Frequency analysis of the ECG also showed a significant increase in the respiratory frequency band (P = 0.016). Although significant changes in ΔRDII during vena cava clamping were found at norepinephrine doses < 0.1 μg/kg/min (P = 0.014), such changes were not significant at norepinephrine doses > 0.1 μg/kg/min (P = 0.093). ΔRDII could be a noninvasive dynamic parameter in LT recipients presenting with hemodynamic fluctuation. Based on our data, we recommend that ΔRDII be interpreted cautiously according to vasopressor administration status.
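A cutoff of this kind is commonly extracted from the ROC curve by maximizing Youden's J statistic (sensitivity + specificity - 1). The sketch below illustrates that generic procedure with scikit-learn on hypothetical values, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = PPV > 13%, 0 = otherwise; delta_rdii in percent
y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
delta_rdii = np.array([22.0, 9.5, 18.3, 25.1, 12.0, 15.2, 17.0, 8.1, 30.4, 14.7])

fpr, tpr, thresholds = roc_curve(y, delta_rdii)
auc = roc_auc_score(y, delta_rdii)

# Youden's J = sensitivity + specificity - 1; pick the threshold that maximizes it
j = tpr - fpr
k = np.argmax(j)
print(f"AUC = {auc:.3f}, cutoff = {thresholds[k]:.1f}%, "
      f"sensitivity = {tpr[k]:.2f}, specificity = {1 - fpr[k]:.2f}")
```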
Scorpion Body Size, Litter Characteristics, and Duration of the Life Cycle
Julian Monge-Najera
Subject: Biology, Ecology Keywords: scorpion ecology; multivariate statistics; body size; offspring characteristics, K and r strategists
There are no studies that quantitatively compare life histories among scorpion species. Statistical procedures applied to 94 scorpion species indicate that species with larger bodies do not necessarily have larger litters or longer life cycles, contrary to some theoretical predictions.
Mask-Aware Semi-Supervised Object Detection in Floor Plans
Tahira Shehzadi, Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, Muhammad Zeshan Afzal
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: object detection; semi-supervised learning; Mask R-CNN; floor-plan images; computer vision
Research on object detection using semi-supervised methods has been growing in the past few years. We examine the intersection of these two areas for floor-plan objects to promote the research objective of detecting more accurate objects with less labelled data. The floor-plan objects include different furniture items with multiple types of the same class, and this high inter-class similarity impacts the performance of prior methods. In this paper, we present a Mask R-CNN based semi-supervised approach that provides pixel-to-pixel alignment to generate individual annotation masks for each class to mine the inter-class similarity. The semi-supervised approach has a student-teacher network that pulls information from the teacher network and feeds it to the student network. The teacher network uses unlabeled data to form pseudo-boxes, and the student network uses both the unlabeled data with the pseudo-boxes and the labelled data as ground truth for training. It learns representations of furniture items by combining labelled and unlabeled data. On the Mask R-CNN detector with a ResNet-101 backbone network, the proposed approach achieves mAP of 98.8%, 99.7%, and 99.8% with only 1%, 5% and 10% labelled data, respectively. Our experiment affirms the efficiency of the proposed approach, as it outperforms the fully supervised counterpart using only 10% of the labels.
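A core ingredient of such student-teacher training is filtering the teacher's raw detections into pseudo-boxes before they are used as ground truth. A minimal, framework-agnostic sketch of that step is shown below; the detection format and the confidence threshold are illustrative assumptions, not the paper's settings:

```python
from typing import List, Dict

def filter_pseudo_boxes(detections: List[Dict], score_thresh: float = 0.9) -> List[Dict]:
    """Keep only confident teacher detections to use as pseudo ground truth.

    Each detection is assumed to be {"box": [x1, y1, x2, y2], "label": str, "score": float}.
    """
    return [d for d in detections if d["score"] >= score_thresh]

# Hypothetical teacher output on one unlabeled floor-plan image
teacher_out = [
    {"box": [10, 20, 60, 80], "label": "sofa",  "score": 0.97},
    {"box": [15, 25, 55, 85], "label": "chair", "score": 0.42},
]
pseudo_boxes = filter_pseudo_boxes(teacher_out)
print(pseudo_boxes)  # only the confident "sofa" detection survives as a pseudo-box
```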
Weighted Hermite-Hadamard inequalities for r−Times Differentiable Preinvex Functions for k−Fractional Integrals
Fiza Zafar, Sikander Mehmood, Asim Asiri
Subject: Mathematics & Computer Science, Analysis Keywords: Hermite-Hadamard-Fejér inequality; k−fractional integral; Preinvex function; r-times; differentiable function
In this paper, we establish some new bounds of the Fejér-type Hermite-Hadamard inequality for k-fractional integrals involving r-times differentiable preinvex functions. It is noteworthy that no weighted version of the left and right sides of the Hermite-Hadamard inequality for k-fractional integrals for generalized convex functions was previously available in the literature.
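For context, such results generalize the classical Hermite-Hadamard inequality, which states that for a convex function $f$ on $[a,b]$,

$$f\!\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2},$$

while the Fejér version weights the integral by a nonnegative function symmetric about the midpoint $(a+b)/2$.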
Rubella Virus Research in the Years 2000–2022: A Bibliometric Analysis
Hafiz Zeeshan, Ashina Sadiq, Hina Kirn
Subject: Medicine & Pharmacology, Pediatrics Keywords: bibliometric analysis; Pubmed; rubella virus; research output; research collaboration; Biblioshiny; Bibliometrix; R-package
Background: This work aimed to undertake a bibliometric analysis of rubella virus research. Medical studies conducted between 2000 and 2022 were examined to discover trends, dynamics, and research outputs in the field. Methods: A bibliometric study was performed using R software to determine the characteristics of research indexed worldwide and published on rubella in medical studies. The rubella virus was chosen as the subject in the PUBMED database, and 374 papers from the previous two decades were reviewed. Results: There was an increase in the number of publications after 2003. The United States was the most important country, contributing the most publications on rubella virus. Conclusion: Rubella research has increased in the medical field over the previous decade, with the United States leading publication output in this area.
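The most basic indicator in such an analysis is annual publication output. As a minimal sketch, assuming a hypothetical table of PubMed records with a "Year" column (the rows here are placeholders, not the 374 reviewed papers), the yearly counts and peak year could be computed as follows:

```python
import pandas as pd

# Hypothetical export of PubMed records with a publication year column named "Year"
records = pd.DataFrame({
    "Year": [2001, 2003, 2003, 2004, 2010, 2010, 2010, 2021],
    "Title": ["a", "b", "c", "d", "e", "f", "g", "h"],
})

# Annual output, the most basic bibliometric indicator
per_year = records.groupby("Year").size().rename("publications")
print(per_year)
print("Peak year:", per_year.idxmax())
```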
Assessment of the Psychological Impact and Perceived Stress Due to COVID-19 Lockdown in Young Adult Population of India.
Sreelakshmi M, Irfanul Haque, Sarita Jangra Bhyan, Ankit Gaur, Aashi Jain, Amrita Kumari, Besty Thomas, Nancy Goel, Rashi Chauhan
Subject: Keywords: COVID-19 lockdown; psychological impact; perceived stresses-R scale; PSS scale; young adults
Context: The COVID-19 pandemic and the lockdown implemented as a measure to contain the spread of the virus have taken a toll on the psychological well-being of people, especially young adults; the confinement, along with living amid a highly infectious pandemic, places individuals under great stress. Aims: The current study aims to assess the psychological impact and perceived stress due to the COVID-19 lockdown in the young adult population of India. Settings and Design: This is a cross-sectional, observational study. Methods and Material: The survey was conducted using Google Forms with a snowball sampling technique and obtained 267 responses in total. The Impact of Event Scale-Revised (IES-R) and Perceived Stress Scale (PSS) were used for the study. Statistical analysis used: Descriptive analysis was performed on the sociodemographic parameters, and means were compared by the Chi-square test in SPSS Statistics 21.0 (IBM SPSS Statistics, New York, United States). Results: The mean IES-R and PSS scores obtained for the population in this study were 25.64±18.95 and 18.27±6.10, respectively. Of the 267 respondents, 61.4% (n=164) were males. Most respondents, 62.54% (n=167), belonged to the age group of 18-23, with a mean age of 23.14±2.913. 92.5% of the respondents were unmarried and only 26.6% belonged to rural parts of India. Females and younger individuals were found to have higher IES-R and PSS scores. Conclusions: There is a significant psychological burden and stress on the young Indian population, with females and younger individuals, particularly students, being the most vulnerable.
COVID-19 Epidemic Compartments Model and Bangladesh
Md. Shahidul Islam, Jannatun Irana Ira, K. M. Ariful Kabir, Md. Kamrujjaman
Subject: Keywords: coronavirus; epidemic model; global pandemic; COVID-19; SEII$_s$R model; sensitivity analysis
In the early stages of the COVID-19 outbreak, it is very important to observe and estimate the pattern of the disease in order to reduce contagious infection. To study this effect, we developed a COVID-19 epidemic model that incorporates five different groups of individuals. We then analyze the model by evaluating the equilibrium points and their stability, as well as determining the basic reproduction number. Numerical simulations also show the dynamics of the different population groups over time. Our findings, based on the sensitivity analysis and the reproduction number, highlight the role of the outbreak of the virus and can be useful in avoiding a massive collapse in Bangladesh and the rest of the world. The study concludes that the outbreak can be brought under control, which would ensure social and economic stability.
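Compartmental dynamics of this kind are typically simulated by integrating a system of ordinary differential equations. The sketch below uses a simplified S-E-I-R structure with illustrative parameter values, not the paper's calibrated SEII$_s$R model:

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Right-hand side of a basic SEIR model (fractions of the population)."""
    S, E, I, R = y
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

t = np.linspace(0, 180, 181)                  # days
y0 = [0.999, 0.0, 0.001, 0.0]                 # initial S, E, I, R fractions
beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 10     # illustrative transmission, incubation, recovery rates
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print("Peak infectious fraction: %.3f on day %d" % (I.max(), t[I.argmax()]))
print("Basic reproduction number R0 = beta/gamma = %.1f" % (beta / gamma))
```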
Radar Data Analyses for a Single Rainfall Event and Their Application for Flow Simulation in an Urban Catchment Using the SWMM Model
Mariusz Barszcz
Subject: Earth Sciences, Geophysics Keywords: urban catchment; radar reflectivity; rainfall rate; Z-R relationship; SWMM model; flow simulation
In this study, regression analyses were used to find a relationship between the rain-gauge rainfall rate R and radar reflectivity Z for the urban catchment of the Służewiecki Stream in Warsaw, Poland. Rainfall totals for 18 events measured at two rainfall stations were used for these analyses. Various methods for determining the calculation values of radar reflectivity for specific rainfall cells with 1-km resolution within an event duration were applied, and the influence of each of these methods on the Z-R relationship was analyzed. A correction coefficient was established for data from the SRI (Surface Rainfall Intensity) product, in which the values of rainfall rate are calculated based on the parameters a and b determined by Marshall and Palmer. Relatively good agreement between measured and estimated rainfall totals for the analyzed events was obtained using the Z-R relationships as well as the correction coefficient determined in this study. Rainfall depths estimated from radar data for two selected events were used to simulate flow hydrographs in the catchment using the SWMM (Storm Water Management Model) hydrodynamic model. Different scenarios were applied to investigate the stream response to changes in rainfall depths, including data both for the 2 existing and for 64 virtual rain gauges assigned to appropriate rainfall cells in the catchment.
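The Z-R relationship referred to above has the power-law form Z = aR^b, so a and b can be fitted by ordinary least squares in log-log space. A minimal sketch on synthetic gauge-radar pairs (not the Warsaw data) follows:

```python
import numpy as np

# Hypothetical paired observations: rain-gauge rate R (mm/h) and radar reflectivity Z (mm^6/m^3)
R = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
Z = 180.0 * R ** 1.5 * np.random.default_rng(0).lognormal(0, 0.1, R.size)

# Z = a * R^b  =>  log10(Z) = log10(a) + b * log10(R): ordinary least squares in log space
b, log_a = np.polyfit(np.log10(R), np.log10(Z), 1)
a = 10 ** log_a
print(f"Fitted Z-R relationship: Z = {a:.0f} * R^{b:.2f}")
# For reference, the classical Marshall-Palmer relationship is Z = 200 * R^1.6
```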
Can Green Credit Policy Promote Green Innovation Efficiency of Heavily Polluted Industries? Empirical Evidence from China's Industry
Su Li, Feng Tan Da, Wei Wei Cui, Jun Zhao
Subject: Social Sciences, Finance Keywords: green credit policy; heavily polluted industries; green innovation efficiency; financing cost; R&D investment
Green credit policy is an important tool for guiding China's sustainable economic development; how to make it effectively perform its capital-allocation function and improve the efficiency of industrial green innovation is an important issue facing the construction of an ecological civilization. This paper uses China's Green Credit Guidelines, introduced in 2012, as a quasi-natural experiment. Based on relevant panel data for industries from 2007 to 2018, it uses the Super-SBM model including non-expected (undesirable) output to measure the green innovation efficiency of 35 industries in China and constructs a PSM-DID model to explore how the green credit policy affects the green innovation efficiency of heavily polluted industries. The results show that the green credit policy significantly contributes to the green innovation efficiency of heavily polluted industries, with a lag. Further study finds that the green credit policy pushes heavily polluted industries to improve green innovation efficiency by increasing financing costs and R&D investment; meanwhile, the heterogeneity test shows that the higher the state-owned share of an industry, the greater the promoting effect of the green credit policy on the green innovation efficiency of heavily polluted industries. Finally, in order to accelerate the implementation of the green credit policy and promote the green innovation efficiency of heavily polluted industries, relevant countermeasures are proposed from three aspects: banks, enterprises and government.
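The difference-in-differences component of such a design can be written as an interaction regression in which the coefficient on treated × post captures the policy effect. The sketch below runs on a synthetic industry-year panel; the variable names, group sizes, and effect size are illustrative assumptions, not the paper's data or specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for i in range(35):                              # 35 industries
    treated = int(i < 18)                        # hypothetical heavily polluted (treated) group
    for year in range(2007, 2019):
        post = int(year >= 2012)                 # Green Credit Guidelines introduced in 2012
        gie = 0.5 + 0.1 * treated + 0.05 * post + 0.08 * treated * post + rng.normal(0, 0.05)
        rows.append({"industry": i, "year": year, "treated": treated, "post": post, "gie": gie})
df = pd.DataFrame(rows)

# DID estimate: the coefficient on treated:post is the policy effect on green innovation efficiency
model = smf.ols("gie ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["industry"]})
print(model.params["treated:post"])
```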
Mapping the Spread of COVID-19 Outbreak in India
Vanshika Bidhan, Bhavini Malhotra, Mansi Pandit, Narayanan Latha
Subject: Keywords: COVID-19; Data mining; Infection in India; R package; State- wise analysis; Statistical analysis
Background & Objectives: The global pandemic caused by the novel coronavirus SARS-CoV-2 has claimed several lives worldwide. With the virus spreading rapidly, the world has witnessed an increasing number of confirmed cases and a rising mortality rate; India is not far behind, with approximately 37,000 affected individuals as of May 2, 2020. The ongoing pandemic has raised several questions which need to be answered by analysis of the transmission of the infection. Data were collected on a daily basis from the WHO and other sites and represented graphically using statistical packages, R and other online software. The present study provides a holistic overview of the spread of COVID-19 infection in India. Methods: Real-time data queries were performed based on daily observations using publicly available data from reference websites for COVID-19 and other government official reports for the period 15th February, 2020 to 28th April, 2020. Statistical analysis was performed to draw important inferences regarding the COVID-19 trend in India. Results: A decrease in the growth rate of COVID-19 cases in India post lockdown and an improvement in the recovery rate during the month of April were identified. The case fatality rate was estimated to be 3.22% of the total reported cases. State-wise analysis revealed a deteriorating situation in the states of Maharashtra and Gujarat, among others, as cases continued to increase rapidly there. A positive linear correlation between the number of deaths and total cases and an exponential relation between population density and the number of cases reported per square km were established. Interpretation & Conclusions: Despite early preventive measures taken up by the Government of India, the increasing number of cases in India is a concern. This study compiles state-wise and district-wise data to report the daily confirmed cases, case fatalities and strategies adopted, in the form of case studies. Understanding the transmission spread of SARS-CoV-2 in a diverse and populated country like India will be crucial in assessing the effectiveness of control policies towards the spread of COVID-19 infection.
NWCSAF High Resolution Winds (NWC-GEO/HRW) Stand-Alone Software for Calculation of Atmospheric Motion Vectors and Trajectories
Javier García-Pereda, José Miguel Fernández-Serdán, Óscar Alonso, Adrián Sanz, Rocío Guerra, Cristina Ariza, Inés Santos, Laura Fernández
Subject: Earth Sciences, Atmospheric Science Keywords: Atmospheric Motion Vectors (AMVs); Trajectories; EUMETSAT; NWCSAF; AEMET; MSG; Himawari; GOES-N; GOES-R.
The "NWCSAF High Resolution Winds (NWC/GEO-HRW)" software is developed by the EUMETSAT's "Satellite Application Facility on support to Nowcasting and very short range forecasting (NWCSAF)", inside its stand-alone software package for calculation of meteorological products with geostationary satellite data (NWC/GEO). The whole NWC/GEO software package can be obtained after registration at the NWCSAF Helpdesk, www.nwcsaf.org. It is easy to get, install and use. The code is easy to read and fully documented. And in the NWCSAF Helpdesk, users find support and help for its use. "NWCSAF High Resolution Winds" provides a detailed calculation of Atmospheric Motion Vectors (AMVs) and Trajectories, locally and in near real time, using as input NWP model data and geostationary satellite image data. The latest version of the software, v2018, is able to process MSG, Himawari-8/9, GOES-N and GOES-R satellite series images, so that AMVs and Trajectories can be calculated all throughout the planet Earth with the same algorithm and quality. In the "2014 and 2018 AMV Intercomparison Studies", "NWCSAF High Resolution Winds" has shown to be one of the two best AMV algorithms for both MSG and Himawari-8/9 satellites. And the "Coordination Group for Meteorological Satellites (CGMS)" has recognized in its "2012 Meeting Report": 1. "NWCSAF High Resolution Winds" fulfills the requirements to be a portable stand-alone AMV calculation software due to its easy installation and usability. 2. It has been successfully adapted by some CGMS members and serves as an important tool for development. It is modular, well documented, and well suited as stand-alone AMV software. 3. Although alternatives exist as portable stand-alone AMV calculation software, they are not as advanced in terms of documentation and do not have an existing Helpdesk. Considering this, a full description and validation of the "NWCSAF/High Resolution Winds" is shown here for the first time in a peer-reviewed paper. The procedure to obtain the software for operational meteorology and research is also explained.
Benchmarking Current Capabilities for the Generation of Excitation and Photoionisation Atomic Data
Catherine Ramsbottom, Connor Ballance, Ryan Smyth, Andrew Conroy, Luis Fernández-Menchero, Michael Turkington, Francis Keenan
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: R-matrix; atomic data; atomic processes; collisions; Fe-peak elements; electron-impact excitation; photoionization
The spectra currently emerging from modern ground- and space-based astronomical instruments are of exceptionally high quality and resolution. To meaningfully analyse these spectra, researchers utilise complex modelling codes to replicate the observations. The main inputs to these codes are atomic data such as excitation and photoionisation cross sections, as well as radiative transition probabilities, energy levels and line strengths. In this publication, the current capabilities of the numerical methods and computer packages used in the generation of these data are discussed. Particular emphasis is given to Fe-peak species and the heavy systems of tungsten and molybdenum. Some of the results presented to highlight certain issues and/or advances have already been published in the literature, while other sections present, for the first time, newly evaluated atomic data.
Corporate Social Responsibility Information in Annual Reports in the EU – A Czech Case Study
Radka MacGregor Pelikanova
Subject: Social Sciences, Law Keywords: corporate social responsibility; environment; employment; R&D; annual reports; financial and non-financial statements; competition.
The commitment of the EU to Corporate Social Responsibility (CSR) is reflected in EU law on annual reporting by businesses. Since EU member states further develop this framework through their own domestic laws, annual reporting of CSR information is not unified and only partially mandatory in the EU. Do all European businesses report CSR information, and what public declaration to society do they provide with it? The dual purpose of this paper is to identify the parameters of this annual reporting duty and to study the CSR information provided by the ten largest Czech companies in their annual statements for 2013-2017. Based on legislative research and teleological interpretation, the current EU legislative framework with Czech particularities is presented and, via a case study exploring 50 annual reports, the data on the type, extent and depth of CSR are dynamically and comparatively assessed. It appears that, at a minimum, large Czech businesses satisfy their legal duty and e-report on CSR to a similar extent, but with dramatically different quality. Employee matters and adherence to international standards are used as a public declaration to society more than data on environmental protection, while social matters and R&D are played down.
Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays
Cheng Hu, Jingyang Wang, Weiming Tian, Tao Zeng, Rui Wang
Subject: Engineering, Electrical & Electronic Engineering Keywords: MIMO radar; MIMO imaging; Near-field imaging; Height difference between T/R arrays; Grating lobes
MIMO (multiple-input multiple-output) radar provides much more flexibility than traditional radar, owing to its ability to realize far more observation channels than the actual number of T/R (transmit and receive) elements. When designing the array of a MIMO imaging radar, the commonly used virtual array theory generally assumes that all elements are placed on the same line. However, due to the physical size of the antennas and the coupling effect between T/R elements, a certain height difference between T/R arrays is essential, resulting in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies a far-field approximation, leading to inevitably high grating lobes in the imaging result for near-field edge points of the scene observed by a common MIMO array. To tackle these problems, this paper derives the relationship between the target's PSF (point spread function) and the pattern of the T/R arrays, from which the design criterion of a near-field imaging MIMO array is presented. Firstly, the proper height between the T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by simulations and an experiment.
Satellite to Ground Station Attenuation Prediction for 2.4-72 GHz Using LSTM, an Artificial Recurrent Neural Network Technology
Menachem Domb, Guy Leshem
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Satellite Communication; Signal Propagation; Rain Attenuation; Urban area ground station; SNR, ITU-R; LSTM, Neural network
Free-space communication is a leading component of global communications. Its advantages relate to a broader signal spread, no wiring, and ease of engagement. Satellite communication services have recently become attractive to mega-companies that foresee an excellent opportunity to connect disconnected remote regions, serve emerging machine-to-machine communication, Internet-of-Things connectivity, and more. Satellite communication links suffer from arbitrary weather phenomena such as clouds, rain, snow, fog, and dust. In addition, when signals approach the ground station, they have to overcome buildings blocking direct access to the ground station. Therefore, satellites commonly use redundant signal strength to ensure constant and continuous signal transmission, resulting in excess energy consumption, which challenges the limited power capacity generated by solar energy or the fixed amount of fuel. This research proposes an LSTM-based approach, an artificial recurrent neural network technology that provides a time-dependent prediction of the expected attenuation level due to rain and fog and of the signal strength remaining after crossing physical obstacles surrounding the ground station. The satellite transmitter is calibrated accordingly: the satellite's outgoing signal strength is based on the predicted signal strength, to ensure it remains strong enough for the ground station to process. The instant calibration eliminates the excess use of energy, resulting in energy savings.
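A minimal sketch of such a next-step attenuation predictor on synthetic data is shown below; the window length, layer sizes, and training settings are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras

# Hypothetical training set: sliding windows of past attenuation measurements (dB).
# X has shape (samples, timesteps, features); y is the attenuation at the next time step.
rng = np.random.default_rng(0)
X = rng.normal(3.0, 1.0, size=(500, 24, 1))      # 24 past observations, 1 feature
y = X[:, -1, 0] + rng.normal(0, 0.1, size=500)   # toy target: next-step attenuation

model = keras.Sequential([
    keras.layers.Input(shape=(24, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),                        # predicted attenuation level (dB)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# The predicted attenuation could then inform transmitter power calibration.
print(model.predict(X[:1], verbose=0))
```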
AI Technology and Online Purchase Intention:Multi-Group Analysis Based On Perceived Value
Jiwang Yin, Xiaodong Qiu
Subject: Social Sciences, Accounting Keywords: Artificial Intelligence Marketing; Online shopping; Perceived Utility Value; Perceived Hedonic Value; Purchase Intention; S-O-R
(1) Background: AI technology has been deeply applied to online shopping platforms to provide more accurate and personalized services for consumers. It is of great significance to study consumers' experience of the different functions of AI in order to improve the current application of AI technology. (2) Method: Based on the "S-O-R" model, this study divided the AI technology experienced by consumers of online shopping platforms into accuracy, insight and interaction experience. Perceived value, considered from the perspectives of perceived utility value and perceived hedonic value, is taken as the mediating variable. This article uses empirical research methods to analyze the effect of the three dimensions of the online-shopping AI experience and to investigate the internal mechanism by which they influence consumers' purchase intention. (3) Results: ① The accuracy, insight and interaction experiences of AI marketing technology each have a significant positive impact on consumers' perceived utility value and perceived hedonic value; ② both the perceived utility value and the perceived hedonic value obtained from the AI technology experience can promote the formation of consumers' purchase intention; ③ perceived hedonic value was more effective than perceived utility value in promoting consumers' purchase intention; ④ the results of the multi-group analysis show that some younger and less experienced consumer groups prefer the hedonic experiences that AI marketing brings, such as stimulation of shopping desire and relaxation and pleasure during the shopping process, whereas utilitarian value does not promote purchase intention for this kind of consumer. (4) Conclusions: Perceived utility value and perceived hedonic value mediate between AI technology and consumers' purchase intention.
The Impact of Patent Applications on Technological Innovation in European Countries
ANGELO LEOGRANDE, Alberto Costantiello, Lucio Laureti
Subject: Social Sciences, Economics Keywords: Innovation, and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation
We investigate the innovational determinants of "Patent Applications" in Europe. We use data from the European Innovation Scoreboard-EIS of the European Commission for 36 countries in the period 2010-2019. We use Panel Data with Fixed Effects, Panel Data with Random Effects, Pooled OLS, WLS and Dynamic Panel. We find that the variables with the strongest positive association with "Patent Applications" are "Human Resources" and "Intellectual Assets", while the variables showing the strongest negative relation with "Patent Applications" are "Employment Share in Manufacturing" and "Total Entrepreneurial Activity". A cluster analysis with the k-Means algorithm optimized with the Silhouette Coefficient was performed; the results show the presence of two clusters. A network analysis with the Manhattan distance was also performed, and we find three different complex network structures. Finally, a comparison is made among eight machine learning algorithms for the prediction of the future value of "Patent Applications". We find that PNN-Probabilistic Neural Network is the best performing algorithm; using PNN, the results show that the mean future value of "Patent Applications" in the estimated countries is expected to decrease by 0.1%.
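The clustering step that recurs throughout this series of studies, k-Means with the number of clusters chosen by the Silhouette Coefficient, can be sketched as follows; the indicator matrix below is synthetic, not the EIS data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical country-level innovation indicators (rows = countries, columns = indicators)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.3, 0.05, (18, 4)), rng.normal(0.7, 0.05, (18, 4))])

# Choose the number of clusters that maximizes the silhouette coefficient
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)
print(scores, "-> optimal k =", best_k)
```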
The Opportunity Driven Entrepreneurship in the Context of Innovation Systems in Europe in the Period 2010-2019
Subject: Social Sciences, Economics Keywords: Innovation; and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation
In this article we estimate the value of "Opportunity Driven Entrepreneurship" in Europe. We use data from the European Innovation Scoreboard-EIS of the European Commission for 36 countries in the period 2010-2019. We use Panel Data with Fixed Effects, Panel Data with Random Effects, WLS, Pooled OLS, and Dynamic Panel. Our results show that "Opportunity Driven Entrepreneurship" is positively associated, among others, with "Innovation Friendly Environment" and "Turnover Share Large Enterprises", while it is negatively associated, among others, with "Sales Impacts" and "R&D Expenditure Business Sectors".
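The fixed-effects panel estimation used throughout these articles can be reproduced, in its simplest least-squares-with-dummies form, as in the sketch below; the country-year panel and variable names are synthetic placeholders, not the EIS indicators:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel (36 countries, 2010-2019) with a dependent indicator y
rng = np.random.default_rng(7)
df = pd.DataFrame([
    {"country": f"c{i}", "year": year, "x1": rng.normal(), "x2": rng.normal()}
    for i in range(36) for year in range(2010, 2020)
])
df["y"] = 0.4 * df["x1"] - 0.2 * df["x2"] + rng.normal(0, 0.1, len(df))

# Fixed-effects (within) estimation via least squares with country dummies (LSDV)
fe = smf.ols("y ~ x1 + x2 + C(country)", data=df).fit()
print(fe.params[["x1", "x2"]])
```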
The Impact of New Doctorate Graduates on Innovation Systems in Europe
Subject: Social Sciences, Economics Keywords: Innovation, and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation.
In this article we investigate the determinants of "New Doctorate Graduates" in Europe. We use data from the EIS-European Innovation Scoreboard of the European Commission for 36 countries in the period 2010-2019 with Pooled OLS, Dynamic Panel, WLS, Panel Data with Fixed Effects and Panel Data with Random Effects. We found that "New Doctorate Graduates" is positively associated, among others, with "Human Resources" and "Government Procurement of Advanced Technology Products" and negatively, associated among others, with "Total Entrepreneurial Activity" and "Innovation Index". We apply a clusterization with k-Means algorithm either with the Silhouette Coefficient either with the Elbow Method and we found that in both cases the optimal number of clusters is three. Furthermore, we use the Network Analysis with the Distance of Manhattan, and we find the presence of seven network structures. Finally, we propose a confrontation among ten machine learning algorithms to predict the value of "New Doctorate Graduates" either with Original Data-OD either with Augmented Data-AD. Results show that SGD-Stochastic Gradient Descendent is the best predictor for OD while Linear Regression performs better for AD.
K-Means Clusterization and Machine Learning Prediction of European Most Cited Scientific Publications
Subject: Social Sciences, Economics Keywords: Innovation and Invention; Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation
In this article we investigate the determinants of the European "Most Cited Publications". We use data from the European Innovation Scoreboard-EIS of the European Commission for the period 2010-2019. Data are analyzed with Panel Data with Fixed Effects, Panel Data with Random Effects, WLS, and Pooled OLS. Results show that the level of "Most Cited Publications" is positively associated, among others, with "Innovation Index" and "Enterprise Birth" and negatively associated, among others, with "Government Procurement of Advanced Technology Products" and "Human Resources". Furthermore, we perform a cluster analysis with the k-Means algorithm, both with the Silhouette Coefficient and with the Elbow Method. We find that the Elbow Method gives better results than the Silhouette Coefficient, with a number of clusters equal to 3. In addition, we perform a network analysis with the Manhattan distance, and we find the presence of 4 complex and 2 simplified network structures. Finally, we present a comparison among 10 machine learning algorithms to predict the level of "Most Cited Publications", both with Original Data-OD and with Augmented Data-AD. Results show that the best machine learning algorithm to predict the level of "Most Cited Publications" with Original Data-OD is SGD, while Linear Regression is the best machine learning algorithm for the prediction of "Most Cited Publications" with Augmented Data-AD.
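The Elbow Method mentioned here chooses the number of clusters by inspecting how the within-cluster sum of squares (inertia) decreases as k grows; a minimal sketch on synthetic indicator data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix of countries described by innovation indicators
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.05, (12, 4)) for m in (0.2, 0.5, 0.8)])

# Elbow method: inspect the within-cluster sum of squares (inertia) as k grows
for k in range(1, 8):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(inertia, 3))
# The "elbow" is the k after which inertia stops dropping sharply (here, k = 3).
```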
The Export of Medium and High-Tech Products Manufactured in Europe
Subject: Social Sciences, Economics Keywords: Innovation; and Invention; Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation
In this article we analyze the determinants and the trend of European countries' exports of medium- and high-technology products. The data were analyzed using various econometric models, namely WLS, Pooled OLS, Dynamic Panel, Panel Data with Fixed Effects, and Panel Data with Random Effects. The results show that exports of medium- and high-tech products are positively associated, among other variables, with the value of "Average Annual GDP Growth", "Total Entrepreneurial Activity" and "Sales Impacts", and negatively associated with, among other variables, "Human Resources", "Government Procurement of Advanced Technology Products" and "Buyer Sophistication". A cluster analysis was performed with the k-Means algorithm optimized with the Silhouette coefficient. The result showed the presence of only two clusters. Since this result was considered poorly representative of the industrial complexity of the European Union countries, a further analysis was carried out with the Elbow method. The result showed the presence of 6 clusters, with the dominance of Germany and the economies connected to the German economy. In addition, a network analysis was carried out using the Manhattan distance. Four complex network structures and two simplified network structures were detected. A comparison was then made between 10 machine learning algorithms for predicting the value of exports of medium- and high-tech products. The result shows that the best performing algorithm is SGD. An analysis with Augmented Data-AD was implemented with a comparison between 10 machine learning algorithms for prediction, and the result shows that the Linear Regression algorithm is the best predictor. The prediction with Augmented Data-AD reduces the MAE by about 0.0022131 compared to the prediction with Original Data-OD.
The Determinants of Lifelong Learning in Europe
Alberto Costantiello, Lucio Laureti, Angelo Leogrande
The article addresses the question of lifelong learning in Europe using data from the European Innovation Scoreboard-EIS in the period 2010-2019 for 36 countries. The econometric analysis is carried out using WLS, Dynamic Panel, Pooled OLS, Panel Data with Fixed Effects and Random Effects. The results show that lifelong learning is, among other variables, positively associated with "Human Resources" and "Government procurement of advanced technology products" and negatively associated, among others, with "Average annual GDP growth" and "Innovation Index". A clusterization is realized using the k-Means algorithm with a comparison between the Elbow Method and the Silhouette Coefficient. Subsequently, a network analysis was applied with the Manhattan distance. The results show the presence of 4 complex and 2 simplified network structures. Finally, a comparison was made among eight machine learning algorithms for the prediction of the value of lifelong learning. The results show that linear regression is the best predictor algorithm and that the level of lifelong learning is expected to grow on average by 1.12%.
Marketing and Organizational Innovations in Europe
In this article we investigate the determinants of marketing or organizational innovators in Europe for 36 countries in the period 2010-2019. We use data from the European Innovation Scoreboard-EIS of the European Commission. We perform different econometric models, i.e. Dynamic Panel, Pooled OLS, Panel Data with Fixed Effects, Panel Data with Random Effects, and WLS. Results show that the level of marketing or organizational innovators is positively associated, among other variables, with "Innovation Index", "Innovators" and "Knowledge Intensive Service Exports", while it is negatively associated with "Sales Impacts", "Foreign Controlled Enterprises Share of Value Added" and "Government procurement of advanced technology products".
The Exports of Knowledge Intensive Services: A Complex Metric Approach
In the following article, the value of "Knowledge Intensive Services Exports" in 36 European countries is estimated. The data were analyzed through a set of econometric models, namely: Pooled OLS, Dynamic Panel, Panel Data with Fixed Effects, Panel Data with Random Effects, and WLS. The results show that "Knowledge Intensive Services Exports" is negatively associated, among others, with "Buyer Sophistication" and "Government Procurement of Advanced Technology Products", and positively associated with the following variables, i.e. "Innovation Index", "Sales Impacts" and "Total Entrepreneurial Activity". Then a clusterization with the k-Means algorithm was carried out using the Elbow method. The results show the presence of 3 clusters. A network analysis was later built, and 4 complex network structures and three simplified network structures were detected. To predict the future trend of the variable, a comparison was made among eight different machine learning algorithms. The results show that prediction with Augmented Data-AD is more efficient than prediction with Original Data-OD, with a reduction of the mean of statistical errors equal to 55.94%.
The Determinants of Internet User Skills in Europe
Angelo Leogrande, Nicola Magaletti, Gabriele Cosoli, Vito Giardinelli, Alessandro Massaro
The following article identifies the determinants of "Internet User Skills" among European countries based on the DESI Index database. The data were analyzed using the following econometric models, namely: Panel Data with Fixed Effects, Panel Data with Random Effects, Pooled OLS, WLS, and WLS corrected for heteroskedasticity. The Elbow method and the Silhouette coefficient method were compared for the optimization of the number of clusters obtained by the k-Means algorithm. The result shows the presence of 5 clusters. A network analysis was carried out using the Euclidean distance, with the result of identifying two network structures among some of the analyzed countries. Subsequently, a comparison was made among six different machine learning algorithms for the prediction of the future value of the variable of interest. The result shows that the best predictor algorithm is Gradient Boosted Tree Regression, with an expected increase in the predicted variable of 1.75%. A further comparison was then made among 6 algorithms with the augmented data. The result shows that the best predictor is Simple Regression Tree, with the variable of interest predicted to decrease by 6.099%. Statistical errors improve on average by 32.43% in the transition from the original data to the augmented data.
The Innovation Index in Europe
Angelo Leogrande, Lucio Laureti, Alberto Costantiello
Subject: Social Sciences, Economics Keywords: innovation and invention: processes and incentives; management of technological innovation and R&D; diffusion processes; open innovation
The following article analyzes the determinants of the innovation index in Europe. The data refer to the European Innovation Scoreboard-EIS of the European Commission for the period between 2010 and 2019 for 36 countries. The data are analyzed using the following econometric techniques: Panel Data with Random Effects, Panel Data with Fixed Effects, Dynamic Panel Data, Pooled OLS, and WLS. The results show that the Innovation Index is negatively connected to some variables, among which the most significant are "GDP per capita", "R&D expenditure public sector", "Venture capital" and "Tertiary education", and positively connected to some variables, among which the most relevant are "Government procurement of advanced technology products", "Average annual population growth", "Finance and support", "Human resources", "Marketing or organisational innovators" and "Linkages". A clustering was then carried out using the unsupervised k-Means algorithm optimized with the Silhouette coefficient, which shows the presence of 2 clusters by value of the Innovation Index. Eight machine learning algorithms have been used for prediction with real data, and the Tree Ensemble Regression algorithm was chosen as the best performer. A further prediction was made with the augmented data. The result shows that the best performing algorithm is Linear Regression, with an innovation index value predicted to grow by approximately 3.38%.
ICT Specialists in Europe
The following article estimates the value of ICT Specialists in Europe between 2016 and 2021 for 28 European countries. The data were analyzed using the following econometric techniques, namely: Panel Data with Fixed Effects, Panel Data with Random Effects, WLS and Pooled OLS. The results show that the value of ICT Specialists in Europe is positively associated with the following variables: "DESI Index", "SMEs with at least a basic level of digital intensity" and "At least 100 Mbps fixed BB take-up", and negatively associated with the following variables: "4G Coverage", "5G Coverage", "5G Readiness", "Fixed broadband coverage", "e-Government", "At least Basic Digital Skills", "Fixed broadband take-up", "Broadband price index" and "Integration of Digital Technology". Subsequently, two European clusters were found by value of "ICT Specialists" using the k-Means clustering algorithm optimized with the Silhouette coefficient. Finally, eight different machine learning algorithms were compared to predict the value of "ICT Specialists" in Europe. The results show that the best prediction algorithm is ANN-Artificial Neural Network, with an estimated growth value of 12.53%. Finally, "augmented data" were obtained through the use of the ANN-Artificial Neural Network, with which a new prediction was made that estimated growth of the variable equal to 3.18%.
E-Government in Europe. A Machine Learning Approach
Angelo Leogrande, Nicola Magaletti, Gabriele Cosoli, Alessandro Massaro
The following article analyzes the determinants of e-government in 28 European countries between 2016 and 2021. The DESI-Digital Economy and Society Index database was used. The econometric analysis involved the use of Panel Data with Fixed Effects and Panel Data with Random Effects methods. The results show that the value of "e-Government" is negatively associated with "Fast BB (NGA) coverage", "Female ICT specialists", "e-Invoices" and "Big data", and positively associated with "Open Data", "e-Government Users", "ICT for environmental sustainability", "Artificial intelligence", "Cloud", "SMEs with at least a basic level of digital intensity", "ICT Specialists", "At least 1 Gbps take-up", "At least 100 Mbps fixed BB take-up" and "Fixed Very High Capacity Network (VHCN) coverage". A cluster analysis was then carried out using the unsupervised k-Means algorithm optimized with the Silhouette coefficient, with the identification of 4 clusters. Finally, a comparison was made among eight different machine learning algorithms using "augmented data". The most efficient algorithm in predicting the value of e-government, both in the historical series and with augmented data, is the ANN-Artificial Neural Network.
Broadband Price Index in Europe
This article analyzes the determinants of the "Broadband Price Index" in Europe. The data used refer to 28 European countries between 2016 and 2021. The database used is the Digital Economy and Society Index-DESI of the European Commission. The data were analyzed using the following econometric techniques, namely Panel Data with Random Effects, Panel Data with Fixed Effects, Pooled OLS, WLS and Dynamic Panel. The value of the "Broadband Price Index" is positively associated with the "DESI Index" and "Connectivity", while it is negatively associated with "Fixed Broadband Take Up", "Fixed Broadband Coverage", "Mobile Broadband", "e-Government", "Advanced Skills and Development", "Integration of Digital Technology", "At Least Basic Digital Skills", "Above Basic Digital Skills" and "At Least Basic Software Skills". A cluster analysis was then carried out using the k-Means algorithm optimized with the Silhouette coefficient. The analysis revealed the existence of three clusters. Finally, an analysis of machine learning algorithms was carried out to predict the future value of the "Broadband Price Index". The result shows that the most useful algorithm for prediction is the Artificial Neural Network-ANN, with an estimated value of 9.21%.
Foreign Doctorate Students in Europe
Lucio Laureti, Alberto Costantiello, Marco Maria Matarrese, Angelo Leogrande
The determinants of the presence of "Foreign Doctorate Students" among 36 European Countries for the period 2010-2019 are analyzed in this article. Panel Data with Fixed Effects, Random Effects, WLS, Pooled OLS, and Dynamic Panel are used to investigate the data. We found that the presence of Foreign Doctorate Students is positively associated to "Attractive Research Systems", "Finance and Support", "Rule of Law", "Sales Impacts", "New Doctorate Graduates", "Basic School Entrepreneurial Education and Training", "Tertiary Education" and negatively associated to "Innovative Sales Share", "Innovation Friendly Environment", "Linkages", "Trademark Applications", "Government Procurement of Advanced Technology Products", "R&D Expenditure Public Sectors". A cluster analysis was then carried out through the application of the unsupervised k-Means algorithm optimized using the Silhouette coefficient with the identification of 5 clusters. Finally, eight different machine learning algorithms were used to predict the value of the "Foreign Doctorate Students" variable. The results show that the best predictor algorithm is the "Tree Ensemble Regression" with a predicted value growing at a rate of 114.03%.
Towards Robust Object detection in Floor Plan Images: A Data Augmentation Approach
Shashank Mishra, Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, Muhammad Zeshan Afzal
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Object Detection; Cascade Mask R-CNN; Floor Plan Images; Deep Learning; Transfer Learning; Dataset Augmentation; Computer Vision
Object detection is one of the most critical tasks in the field of computer vision. This task comprises identifying and localizing an object in the image. Architectural floor plans represent the layout of buildings and apartments. The floor plans consist of walls, windows, stairs, and other furniture objects. While recognizing floor plan objects is straightforward for humans, automatically processing floor plans and recognizing objects is a challenging problem. In this work, we investigate the performance of the recently introduced Cascade Mask R-CNN network for object detection in floor plan images. Furthermore, we experimentally establish that deformable convolution works better than conventional convolution in the proposed framework. Identifying objects in floor plan images is also challenging due to the variety of floor plans and different objects. We faced a problem in training our network because of the lack of publicly available datasets; currently available public datasets do not have enough images to train deep neural networks efficiently. To address this issue, we introduce SFPI, a novel synthetic floor plan dataset consisting of 10000 images. Our proposed method conveniently surpasses the previous state-of-the-art results on the SESYD dataset and sets impressive baseline results on the proposed SFPI dataset. The dataset can be downloaded from SFPI Dataset Link. We believe that the novel dataset enables researchers to further enhance research in this domain.
Cascade Network with Deformable Composite Backbone for Formula Detection in Scanned Document Images
Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, Muhammad Zeshan Afzal
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Formula detection; Cascade Mask R-CNN; Mathematical expression detection; document image analysis; deep neural networks; computer vision.
This paper presents a novel architecture for detecting mathematical formulas in document images, which is an important step for reliable information extraction in several domains. Recently, Cascade Mask R-CNN networks have been introduced to solve object detection in computer vision. In this paper, we suggest a couple of modifications to the existing Cascade Mask R-CNN architecture: First, the proposed network uses deformable convolutions instead of conventional convolutions in the backbone network to spot areas of interest better. Second, it uses a dual backbone of ResNeXt-101, having composite connections at the parallel stages. Finally, our proposed network is end-to-end trainable. We evaluate the proposed approach on the ICDAR-2017 POD and Marmot datasets. The proposed approach demonstrates state-of-the-art performance on ICDAR-2017 POD at a higher IoU threshold with an f1-score of 0.917, reducing the relative error by 7.8%. Moreover, we accomplished correct detection accuracy of 81.3% on embedded formulas on the Marmot dataset, which results in a relative error reduction of 30%.
Quantum Computation and Measurements from an Exotic Space-time $R^4$
Michel Planat, Raymond Aschheim, Marcelo Amaral, Klee Irwin
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: Topological quantum computing; $4$-manifolds; Akbulut cork; exotic $R^4$; fundamental group; finite geometry; Cayley-Dickson algebras
The authors previously found a model of universal quantum computation by making use of the coset structure of subgroups of a free group $G$ with relations. A valid subgroup $H$ of index $d$ in $G$ leads to a \lq magic' state $\left|\psi\right\rangle$ in $d$-dimensional Hilbert space that encodes a minimal informationally complete quantum measurement (or MIC), possibly carrying a finite \lq contextual' geometry. In the present work, we choose $G$ as the fundamental group $\pi_1(V)$ of an exotic $4$-manifold $V$, more precisely a \lq small exotic' (space-time) $R^4$ (that is homeomorphic and isometric, but not diffeomorphic to the Euclidean $\mathbb{R}^4$). Our selected example, due to S. Akbulut and R.~E. Gompf, has two remarkable properties: (a) it shows the occurrence of standard contextual geometries such as the Fano plane (at index $7$), Mermin's pentagram (at index $10$), the two-qubit commutation picture $GQ(2,2)$ (at index $15$), as well as the combinatorial Grassmannian Gr$(2,8)$ (at index $28$); (b) it allows the interpretation of MIC measurements as arising from such exotic (space-time) $R^4$'s. Our new picture relating topological quantum computing and exotic space-time is also intended to become an approach to \lq quantum gravity'.
MISR-GOES 3D Winds: Implications for Future LEO-GEO and LEO-LEO Winds
James Carr, Dong Wu, Michael Kelly, Jie Gong
Subject: Earth Sciences, Atmospheric Science Keywords: 3D-Winds, atmospheric motion vectors (AMVs), MISR, GOES-R, planetary boundary layer (PBL), stereo imaging, parallax, CubeSats
Global wind observations are fundamental for studying weather and climate dynamics. Most wind measurements come from atmospheric motion vectors (AMVs) by tracking the displacement of cloud or water vapor features. These AMVs generally rely on thermal infrared (IR) techniques for their height assignments, which are subject to large uncertainties in the presence of weak or reversed vertical temperature gradients around the planetary boundary layer (PBL) and with tropopause folding. Stereo imaging can overcome the height assignment problem using geometric parallax for feature height determination. In this study we develop a stereo 3D-Wind algorithm to simultaneously retrieve AMV and height from geostationary (GEO) and low Earth orbit (LEO) satellite imagery and apply it to collocated Geostationary Operational Environmental Satellite (GOES) and Multi-angle Imaging SpectroRadiometer (MISR) imagery. The new algorithm improves AMV and height relative to products from GOES or MISR alone, with an estimated accuracy of <0.5 m/s in AMV and <200 m in height with 2.2 km sampling. The algorithm can be generalized to other LEO-GEO or GEO-GEO combinations for greater spatiotemporal coverage. The technique demonstrated with MISR and GOES has important implications for future high-quality AMV observations, for which a low-cost constellation of CubeSats can play a vital role.
Using Convolutional Neural Networks to Automate Aircraft Maintenance Visual Inspection
Anil Dogru, Soufiane Bouarfa, Ridwan Arizar, Reyhan Aydogan
Subject: Keywords: Aircraft Maintenance Inspection; Anomaly Detection; Defect Inspection; Convolutional Neural Networks; Mask R-CNN; Generative Adversarial Networks; Image Augmentation
Convolutional Neural Networks combined with autonomous drones are increasingly seen as enablers of partially automating the aircraft maintenance visual inspection process. Such an innovative concept can have a significant impact on aircraft operations. By supporting aircraft maintenance engineers in detecting and classifying a wide range of defects, the time spent on inspection can be significantly reduced. Examples of defects that can be automatically detected include aircraft dents, paint defects, cracks and holes, and lightning strike damage. Additionally, this concept could also increase the accuracy of damage detection and reduce the number of aircraft inspection incidents related to human factors like fatigue and time pressure. In our previous work, we applied a recent Convolutional Neural Network architecture known as Mask R-CNN to detect aircraft dents. Mask R-CNN was chosen because it enables the detection of multiple objects in an image while simultaneously generating a segmentation mask for each instance. The previously obtained F1 and F2 scores were 62.67% and 59.35%, respectively. This paper extends the previous work by applying different techniques to improve and evaluate prediction performance experimentally. The approaches used include (1) balancing the original dataset by adding images without dents; (2) increasing data homogeneity by focusing on wing images only; (3) exploring the potential of three augmentation techniques in improving model performance, namely flipping, rotating, and blurring; and (4) using a pre-classifier in combination with Mask R-CNN. The results show that a hybrid approach combining Mask R-CNN and augmentation techniques leads to improved performance, with an F1 score of 67.50% and an F2 score of 66.37%.
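The F1 and F2 scores reported above are both instances of the F-beta score, where F2 weights recall more heavily than precision (appropriate when missed defects are costly). A minimal sketch on hypothetical per-image labels and predictions, not the paper's data:

```python
from sklearn.metrics import fbeta_score

# Hypothetical per-image dent labels (1 = dent present) and model predictions
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

f1 = fbeta_score(y_true, y_pred, beta=1)   # balances precision and recall equally
f2 = fbeta_score(y_true, y_pred, beta=2)   # weights recall higher than precision
print(f"F1 = {f1:.4f}, F2 = {f2:.4f}")
```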
Aerial Images Processing for Car Detection using Convolutional Neural Networks: Comparison between Faster R-CNN and YoloV3
Adel Ammar, Anis Koubaa, Mohanned Ahmed, Abdulrahman Saad
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Car Detection; Convolutional Neural Networks; Deep Learning; Faster R-CNN; Unmanned Aerial Vehicles; You Only Look Once (Yolo).
In this paper, we address the problem of car detection from aerial images using Convolutional Neural Networks (CNN). This problem presents additional challenges as compared to car (or any object) detection from ground images because features of vehicles from aerial images are more difficult to discern. To investigate this issue, we assess the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN, which is the most popular region-based algorithm, and YOLOv3, which is known to be the fastest detection algorithm. We analyze two datasets with different characteristics to check the impact of various factors, such as UAV's altitude, camera resolution, and object size. The objective of this work is to conduct a robust comparison between these two cutting-edge algorithms. By using a variety of metrics, we show that YOLOv3 yields better performance in most configurations, except that it exhibits a lower recall and less confident detections when object sizes and scales in the testing dataset differ largely from those in the training dataset.
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: car detection; convolutional neural networks; deep learning; you only look once (yolo); faster r-cnn; unmanned aerial vehicles
In this paper, we address the problem of car detection from aerial images using Convolutional Neural Networks (CNN). This problem presents additional challenges as compared to car (or any object) detection from ground images because features of vehicles from aerial images are more difficult to discern. To investigate this issue, we assess the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN, which is the most popular region-based algorithm, and YOLOv3, which is known to be the fastest detection algorithm. We analyze two datasets with different characteristics to check the impact of various factors, such as UAV's altitude, camera resolution, and object size. The objective of this work is to conduct a robust comparison between these two cutting-edge algorithms. By using a variety of metrics, we show that none of the two algorithms outperforms the other in all cases.
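Comparisons between detectors such as Faster R-CNN and YOLOv3 rely on matching predicted boxes to ground truth by Intersection-over-Union (IoU) before metrics like precision and recall are computed; a minimal sketch of that criterion with illustrative coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is commonly counted as a true positive when IoU >= 0.5
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # about 0.143
```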
Wnt/β-Catenin Pathway Proteins in End-Stage Renal Disease
Hussein Al-Hakeim, Halah Asad, Michael Maes
Subject: Medicine & Pharmacology, Allergology Keywords: End-Stage Renal Disease; Chronic Kidney Disease; Wnt-Pathway; R-Spondin 1; β-Catenin; Dickkopf-Related Protein 1; Sclerostin
Background. Wnt-pathway proteins play a vital role in kidney development, and defects in the Wnt pathway are associated with kidney disorders. However, knowledge of the role of Wnt/β-catenin pathway proteins in end-stage renal disease (ESRD) is limited. Aim of the study. To delineate the association of ESRD with Wnt proteins, including the agonist R-spondin 1, the transducer β-catenin and the antagonists Dickkopf-related protein 1 (DKK1) and sclerostin. Methods. Serum Wnt-pathway protein levels were measured by ELISA, while other biochemicals were measured spectrophotometrically, in 60 ESRD patients and 30 normal controls. Results. DKK1 and sclerostin were significantly higher in ESRD than in controls, while β-catenin and the (β-catenin + R-spondin 1)/(DKK1 + sclerostin) ratio, reflecting the ratio of agonist and transducer to antagonists (AT/ANTA), were significantly lower in ESRD. Logistic regression analysis showed that ESRD was significantly predicted by increased levels of DKK1 and sclerostin and lowered β-catenin (p<0.001). eGFR was significantly associated with DKK1 and sclerostin (inversely), β-catenin (positively) and the AT/ANTA ratio (r=0.468, p<0.001). DKK1 levels were significantly and positively correlated with urea, creatinine, and copper. DKK1 and sclerostin were inversely associated with hemoglobin and packed cell volume. β-catenin was significantly and negatively associated with copper, urea and creatinine. Conclusion. Wnt/β-catenin pathway proteins show significant alterations in ESRD, indicating significantly increased levels of antagonists and, therefore, attenuated Wnt/β-catenin pathway activity. The latter is associated with lowered eGFR and increased serum copper levels. Wnt/β-catenin pathway proteins are possible drug targets to treat ESRD or its consequences.
The CO2 Emissions in Finland, Norway and Sweden: A Dynamic Relationship
Agustin Alonso-Rodriguez
Subject: Social Sciences, Econometrics & Statistics Keywords: Paris 2015 Agreement; CO2 emissions; VAR models; Granger causality; impulse response functions; forecast error variance decomposition; software: R; MTS; RATS
In this paper a dynamic relationship between the CO2 emissions in Finland, Norway and Sweden is presented. With the help of a VAR(2) model, and using the Granger terminology, it is shown that the emissions in Finland are affecting those in Norway and Sweden. Other aspects of this dynamic relationship are presented as well.
Blackcurrant Leaf Chlorosis Associated Virus: Next-Generation Sequencing Reveals an Extraordinary Virus with Multiple Genomic Components Including Evidence of Circular RNA
Delano James, James Phelan, Daniel Sanderson
Subject: Biology, Plant Sciences Keywords: Idaeovirus; Blackcurrant leaf chlorosis associated virus; next-generation sequencing (NGS); bridge reads; abutting primers; RNase R digestion; circular RNA; concatenated RNA
Blackcurrant leaf chlorosis associated virus (BCLCaV) was detected recently by next-generation sequencing (NGS) and proposed as a new and distinct species in the genus Idaeovirus. Genomic components of BCLCaV that were detected and confirmed include: 1) RNA-1 that is monocistronic and encodes the replicase complex; 2) a bicistronic RNA-2 that encodes a movement protein (MP) and the coat protein (CP) of the virus, with open reading frames (ORF) that overlap by a single adenine (A) nucleotide (nt) representing the third position of an opal stop codon of the MP ORF2a and the first position of the start codon of the CP ORF2b; 3) a subgenomic form of RNA-2 (RNA-3) that contains ORF2b; and 4) a concatenated form of RNA-2 that consists of a complementary and inverted RNA-3 conjoined to the full-length RNA-2. Analysis of NGS-derived paired-end reads revealed the existence of bridge reads encompassing the 3'-terminus and 5'-terminus of RNA-2 or RNA-3 of BCLCaV. The full RNA-2 or RNA-3 could be amplified using outward facing or abutting primers; also, RNA-2/RNA-3 could be detected even after three consecutive RNase R enzyme treatments with denaturation at 95 °C preceding each digestion. Evidence was obtained indicating that there are circular forms of BCLCaV RNA-2 and RNA-3.
Chemo-Enzymatic Synthesis of Synthons as Precursors for Enantiopure Clenbuterol and other β2- Agonists
Fredrik Heen Blindheim, Mari Bergan Hansen, Sigvart Evjen, Wei Zhu, Elisabeth Egholm Jacobsen
Subject: Chemistry, Organic Chemistry Keywords: (R)-1-(4-Amino-3,5-dichlorophenyl)-2-bromoethan-1-ol; (S)-N-(2,6-dichloro-4-(1-hydroxyethyl)phenyl)acetamide; clenbuterol; ketoreductase; chiral chromatography
(R)-1-(4-Amino-3,5-dichlorophenyl)-2-bromoethan-1-ol has been synthesised in 93% enantiomeric excess (ee) by asymmetric reduction of the corresponding ketone catalysed by a ketoreductase with NADPH as the co-factor in DMSO. (S)-N-(2,6-Dichloro-4-(1-hydroxyethyl)phenyl)acetamide has been synthesised in >98% ee by the same system. Both synthons are potential precursors for clenbuterol enantiomers. Clenbuterol is a β2-agonist used in veterinary treatment of asthma in several countries. The drug is listed on the World Anti-Doping Agency's Prohibited List due to its effect of increasing protein synthesis in the body. However, racemic clenbuterol has recently been shown to reduce the risk of Parkinson's disease. In order to reveal which of the enantiomers (or whether both) causes this effect, the pure enantiomers need to be studied separately. Our biocatalytic approach to obtaining enantiopure clenbuterol should be applicable at industrial scale.
CasTabDetectoRS: Cascade Network for Table Detection in Document Images with Recursive Feature Pyramid and Switchable Atrous Convolution
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: table detection; table recognition; cascade Mask R-CNN; atrous convolution; recursive feature pyramid networks; document image analysis; deep neural networks; computer vision, object detection.
Table detection is a preliminary step in extracting reliable information from tables in scanned document images. We present CasTabDetectoRS, a novel end-to-end trainable table detection framework that operates on Cascade Mask R-CNN, including Recursive Feature Pyramid network and Switchable Atrous Convolution in the existing backbone architecture. By utilizing a comparatively lightweight backbone of ResNet-50, this paper demonstrates that superior results are attainable without relying on pre and post-processing methods, heavier backbone networks (ResNet-101, ResNeXt-152), and memory-intensive deformable convolutions. We evaluate the proposed approach on five different publicly available table detection datasets. Our CasTabDetectoRS outperforms the previous state-of-the-art results on four datasets (ICDAR-19, TableBank, UNLV, and Marmot) and accomplishes comparable results on ICDAR-17 POD. Upon comparing with previous state-of-the-art results, we obtain a significant relative error reduction of 56.36%, 20%, 4.5%, and 3.5% on the datasets of ICDAR-19, TableBank, UNLV, and Marmot, respectively. Furthermore, this paper sets a new benchmark by performing exhaustive cross-datasets evaluations to exhibit the generalization capabilities of the proposed method.
Fault Detection of Landing Gear Retraction/Extension Hydraulic System Based on Bond Graph-Linear Fractional Transformation Technique and Interval Analytic Redundancy Relations
Yuyuan Cao, Shixuan Duan, Yanjun Li, Xudong Li, Zejian Zhao, Xingye Wang
Subject: Engineering, Other Keywords: fault detection; retraction/extension (R/E) hydraulic system; bond graph-linear fractional transformation technique; interval analytic redundancy relations; uncertainty; fault signature matrix; residuals; thresholds
Various factors, such as uncertainty of component parameters and uncertainty of sensor measurement values, contribute to the difficulty of fault detection in the landing gear retraction/extension hydraulic system. In this paper, we introduce linear fractional transformation technology and uncertainty analysis theory for the construction of the diagnostic bond graph of the landing gear retraction/extension hydraulic system. In this way, interval analytical redundancy relations and a fault signature matrix can be derived. Using the fault signature matrix, existing faults of the system can be preliminarily detected and isolated. Additionally, interval analytical redundancy relations can be used to detect system faults in detail, and case analysis can be carried out to determine whether the actuator has an external or internal leak, and whether the landing gear selector valve is stuck in reversing. Compared to the traditional analytical redundancy relations, this method takes into account the negative factors of uncertainty; and compared to the traditional absolute diagnostic threshold, the interval diagnostic threshold is more accurate and sensitive.
Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN
Xia LI, Zhenhao Xu, Xi Shen, Yongxia Zhou, Binggang Xiao, Tie-Qiang Li
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: Cervical cancer; Pap smear test; whole slide image (WSI); feature pyramid network (FPN); global context aware (GCA); region based convolutional neural networks (R-CNN); Region Proposal Network (RPN).
Cervical cancer is a worldwide public health problem with a high rate of illness and mortality among women. In this study, we proposed a novel framework based on the Faster RCNN-FPN architecture for the detection of abnormal cervical cells in cytology images from cancer screening tests. We extended the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduced a global context aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between the background and the foreground. Extensive experimentations with the proposed deformable and global context aware (DGCA) RCNN were carried out using the cervical image dataset of the "Digital Human Body" Vision Challenge from the Alibaba Cloud TianChi Company. Performance evaluation based on the mean average precision (mAP) and receiver operating characteristic (ROC) curve has demonstrated considerable advantages of the proposed framework. Particularly, when combined with tagging of the negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP has been achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for automated detection of cervical cancer cells in whole slide images of Pap smears.
The Impact of Event Scale – Revised: Examining its cutoff scores among Arab psychiatric patients and healthy adults within the context of COVID-19 as a collective traumatic event
Amira M. Ali, Saeed Abdullah Al-Dossary, Abdulaziz Mofdy Almarwani, Maha Atout, Rasmieh Al-Amer, And Abdulmajeed A. Alkhamees
Subject: Behavioral Sciences, Clinical Psychology Keywords: Impact of Event Scale-Revised (IES-R)/post-traumatic stress disorder (PTSD); cutoff point/cutoff score; psychiatric patients/the general public/healthy adults; psychometric evaluation/criterion validity; Coronavirus Disease-19/COVID-19; Arabic version/Arab/Saudi Arabia
The Impact of Event Scale-Revised (IES-R) is the most popular measure of post-traumatic stress disorder (PTSD), which has recently been validated in Arabic. This instrumental study aimed to determine optimal cutoff scores of the IES-R and its subscales in Arab samples of psychiatric patients (N = 168, 70.8% females) and healthy adults (N = 992, 62.7% females) from Saudi Arabia during the COVID-19 pandemic as an ongoing collective traumatic event. Based on a cutoff score of 14 of the Depression Anxiety Stress Scale 8-items (DASS-8), receiver operator curve (ROC) analysis revealed two optimal points of 39.5 and 30.5 for the IES-R in the samples (area under the curve (AUC) = 0.86 & 0.91, p values = 0.001, 95% CI: 0.80-0.92 & 0.87-0.94, sensitivity = 0.85 & 0.87, specificity = 0.73 & 0.83, Youden index = 0.58 & 0.70, respectively). Different cutoffs were detected for the six subscales of the IES-R, with numbing and avoidance expressing the lowest predictivity for distress. Meanwhile, hyperarousal followed by irritability expressed stronger predictive capacity for distress than all other subscales in both samples. In path analysis, pandemic-related irritability resulted from direct and indirect effects of key PTSD symptoms (intrusion, hyperarousal, and numbing). Irritability contributed to traumatic symptoms of sleep disturbance in both samples while the opposite was not true. The findings suggest the usefulness of the IES-R at a score of 30.5 for detecting adults prone to trauma-related distress, with higher scores needed for screening in psychiatric patients. Various PTSD symptoms may induce dysphoric mood, which represents a considerable burden that may induce circadian misalignment and more noxious psychiatric problems/co-morbidities (sleep disturbance) in both healthy and diseased groups.
Integrals Involving Exponential functions
Exponential Growth Model
Exponential Decay Model
Exponential functions are used in many real-life applications. The number e is often associated with compounded or accelerating growth, as we have seen in earlier sections about the derivative. Although the derivative represents a rate of change or a growth rate, the integral represents the total change or the total growth. Let's look at an example in which integration of an exponential function solves a common business application.
A price–demand function tells us the relationship between the quantity of a product demanded and the price of the product. In general, price decreases as quantity demanded increases. The marginal price–demand function is the derivative of the price–demand function and it tells us how fast the price changes at a given level of production. These functions are used in business to determine the price–elasticity of demand, and to help companies determine whether changing production levels would be profitable.
Example \(\PageIndex{1}\): Finding a Price–Demand Equation
Find the price–demand equation for a particular brand of toothpaste at a supermarket chain when the demand is 50 tubes per week at $2.35 per tube, given that the marginal price–demand function, \(p′(x),\) for x number of tubes per week, is given as
\[p'(x)=−0.015e^{−0.01x}.\]
If the supermarket chain sells 100 tubes per week, what price should it set?
To find the price–demand equation, integrate the marginal price–demand function. First find the antiderivative, then look at the particulars. Thus,
\[p(x)=∫−0.015e^{−0.01x}dx=−0.015∫e^{−0.01x}dx.\]
Using substitution, let \(u=−0.01x\) and \(du=−0.01dx\). Then, divide both sides of the du equation by −0.01. This gives
\[\dfrac{−0.015}{−0.01}∫e^udu=1.5∫e^udu=1.5e^u+C=1.5e^{−0.01x}+C.\]
The next step is to solve for C. We know that when the price is $2.35 per tube, the demand is 50 tubes per week. This means
\[p(50)=1.5e^{−0.01(50)}+C=2.35.\]
Now, just solve for C:
\[C=2.35−1.5e^{−0.5}=2.35−0.91=1.44.\]
Thus,
\[p(x)=1.5e^{−0.01x}+1.44.\]
If the supermarket sells 100 tubes of toothpaste per week, the price would be
\[p(100)=1.5e^{−0.01(100)}+1.44=1.5e^{−1}+1.44≈1.99.\]
The supermarket should charge $1.99 per tube if it is selling 100 tubes per week.
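For readers who want to verify the arithmetic, here is a minimal Python check of the constant of integration and the resulting price (a verification sketch added for this write-up, not part of the original example):

```python
import math

# p(x) = 1.5*exp(-0.01*x) + C, with the condition p(50) = 2.35
C = 2.35 - 1.5 * math.exp(-0.01 * 50)
print(round(C, 2))                                   # 1.44
print(round(1.5 * math.exp(-0.01 * 100) + C, 2))     # 1.99, price at 100 tubes per week
```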
One of the most prevalent applications of exponential functions involves growth and decay models. Exponential growth and decay show up in a host of natural applications. From population growth and continuously compounded interest to radioactive decay and Newton's law of cooling, exponential functions are ubiquitous in nature. In this section, we examine exponential growth and decay in the context of some of these applications.
Many systems exhibit exponential growth. These systems follow a model of the form \(y=y_0e^{kt},\) where \(y_0\) represents the initial state of the system and \(k\) is a positive constant, called the growth constant. Notice that in an exponential growth model, we have
\[ y′=ky_0e^{kt}=ky. \label{eq1}\]
That is, the rate of growth is proportional to the current function value. This is a key feature of exponential growth. Equation \ref{eq1} involves derivatives and is called a differential equation.
Exponential Growth
Systems that exhibit exponential growth increase according to the mathematical model
\[y=y_0e^{kt}\]
where \(y_0\) represents the initial state of the system and \(k>0\) is a constant, called the growth constant.
Population growth is a common example of exponential growth. Consider a population of bacteria, for instance. It seems plausible that the rate of population growth would be proportional to the size of the population. After all, the more bacteria there are to reproduce, the faster the population grows. Figure \(\PageIndex{1}\) and Table \(\PageIndex{1}\) represent the growth of a population of bacteria with an initial population of 200 bacteria and a growth constant of 0.02. Notice that after only 2 hours (120 minutes), the population is 10 times its original size!
Figure \(\PageIndex{1}\): An example of exponential growth for bacteria.
Table \(\PageIndex{1}\): Exponential Growth of a Bacterial Population
Columns: Time (min); Population Size (no. of bacteria). (The tabulated values are not reproduced here.)
Note that we are using a continuous function to model what is inherently discrete behavior. At any given time, the real-world population contains a whole number of bacteria, although the model takes on noninteger values. When using exponential growth models, we must always be careful to interpret the function values in the context of the phenomenon we are modeling.
Example \(\PageIndex{1}\): Population Growth
Consider the population of bacteria described earlier. This population grows according to the function \(f(t)=200e^{0.02t},\) where t is measured in minutes. How many bacteria are present in the population after \(5\) hours (\(300\) minutes)? When does the population reach \(100,000\) bacteria?
We have \(f(t)=200e^{0.02t}.\) Then
\[ f(300)=200e^{0.02(300)}≈80,686. \nonumber \]
There are \(80,686\) bacteria in the population after \(5\) hours.
To find when the population reaches \(100,000\) bacteria, we solve the equation
\[ \begin{align} 100,000 &= 200e^{0.02t} \nonumber \\ 500 &=e^{0.02t} \nonumber \\ \ln 500 &=0.02 t \nonumber \\ t&=\dfrac{\ln 500}{0.02}≈310.73. \nonumber \end{align} \nonumber\]
The population reaches \(100,000\) bacteria after \(310.73\) minutes.
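Both answers can be checked quickly in Python (a verification sketch, not part of the original text; the constants 200 and 0.02 are the ones given in the example):

```python
import math

f = lambda t: 200 * math.exp(0.02 * t)
print(round(f(300)))                     # 80686 bacteria after 300 minutes (5 hours)
print(round(math.log(500) / 0.02, 2))    # 310.73 minutes until the population is 100,000
```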
Consider a population of bacteria that grows according to the function \(f(t)=500e^{0.05t}\), where \(t\) is measured in minutes. How many bacteria are present in the population after 4 hours? When does the population reach \(100\) million bacteria?
Use the process from the previous example.
There are \(81,377,396\) bacteria in the population after \(4\) hours. The population reaches \(100\) million bacteria after \(244.12\) minutes.
Let's now turn our attention to a financial application: compound interest. Interest that is not compounded is called simple interest. Simple interest is paid once, at the end of the specified time period (usually \(1\) year). So, if we put \($1000\) in a savings account earning \(2%\) simple interest per year, then at the end of the year we have
\[ 1000(1+0.02)=$1020.\]
Compound interest is paid multiple times per year, depending on the compounding period. Therefore, if the bank compounds the interest every \(6\) months, it credits half of the year's interest to the account after \(6\) months. During the second half of the year, the account earns interest not only on the initial \($1000\), but also on the interest earned during the first half of the year. Mathematically speaking, at the end of the year, we have
\[ 1000 \left(1+\dfrac{0.02}{2}\right)^2=$1020.10.\]
Similarly, if the interest is compounded every \(4\) months, we have
\[ 1000 \left(1+\dfrac{0.02}{3}\right)^3=$1020.13,\]
and if the interest is compounded daily (\(365\) times per year), we have \($1020.20\). If we extend this concept, so that the interest is compounded continuously, after \(t\) years we have
\[ 1000\lim_{n→∞} \left(1+\dfrac{0.02}{n}\right)^{nt}.\]
Now let's manipulate this expression so that we have an exponential growth function. Recall that the number \(e\) can be expressed as a limit:
\[ e=\lim_{m→∞}\left(1+\dfrac{1}{m}\right)^m.\]
Based on this, we want the expression inside the parentheses to have the form \((1+1/m)\). Let \(n=0.02m\). Note that as \(n→∞, m→∞\) as well. Then we get
\[ 1000\lim_{n→∞}\left(1+\dfrac{0.02}{n}\right)^{nt}=1000\lim_{m→∞}\left(1+\dfrac{0.02}{0.02m}\right)^{0.02mt}=1000\left[\lim_{m→∞}\left(1+\dfrac{1}{m}\right)^m\right]^{0.02t}.\]
We recognize the limit inside the brackets as the number \(e\). So, the balance in our bank account after \(t\) years is given by \(1000 e^{0.02t}\). Generalizing this concept, we see that if a bank account with an initial balance of \($P\) earns interest at a rate of \(r%\), compounded continuously, then the balance of the account after \(t\) years is
\[ Balance=Pe^{rt}.\]
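The limit argument above can be illustrated numerically: as the number of compounding periods per year grows, the balance approaches \(Pe^{rt}\). A short Python sketch (using the same $1000 deposit and 2% rate as the example; added here only as an illustration):

```python
import math

P, r, t = 1000, 0.02, 1
for n in (1, 2, 3, 365, 10_000, 1_000_000):
    print(n, round(P * (1 + r / n) ** (n * t), 4))   # balance with n compounding periods per year
print("continuous:", round(P * math.exp(r * t), 4))  # the limit P*e^(r*t)
```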
Example \(\PageIndex{2}\): Compound Interest
A 25-year-old student is offered an opportunity to invest some money in a retirement account that pays \(5%\) annual interest compounded continuously. How much does the student need to invest today to have \($1\) million when she retires at age \(65\)? What if she could earn \(6%\) annual interest compounded continuously instead?
\[ 1,000,000=Pe^{0.05(40)}\]
\[ P=135,335.28.\]
She must invest \($135,335.28\) at \(5%\) interest.
If, instead, she is able to earn \(6%,\) then the equation becomes
\[ 1,000,000=Pe^{0.06(40)}\]
\[ P=90,717.95.\]
In this case, she needs to invest only \($90,717.95.\) This is roughly two-thirds the amount she needs to invest at \(5%\). The fact that the interest is compounded continuously greatly magnifies the effect of the \(1%\) increase in interest rate.
Suppose instead of investing at age \(25\), the student waits until age \(35\). How much would she have to invest at \(5%\)? At \(6%\)?
At \(5%\) interest, she must invest \($223,130.16\). At \(6%\) interest, she must invest \($165,298.89.\)
If a quantity grows exponentially, the time it takes for the quantity to double remains constant. In other words, it takes the same amount of time for a population of bacteria to grow from \(100\) to \(200\) bacteria as it does to grow from \(10,000\) to \(20,000\) bacteria. This time is called the doubling time. To calculate the doubling time, we want to know when the quantity reaches twice its original size. So we have
\[ \begin{align} 2y_0 & =y_0e^{kt} \nonumber \\ 2&=e^{kt} \nonumber \\ \ln 2 &=kt \nonumber \\ t &=\dfrac{\ln 2}{k}. \nonumber \end{align} \]
Definition: Doubling Time
If a quantity grows exponentially, the doubling time is the amount of time it takes the quantity to double. It is given by
\[\text{Doubling time}=\dfrac{\ln 2}{k}.\]
Example \(\PageIndex{3}\): Using the Doubling Time
Assume a population of fish grows exponentially. A pond is stocked initially with \(500\) fish. After \(6\) months, there are \(1000\) fish in the pond. The owner will allow his friends and neighbors to fish on his pond after the fish population reaches \(10,000\). When will the owner's friends be allowed to fish?
We know it takes the population of fish \(6\) months to double in size. So, if \(t\) represents time in months, by the doubling-time formula, we have \(6=(\ln 2)/k\). Then, \(k=(\ln 2)/6\). Thus, the population is given by \(y=500e^{((\ln 2)/6)t}\). To figure out when the population reaches \(10,000\) fish, we must solve the following equation:
\[ \begin{align} 10,000 &=500e^{(\ln 2/6)t} \nonumber \\ 20&=e^{(\ln 2/6)t} \nonumber \\ \ln 20 &=\left(\dfrac{\ln 2}{6}\right)t \nonumber \\ t&=\dfrac{6(\ln 20)}{\ln 2} \nonumber \\ &≈25.93. \nonumber \end{align} \nonumber\]
The owner's friends have to wait \(25.93\) months (a little more than \(2\) years) to fish in the pond.
Suppose it takes \(9\) months for the fish population in Example \(\PageIndex{3}\) to reach \(1000\) fish. Under these circumstances, how long do the owner's friends have to wait?
\(38.90\) months
Exponential functions can also be used to model populations that shrink (from disease, for example), or chemical compounds that break down over time. We say that such systems exhibit exponential decay, rather than exponential growth. The model is nearly the same, except there is a negative sign in the exponent. Thus, for some positive constant \(k\), we have
\[ y=y_0e^{−kt}.\]
As with exponential growth, there is a differential equation associated with exponential decay. We have
\[ y′=−ky_0e^{−kt}=−ky.\]
Exponential Decay
Systems that exhibit exponential decay behave according to the model
\[y=y_0e^{−kt},\]
where \(y_0\) represents the initial state of the system and \(k>0\) is a constant, called the decay constant.
Figure \(\PageIndex{2}\) shows a graph of a representative exponential decay function.
Figure \(\PageIndex{2}\): An example of exponential decay.
Let's look at a physical application of exponential decay. Newton's law of cooling says that an object cools at a rate proportional to the difference between the temperature of the object and the temperature of the surroundings. In other words, if \(T\) represents the temperature of the object and \(T_a\) represents the ambient temperature in a room, then
\[T′=−k(T−T_a).\]
Note that this is not quite the right model for exponential decay. We want the derivative to be proportional to the function, and this expression has the additional \(T_a\) term. Fortunately, we can make a change of variables that resolves this issue. Let \(y(t)=T(t)−T_a\). Then \(y′(t)=T′(t)−0=T′(t)\), and our equation becomes
\[ y′=−ky.\]
From our previous work, we know this relationship between \(y\) and its derivative leads to exponential decay. Thus,
\[ y=y_0e^{−kt},\]
\[ T−T_a=(T_0−T_a)e^{−kt}\]
\[ T=(T_0−T_a)e^{−kt}+T_a\]
where \(T_0\) represents the initial temperature. Let's apply this formula in the following example.
Example \(\PageIndex{4}\): Newton's Law of Cooling
According to experienced baristas, the optimal temperature to serve coffee is between \(155°F\) and \(175°F\). Suppose coffee is poured at a temperature of \(200°F\), and after \(2\) minutes in a \(70°F\) room it has cooled to \(180°F\). When is the coffee first cool enough to serve? When is the coffee too cold to serve? Round answers to the nearest half minute.
\[ \begin{align*} T&=(T_0−T_a)e^{−kt}+T_a \\[5pt] 180&=(200−70)e^{−k(2)}+70 \\[5pt] 110&=130e^{−2k} \\[5pt] \dfrac{11}{13}&=e^{−2k} \\[5pt] \ln \dfrac{11}{13}&=−2k \\[5pt] \ln 11−\ln 13&=−2k \\[5pt] k&=\dfrac{\ln 13−\ln 11}{2} \end{align*}\]
Then, the model is
\[T=130e^{\left(\frac{\ln 11−\ln 13}{2}\right)t}+70. \nonumber\]
The coffee reaches \(175°F\) when
\[ \begin{align*} 175&=130e^{\left(\frac{\ln 11−\ln 13}{2}\right)t}+70 \\[5pt]105&=130e^{\left(\frac{\ln 11−\ln 13}{2}\right)t} \\[5pt] \dfrac{21}{26}&=e^{\left(\frac{\ln 11−\ln 13}{2}\right)t} \\[5pt] \ln \dfrac{21}{26}&=\dfrac{\ln 11−\ln 13}{2}t \\[5pt] \ln 21−\ln 26&=\left(\dfrac{\ln 11−\ln 13}{2}\right)t \\[5pt] t&=\dfrac{2(\ln 21−\ln 26)}{\ln 11−\ln 13}\\[5pt] &≈2.56. \end{align*}\]
The coffee can be served about \(2.5\) minutes after it is poured. The coffee reaches \(155°F\) at
\[ \begin{align*} 155&=130e^{\left(\frac{\ln 11−\ln 13}{2}\right)t}+70 \\[5pt] 85 &=130e^{\left(\frac{\ln 11−\ln 13}{2}\right)t} \\[5pt] \dfrac{17}{26} &=e^{\left(\frac{\ln 11−\ln 13}{2}\right)t} \\[5pt] \ln 17−\ln 26 &=\left(\dfrac{\ln 11−\ln 13}{2}\right)t \\[5pt] t&=\dfrac{2(\ln 17−\ln 26)}{\ln 11−\ln 13} \\[5pt] &≈5.09.\end{align*}\]
The coffee is too cold to be served about \(5\) minutes after it is poured.
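The cooling constant and the two serving times can be double-checked numerically (Python; same numbers as in the worked solution, included only as a sanity check):

```python
import math

k = (math.log(13) - math.log(11)) / 2             # cooling constant found above
T = lambda t: 130 * math.exp(-k * t) + 70         # temperature after t minutes
when = lambda target: math.log((target - 70) / 130) / (-k)
print(round(T(2), 1))        # 180.0, reproduces the data point used to find k
print(round(when(175), 2))   # 2.56 minutes: first cool enough to serve
print(round(when(155), 2))   # 5.09 minutes: too cold to serve
```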
Suppose the room is warmer \((75°F)\) and, after \(2\) minutes, the coffee has cooled only to \(185°F.\) When is the coffee first cool enough to serve? When is the coffee too cold to serve? Round answers to the nearest half minute.
The coffee is first cool enough to serve about \(3.5\) minutes after it is poured. The coffee is too cold to serve about \(7\) minutes after it is poured.
Just as systems exhibiting exponential growth have a constant doubling time, systems exhibiting exponential decay have a constant half-life. To calculate the half-life, we want to know when the quantity reaches half its original size. Therefore, we have
\(\dfrac{y_0}{2}=y_0e^{−kt}\)
\(\dfrac{1}{2}=e^{−kt}\)
\(−\ln 2=−kt\)
\(t=\dfrac{\ln 2}{k}\).
Note: This is the same expression we came up with for doubling time.
Definition: Half-Life
If a quantity decays exponentially, the half-life is the amount of time it takes the quantity to be reduced by half. It is given by
\[\text{Half-life}=\dfrac{\ln 2}{k}.\]
Example \(\PageIndex{5}\): Radiocarbon Dating
One of the most common applications of an exponential decay model is carbon dating. Carbon-14 decays (emits a radioactive particle) at a regular and consistent exponential rate. Therefore, if we know how much carbon was originally present in an object and how much carbon remains, we can determine the age of the object. The half-life of carbon-14 is approximately 5730 years—meaning, after that many years, half the material has converted from the original carbon-14 to the new nonradioactive nitrogen-14. If we have 100 g carbon-14 today, how much is left in 50 years? If an artifact that originally contained 100 g of carbon now contains 10 g of carbon, how old is it? Round the answer to the nearest hundred years.
\[ 5730=\dfrac{\ln 2}{k}\]
\[ k=\dfrac{\ln 2}{5730}.\]
So, the model says
\[ y=100e^{−(\ln 2/5730)t}.\]
In \(50\) years, we have
\(y=100e^{−(\ln 2/5730)(50)}≈99.40\).
Therefore, in \(50\) years, \(99.40\) g of carbon-14 remains.
To determine the age of the artifact, we must solve
\[ \begin{align} 10 &=100e^{−(\ln 2/5730)t} \\ \dfrac{1}{10} &= e^{−(\ln 2/5730)t} \\ \ln \dfrac{1}{10} &= −\dfrac{\ln 2}{5730}t \\ t &=\dfrac{5730\,\ln 10}{\ln 2}≈19035. \end{align}\]
The artifact is about \(19,000\) years old.
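Both numbers follow directly from the half-life formula and can be reproduced in a couple of lines (Python; a verification sketch added here, not part of the original example):

```python
import math

k = math.log(2) / 5730                      # decay constant of carbon-14
print(round(100 * math.exp(-k * 50), 2))    # about 99.4 g remaining after 50 years
print(round(math.log(10) / k))              # about 19035 years for 100 g to decay to 10 g
```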
Exercise \(\PageIndex{5}\): Carbon-14 Decay
If we have 100 g of carbon-14, how much is left after ... years? If an artifact that originally contained 100 g of carbon now contains 20 g of carbon, how old is it? Round the answer to the nearest hundred years.
Example \(\PageIndex{2}\): Growth of Bacteria in a Culture
Suppose the rate of growth of bacteria in a Petri dish is given by \(q(t)=3^t\), where t is given in hours and \(q(t)\) is given in thousands of bacteria per hour. If a culture starts with 10,000 bacteria, find a function \(Q(t)\) that gives the number of bacteria in the Petri dish at any time t. How many bacteria are in the dish after 2 hours?
\[Q(t)=∫3^tdt=\dfrac{3^t}{\ln 3}+C.\]
Then, at \(t=0\) we have \(Q(0)=10=\dfrac{1}{\ln 3}+C,\) so \(C≈9.090\) and we get
\[Q(t)=\dfrac{3^t}{\ln 3}+9.090.\]
At time \(t=2\), we have
\[Q(2)=\dfrac{3^2}{\ln 3}+9.090\]
\[=17.282.\]
After 2 hours, there are 17,282 bacteria in the dish.
From Example, suppose the bacteria grow at a rate of \(q(t)=2^t\). Assume the culture still starts with 10,000 bacteria. Find \(Q(t)\). How many bacteria are in the dish after 3 hours?
Use the procedure from Example to solve the problem
\(Q(t)=\dfrac{2^t}{\ln 2}+8.557.\) There are 20,099 bacteria in the dish after 3 hours.
Example \(\PageIndex{3}\): Fruit Fly Population Growth
Suppose a population of fruit flies increases at a rate of \(g(t)=2e^{0.02t}\), in flies per day. If the initial population of fruit flies is 100 flies, how many flies are in the population after 10 days?
Let \(G(t)\) represent the number of flies in the population at time t. Applying the net change theorem, we have
\(G(10)=G(0)+∫^{10}_02e^{0.02t}dt\)
\(=100+[\dfrac{2}{0.02}e^{0.02t}]∣^{10}_0\)
\(=100+[100e^{0.02t}]∣^{10}_0\)
\(=100+100e^{0.2}−100\)
\(≈122.\)
There are 122 flies in the population after 10 days.
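The same net-change computation can also be done with numerical integration. The following sketch assumes SciPy is available and is included only as an illustration of the net change theorem in code:

```python
import math
from scipy.integrate import quad

g = lambda t: 2 * math.exp(0.02 * t)   # growth rate in flies per day
increase, _ = quad(g, 0, 10)           # net change over the first 10 days
print(round(100 + increase))           # 122 flies after 10 days
```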
Suppose the rate of growth of the fly population is given by \(g(t)=e^{0.01t},\) and the initial fly population is 100 flies. How many flies are in the population after 15 days?
Use the process from Example to solve the problem.
There are 116 flies.
Mass is subject to external force!!
Thread starter evinda
evinda
MHB Site Helper
Hello!!!
I have a question..I am given the following exercise:
Prove that the motion of a mass $m$ on a linear spring with constant $k$ has the form $y(t) = A\sin(wt+f)$, where $t$ is the time and $A, w, f$ are constants. Interpret the physical meaning of the above constants and specify their values if for $t = 0$, $y(0)=y_{0}$ and $y'(0)=v_{0}$. If, in addition, the mass is subject to an external force $F(t) = F_{0}\sin(w_{0}t)$, where $F_{0}$ is the amplitude and $w_{0}$ the cyclic frequency, calculate the amplitude of the motion and find its dependence on the cyclic frequency $w_{0}$.
I have shown that the motion of the mass has the form $y(t)= A\sin(wt+f)$, where $A=\sqrt{\frac{v_{0}^{2}}{w^{2}}+y_{0}^{2}}$ with $w=\sqrt{\frac{k}{m}}$, and $f=\arctan(\frac{y_{0}w}{v_{0}})$. But when the mass is subject to the external force $F(t) = F_{0}\sin(w_{0}t)$, do we get this differential equation: $y''+w^{2}y=\frac{F_{0}}{m}\sin(w_{0}t)$, or am I wrong? If it is right, how can I find the amplitude of the motion?
For an undamped system with a sinusoidal forcing term, we could say it is governed by:
\(\displaystyle m\frac{d^2y}{dt^2}+ky=F_0\sin(\omega_0 t)\)
A general solution to this ODE is the sum of a particular solution and a general solution to the corresponding homogenous equation.
You have already found the form of the homogeneous solution. Can you now state the form of the particular solution, either by using a table or using the annihilator method? And then I suggest using the method of undetermined coefficients to determine the particular solution. At this point you can then use linear combination identities to determine the amplitude of the resulting motion and its dependence on the cyclic frequency.
MarkFL said:
I have found the general solution: $y(t)=c_{1}\cos(wt)+c_{2}\sin(wt)+\frac{F_{0}}{m(w^{2}-w_{0}^{2})}\sin(w_{0}t)$, where $c_{1}=y_{0}$ and $c_{2}=\frac{v_{0}}{w}-\frac{F_{0}w_{0}}{mw(w^{2}-w_{0}^{2})}$. Is this right?? But how can I find the amplitude of the motion?
Do I have to write $y$ in the form $A\sin(wt+f)$, or do I have to do something else??
Okay, we are given the IVP:
\(\displaystyle m\frac{d^2y}{dt^2}+ky=F_0\sin\left(\omega_0 t \right)\) where \(\displaystyle y(0)=y_0,\,y'(0)=v_0\)
The characteristic roots are:
\(\displaystyle r=\pm\sqrt{\frac{k}{m}}i\)
Hence, the homogeneous solution is:
\(\displaystyle y_h(t)=c_1\cos\left(\sqrt{\frac{k}{m}}t \right)+c_2\sin\left(\sqrt{\frac{k}{m}}t \right)\)
Applying a linear combination identity we can express this solution in the form:
\(\displaystyle y_h(t)=c_1\sin\left(\sqrt{\frac{k}{m}}t+c_2 \right)\)
Now, because of the form of the forcing term on the right, we may assume the particular solution must take the form:
\(\displaystyle y_p(t)=A\sin\left(\omega_0 t \right)+B\cos\left(\omega_0 t \right)\)
Differentiating twice, we find:
\(\displaystyle \frac{d^2}{dt^2}y_p(t)=-\omega_0^2y_p(t)\)
Substituting into the ODE, we find:
\(\displaystyle -m\omega_0^2\left(A\sin\left(\omega_0 t \right)+B\cos\left(\omega_0 t \right) \right)+k\left(A\sin\left(\omega_0 t \right)+B\cos\left(\omega_0 t \right) \right)=F_0\sin\left(\omega_0 t \right)\)
So that we may compare coefficients, we arrange this equation as follows:
\(\displaystyle A\left(k-m\omega_0^2 \right)\sin\left(\omega_0 t \right)+B\left(k-m\omega_0^2 \right)\cos\left(\omega_0 t \right)=F_0\sin\left(\omega_0 t \right)+0\cos\left(\omega_0 t \right)\)
Equating the coefficients, we obtain the system:
\(\displaystyle A\left(k-m\omega_0^2 \right)=F_0\implies A=\frac{F_0}{k-m\omega_0^2}\)
\(\displaystyle B\left(k-m\omega_0^2 \right)=0\)
Assuming \(\displaystyle k\ne m\omega_0^2\) we find $B=0$, and so our particular solution is:
\(\displaystyle y_p(t)=\frac{F_0}{k-m\omega_0^2}\sin\left(\omega_0 t \right)\)
And thus, by the principle of superposition, the general solution is given by:
\(\displaystyle y(t)=y_h(t)+y_p(t)\)
\(\displaystyle y(t)=c_1\sin\left(\sqrt{\frac{k}{m}}t+c_2 \right)+\frac{F_0}{k-m\omega_0^2}\sin\left(\omega_0 t \right)\)
Differentiating with respect to $t$, we find:
\(\displaystyle y'(t)=c_1\sqrt{\frac{k}{m}}\cos\left(\sqrt{\frac{k}{m}}t+c_2 \right)+\frac{F_0\omega_0}{k-m\omega_0^2}\cos\left(\omega_0 t \right)\)
Utilizing the given initial values, we find:
\(\displaystyle y(0)=c_1\sin\left(c_2 \right)=y_0\)
\(\displaystyle y'(0)=c_1\sqrt{\frac{k}{m}}\cos\left(c_2 \right)+\frac{F_0\omega_0}{k-m\omega_0^2}=v_0\)
The first equation gives us:
\(\displaystyle c_1=\frac{y_0}{\sin\left(c_2 \right)}\)
And so substituting for $c_1$ into the second equation, we obtain:
\(\displaystyle \frac{y_0}{\sin\left(c_2 \right)}\sqrt{\frac{k}{m}}\cos\left(c_2 \right)+\frac{F_0\omega_0}{k-m\omega_0^2}=v_0\)
We may arrange this as:
\(\displaystyle \tan\left(c_2 \right)=\frac{y_0\left(k-m\omega_0^2 \right)}{v_0\left(k-m\omega_0^2 \right)-F_0\omega_0}\sqrt{\frac{k}{m}}\)
And so we find:
\(\displaystyle c_2=\tan^{-1}\left(\frac{y_0\left(k-m\omega_0^2 \right)}{v_0\left(k-m\omega_0^2 \right)-F_0\omega_0}\sqrt{\frac{k}{m}} \right)\)
For simplicity, lets define:
\(\displaystyle \alpha\equiv\frac{y_0\left(k-m\omega_0^2 \right)}{v_0\left(k-m\omega_0^2 \right)-F_0\omega_0}\sqrt{\frac{k}{m}}\)
Hence:
\(\displaystyle c_2=\tan^{-1}(\alpha)\)
and then we find:
\(\displaystyle c_1=\frac{y_0\sqrt{\alpha^2+1}}{\alpha}\)
Thus, the solution satisfying the IVP is:
\(\displaystyle y(t)=\frac{y_0\sqrt{\alpha^2+1}}{\alpha}\sin\left(\sqrt{\frac{k}{m}}t+\tan^{-1}(\alpha) \right)+\frac{F_0}{k-m\omega_0^2}\sin\left(\omega_0 t \right)\)
We see we have the sum of two sinusoidal terms, with differing amplitudes, periods and phase shifts. I really can't think of a means to determine the amplitude of the resulting combination. Perhaps someone else can suggest a means of doing so, perhaps using Fourier analysis. | CommonCrawl |
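For anyone who wants to double-check the algebra numerically, here is a minimal sanity check in Python/SciPy. The parameter values for \(m, k, F_0, \omega_0, y_0, v_0\) below are arbitrary choices of mine, not part of the original problem; the script integrates the ODE directly and compares it with the closed-form solution derived above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters (y0 != 0 so that alpha is well defined,
# and k != m*w0**2 so the non-resonant particular solution applies).
m, k, F0, w0 = 1.0, 4.0, 0.5, 1.2
y0, v0 = 0.1, 0.2

w = np.sqrt(k / m)
alpha = y0 * (k - m * w0**2) * w / (v0 * (k - m * w0**2) - F0 * w0)
c2 = np.arctan(alpha)
c1 = y0 * np.sqrt(alpha**2 + 1) / alpha

def closed_form(t):
    return c1 * np.sin(w * t + c2) + F0 / (k - m * w0**2) * np.sin(w0 * t)

def rhs(t, state):          # state = [y, y']
    y, v = state
    return [v, (F0 * np.sin(w0 * t) - k * y) / m]

t = np.linspace(0.0, 20.0, 2001)
num = solve_ivp(rhs, (0.0, 20.0), [y0, v0], t_eval=t, rtol=1e-9, atol=1e-12)
print(np.max(np.abs(num.y[0] - closed_form(t))))   # should be tiny (~1e-7 or less)
```

Plotting either curve also makes the point of the discussion visible: the motion is a superposition of two sinusoids with different frequencies, so in general it is not of the simple form \(A\sin(\omega t + f)\).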
Allais for all: Revisiting the paradox in a large representative sample
Steffen Huck1,2 &
Wieland Müller3,4
Journal of Risk and Uncertainty volume 44, pages 261–293 (2012)
We administer the Allais paradox questions to both a representative sample of the Dutch population and to student subjects. Three treatments are implemented: one with the original high hypothetical payoffs, one with low hypothetical payoffs and a third with low real payoffs. Our key findings are: (i) violations in the non-lab sample are systematic and a large bulk of violations is likely to stem from non-familiarity with large payoffs, (ii) we can identify groups of the general population that have much higher than average violation rates; this concerns mainly the lowly educated and unemployed, and (iii) the relative treatment differences in the population at large are accurately predicted by the lab sample, but violation rates in all lab treatments are about 15 percentage points lower than in the corresponding non-lab treatments.
This paper presents evidence on the consistency of risk preferences with expected utility theory in a representative population sample. We find that consistency increases with task familiarity and is linked to several personal characteristics such as education, income and asset holdings. Moreover, we investigate the external validity of a laboratory experiment with a student population that implemented the same choice problems as our household panel study. We find that, in line with studies on other biases, deviations from rationality observed in the lab provide a lower bound for deviations in the population at large.
Recently, several studies have made significant progress in understanding risk preferences in populations, making use of innovative survey methods and field experiments (Harrison and List 2004) including game shows with large stakes (Post et al. 2008; Andersen et al. 2008). From the perspective of these studies, the present paper takes one step back by focussing on consistency of risk preferences with expected utility theory in a representative subject pool—well over 1,400 members of the CentER Panel, a representative sample of the Dutch population. We do this by falling back on the oldest consistency test of all—the Allais paradox (Allais 1953). Our results help to understand the reliability and robustness of investigations into the actual distribution of risk preferences in populations.
Our research strategy is threefold. First, we implement three different treatments in the main experiment with the panel. We analyze the original Allais question with payoffs of millions of Euros that, just as when Allais asked Savage, were purely hypothetical. In our second treatment we scaled the payments down but kept them hypothetical. Our third treatment used the same downscaled payoffs but paid them out for real. This enables us to examine to what extent violations are driven by lack of monetary incentives, on the one hand, and non-familiarity with large sums of money on the other.
Second, we are able to exploit the wide range of background information that is available for our subjects in order to study the roots of violations.Footnote 1 Which personal characteristics are correlated with violations? Are violations a matter of insufficient education or limited experience with financial decision making? Can we identify 'problem groups' that are, perhaps, more likely to suffer (in particular late in life) from erroneous financial decision making?
Third, we conduct a laboratory experiment with the usual laboratory subject population (students) employing the same design that we used in the panel experiment. Thus, we are able to examine the external validity of a laboratory experiment in a clear and detailed manner. In particular, we can compare whether and how a lab study can tell us something about the population at large.
Pursuing our threefold research strategy we are, thus, able to present very detailed and comprehensive evidence on the Allais paradox. Our results are useful for several practical issues: (1) Our results point to a number of conditions that make standard theoretical predictions more likely to hold, (2) Our results identify certain parts of the population that, due to inconsistencies, may have difficulties in making sound financial decisions, and (3) Our results contribute to a better understanding of what can be reliably learned from laboratory experiments.
Along the first dimension of our research strategy we find that violations in the original paradox are likely to be driven by very high payoffs with which, in real life, virtually nobody has any practical experience. Violations in the original Allais problem are twice as high as in both downscaled versions. This effect has been observed before with student samples (Conlisk 1989); we show that the pattern extends to the general population and across socioeconomic characteristics. Perhaps this result is not surprising as it simply stresses that economic theory can be expected to work much better in environments with which agents have experience and are, thus, well-adapted. On the other hand, we find no substantial difference between the two downscaled versions. Whether subjects are incentivized or not, violations are much lower in both cases.Footnote 2
Along the second dimension, we are able to identify a whole array of personal characteristics that correlate with inconsistent decision making. Education, occupation, income and asset holdings do all correlate with inconsistent decision making and in each case the direction of effects is as one would guess. The better educated are more consistent and so are those in employment, those who earn more and those who hold financial assets.
Finally, our methodological contribution reveals that the laboratory results are rather useful in predicting behavior in a general population. First, the relative treatment differences are precisely the same for both populations, panel and lab. Second, as demonstrated in a number of other studies (see Gächter et al. (2008) for a survey) the violations of standard theory observed in the lab provide a lower bound for violations observed in the population at large.Footnote 3
The remainder of the paper is organized as follows. In Section 1, we describe the main characteristics of the CentERpanel and introduce the experimental design. In Section 2 we present our results obtained with the panel. We first give a quick overview of the results and then present a more detailed analysis, based on regression results, that also accounts for the effect of sociodemographic characteristics. In Section 3 we introduce our lab results and compare them to those obtained in the panel. Section 4 concludes.
Design and data collection
We administer the original "Allais questions," which consist of two pairwise lottery choices. Consider the following two choice problems. First, a subject is asked to choose between lotteries A and A ∗ where
$$ A= \rm{Certainty\; of \; €\; 1\; Million} \rm{\quad and\quad }A^{\ast }=\left \{ \begin{array}{l} \rm{ 1/100\; Chance\; of\; €\;0} \\ \rm{89/100\; Chance \;of\; €\; 1\; Million} \\ \rm{10/100\; Chance \;of\; €\; 5 \;Million} \end{array} \right. $$
Second, a subject is asked to choose between lotteries B and B ∗ where
$$ \begin{array}{rll} B&=&\left \{ \begin{array}{l} \rm{89/100 \;Chance \;of €\; 0} \\ \rm{11/100 \;Chance \;of\; €\; 1 Million} \\ \qquad \end{array} \right. \rm{\quad and\quad }\\ B^{\ast }&=&\left \{ \begin{array}{l} \rm{90/100 \;Chance\; of \;€\; 0} \\ \qquad \\ \rm{10/100 \;Chance\; of\; €\; 5 \;Million} \end{array} \right. \end{array} $$
Of the four possible answers AB, A ∗ B ∗ , AB ∗ , and A ∗ B, only the first two are consistent with expected utility theory (henceforth, EUT), whereas the last two are not.Footnote 4 Many laboratory experiments have shown that violations of EUT are frequent and that a larger share of subjects violating EUT chooses AB ∗ instead of A ∗ B.Footnote 5
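To make the consistency requirement explicit (a standard textbook derivation added here for exposition; it is not text from the original paper), let u denote a von Neumann–Morgenstern utility function, with payoffs expressed in millions of euros. Then

$$ \begin{array}{lcl} EU(A)\ge EU(A^{\ast }) &\Longleftrightarrow & u(1)\ge .01\,u(0)+.89\,u(1)+.10\,u(5)\\ &\Longleftrightarrow & .11\,u(1)\ge .01\,u(0)+.10\,u(5),\\ EU(B)\ge EU(B^{\ast }) &\Longleftrightarrow & .89\,u(0)+.11\,u(1)\ge .90\,u(0)+.10\,u(5)\\ &\Longleftrightarrow & .11\,u(1)\ge .01\,u(0)+.10\,u(5). \end{array} $$

Both comparisons reduce to the same inequality, so an expected utility maximizer must either choose A together with B or A ∗ together with B ∗ ; the mixed answers AB ∗ and A ∗ B are therefore inconsistent with EUT.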
We have six simple treatments using a between-subjects design. To introduce these treatments, consider the following lotteries over three outcomes of monetary payoffs with probabilities as above, i.e., A = (0,1,0), A ∗ = (.01,.89,.10), B = (.89,.11,0), B ∗ = (.90,0,.10). Our three treatments were then as follows:
Treatment HighHyp: Original Allais questions with high hypothetical payoffs of € 0, € 1 million,and € 5 million.
Treatment LowHyp: Allais questions with low hypothetical payoffs of € 0, € 5, and € 25.
Treatment LowReal: Allais questions with low real payoffs of € 0, € 5, and € 25.
Note that the amounts of money we use in these treatments are the same as in Conlisk (1989) with the sole difference that he used dollars instead of euros. For all three treatments we had two sub treatments reversing the order of decisions. As we do not find any order effects in the data we pool the data throughout.
We collected data from a representative sample of the Dutch population. The experiments were conducted by CentERdata—an institute for applied economic and survey research for the social sciences—that is affiliated with Tilburg University in the Netherlands. CentERdata carries out its survey research mainly by using its own panel called CentERpanel. This panel is Internet based and consists of some 2000 households in the Netherlands which form a representative sample of the Dutch population.Footnote 6 One of the advantages of the CentERpanel is that the researcher has access to background information for each panel member such as demographic and financial data. Every weekend, the panel members complete a questionnaire on the Internet from their home.
After logging on to our experiment, panel members were randomly assigned to one of the six different treatments introduced above. After being informed about the nature of the experiment, subjects decided whether or not to participate—as common with many modules of the panel. For participating subjects, the next screen introduced an example of a pair of lotteries (which were referred to as "Options"). Subjects were told that their task would be to express preference for one of the two lotteries and, additionally, how the preferred lottery would be executed.Footnote 7 When subjects indicated that they were ready to start the experiment, they were, in two consecutive screens, presented with their two Allais questions. Only after answering both Allais questions, the two preferred lotteries were played out (by the computer) and subjects were informed about the outcome of their two preferred lotteries. In the treatments with real monetary payments, subjects were paid according to the outcomes in both of their preferred lotteries.Footnote 8
In total 1676 members of the CentERpanel logged on to our experiment. Of the subjects logging on, 1426 (85.1%) subjects decided to participate in our experiment while 250 (14.9%) subjects decided not to participate. Table 1 shows descriptive statistics of our sample. The column labeled "Participation" in Table 1 shows descriptive statistics of participating subjects in each of the three main treatments as well as statistics of subjects who chose not to participate in the experiment. The data in Table 1 is grouped according to gender, age, education, occupation and income. (The column labeled "Violation" shows statistics for participating subjects violating or not violating EUT, respectively, which we will analyze further below. It also contains tests on the role of socioeconomic characteristics for EUT violation which will also be discussed later.)
Table 1 Descriptive statistics of the samples
Concentrating on descriptive statistics for participating subjects in Table 1, we note that by and large most variables are relatively identically distributed across treatments. However, in some of the age and income brackets as well as in the category savings account, there is some more variation. A comparison of the descriptive statistics in the columns describing participating subjects with those of non-participating shows that there are no big differences except for the age categories. Basically, older people appear to be a little more reluctant to participate.
Since this causes concern about sample selection problems, we ran, for all regressions reported below, Heckman (1976) selection models using the variable "Ratio" as one of the exclusion variables. The variable "Ratio" measures the proportion of questionnaires completed by panel members in the three months preceding our experiment. This variable can be assumed to affect the participation decision but not the decisions taken in the experiment. For none of the regressions did we find evidence of a selection bias.Footnote 9
Descriptives
A summary of the experimental results is given in Table 2. The table shows both the absolute frequency of choices (left part) and the relative frequency of choices (right part). As mentioned in the introduction, we will concentrate our analysis on the incidence of subjects' EUT violation in all treatments. However, we will also shortly answer the question whether violations, once they occur, are systematic.
Table 2 Summary of experimental results in the panel
Violation of EUT
Note that the right-most column in Table 2 indicates that violations of EUT are observed in all treatments. In fact, we observe 49.5%, 19.6% and 25.6% violations of EUT in treatments HighHyp, LowHyp, and LowReal, respectively. Furthermore, in all treatments we observe that the fraction of EUT-violating AB ∗ answers is higher than the fraction of EUT-violating A ∗ B answers. The Z-statistic proposed in Conlisk (1989) indicates that the first fraction is significantly higher than the latter fraction at p < 0.001 in all treatments. An interesting question we can answer with our data is whether the differences we report here for the aggregate data are "general" in the sense of applying across socioeconomic attributes or whether they are driven by only some of those attributes. The answer is provided in Tables 5, 6, 7 and 8 in Appendix B, which are structured as Table 1 and provide—for all data and for the three treatments separately—the relative frequency of choices for subjects with various socioeconomic attributes. We observe that EUT violations occur across all socioeconomic attributes and that the "Allais" pattern of more AB ∗ violations than A ∗ B violations is significant for most socioeconomic attributes in all treatments (see the column labeled "Sign. of Conlisk's Z-statistic" in Tables 5–8 in Appendix B). We conclude that, as in earlier studies, violations of EUT are observed and that they are systematic in the sense that AB ∗ is chosen more often than A ∗ B, mostly independent of socioeconomic background characteristics. To facilitate comparison, note that Conlisk (1989), using a student sample for his "Basic Version" (which is comparable to our treatment HighHyp), reports the following relative frequencies of AB, A ∗ B ∗ , AB ∗ , and A ∗ B choices: 7.6%, 41.9%, 43.6%, and 6.8%. Thus, he observes EUT violation in 50.4% of the cases, which compares to 45.5% in our panel treatment HighHyp.
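To illustrate the kind of within-sample comparison involved, the following is a generic large-sample sign test in Python, offered only as a sketch: it is not Conlisk's (1989) Z-statistic, whose exact definition should be taken from that paper, and the counts used below are placeholders rather than our data.

```python
import math

# Placeholder counts of violators choosing AB* and A*B (not the paper's data).
n_ABstar, n_AstarB = 130, 30
n = n_ABstar + n_AstarB
# Under the null that both violation types are equally likely, the number of
# AB* answers among violators is Binomial(n, 1/2); use the normal approximation.
z = (n_ABstar - n / 2) / math.sqrt(n / 4)
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z >= z)
print(round(z, 2), p_one_sided)
```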
The effect of high versus small hypothetical payoffs
Next consider the effect of high versus small hypothetical payoffs on the extent of EUT violation. For this purpose we compare the rates of EUT violations in treatments HighHyp and LowHyp. Table 2 shows that the rate of EUT violations drops from 49.4% in treatment HighHyp to 19.6% in treatment LowHyp. The D-statistic proposed in Conlisk (1989) indicates that this difference is highly significant at p < 0.0001 (D = 9.115). Inspecting the relative frequencies of choices in Table 2 shows that moving from HighHyp to LowHyp sharply increases the fraction of choices consistent with expected value maximization (A ∗ B ∗ ) at the expense of all three other possible responses. In particular, many more subjects prefer the payoff-maximizing choice A ∗ over A when (hypothetical) payoffs become small. A possible explanation of this result is that subjects in treatment LowHyp can be expected to be more familiar with the lower amounts of money, leading them to make fewer mistakes.Footnote 10 Again, with our data we can check whether the result regarding the effect of varying the (hypothetical) stake size just shown for the aggregate data also applies when the data is broken down to various socioeconomic characteristics. Column 3 labeled "Significance of Conlisk's D-statistic HighHyp vs LowHyp" in Table 9 in Appendix B shows that the answer to this question is, with a few exceptions, yes.
The effect of (small) real versus (small) hypothetical payoffs
Finally, consider the effect of (small) real versus (small) hypothetical payoffs on the extent of EUT violation. To analyze this, compare the rates of EUT violation in treatments LowHyp and LowReal. Table 2 shows that the rate of EUT violations is 19.6% in LowHyp whereas it is 25.6% in treatment LowReal. Thus, we see a slight increase in the share of EUT violations when we move from (small) hypothetical to (small) real payoffs. The D-statistic in Conlisk (1989) indicates that this difference is significant (D = − 1.6716, p = 0.047). In contrast, Harrison (1994) and Burke et al. (1996) report that the use of low real instead of low hypothetical payoffs reduces the extent of EUT violation. For a broader overview on how incentives affect behavior in decisions under risk, see Camerer (1995, p. 634f). Note that the result regarding the switch from (small) hypothetical to (small) real payoffs on the extent of EUT violation is usually not significant when one zooms in on socioeconomic characteristics, as shown in column 4 labeled "Significance of Conlisk's D-statistic LowHyp vs LowReal" in Table 9 in Appendix B.
Note that our results concerning the extent of EUT violation and the effect of high versus small hypothetical payoffs are not entirely new. We show, however, that they extend to a general population and across socioeconomic characteristics. This should be of interest due to the current discussion about the relationship between results obtained in the lab and those obtained in other settings (see, e.g., Levitt and List 2007).
Let us now turn to providing answers to the first of the two new and main dimensions of our research strategy by inspecting the role of socioeconomic background variables in subjects' behavioral responses to the Allais questions. Refer to Table 1 that under the heading "Violation" shows descriptive statistics of the subsamples violating and not violating EUT as well as p-levels of χ 2 tests. (For the latter, see the notes below Table 1.) Regarding gender, Table 1 reveals that women are slightly more likely to violate EUT than men. With respect to age, Table 1 does not suggest a clear effect although we note that the age bracket's [35–44] relative share is higher in the panel's subpopulation not violating EUT. Regarding education levels, those with lower secondary education and those subjects with a university degree stand out somewhat in the panel. The former because they violate EUT more often and the latter because they violate EUT less often. The most noticeable effect regarding occupation is that those employed on a contractual basis have a higher relative share in the subsample not violating EUT. Finally, with respect to household income, Table 1 does not suggest a clear effect.
Moreover, refer to the rightmost column labeled "p-value, χ 2" in Table 1 that shows p-levels of χ 2 tests for differences between proportions of violating and non-violating subjects in the category listed in column 1.Footnote 11 The χ 2 tests indicate the strongest differences in violation behavior in the categories of education, occupation and household income.
Econometrics and the role of socioeconomic characteristics
To test for across-treatment differences controlling for subjects' sociodemographic characteristics and to check whether any of these characteristics are correlated with behavior, we ran probit regressions with the variable "Violate" as the dependent variable. "Violate" is equal to 1 if a subject's answer to the Allais questions violates EUT (i.e., answers A ∗ B or AB ∗ ), and is equal to 0 otherwise (i.e., answers AB or A ∗ B ∗ ). The background variables we include in the regression are the ones shown in Table 1 above. The results are shown in Table 3 which reports marginal effects. Regression (1) includes all data whereas regressions (2) to (4) show results for each of the three treatments separately. Recall from the end of Section 2 that we did not find evidence for a selection bias due to non-response.
Table 3 Results of probit regressions on violation of EUT
Let us first briefly reconsider across-treatment differences. For this purpose, refer to regression (1) in Table 3 which includes all data and controls for background variables. Importantly, note that in regression (1) the omitted treatment dummy is the one for LowHyp. Inspecting the treatment coefficients, we note that the coefficient for HighHyp is positive and big (0.302) and highly statistically significant whereas the coefficient of LowReal is also positive (0.053) but rather small and only borderline significant.
To analyze the effect of socioeconomic background variables econometrically, we examine regression (1) in Table 3. We make the following observations.
Controlling for other characteristics, gender and age have no significant influence on the extent of EUT violation.Footnote 12
Regarding education, we find a strong tendency for violations to be reduced with further education.Footnote 13 Overall, there is a strong effect of higher education that also shows in the separate specifications for both treatments with low payoffs. In LowHyp, everything that improves on primary education goes hand in hand with reduced violations. Only in HighHyp is there no effect of education. This suggests an interesting interaction effect between experience with a decision domain and education. In the absence of any experience (as in HighHyp), education on its own does little to improve performance. Only when coupled with experience is education aligned with consistency.
Of the various occupational affiliations listed in Table 3, we find that the unemployed and 'others' do much worse than the employed, self-employed and freelancers.Footnote 14 This is more pronounced in treatments with hypothetical payoffs.
Regarding income, we notice that having a higher gross monthly household income (vis-à-vis the control group with the lowest gross monthly household income) goes along with reduced EUT violations.Footnote 15 Interestingly, this is particularly pronounced in the treatment LowReal when actual money is at stake. (One could have conjectured that it would be the other way round as the marginal utility of making some money and, hence, the incentive to think a little harder might be higher for those on low incomes. Alas, it does not work this way.)
Finally, subjects holding assets have significantly lower EUT violations (by about 8%) whereas subjects with a savings account have significantly higher EUT violations (by about 5%). Maybe not surprisingly, subjects holding assets tend to be expected value maximizers (mainly choosing A ∗ B ∗ ) while subjects who only have a savings account display "Allais" behavior tending toward the choice of AB ∗ .Footnote 16
All in all, a picture emerges that is reminiscent of recent studies by Benjamin et al. (2006), Burks et al. (2009) and Dohmen et al. (2010), who show that a range of behavioral biases are correlated with (or may even stem from) cognitive limitations and low IQ. We find that violations are more prevalent among those who have little education, are unemployed, are on low incomes, and have no significant asset holdings. This is, of course, particularly worrying, as imprudent financial decision making and bad planning for retirement have the worst consequences in that group.
In Appendix C we complement the above analysis by running multinomial logit regressions using all four answers AB, A ∗ B ∗ , AB ∗ , and A ∗ B, and choosing the answer representing expected value maximization, A ∗ B ∗ , as the base outcome. The results (whose interpretation is less straightforward) are shown in Tables 10, 11, 12 and 13.
The lab experiment
As mentioned in the introduction, the third dimension of our research strategy is concerned with the external validity of laboratory experiments, which are typically carried out with rather homogeneous subject pools. Of course, the preceding section has shown that there are important sources of heterogeneity in the population at large that simply cannot be detected when the subject pool is restricted to students. The same is, of course, true for any highly selected convenience sample. But what about the questions we analyzed first—the effects of different treatments, the differences between high and low and real and hypothetical payoffs? Would a lab experiment give us reliable results for such questions (as has long been implicitly assumed in the experimental community, perhaps negligently and without much testing)? To shed more light on these issues we conducted an additional lab experiment in the laboratory of Tilburg University using Dutch-speaking student subjects drawn from the normal subject pool.
The lab experiment was conducted in the same way as the experiment using the CentERpanel. That is, student subjects did the experiment using a web browser in the lab and using the same screens as the subjects in the panel. However, there were two small exceptions. First, lab subjects received a 10 Euro show-up fee. (Potential participants were informed about this in the invitation E-mail.) But of course, mirroring the panel design again, only subjects assigned to treatments with real payment had the chance to earn additional money during the experiment. This was not announced prior to the experiment. Second, lab subjects were not offered the choice of not participating in the experiment once they had reported to the lab and the experiment was started. This was done in an effort to mimic the normal procedures in lab experiments where by reporting to the lab, a subject usually confirms his or her decision to participate. Note that when we move from the panel to the lab sample, both the subject pool and the environment changes. We deliberately accepted these two simultaneous changes as our aim was to contrast the results obtained in the panel with those obtained in a normal lab experiment.Footnote 17
After the experiment we asked subjects to fill in a questionnaire in which we elicited some basic background information. Naturally, the information we collected from lab subjects is very limited and cannot be compared in scope and quality to the background information available from members of CentERpanel. The lab experiments were conducted in December 2006 using 223 subjects in total.
As in the panel experiment, we did not observe any order effects of presenting the Allais questions, so we present only pooled data in Table 4, which shows the same information for the lab data that Table 2 showed for the panel. We make the following observations. First, as in the panel experiments, we observe EUT violations in all treatments, although to a much lesser degree.Footnote 18 This mirrors the main result in Gächter et al.'s (2008) meta-study: violations of orthodox theoretical predictions and biases observed in the lab form a lower bound for the violations and biases observed in the population at large. Second, as in the panel, moving from high hypothetical payoffs to low hypothetical payoffs reduces the extent of EUT violation significantly (p < 0.001, D = 4.881). Third, moving from low hypothetical payoffs to low real payoffs increases the extent of EUT violation slightly but insignificantly (p = 0.226, D = − 0.7525). The similarities between the observations in the panel and in the lab are evident.
Table 4 Summary of experimental results in the lab
Figure 1 shows the shares of choices violating EUT in the two subsamples. It appears that the graph indicating the share of EUT violation in the panel can quite accurately be obtained by shifting the graph indicating the share of EUT violation in the lab upwards by about 15 percentage points.Footnote 19 This means that although the share of EUT violations is consistently higher in the panel than in the lab, the comparative statics results of moving from one treatment to another could have been reliably predicted by the lab experiments.
The share of choices violating EUT in the panel and the lab. Note: HighHyp stands for high hypothetical payoffs, LowHyp for low hypothetical payoffs, and LowReal for low real payoffs
Using a representative sample of the Dutch population we revisit the Allais paradox. Our main results are threefold. First, as in previous lab samples, the violations of EUT are systematic in the population at large and much lower when stakes are low. Second, there is considerable heterogeneity in the population and violations are particularly prevalent among the lowly educated, those poor in income and asset holdings, and the unemployed. Third, comparing the panel results with a laboratory experiment we find that the relative treatment differences are identical in the panel and the lab but violation rates in all lab treatments are about 15 percentage points lower than in the corresponding non-lab treatment.
Our findings appear to imply two general messages. First, laboratory experiments with convenience samples of students might be more useful for studying relative effects than absolute levels (see also Levitt and List (2007), who make a similar point in the context of social preferences). When it comes to the absolute measurement of behavior, it appears that lab results paint too optimistic a picture. The population at large, it turns out, is less consistent with EUT than student samples are. Second, our results suggest that the predictive power of EUT in a general population is correlated with socioeconomic characteristics. In particular, parts of the population that are more likely to experience economic hardship are less consistent.
Of course, there exists a large literature on non-expected utility theories such as Kahneman and Tversky's (1979) prospect theory or Machina's (1982) fanning-out theory (both of which can explain the Allais paradox) or Viscusi's (1989) prospective reference theory which predicts the paradox. Earlier laboratory experiments (see Camerer (1995) or Starmer (2000) for surveys) have documented the Allais paradox in student samples. Our paper highlights that, if anything, these studies underestimate the true prevalence of the paradox in general populations and indicates how violations are correlated with observable characteristics.
Several other studies have also used the CentER panel as a subject pool. Let us briefly mention some of these studies. Hey (2002) and Carbone (2005) analyze more complicated and sequential individual decision making tasks and do not find any background variable systematically influencing behavior. Bellemare and Kröger (2007) study a trust game and find "that heterogeneity in behavior is characterized by several asymmetries—men, the young and elderly, and low educated individuals invest relatively less, but reward significantly more investments." (p. 183) von Gaudecker et al. (2011a) elicit risk preferences and report that older people, women, the relatively uneducated, and those with lower income are more risk averse. For another study on individual risk attitudes using a large and representative German sample, see Dohmen et al. (2011).
For early studies of the Allais paradox see, e.g., MacCrimmon (1968), Slovic and Tversky (1974), Allais and Hagen (1979) and Kahneman and Tversky (1979). For the effect of downscaled payoffs see Conlisk (1989), Starmer and Sugden (1991), Harrison (1994), Burke et al. (1996), Fan (2002), and van de Kuilen and Wakker (2006).
Almost all of the experiments on the Allais paradox conducted so far have used students as their subjects. There are two notable exceptions. List and Haigh (2005) test the Allais paradox both with students and professional traders from the Chicago Board of Trade. They report that both students and professional traders show Allais paradox behavior, but find that traders do so to a smaller extent. Fatas et al. (2007) use students and politicians and report similar results with students being more prone to Allais paradox behavior.
To see this, note that adding 0.89u(€0) − 0.89u(€1M) to both sides of the inequality u(A) = u(€1M) > 0.01u(€0) + 0.89u(€1M) + 0.1u(€5M) = u(A∗) yields u(B) = 0.89u(€0) + 0.11u(€1M) > 0.9u(€0) + 0.1u(€5M) = u(B∗).
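As a quick numerical illustration of this footnote (an editorial addition, not part of the original paper), the sketch below checks that under EUT the comparison of A with A∗ and the comparison of B with B∗ reduce to the same inequality, so the Allais pattern of preferring A to A∗ while preferring B∗ to B is never permitted; the normalization of the utility scale is an assumption made only for the example.

```python
# Numerical check that u(A) > u(A*) holds exactly when u(B) > u(B*):
# both differences equal 0.11*u(1M) - 0.01*u(0) - 0.10*u(5M).
import numpy as np

u0, u5 = 0.0, 1.0                       # normalize u(0 euro) = 0, u(5M euro) = 1
for u1 in np.linspace(0.01, 0.99, 99):  # candidate values of u(1M euro)
    uA     = u1                                  # A: 1M for sure
    uAstar = 0.01 * u0 + 0.89 * u1 + 0.10 * u5   # A*: 1% 0, 89% 1M, 10% 5M
    uB     = 0.89 * u0 + 0.11 * u1               # B: 89% 0, 11% 1M
    uBstar = 0.90 * u0 + 0.10 * u5               # B*: 90% 0, 10% 5M
    assert (uA > uAstar) == (uB > uBstar)
print("Under EUT, A over A* and B* over B can never hold together.")
```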
See, e.g., MacCrimmon (1968), Slovic and Tversky (1974), Allais and Hagen (1979), Kahneman and Tversky (1979), Conlisk (1989), Starmer and Sugden (1991), Harrison (1994), Burke et al. (1996) and Fan (2002).
For more information about the CentERpanel and the way it is administered see http://www.uvt.nl/centerdata/en/whatwedo/thecenterpanel/.
For more details see Appendix A which contains a translation of the screens used in the treatments with low payoffs. Note that the experiment was administered in Dutch.
Note the following about payments in treatment LowReal. CentERdata reimburses the telephone costs for filling in questionnaires by exchanging "CentERpoints" (1 CentERpoint = 0.01 Euro) to panel members' private bank accounts four times a year. Although lotteries were described in Euro amounts, subjects in the treatments with real monetary earnings were informed that: "In this experiment you can earn real money that will be paid in the form of CentERpoints."
See Eckel and Grossman (2000), Bellemare and Kröger (2007), von Gaudecker et al. (2011b) and Harrison et al. (2009) for more evidence on selection issues.
Conlisk (1989) points out that this effect is in line with (a) Machina's (1982) fanning out model that predicts Allais behavior for large payoffs and (b) the observation that EUT converges to expected payoff maximization for small payoffs. Notice, however, that this consistency argument is not an explanation—for it leaves open why fanning occurs and is more dramatic in its consequences with high payoffs. Non-familiarity with high payoffs is such an explanation and may, in fact, be adequately captured by fanning out of indifference curves.
Note that for the multinomial categories in the leftmost column in Table 1, the χ 2-tests check for the joint hypothesis that the violation rates are identical across all categories.
In light of recent findings about sharply declining numeracy skills in the (British) population above 55 (Banks 2006) this is perhaps slightly surprising.
Wald tests indicate, however, that the effects of the education levels below university degree listed in Table 3 are not statistically different.
A Wald test indicates that the effect of these two occupations is not statistically different.
Wald tests indicate that the effects of the three income variables listed in Table 3 are not statistically different. Furthermore, controlling for household size leaves the regression results reported in Table 3 virtually unchanged.
To look at the effect of holding assets or a savings account more closely, we defined the variable "only assets," which equals 1 if a subject holds assets but has no savings account (otherwise it equals 0), the variable "only savings account," which equals 1 if a subject has a savings account but holds no assets (otherwise it equals 0), and the variable "assets & savings account," which equals 1 if a subject holds assets and has a savings account (otherwise it equals 0). Hence, the reference group consists of those subjects who neither hold assets nor have a savings account. Replacing the variables "assets" and "savings account" in regression (1) in Table 3 by the new variables "only assets," "only savings account," and "assets & savings account" leaves the other variables of regression (1) almost unchanged (including significance levels) and shows that while the coefficients of the variables "only assets" and "assets & savings account" are negative (− 0.073 and − 0.032) but insignificant, the coefficient of the variable "only savings account" is positive (0.055) and significant at the 5% level. So it is not only the financially savvy who hold assets who do comparatively well but also people without any savings—perhaps because, having no financial cushion, they cannot afford to make many mistakes.
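For illustration, the mutually exclusive dummies described in this footnote can be constructed from two 0/1 indicators as sketched below; the column names are assumptions and are not the CentERpanel variable names.

```python
# Constructing mutually exclusive asset/savings dummies from two 0/1 indicators.
import pandas as pd

df = pd.DataFrame({
    "assets":          [1, 0, 1, 0],
    "savings_account": [0, 1, 1, 0],
})
df["only_assets"] = ((df["assets"] == 1) & (df["savings_account"] == 0)).astype(int)
df["only_savings_account"] = ((df["assets"] == 0) & (df["savings_account"] == 1)).astype(int)
df["assets_and_savings_account"] = ((df["assets"] == 1) & (df["savings_account"] == 1)).astype(int)
# The reference group is the rows where all three new dummies are 0
# (no assets and no savings account).
print(df)
```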
von Gaudecker et al. (2011b) offer an analysis of the individual effects of implementation mode and of subject pool selection in a risk preference elicitation study and find that differences in behavior are due to selection and not implementation mode.
Again, we observe that the fraction of EUT-violating AB ∗ answers is significantly higher than the fraction of EUT-violating A ∗ B answers in all lab treatments (p < 0.001, Conlisk's (1989) Z-statistic).
The difference in the extent of EUT violation between the panel and the lab is significant for all three treatments (HighHyp: p = 0.014, D = − 2.1732; LowHyp: p < 0.001, D = − 5.2220; LowReal: p < 0.001, D = − 4.6935).
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503–546.
Allais, M., & Hagen, O. (Eds.) (1979). Expected utility hypotheses and the Allais paradox. Dordrecht: Reidel.
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2008). Risk aversion in game shows. In J. C. Cox, & G. W. Harrison (Eds.), Risk aversion in experiments (Vol. 12). Greenwich: JAI Press. Research in Experimental Economics.
Banks, J. (2006). Economic choices, capabilities and outcomes at older ages. Fiscal Studies, 27, 281–311.
Bellemare, C., & Kröger, S. (2007). On representative social capital. European Economic Review, 51, 183–202.
Benjamin, D. J., Brown, S. A., & Shapiro, J. M. (2006). Who is "behavioral"? Cognitive ability and anomalous preferences. Working paper.
Burke, M. S., Carter, J. C., Gominiak, R. D., & Ohl, D. F. (1996). An experimental note on the Allais paradox and monetary incentives. Empirical Economics, 21, 617–632.
Burks, S. V., Carpenter, J. P., Götte, L., & Rustichini, A. (2009). Cognitive skills affect economic preferences, strategic behavior, and job attachment. Proceedings of the National Academy of Science, 106, 7745–7750.
Camerer, C. (1995). Individual decision making. In J. H. Kagel, & A. E. Roth (Eds.), The handbook of experimental economics (pp. 587–703). Princeton: Princeton University Press.
Carbone, E. (2005). Demographics and behaviour. Experimental Economics, 8, 217–232.
Conlisk, J. (1989). Three variants on the Allais example. American Economic Review, 79, 392–407.
Dohmen, T., Falk, A., Huffman, D., & Sunde, U. (2010). Are risk aversion and impatience related to cognitive ability? American Economic Review, 100, 1238–1260.
Dohmen, T., Falk, A., Huffman, D., Sunde, U., Schupp, J., & Wagner, G. G. (2011). Individual risk attitudes: Measurement, determinants and behavioral consequences. Journal of the European Economic Association, 9, 522–550.
Eckel, C., & Grossman, P. (2000). Volunteers and pseudo-volunteers: The effect of recruitment method on subjects' behavior in experiments. Experimental Economics, 3, 107–120.
Fan, C.-P. (2002). Allais paradox in the small. Journal of Economic Behavior and Organization, 49, 411–421.
Fatas, E., Neugebauer, T., & Tamborero, P. (2007). How politicians make decisions: A political choice experiment. Journal of Economics, 92, 167–196.
Gächter, S., Huck, S., & Weizsäcker, G. (2008). Socio-demographics and choice in experimental economics. Mimeo.
Harrison, G. W. (1994). Expected utility and the experimentalists. Empirical Economics, 19, 223–253.
Harrison, G. W., & List, J. A. (2004). Field experiments. Journal of Economic Literature, 42, 1013–1059.
Harrison, G. W., Lau, M. I., & Rutström, E. E. (2009). Risk attitudes, randomization to treatment, and self-selection into experiments. Journal of Economic Behavior and Organization, 70, 498–507.
Heckman, J. (1976). The common structure of statistical models of truncation, sample selection, and limited dependent variables and a simple estimator for such models. Annals of Economic and Social Measurement, 5, 475–492.
Hey, J. D. (2002). Experimental economics and the theory of decision making under risk and uncertainty. Geneva Papers on Risk and Insurance Theory, 27, 5–21.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world. Journal of Economic Perspectives, 21, 153–174.
List, J. A., & Haigh, M. S. (2005). A simple test of expected utility theory using professional traders. Proceedings of the National Academy of Science, 102, 945–948.
MacCrimmon, K. R. (1968). Descriptive and normative implications of the decision-theory postulates. In K. Borch, & J. Mossin (Eds.), Risk and uncertainty (Chapter 1). London: Macmillan.
Machina, M. J. (1982). Expected utility analysis without the independence axiom. Econometrica, 50, 277–323.
Post, T., van den Assem, M. J., Baltussen, G., & Thaler, R. H. (2008). Deal or no deal? Decision making under risk in a large-payoff game show. American Economic Review, 98, 38–71.
Slovic, P., & Tversky, A. (1974). Who accepts Savage's axiom? Behavioral Sciences, 19, 368–373.
Starmer, C. (2000). Developments in non-expected utility theory: The hunt for a descriptive theory of choice under risk. Journal of Economic Literature, 38, 332–382.
Starmer, C., & Sugden, R. (1991). Does the random-lottery incentive system elicit true preferences? An experimental investigation. American Economic Review, 81, 971–978.
van de Kuilen, G., & Wakker, P. P. (2006). Learning in the Allais paradox. Journal of Risk and Uncertainty, 13, 155–164.
Viscusi, W. K. (1989). Prospective reference theory: Toward an explanation of the paradoxes. Journal of Risk and Uncertainty, 2, 235–264.
von Gaudecker, H.-M., van Soest, A., & Wengström, E. (2011a). Heterogeneity in risky choice behaviour in a broad population. American Economic Review, 101, 664–694.
von Gaudecker, H.-M., van Soest, A., & Wengström, E. (2011b). Experts in experiments: How selection matters for estimated distributions of risk preferences. IZA Discussion Paper No. 5575.
We thank Marcel Das and Marika Puumala of CentERdata (Tilburg University) for their most efficient support in collecting the data. Furthermore, we thank W. Kip Viscusi, anonymous referees, Johannes Binswanger, Oliver Kirchkamp, Tobias Klein, Sabine Kröger, Gijs van de Kuilen, Imran Rasul, Jan van Ours, Stefan Trautmann, Anthony Ziegelmeyer and participants of the 3rd International Meeting on Experimental and Behavioral Economics and the IMPRS Uncertainty Summer School as well as seminar participants at Tilburg University, University of Frankfurt (Main), Humboldt University Berlin, and the University of Amsterdam for helpful comments. We gratefully acknowledge financial help from the UK's Economic and Social Research Council via ELSE and a grant on 'Behavioral Mechanism Design'. The second author acknowledges financial help from the Netherlands Organisation for Scientific Research (NWO) through a VIDI grant.
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Department of Economics, University College London, Gower Street, London, WC1E 6BT, UK
Steffen Huck
WZB, Berlin, Germany
Department of Economics and VCEE, University of Vienna, Brünnerstrasse 72, 1210, Vienna, Austria
Wieland Müller
CentER, TILEC and Tilburg University, Tilburg, Netherlands
Correspondence to Wieland Müller.
This paper was originally entitled "Allais for all: revisiting the paradox".
Appendix A: Instructions (Translation)
The experiment was administered in Dutch. Here we give a translation of the screens presented in treatment LowHyp and [LowReal]
Screen 1:
This research is conducted by researchers of Tilburg University and University College London. The questionnaire consists of two choice problems in which you will be asked to make a choice between two situations. Based on your choices and luck you may win an amount of money. Please note: In this experiment all amounts are hypothetical, in reality you cannot win any money. [In LowReal: In this experiment you can earn real money that will be paid in the form of CentERpoints.]
If you do not want to participate as a matter of principle, you can indicate this below. You will then go directly to the end of the questionnaire.
\(\bigcirc \) I continue with the questionnaire.
\(\bigcirc \) No, I do not want to participate in this questionnaire.
\(\fbox{Continue}\)
You will shortly be presented with two questions. You will be asked to make a choice between two options which provide you with different chances to win something. Please see an example of such a situation here below. In the first option you have a chance of 80% to win nothing and a chance of 20% to win 10 Euro. The second option provides you with a chance of 20% of nothing and a chance of 80% of winning 20 Euro.
$$ \begin{tabular}{@{}llll} OPTION 1: & 80\% chance & nothing & (if number is between 1 and 80) and \\ & 20\% chance & 10 euro & (if number is between 81 and 100) \\[4pt] OPTION 2: & 20\% chance & nothing & (if number is between 1 and 20) and \\ & 80\% chance & 20 euro & (if number is between 21 and 100) \end{tabular} $$
We would like to know whether you prefer Option 1 or Option 2 (in these instructions you don't have to choose yet). After you have made the choice, the computer will play out the option you chose. The computer generates a random number that is between 1 and 100. The chance distribution of the chosen option then defines how much you win with this number.
For example: in the Option 1 above you get nothing if the computer generates a number between 1 and 80 (this is indicated above in Option 1 in brackets), but if the computer generates a number between 81 and 100 you will get 10 Euro. In Option 2 you get nothing if the computer generates a number between 1 and 20, but with a number between 21 and 100 you win 20 Euro. As already mentioned, it concerns hypothetical amounts here, in reality you cannot win any money. [In LowReal: If you win something then this amount will be added to your account of CentERpoints.]
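As an editorial illustration of the payoff mechanism just described (this is not the CentERpanel software), an option can be played out by drawing a number between 1 and 100 and mapping it to a payoff via the bracketed ranges:

```python
# Playing out an option: draw a number between 1 and 100 and pay according
# to which bracket it falls in (ranges as shown on the example screen).
import random

def play_option(brackets):
    """brackets: list of (low, high, payoff_in_euro) tuples covering 1..100."""
    draw = random.randint(1, 100)
    for low, high, payoff in brackets:
        if low <= draw <= high:
            return draw, payoff
    raise ValueError("brackets do not cover the drawn number")

# OPTION 2 from the example: 20% chance of nothing, 80% chance of 20 euro.
option2 = [(1, 20, 0), (21, 100, 20)]
draw, payoff = play_option(option2)
print(f"The computer generated {draw}; you win {payoff} euro.")
```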
If you are ready to start the experiment, press "Continue."
Which of the following two options do you prefer?
$$ \begin{tabular}{@{}llll} OPTION A: & certainty & of 5 euro & (if number is between 1 and 100) \\ & & & and \\[4pt] OPTION B: & 1\% chance & of nothing & (if number is 1) and \\ & 89\% chance & of 5 euro & (if number is between 2 and 90) \\ & & & and\\ & 10\% chance & of 25 euro & (if number is between 91 and 100) \end{tabular} $$
\(\bigcirc \) Option A
\(\bigcirc \) Option B
$$ \begin{tabular}{@{}llll} OPTION C: & 89\% chance & of nothing & (if number is between 1 and 89) \\ & & & and \\ & 11\% chance & of 5 euro & (if number is between 90 and 100) \\[4pt] OPTION D: & 90\% chance & of nothing & (if number is between 1 and 90)\\ & & & and \\ & 10\% chance & of 25 euro & (if number is between 91 and 100) \end{tabular} $$
\(\bigcirc \) Option C
\(\bigcirc \) Option D
You have now made the two decisions. Press "Continue" to see the results of the options you chose.
In the first question (option A or B) you have chosen Option X ([description of the chosen option]). The computer generated the number [random number]. Thus, you have won the [in treatment LowHyp: hypothetical] amount of [...] euro with this option.
In the second question (option C or D) you have chosen Option Y ([description of the chosen option]). The computer generated the number [random number]. Thus, you have won the [in treatment LowHyp: hypothetical] amount of [...] euro with this option.
In total you have won the [in treatment LowHyp: hypothetical] amount of [...] euro in this experiment.
Do you have any comments regarding the questionnaire?
\(\bigcirc \) Yes
\(\bigcirc \) No
Screen 8 [In case the answer to the question on Screen 7 was Yes.] :
You can type in your comments below.
This is the end of the questionnaire. Thank you for your participation.
Appendix B: Relative frequencies of choices depending on socioeconomic characteristics
In Tables 5–9 we report the relative frequency of choices and the results of additional tests depending on socioeconomic characteristics. We do this for the pooled data (Table 5) and the three treatments separately (Tables 6–8), and for pair-wise across-treatment tests (Table 9). The various tests are described in the notes to the tables.
Table 5 Relative frequency of choices: all data
Table 6 Relative frequency of choices: treatment HighHyp
Table 7 Relative frequency of choices: treatment LowHyp
Table 8 Relative frequency of choices: treatment LowReal
Table 9 Test results for violations of EUT across treatments
Appendix C: Results of multinomial logit regressions
In this appendix, we report the results of multinomial logit regressions on all four answers AB, A ∗ B ∗ , AB ∗ , and A ∗ B, using expected value maximization, A ∗ B ∗ , as the base outcome and using the variables listed in column 1 in Table 3 as regressors. We perform multinomial logit regressions for the pooled data and for the three treatments separately. The results are reported in Table 10 (all data) and Tables 11–13 (treatments HighHyp, LowHyp, and LowReal). In these tables we report the relative risks of choosing outcome AB, AB ∗ or A ∗ B over the base outcome A ∗ B ∗ . That is, the three columns in Tables 10–13 show, respectively, the ratios P(answer = AB)/P(answer = A ∗ B ∗ ), P(answer = A ∗ B)/P(answer = A ∗ B ∗ ), and P(answer = AB ∗ )/P(answer = A ∗ B ∗ ), where P(.) denotes the probability of choosing a given answer. The tables should be read as follows. Refer, for example, to Table 10, which reports the results on the pooled data. Since the omitted treatment is LowHyp, the coefficient 14.116 in the second column of this table means that by moving from treatment LowHyp to treatment HighHyp, the relative risk, P(answer = AB)/P(answer = A ∗ B ∗ ), of choosing answer AB over answer A ∗ B ∗ is multiplied by 14.116. Similarly, by moving from the omitted age category 16–24 to age category 25–34, the relative risk, P(answer = AB)/P(answer = A ∗ B ∗ ), of choosing answer AB over answer A ∗ B ∗ is multiplied by 0.726.
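To make this reading of the tables concrete, the sketch below fits a multinomial logit on fabricated data with A ∗ B ∗ coded as the base outcome and exponentiates the coefficients to obtain relative risk ratios; all variable names, codes and values are assumptions made only for the example.

```python
# A hedged sketch of a multinomial logit with A*B* as the base outcome,
# reading exponentiated coefficients as relative risk ratios (as in Tables 10-13).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600
treatment = rng.choice(["LowHyp", "HighHyp", "LowReal"], size=n)
X = pd.DataFrame({
    "HighHyp": (treatment == "HighHyp").astype(int),  # LowHyp is the omitted treatment
    "LowReal": (treatment == "LowReal").astype(int),
    "female":  rng.integers(0, 2, n),
})
# Answers coded 0 = A*B* (base outcome), 1 = AB, 2 = AB*, 3 = A*B.
answer = rng.choice([0, 1, 2, 3], size=n, p=[0.35, 0.30, 0.25, 0.10])

fit = sm.MNLogit(answer, sm.add_constant(X)).fit(disp=False)
rrr = np.exp(fit.params)  # each column: one outcome relative to the base A*B*
print(rrr)
```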
Table 10 Results of a multinomial logistic regression (all treatments)
Table 11 Results of a multinomial logistic regression (Treatment HighHyp)
Table 12 Results of a multinomial logistic regression (Treatment LowHyp)
Table 13 Results of a multinomial logistic regression (Treatment LowReal)
Inspecting the results of the multinomial logit regressions for the three treatments in Tables 11–13, the most salient feature seems to be that, for the different treatments, different categories of background characteristics have a significant effect on the relative risk of choosing one answer over the base answer A ∗ B ∗ . For treatment HighHyp we note that most of the age categories show a significant correlation with the relative risk of choosing answer A ∗ B over answer A ∗ B ∗ (columns labeled "A ∗ B" in Table 11). For treatment LowHyp we observe that most of the occupation variables are significantly correlated with the relative risk of choosing answer AB ∗ over answer A ∗ B ∗ (column labeled "AB ∗ " in Table 12). Finally, for treatment LowReal it is mainly the household gross income variables that have a significant correlation with the relative risk of choosing answer A ∗ B over answer A ∗ B ∗ (column "A ∗ B" in Table 13).
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Huck, S., Müller, W. Allais for all: Revisiting the paradox in a large representative sample. J Risk Uncertain 44, 261–293 (2012). https://doi.org/10.1007/s11166-012-9142-8
Expected utility theory
Allais paradox
Common consequence effect
Field experiments
Representative sample
Halifax in two acts: The Hotel Barmecide
Morning File, Monday, October 28, 2019
October 28, 2019 By Tim Bousquet
1. Crowns
Writes Stephen Kimber:
After a crazy week of blind-siding legislation, insults, distortions, bluster, meaningless committee hearings and more fact-free moments than you'd find in a Trumpian White House, the province and its Crown attorneys are right back where they began — at the bargaining table. Well, not exactly as illustrated…
Click here to read "Stephen McNeil and the Crowns: Magic realism meets reality."
This article is for subscribers. Click here to subscribe.
El Jones has a somewhat different take on the Crown issue:
On Friday, lawyers for the province's Crown attorneys and for the government met in court as the government pursued an emergency injunction to force the prosecutors back to work. The government has declared the strike by Nova Scotia Crown prosecutors "illegal."
Outside the courtroom, Crown attorney Rick Woodburn said:
"They will take away and smash and trample on our rights. And the question is whose rights are going to be trampled next."
Jones points out that this is the very same Rick Woodburn who prosecuted the protestors at the Atlantica conference in 2007, calling the protesters "anarchists." As well, Woodburn was the prosecutor who brought the Occupy protestors to court in 2011.
In 2018, another now-put upon Crown, Adam McCulley, prosecuted union activists who protested outside the Canada Post offices on Almon Street.
While she supports them, Jones finds it rich that Woodburn and the other Crowns are now calling for "solidarity":
If the McNeil government is willing to attack nurses, teachers and lawyers — people with social power and influence — there is little hope for people with much less power, resources, or ability to organize. Eroding the bargaining rights of Crown attorneys sends a message to all workers in this province, and we should all be concerned.
It is perhaps an irony, though, that after they are legislated back to work, Crowns will continue to prosecute anti-capitalist protestors, labour organizers, and land defenders.
Woodburn is wrong, however, in formulating his call for solidarity as though the trampling of rights begins with the Crowns. The question is not only "Whose rights will be trampled next?" but also "Whose rights were trampled before?"
It was the protestors a decade ago and more who were fighting these very policies and sounding the alarm about the erosion of the public sector, the attacks on workers' rights, and austerity policies that cut funds from public services. Woodburn angrily demanded jail time for protestors to send a message and set an example for other protestors. And now he himself is an illegal protestor.
In this, he and other Crowns would perhaps do well to remember Martin Niemoller's warning. I have made some adaptations:
First they came for the anarchists, and I prosecuted them.
Then they came for the Occupy protesters, and I prosecuted them.
Then they came for the union organizers, and I prosecuted them.
Then they came for the land defenders, and I prosecuted them.
Then they came for me, and there was no one left to say anything.
Click here to read "The Crown attorneys are singing a new tune now that the Neoliberal state is attacking them."
2. Tidal Power
This turbine is similar to the version by the same manufacturer planned for deployment in Minas Passage
"The dream of commercializing renewable energy from the world's largest tides got a new lease on life last week," reports Jennifer Henderson:
The Nova Scotia government amended a law to give three tidal developers permission to pursue the goal.
The first project to test the powerful tidal waters of the Bay of Fundy, a $20 million-plus venture between Open Hydro and Emera, ended with the abandonment of a 1,000-tonne turbine on the ocean floor in 2018. Incentives to developers under the Marine Renewable Energy Act were set to expire in December 2020. Those incentives offered berth-holders at the demonstration site near Parrsboro a rate of 53 cents per kilowatt hour (kWh) — a rate more than four times higher than what Nova Scotia Power currently pays to generate electricity — over a 15-year period.
Amid pleas from tidal developers working to secure financing in the tens of millions of dollars, Energy Minister Derek Mombourquette has introduced changes to the Act that would extend those generous incentives while acknowledging it could still take "decades" for a new industry to develop.
"Through these amendments, government will issue new power purchase agreements to developers at the Fundy Ocean Research Centre for Energy (FORCE)," said Mombourquette. "FORCE is the world's leading tidal research facility and the companies there are doing ground-breaking work. Once their projects are operational, the developers at FORCE will continue to have the ability to sell electricity to the utility for up to 15 years."
Click here to read "Tidal power update: new legislation clears way for three new projects, but a tidal power industry is still 'decades' away."
3. Kicking the saving-the-world can down the road
Today, the legislature's Law Amendments Committee will take up Bill 213, the Sustainable Development Goals Act, which replaces the Environmental Goals and Sustainable Prosperity Act (EGSPA) of 2007.
Former PC cabinet minister Mark Parent was the, er, parent of EGSPA, and got all-party agreement on a sweeping list of environmental targets that were to be met by 2020. The new bill ignores most of those targets, but does address greenhouse gases, mandating that emissions in 2030 be 53% below 2005 levels, and that by 2050 Nova Scotia's GHG emissions be "net zero," "balancing greenhouse gas emissions with greenhouse gas removals and other offsetting measures."
There are a lot of things to criticize in this bill, but I want to stay with GHG emissions. The government sells the targets as ambitious, and compared to most North American jurisdictions, they are. But they're still not enough. We could wrangle about percentages and meanings all day long — what does "other offsetting measures" mean? — but the simple fact is that any measurable target is beyond the mandate of the current government. The year 2030 may as well be 2200 as far as the McNeil government goes.
Long-term targets are necessary, but what we need are more immediate targets — how much will GHG emissions be reduced this year, next year, the year after that, and the following year?
Once power starts flowing from Muskrat Falls (next year, likely), the province will see a big drop in GHG emissions from electrical generation. That's good, but it's also a plan long in the making. I don't see that there's any further concrete action on the climate change front.
We're just kicking the can down the road.
4. Nova Scotia's most successful tourist attraction removed
The last piece of the fallen crane is removed from the construction site on South Park Street. Photo: NSTIR
The Department of Transportation and Infrastructure Renewal issued a press release Saturday:
Workers successfully removed the final pieces of the crane that were lying on the top storey of the Olympus building today, Oct. 26.
The tower transition piece of the counterjib and turntable are now safely on the ground. The cabin was removed yesterday, Oct. 25.
With the removal of the sections today, workers will now take steps to evaluate the damage to the building and ensure debris and other materials on top of the apartment building are removed or secured. That work is estimated to take a few days.
Once an assessment of the Olympus building is complete, more information on next steps will be shared. The building assessment will also help inform any decisions by Halifax Regional Fire and Emergency regarding status of the evacuation order still in place for a number of addresses in the vicinity of the Olympus building.
Work crews now move into the final phase of the operation, which involves dismantling and removing the heavy-lift cranes and proceeding with site clean-up.
5. Sickly cruise ship passengers
The AIDAdiva is one of four cruise ships in port today (see "On the Harbour" below), collectively carrying more than 10,000 passengers. Many of those passengers will be sick.
Last week, I noted that the AIDAdiva had pulled into Halifax three times since September 1, and each time had a bevy of norovirus cases on board. Gus Reed helpfully noted that:
The AIDAviva has a dozen or so wheelchair accessible cabins — behindertenfreundlich — as they say. So that German tourist sitting next to you in his wheelchair on the deck at some tony restaurant on Argyle Street probably just came off the boat. Willkommen!
If you're worried about Norovirus, you know Germans have a thing for hygiene — putzfimmel.
So even though the guy with the wheelchair has rolled over all the vomit in the ship's elevator, you can rest assured that it's safe to get behind him in the salad bar lineup because he certainly washed his hands before digging into the radishes.
Oops! I forgot, even though there's a Human Rights order 13 months old and you just spent a bundle in taxes upgrading Argyle Street, the washroom is down a set of stairs and unavailable to that nice man who suddenly looks a little pale.
Your chief public health official doesn't care, the food inspectors don't care, the Human Rights Commission doesn't care, the Justice department doesn't care and the restaurants certainly don't care.
You remember those special throwup sinks they had at the Hofbrauhaus when you went to Munich two years ago. You hope they have them downstairs. You're feeling a little queasy……
To that lovely observation, we should additionally consider that cruise ship passengers impact our health care system in other ways. Reports John McPhee for the Chronicle Herald:
[M]any sick passengers are ending up at the Halifax Infirmary emergency department, exacerbating an already acute overcrowding problem and leaving Nova Scotia taxpayers to foot unpaid medical bills.
"The minute I'm driving to work and I hear there's a big cruise ship in, I think, oh great," said an emergency department staff member in a recent interview.
The staffer, who asked to remain anonymous, said it's not unusual for the ER to handle at least five cruise ship passengers a day during high tourist season.
"The big problem is the majority of them are travelling without insurance. So we have set up in our emergency department, we actually have a debit machine because our physicians weren't being paid at all."
It's gotten to the point where the department has nicknamed a regular cruise ship visitor "Dialysis of the Seas" because so many passengers who aren't getting the routine kidney disease treatment are treated there.
Other common conditions among cruise ship passengers are pneumonia and sepsis.
"It amazes me how acutely ill these people are. Sometimes they come off the ship intubated and go off to the ICU (intensive care unit)," the staffer said.
"Two weeks ago, the cruise ship health director shows up with five patients, all in ambulances. Sometimes they're lined up down at the dock. It's insane. It's driving us all crazy."
6. The Hotel Barmecide
The Grafton Street Glory Hole, with the convention centre and empty hotel above it. Photo: Halifax Examiner
The "world-class" Sutton Place Hotel is to open in the Nova Centre "in the first quarter of 2020," we're told, so by April 1. Yet here we are five months out, and beyond a director of sales and marketing, no positions in the hotel have been advertised.
The Barmecidal Feast
My reference to Barmecide comes from The Story of the Barber's Sixth Brother in Arabian Nights. It tells the tale of Schacabac, a beggar, who discovers the mansion of the Barmecides, "famed for their liberality and generosity." The porters tell Schacabac to go in and beg of Barmecide himself, and Schacabac enters the stately building, finds an old man sitting on a sofa, and asks for food. The story continues:
"What, you are dying of hunger?" exclaimed the Barmecide. "Here, slave; bring water, that we may wash our hands before meat!" No slave appeared, but my brother remarked that the Barmecide did not fail to rub his hands as if the water had been poured over them.
Then he said to my brother, "Why don't you wash your hands too?" and Schacabac, supposing that it was a joke on the part of the Barmecide (though he could see none himself), drew near, and imitated his motion.
When the Barmecide had done rubbing his hands, he raised his voice, and cried, "Set food before us at once, we are very hungry." No food was brought, but the Barmecide pretended to help himself from a dish, and carry a morsel to his mouth, saying as he did so, "Eat, my friend, eat, I entreat. Help yourself as freely as if you were at home! For a starving man, you seem to have a very small appetite."
"Excuse me, my lord," replied Schacabac, imitating his gestures as before, "I really am not losing time, and I do full justice to the repast."
"How do you like this bread?" asked the Barmecide. "I find it particularly good myself."
"Oh, my lord," answered my brother, who beheld neither meat nor bread, "never have I tasted anything so delicious."
"Eat as much as you want," said the Barmecide. "I bought the woman who makes it for five hundred pieces of gold, so that I might never be without it."
After ordering a variety of dishes (which never came) to be placed on the table, and discussing the merits of each one, the Barmecide declared that having dined so well, they would now proceed to take their wine.
Having enjoyed the illusionary feast, Schacabac feigns to get drunk on the illusionary wine. And then:
At this the Barmecide, instead of being angry, began to laugh, and embraced him heartily. "I have long been seeking," he exclaimed, "a man of your description, and henceforth my house shall be yours. You have had the good grace to fall in with my humour, and to pretend to eat and to drink when nothing was there. Now you shall be rewarded by a really good supper."
Then he clapped his hands, and all the dishes were brought that they had tasted in imagination before and during the repast, slaves sang and played on various instruments. All the while Schacabac was treated by the Barmecide as a familiar friend, and dressed in a garment out of his own wardrobe.
The lesson here is that if we just play along with the illusion, we will eventually be rewarded with the reality.
Except, the tale then takes a dark turn:
Twenty years passed by, and my brother was still living with the Barmecide, looking after his house, and managing his affairs. At the end of that time his generous benefactor died without heirs, so all his possessions went to the prince. They even despoiled my brother of those that rightly belonged to him, and he, now as poor as he had ever been in his life, decided to cast in his lot with a caravan of pilgrims who were on their way to Mecca. Unluckily, the caravan was attacked and pillaged by the Bedouins, and the pilgrims were taken prisoners. My brother became the slave of a man who beat him daily, hoping to drive him to offer a ransom, although, as Schacabac pointed out, it was quite useless trouble, as his relations were as poor as himself. At length the Bedouin grew tired of tormenting, and sent him on a camel to the top of a high barren mountain, where he left him to take his chance. A passing caravan, on its way to Bagdad, told me where he was to be found, and I hurried to his rescue, and brought him in a deplorable condition back to the town.
It's the history of Halifax in two acts.
1. An American flies Air Canada
My friend Ruth has the most delightful East Texan accent. I could sit around all day and listen to her speak, nodding and asking questions in hopes of hearing more of that melodious twang. Seriously, I've never met anyone with a more beautiful voice. Some years ago, however, Ruth ran off with a fellow and they now live happily in, er, Ohio, I think; I fear (but don't know) that her distinctive song has been muddied with that Midwestern blandness. But I will forever read her emails in that voice I so cherish. She writes:
Due to their sale on air fare to Europe, and my appreciation for a free drink with dinner not offered on U.S. airlines, this infrequent flier may have discovered at least part of why Air Canada is not doing so well.
When I made reservations for a flight to Venice and back in October, I paid up in advance of course. Evidently that does not a binding contract make in your Canadian Air.
When I flew out of Pittsburgh, the short flight to Toronto was delayed. These things happen, so I didn't think hard things about the airline, more about the error-filled life sometimes brought on by our capricious world. It was a hard ask of 75-year-old legs, but I duly sprinted the length of the airport in Toronto when — after a deplaning delay piled on top of the flight delay — I arrived at the very time my next flight was scheduled to board for the flight to Zurich.
A little the worse for wear, I made that flight, which departed late as well.
Zurich airport is a nightmare. To get to my next flight, to Venice, I got to bore underground and under the seeming ten miles that take passengers from intercontinental flight gates to the merely international ones, for nearby European destinations. Helpful signs along the way advised me that my gates were a mere ten minutes' walk.
Some people pay to get exercised. I guess I had paid for this opportunity, as well.
I did not take the opportunity of further workout offered, when it occasioned that to use the internet in Zurich, I would have to make a return walk of five minutes to authenticate my valid boarding pass for the Free WiFi.
The trip was good, my plans in Venice included the low-priced water bus into the city, and really things went well.
Then I came home via Air Canada.
The day preceding my return flight, I received their email informing me my flight from Frankfurt to Toronto would proceed as scheduled. Problem: I had booked a flight from Vienna.
After getting through to the Air Canada call center for problems of American/Canadian passengers, I discovered that the flights I'd contracted and paid for last July had been replaced without input from me. Instead of leaving early from Venice, which would have landed me in Pittsburgh on a fall afternoon for a pleasant drive in fall foliage, I now left mid-afternoon from Venice and arrived late in the night. The love of my life would also not get to enjoy the foliage drive we had planned. I got to tell him that the next evening, change of plan — he would have to drive to Pittsburgh to pick me up in the middle of the night. He did not drop me. He'd manage.
When my mid-afternoon flight arrived in Frankfurt, a bus ride to the endless airport there took about thirty minutes and we arrived at that airport at the very time that the flight to Toronto was announced to board. The gate was of course another long sprint away, and my aging legs made it without totally giving out. They had become disenamoured of this old body, though. I use the Canadian spelling with reason. Disenamored would be too simple for the way my legs ached.
On board, we got a promising menu, and I could look forward to dinner. We were being offered Kai yang chicken with coriander and sweet chilli sauce, bok choy and rice. Look forward to until the air hostess informed me that she was arriving at the seat I'd been assigned without my consent too late. There was only pasta left.
I told her I did not want pasta. (I also do not like dessert, so the chocolate cake made up for nada, not one whit.)
For my prepaid dinner, I got a small salad. The two beers were gratis, which kept me from beginning to be truly unpleasant.
Once again, Toronto's extensive airport was a challenge. My legs truly were starting to buckle and I had difficulty getting onto the final escalator to my final flight back to Pittsburgh, in the long night I was just beginning.
Thankfully, I got some sleep. When the love of my life picked up these remnants of his adventuress, we were too glad to see each other to be the mess Air Canada had visited on its customer.
I am home, I am recovering, I will always love Canada and someday will get to Halifax where I have a valued friend in Tim Bousquet.
We will drive there.
Ruth has a place to stay, so long as she agrees to talk a lot.
2. YMCA pays shit wages
The YMCA is hiring a Recreation Program Leader. Job qualifications include having a degree or diploma relating to Recreation/Education/Child Studies or related experience, and having first aid accreditation. Skills required are listed as follows:
Minimum 1 year working with large groups of children in a recreational program
Organizational and time management skills
Works to create a child-centered program
Working knowledge of conflict resolution skills
Knowledge of age and stage of development of school-age children
Behavioural Management
Flexible working style
Self-directed and self-motivated
Ability to work independently and as a team member
Must have strong interpersonal, administrative and communication skills
But don't expect full-time work. This job is for 27.5 hours/week.
Pay? $12/hour, 45 cents above minimum wage.
Young people: Nova Scotia hates you. Move to Ontario already.
Executive Standing Committee (Monday, 10am, City Hall) — here's the agenda.
Accessibility Advisory Committee (Monday, 4pm, Boardroom 1, 3rd Floor, Duke Tower) — nothing too exciting on the agenda.
Task Force on the Commemoration of Edward Cornwallis and the Recognition and Commemoration of Indigenous History (Monday, 6pm, Zatzman Sportsplex, Dartmouth) — a facilitated conversation circle to discuss how the Halifax Regional Municipality should recognize and commemorate Indigenous history. RSVP here. More info here.
Public Workshop – Peninsula South Complete Streets (Monday, 6:30pm, Halifax Central Library) — info here.
Public Information Meeting – Case 22462 (Monday, 7pm, Maritime Hall, Halifax Forum) — Shawn and Michelle Cleary want to expand the Maple Tree Montessori from 14 to 20 children under care. The Clearys are the directors of the day care centre. More info here.
Budget Committee and Halifax Regional Council (Tuesday, 9:30am, City Hall) — Regional Council agenda here. Budget Committee agenda here.
Task Force on the Commemoration of Edward Cornwallis and the Recognition and Commemoration of Indigenous History (Tuesday, 6pm, Mi'kmaq Native Friendship Centre, Halifax) — a facilitated conversation circle to discuss how the Halifax Regional Municipality should recognize and commemorate Indigenous history. RSVP here. More info here.
Law Amendments (Monday, 11am, Province House) — see #3 above.
Legislature sits (Monday, 6pm, Province House)
Human Resources (Tuesday, 10am, One Government Place) — a per diem meeting.
Legislature sits (Tuesday, 1pm, Province House)
Noon Hour Strings Recital (Monday, 11:45am, Room 406, Dal Arts Centre) — with students of Leonardo Perez and Shimon Walt.
Lifting automorphisms of power series from characteristic p to characteristic 0 (Monday, 2:30pm, Chase Room 319) — Daniele Turchetti will talk. The abstract:
Let $k$ be an algebraically closed field of positive characteristic $p > 0$. Lifting problems ask when objects defined over $k$ come from objects defined over a local ring $R$ with unique maximal ideal $\mathfrak{m}$ such that $R/\mathfrak{m}=k$. In this talk, I introduce lifting problems for finite order automorphisms of the ring of formal power series $k[[t]]$. We will see that this problem becomes very difficult when $R$ is of characteristic $0$ and the order of the automorphism is divisible by $p$. In fact, it is very much related to the theory of wild ramification of ring extensions, a branch of algebraic number theory that has lately undergone very fast developments. I will not go into the technicalities of this theory, but rather show with examples some of its features, by positively solving lifting problems for automorphisms of order $p$ and showing negative results about liftings of elementary abelian subgroups of Aut$(k[[t]])$.
When the Dust Settles: How Do We Hold People to Account After Disasters? (Tuesday, 12pm, Room 1020, Rowe Management Building) — panelists are Lori Turnbull and Kevin Quigley from Dalhousie University; Bruce Campbell from York University; Paul Kovacs from the Institute for Catastrophic Loss Reduction; and Jennifer Quaid from the University of Ottawa.
Natural disasters, industrial failures, cyber and terrorist attacks generate intense popular interest and scrutiny. These events raise difficult questions about whether or not enough precautions were put in place to guard against these events. Yet given the complexity of modern systems, it is becoming increasingly difficult to hold people to account for failures: there are just too many people and organizations involved. In a highly interdependent setting, what should accountability look like and how do we achieve it?
Free, no reserved seating, live streamed here.
Forward, Upward, Onward, Together: A Cultural Evening in Support of The Bahamian People (Tuesday, 6pm, Dalhousie University Club) — from the listing:
Hurricane Dorian struck the Bahamas in September 2019 instantly devastating lives and communities across the islands of Abaco and Grand Bahama.
The Dorian Relief HFX group is a Bahamian student and alumni-led initiative, which has partnered with the SMU Humanitarian Relief Fund to present Forward, Upward, Onward, Together: A Cultural Evening in Support of the Bahamian People. All proceeds will support the Ranfurly Homes for Children in Nassau, Bahamas.
The Bahamian spirit is one of unity, vibrance and resilience. With the support of Bahamian artisans, Dalhousie and Saint Mary's Universities, Dorian Relief HFX is excited to share a unique part of Bahamian culture. You can learn more about the event on our EventBrite Page.
From the Caribbean to Canada, we have all been touched by Hurricane Dorian. Please join us as we remember those that have been affected, whilst celebrating the strength and beauty of the Bahamian people.
Guided by the Bahamian Motto, we must move "Forward, Upward, Onward, Together".
Tickets ($10 minimum) and donations here or at the door.
In the harbour
05:00: YM Evolution, container ship, arrives at Fairview Cove from New York
05:30: Atlantic Star, container ship, arrives at Fairview Cove from Liverpool, England
05:45: Regal Princess, cruise ship with up to 4,271 passengers, arrives at Pier 22 from New York, on a five-day roundtrip cruise out of New York
06:00: AIDAdiva, cruise ship with up to 2,050 passengers, arrives at Pier 20 from Quebec City, on a 10-day cruise from Montreal to New York
06:15: Hoegh Bangkok, car carrier, arrives at Autoport from Emden, Germany
07:00: Silver Whisper, cruise ship with up to 466 passengers, arrives at Pier 23 from Sydney, on a 20-day cruise from Montreal to San Juan, Puerto Rico, thus ending the ship's seasonal stay in Canadian waters
07:15: Caribbean Princess, cruise ship with up to 3,756 passengers, arrives at Pier 31 from Sydney, on a 27-day cruise from Quebec City to Fort Lauderdale, Florida, also ending its Canadian season
09:00: Pengalia, container ship, arrives at Pier 42 from Portland
09:00: Puze, oil tanker, sails from Imperial Oil for sea
11:30: Hoegh Bangkok sails for sea
14:30: AIDAdiva sails with all its sickly passengers for Bar Harbor
14:30: Pengalia sails for Argentia, Newfoundland
15:00: BW Raven, oil tanker, moves from anchorage to Imperial Oil
15:30: Atlantic Star sails for New York
15:30: YM Evolution sails for Rotterdam
16:30: Caribbean Princess sails for Saint John
17:45: Regal Princess sails for Saint John
17:45: Silver Whisper sails for Bar Harbor
22:00: Augusta Sun, cargo ship, sails from Pier 28 for sea
Where are the Canadian military ships?
Long week ahead.
The Halifax Examiner is an advertising-free, subscriber-supported news site. Your subscription makes this work possible; please subscribe.
Filed Under: Featured. Tagged With: AIDAdiva, Air Canada, Bill 213, crane incident, crown attorneys, Environmental Goals and Sustainable Prosperity Act (EGSPA), GHG emissions, Gus Reed, illness on cruise ships, John McPhee, Mark Parent, norovirus, Nova Centre hotel, shit wages, Sutton Place Hotel, YMCA
DamnYankee says
American Ruth's experience with Air Canada is much like mine. Flying Air Canada out of Halifax to visit relatives in the States is always an adventure…Prior to departure, there are regular changes in itinerary as to whether one is flying through Ottawa, Montreal or Toronto. Then, there are frequent adjustments on the day of travel, necessitating rapid sprints through the airport and occasionally pulling one's hair out in Customs trying to make one's flight. One wonders who in Air Canada is figuring the time necessary for connecting flights. Sometimes I am lucky enough to arrive in the US on the same day as I departed, if I leave in the wee hours of the morning and sometimes I arrive the next day…I keep hoping for a fast ferry from Halifax to Boston so I could avoid flying and transfer to Amtrak to visit family.
Tim Jaques says
As you pointed out on Twitter, Monday to Friday 7:30am to 9:00am and 12:00pm to 5:30pm is actually 7 hours a day, or 35 hours a week. Yet it says it pays for 27.5 hours. Also it screws up the whole day because you have from 9 a.m. to noon to fill until you go back to work, rather than just getting you to go in at 7:30 a.m. or whatever for an eight hour shift. Is there some unstated explanation for this? Also agreed, the qualifications and experience required should mean more than a measly $12 per hour. You'd be better off financially working in a bar or restaurant and getting tips.
Also contract position, so not even the minimum benefits package…probably not even a turkey at Christmas.
Reminds me of many years ago in Ontario I saw an ad in a newspaper for a position that required not only a university degree and experience, but fluency in Japanese. It paid only a little more than minimum wage. So it isn't always better up there.
Jeff Warnica says
Is "American Ruth" our Joe the Plumber?
The question, always, is "compared to what"? Sure, Pearson Terminal 1 is huge, but it's spacious and well signed. As nice as any major US airport, certainly nicer than Newark, LaGuardia or O'Hare.
Did American Ruth ask for a golf cart? Any airline (or airport, via the airline) would be happy to provide "mobility assistance". Is she expecting that somehow, since the last time she flew, airports have gotten smaller? Or that she has gotten younger? Did flying PanAm also entitle you to a quick trip to the ortho surgeon for knee touchups?
Air Canada can't be held responsible for the eTA weirdness transiting Canada, or the Schengen area weirdness getting into Europe, or eTA and US Preclearance getting back home. Does American Ruth think that other airlines have some magic to avoid security, customs, and immigration?
Air Canada under-catering flights is reprehensible; I grant their meals are not up to historical or international first class. But, this is not 1967, and a transatlantic economy seat on Air Canada is not a pod – or bungalow – flying to your sugar daddy's compound in Singapore.
Changing routing (at not the last minute; when did American Ruth check their reservation?) isn't fun, but it's a lot less not fun than being a WOW or Thomas Cook customer.
Flying across the ocean and across multiple international borders is not a trivial endeavor. One needs some reasonable expectations, and to be prepared to take care of #1.
Colin May says
Very well said. A plane is just a very fast bus and my days of flying around the world without a passport and not paying the fare are long gone.
Just A Guy says
The Halifax YMCA should be ashamed to offer pay as low as that for those qualifications as high as that. I think Tim is right, the Halifax YMCA pays shit wages.
Mario R Fernandez says
Hotel Barmecide reminds me of the Halifax Convention Centre and of the stadium "without a local team" illusion: white elephants serving adventurers so they make money with our money and indebtedness. Anyway, entertaining/enlightening article!
Determination of N* amplitudes from associated strangeness production in p+p collisions (1703.01978)
R. Münzer, L. Fabbietti, E. Epple, P. Klose, F. Hauenstein, N. Herrmann, D. Grzonka, Y. Leifels, M. Maggiora, D. Pleiner, B. Ramstein, J. Ritman, E. Roderburg, P. Salabura, A.Sarantsev, Z. Basrak, P. Buehler, M. Cargnelli, R. Caplar, H.Clement, O. Czerwiakowa, I. Deppner, M. Dzelalija W. Eyrich, Z. Fodor, P. Gasik, I.Gasparic, A. Gillitzer, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kis, P. Koczon, R. Kotte, A. Lebedev, A. Le Fevre, J.L. Liu, V. Manko, J. Marton, T. Matulewicz, K. Piasecki, F. Rami, A. Reischl, M.S. Ryu, P. Schmidt, Z. Seres, B. Sikora, K.S. Sim, K. Siwek-Wilczynska, V. Smolyankin, K. Suzuki, Z. Tyminski, P. Wagner, I. Weber, E. Widmann, K. Wisniewski, Z.G. Xiao, T. Yamasaki, I. Yushmanov, P. Wintz, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
Sept. 6, 2018 nucl-ex
We present the first determination of the energy-dependent production amplitudes of N$^{*}$ resonances with masses between 1650 MeV/c$^{2}$ and 1900 MeV/c$^{2}$ for an excess energy between $0$ and $600$ MeV. A combined Partial Wave Analysis of seven exclusively reconstructed data samples for the reaction p+p $\rightarrow pK\Lambda$ measured by the COSY-TOF, DISTO, FOPI and HADES collaborations in fixed target experiments at kinetic energies between 2.14 and 3.5 GeV is used to determine the amplitude of the resonant and non-resonant contributions.
Experimental search for the violation of Pauli Exclusion Principle (1804.04446)
H. Shi, E. Milotti, S. Bartalucci, M. Bazzi, S. Bertolucci, A.M. Bragadireanu, M. Cargnelli, A. Clozza, L. De Paolis, S. Di Matteo, J.-P. Egger, H. Elnaggar, C. Guaraldo, M. Iliescu, M. Laubenstein, J. Marton, M. Miliucci, A. Pichler, D. Pietreanu, K. Piscicchia, A. Scordo, D.L. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, E. Widmann, J. Zmeskal, C. Curceanu
April 23, 2018 quant-ph, physics.atom-ph, physics.ins-det
The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for x-rays from copper atomic transitions that are prohibited by the Pauli Exclusion Principle. Candidate direct violation events come from the transition of a $2p$ electron to the ground state that is already occupied by two electrons. From the first data-taking campaign of the VIP-2 experiment in 2016, we determined a best upper limit of 3.4 $\times$ 10$^{-29}$ for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.
The kaonic atoms research program at DA{\Phi}NE: from SIDDHARTA to SIDDHARTA-2 (1803.02587)
A. Scordo, A. Amirkhani, M. Bazzi, G. Bellotti, C. Berucci, D. Bosnar, A.M. Bragadireanu, M. Cargnelli, C. Curceanu, A. Dawood Butt, R. Del Grande, L. Fabbietti, C. Fiorini, F. Ghio, C. Guaraldo, R.S. Hayano, M. Iliescu, M. Iwasaki, P. Levi Sandri, J. Marton, M. Miliucci, P. Moskal, D. Pietreanu, K. Piscicchia, H. Shi, M. Silarski, D. Sirghi, F. Sirghi, M. Skurzok, A. Spallone, H. Tatsuno, O. Vazquez Doce, E. Widmann, J. Zmeskal
March 21, 2018 nucl-ex
The interaction of antikaons with nucleons and nuclei in the low-energy regime represents an active research field in hadron physics with still many important open questions. The investigation of light kaonic atoms, in which one electron is replaced by a negatively charged kaon, is a unique tool to provide precise information on this interaction; the energy shift and the broadening of the low-lying states of such atoms, induced by the kaon-nucleus hadronic interaction, can be determined with high precision from the atomic X-ray spectroscopy, and this experimental method provides unique information to understand the low energy kaon-nucleus interaction at the production threshold. The lightest atomic systems, like the kaonic hydrogen and the kaonic deuterium deliver, in a model-independent way, the isospin-dependent kaon-nucleon scattering lengths. The most precise kaonic hydrogen measurement to-date, together with an exploratory measurement of kaonic deuterium, were carried out in 2009 by the SIDDHARTA collaboration at the DA{\Phi}NE electron-positron collider of LNF-INFN, combining the excellent quality kaon beam delivered by the collider with new experimental techniques, as fast and very precise X-ray detectors, like the Silicon Drift Detectors. The SIDDHARTA results triggered new theoretical work, which achieved major progress in the understanding of the low-energy strong interaction with strangeness reflected by the antikaon-nucleon scattering lengths calculated with the antikaon-proton amplitudes constrained by the SIDDHARTA data. The most important open question is the experimental determination of the hadronic energy shift and width of kaonic deuterium; presently, a major upgrade of the setup, SIDDHARTA-2, is being realized to reach this goal. In this paper, the results obtained in 2009 and the proposed SIDDHARTA-2 upgrades are presented.
Producing long-lived $2^3\text{S}$ Ps via $3^3\text{P}$ laser excitation in magnetic and electric fields (1802.07012)
S. Aghion, C. Amsler, M. Antonello, A. Belov, G. Bonomi, R. S. Brusa, M. Caccia, A. Camper, R. Caravita, F. Castelli, G. Cerchiari, D. Comparat, G. Consolati, A. Demetrio, L. Di Noto, M. Doser, C. Evans, M. Fani, R. Ferragut, J. Fesel, A. Fontana, S. Gerber, M. Giammarchi, A. Gligorova, F. Guatieri, P. Hackstock, S. Haider, A. Hinterberger, H. Holmestad, A. Kellerbauer, O. Khalidova, D. Krasnicky, V. Lagormarsino, P. Lansonneur, P. Lebrun, C. Malbrunot, S. Mariazzi, J. Marton, V. Matveev, Z. Mazzotta, S. R. Muller, G. Nebbia, P. Nedelec, M. Oberthaler, D. Pagano, L. Penasa, V. Petracek, F. Prelz, M. Prevedelli, B. Rienaecker, J. Robert, O. M. Rohne, A. Rotondi, H. Sandacker, R. Santoro, L. Smestad, F. Sorrentino, G. Testera, I. C. Tietje, M. Vujanovic, E. Widmann, P. Yzombard, C. Zimmer, J. Zmeskal, N. Zurlo (the AEgIS collaboration)
Feb. 20, 2018 physics.atom-ph
Producing positronium (Ps) in the metastable $2^3\text{S}$ state is of interest for various applications in fundamental physics. We report here on an experiment in which Ps atoms are produced in this long-lived state by spontaneous radiative decay of Ps excited to the $3^3\text{P}$ level manifold. The Ps cloud excitation is obtained with a UV laser pulse in an experimental vacuum chamber in the presence of a guiding magnetic field of 25 mT and an average electric field of 300 V/cm. The indication of the $2^3\text{S}$ state production is obtained from a novel analysis technique of single-shot positronium annihilation lifetime spectra. Its production efficiency relative to the total amount of formed Ps is evaluated by fitting a simple rate equations model to the experimental data and found to be $ (2.1 \pm 1.3) \, \% $.
Search for Deeply Bound Kaonic Nuclear States with AMADEUS (1712.09908)
M. Skurzok, M. Cargnelli, C. Curceanu, R. Del Grande, L. Fabbietti, C. Guaraldo, J. Marton, P. Moskal, K. Piscicchia, A. Scordo, M. Silarski, D. L. Sirghi, I. Tucakovic, O. Vazquez Doce, S. Wycech, E. Widmann, J. Zmeskal
Dec. 28, 2017 nucl-ex
We briefly report on the search for Deeply Bound Kaonic Nuclear States with AMADEUS in the Sigma0 p channel following K- absorption on 12C and outline future perspectives for this work.
Test of the Pauli Exclusion Principle in the VIP-2 underground experiment (1705.02165)
C. Curceanu, H. Shi, S. Bartalucci, S. Bertolucci, C. Berucci, A.M. Bragadireanu, M. Cargnelli, A. Clozza, L. De Paolis, S. Di Matteo, J.-P. Egger, C. Guaraldo, M. Iliescu, J. Marton, M. Laubenstein, E. Milotti, D. Pietreanu, K. Piscicchia, A. Scordo, D.L. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, E. Widmann, J. Zmeskal
May 5, 2017 quant-ph, physics.ins-det
The validity of the Pauli Exclusion Principle, a building block of Quantum Mechanics, is tested for electrons. The VIP (VIolation of Pauli exclusion principle) and its follow-up VIP-2 experiments at the Laboratori Nazionali del Gran Sasso search for x-rays from copper atomic transition that are prohibited by the Pauli Exclusion Principle. The candidate events, if they exist, originate from the transition of a $2p$ orbit electron to the ground state which is already occupied by two electrons. The present limit on the probability for Pauli Exclusion Principle violation for electrons set by the VIP experiment is 4.7 $\times$ 10 $^{-29}$. We report a first result from the VIP-2 experiment improving on the VIP limit, that solidifies the final goal to achieve a two order of magnitude gain in the long run.
Antikaon Interactions with Nucleons and Nuclei - AMADEUS At Da$\Phi$ne (1704.06562)
J. Marton, K. Piscicchia, C. Curceanu, M. Cargnelli, R. Del Grande, L. Fabbietti, G. Mandaglio, M. Martini, P. Moskal, A. Scordo, D. Sirghi, M. Skurzok, I. Tucakovic, O. Vazquez Doce, S. Wycech, J. Zmeskal
April 21, 2017 nucl-ex
The aim of AMADEUS is to provide unprecedented experimental information on K$^-$ absorption in light nuclear targets, to face major open problems in hadron nuclear physics in the strangeness sector, namely the nature of the $\Lambda$(1405), strongly related to the possible existence of kaonic nuclear clusters, kaons and hyperon scattering cross sections on nucleons and nuclei. These issues are fundamental for a better understanding of the non-perturbative QCD in the strangeness sector. AMADEUS step 0 deals with the analysis of the 2004-2005 KLOE collected data. The interactions of the negative kaons produced by the DA$\Phi$NE collider (a unique source of monochromatic low-momentum kaons) with the materials of the KLOE detector, used as active targets, provide samples of K$^-$ absorptions on H, ${}^4$He, ${}^{9}$Be and ${}^{12}$C, both at-rest and in-flight. A second step deals with the data from the implementation in the central region of the KLOE detector of a pure graphite target, providing a high statistic sample of K$^- \, {}^{12}$C nuclear captures at rest. For the future a new setup, with various dedicated gaseous and solid targets, is under preparation.
Underground tests of quantum mechanics. Whispers in the cosmic silence? (1703.06796)
C. Curceanu, S. Bartalucci, A. Bassi, M. Bazzi, S. Bertolucci, C. Berucci, A.M. Bragadireanu, M. Cargnelli, A. Clozza, L. De Paolis, S. Di Matteo, S. Donadi, J-P. Egger, C. Guaraldo, M. Iliescu, M. Laubenstein, J. Marton, E. Milotti, A. Pichler, D. Pietreanu, K. Piscicchia, A. Scordo, H. Shi, D. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, J. Zmeskal
March 20, 2017 quant-ph, physics.ins-det
By performing X-rays measurements in the "cosmic silence" of the underground laboratory of Gran Sasso, LNGS-INFN, we test a basic principle of quantum mechanics: the Pauli Exclusion Principle (PEP), for electrons. We present the achieved results of the VIP experiment and the ongoing VIP2 measurement aiming to gain two orders of magnitude improvement in testing PEP. We also use a similar experimental technique to search for radiation (X and gamma) predicted by continuous spontaneous localization models, which aim to solve the "measurement problem".
VIP-2 at LNGS: An experiment on the validity of the Pauli Exclusion Principle for electrons (1703.01615)
J. Marton, S. Bartalucci, A. Bassi, M. Bazzi, S. Bertolucci, C. Berucci, M. Bragadireanu, M. Cargnelli, A. Clozza, C. Curceanu, L. De Paolis, S. Di Matteo, S. Donadi, J.-P. Egger, C. Guaraldo, M. Iliescu, M. Laubenstein, E. Milotti, A. Pichler, D. Pietreanu, K. Piscicchia, A. Scordo, H. Shi, D. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez-Doce, E. Widmann, J. Zmeskal
March 5, 2017 quant-ph, physics.ins-det
We are experimentally investigating possible violations of standard quantum mechanics predictions in the Gran Sasso underground laboratory in Italy. We test with high precision the Pauli Exclusion Principle and the collapse of the wave function (collapse models). We present our method of searching for possible small violations of the Pauli Exclusion Principle (PEP) for electrons, through the search for anomalous X-ray transitions in copper atoms. These transitions are produced by new electrons (brought inside the copper bar by circulating current) which can have the possibility to undergo Pauli-forbidden transition to the 1s level already occupied by two electrons. We describe the VIP2 (VIolation of the Pauli Exclusion Principle) experimental data taking at the Gran Sasso underground laboratories. The goal of VIP2 is to test the PEP for electrons in agreement with the Messiah-Greenberg superselection rule with unprecedented accuracy, down to a limit in the probability that PEP is violated at the level of 10E-31. We show preliminary experimental results and discuss implications of a possible violation.
Feasibility study for the measurement of $\pi N$ TDAs at PANDA in $\bar{p}p\to J/\psi\pi^0$ (1610.02149)
PANDA Collaboration: B. Singh, W. Erni, B. Krusche, M. Steinacher, N. Walford, H. Liu, Z. Liu, B. Liu, X. Shen, C. Wang, J. Zhao, M. Albrecht, T. Erlen, M. Fink, F.H. Heinsius, T. Held, T. Holtmann, S. Jasper, I. Keshk, H. Koch, B. Kopf, M. Kuhlmann, M. Kümmel, S. Leiber, M. Mikirtychyants, P. Musiol, A. Mustafa, M. Pelizäus, J. Pychy, M. Richter, C. Schnier, T. Schröder, C. Sowa, M. Steinke, T. Triffterer, U. Wiedner, M. Ball, R. Beck, C. Hammann, B. Ketzer, M. Kube, P. Mahlberg, M. Rossbach, C. Schmidt, R. Schmitz, U. Thoma, M. Urban, D. Walther, C. Wendel, A. Wilson, A. Bianconi, M. Bragadireanu, M. Caprini, D. Pantea, B. Patel, W. Czyzycki, M. Domagala, G. Filo, J. Jaworowski, M. Krawczyk, E. Lisowski, F. Lisowski, M. Michałek, P. Poznański, J. Płażek, K. Korcyl, A. Kozela, P. Kulessa, P. Lebiedowicz, K. Pysz, W. Schäfer, A. Szczurek, T. Fiutowski, M. Idzik, B. Mindur, D. Przyborowski, K. Swientek, J. Biernat, B. Kamys, S. Kistryn, G. Korcyl, W. Krzemien, A. Magiera, P. Moskal, A. Pyszniak, Z. Rudy, P. Salabura, J. Smyrski, P. Strzempek, A. Wronska, I. Augustin, R. Böhm, I. Lehmann, D. Nicmorus Marinescu, L. Schmitt, V. Varentsov, M. Al-Turany, A. Belias, H. Deppe, N. Divani Veis, R. Dzhygadlo, A. Ehret, H. Flemming, A. Gerhardt, K. Götzen, A. Gromliuk, L. Gruber, R. Karabowicz, R. Kliemt, M. Krebs, U. Kurilla, D. Lehmann, S. Löchner, J. Lühning, U. Lynen, H. Orth, M. Patsyuk, K. Peters, T. Saito, G. Schepers, C. J. Schmidt, C. Schwarz, J. Schwiening, A. Täschner, M. Traxler, C. Ugur, B. Voss, P. Wieczorek, A. Wilms, M. Zühlsdorf, V. Abazov, G. Alexeev, V. A. Arefiev, V. Astakhov, M. Yu. Barabanov, B. V. Batyunya, Y. Davydov, V. Kh. Dodokhov, A. Efremov, A. Fechtchenko, A. G. Fedunov, A. Galoyan, S. Grigoryan, E. K. Koshurnikov, Y. Yu. Lobanov, V. I. Lobanov, A. F. Makarov, L. V. Malinina, V. Malyshev, A. G. Olshevskiy, E. Perevalova, A. A. Piskun, T. Pocheptsov, G. Pontecorvo, V. Rodionov, Y. Rogov, R. Salmin, A. Samartsev, M. G. Sapozhnikov, G. Shabratova, N. B. Skachkov, A. N. Skachkova, E. A. Strokovsky, M. Suleimanov, R. Teshev, V. Tokmenin, V. Uzhinsky, A. Vodopianov, S. A. Zaporozhets, N. I. Zhuravlev, A. Zinchenko, A. G. Zorin, D. Branford, D. Glazier, D. Watts, M. Böhm, A. Britting, W. Eyrich, A. Lehmann, M. Pfaffinger, F. Uhlig, S. Dobbs, K. Seth, A. Tomaradze, T. Xiao, D. Bettoni, V. Carassiti, A. Cotta Ramusino, P. Dalpiaz, A. Drago, E. Fioravanti, I. Garzia, M. Savrie, V. Akishina, I. Kisel, G. Kozlov, M. Pugach, M. Zyzak, P. Gianotti, C. Guaraldo, V. Lucherini, A. Bersani, G. Bracco, M. Macri, R. F. Parodi, K. Biguenko, K.T. Brinkmann, V. Di Pietro, S. Diehl, V. Dormenev, P. Drexler, M. Düren, E. Etzelmüller, M. Galuska, E. Gutz, C. Hahn, A. Hayrapetyan, M. Kesselkaul, W. Kühn, T. Kuske, J. S. Lange, Y. Liang, V. Metag, M. Moritz, M. Nanova, S. Nazarenko, R. Novotny, T. Quagli, S. Reiter, A. Riccardi, J. Rieke, C. Rosenbaum, M. Schmidt, R. Schnell, H. Stenzel, U. Thöring, T. Ullrich, M. N. Wagner, T. Wasem, B. Wohlfahrt, H.G. Zaunick, E. Tomasi-Gustafsson, D. Ireland, G. Rosner, B. Seitz, P.N. Deepak, A. Kulkarni, A. Apostolou, M. Babai, M. Kavatsyuk, P. J. Lemmens, M. Lindemulder, H. Loehner, J. Messchendorp, P. Schakel, H. Smit, M. Tiemens, J. C. van der Weele, R. Veenstra, S. Vejdani, K. Dutta, K. Kalita, A. Kumar, A. Roy, H. Sohlbach, M. Bai, L. Bianchi, M. Büscher, L. Cao, A. Cebulla, R. Dosdall, A. Gillitzer, F. Goldenbaum, D. Grunwald, A. Herten, Q. Hu, G. Kemmerling, H. Kleines, A. Lai, A. Lehrach, R. Nellen, H. Ohm, S. Orfanitski, D. Prasuhn, E. Prencipe, J. 
Pütz, J. Ritman, S. Schadmand, T. Sefzick, V. Serdyuk, G. Sterzenbach, T. Stockmanns, P. Wintz, P. Wüstner, H. Xu, A. Zambanini, S. Li, Z. Li, Z. Sun, H. Xu, V. Rigato, L. Isaksson, P. Achenbach, O. Corell, A. Denig, M. Distler, M. Hoek, A. Karavdina, W. Lauth, Z. Liu, H. Merkel, U. Müller, J. Pochodzalla, S. Sanchez, S. Schlimme, C. Sfienti, M. Thiel, H. Ahmadi, S. Ahmed, S. Bleser, L. Capozza, M. Cardinali, A. Dbeyssi, M. Deiseroth, F. Feldbauer, M. Fritsch, B. Fröhlich, D. Kang, D. Khaneft, R. Klasen, H. H. Leithoff, D. Lin, F. Maas, S. Maldaner, M. Martínez, M. Michel, M. C. Mora Espí, C. Morales Morales, C. Motzko, F. Nerling, O. Noll, S. Pflüger, A. Pitka, D. Rodríguez Piñeiro, A. Sanchez-Lorente, M. Steinen, R. Valente, T. Weber, M. Zambrana, I. Zimmermann, A. Fedorov, M. Korjik, O. Missevitch, A. Boukharov, O. Malyshev, I. Marishev, V. Balanutsa, P. Balanutsa, V. Chernetsky, A. Demekhin, A. Dolgolenko, P. Fedorets, A. Gerasimov, V. Goryachev, V. Chandratre, V. Datar, D. Dutta, V. Jha, H. Kumawat, A.K. Mohanty, A. Parmar, B. Roy, G. Sonika, C. Fritzsch, S. Grieser, A.K. Hergemöller, B. Hetz, N. Hüsken, A. Khoukaz, J. P. Wessels, K. Khosonthongkee, C. Kobdaj, A. Limphirat, P. Srisawad, Y. Yan, A. Yu. Barnyakov, M. Barnyakov, K. Beloborodov, V. E. Blinov, V. S. Bobrovnikov, I. A. Kuyanov, K. Martin, A. P. Onuchin, S. Serednyakov, A. Sokolov, Y. Tikhonov, A. E. Blinov, S. Kononov, E. A. Kravchenko, E. Atomssa, R. Kunne, B. Ma, D. Marchand, B. Ramstein, J. van de Wiele, Y. Wang, G. Boca, S. Costanza, P. Genova, P. Montagna, A. Rotondi, V. Abramov, N. Belikov, S. Bukreeva, A. Davidenko, A. Derevschikov, Y. Goncharenko, V. Grishin, V. Kachanov, V. Kormilitsin, A. Levin, Y. Melnik, N. Minaev, V. Mochalov, D. Morozov, L. Nogach, S. Poslavskiy, A. Ryazantsev, S. Ryzhikov, P. Semenov, I. Shein, A. Uzunian, A. Vasiliev, A. Yakutin, U. Roy, B. Yabsley, S. Belostotski, G. Gavrilov, A. Izotov, S. Manaenkov, O. Miklukho, D. Veretennikov, A. Zhdanov, T. Bäck, B. Cederwall, K. Makonyi, M. Preston, P.E. Tegner, D. Wölbing, A. K. Rai, S. Godre, D. Calvo, S. Coli, P. De Remigis, A. Filippi, G. Giraudo, S. Lusso, G. Mazza, M. Mignone, A. Rivetti, R. Wheadon, A. Amoroso, M. P. Bussa, L. Busso, F. De Mori, M. Destefanis, L. Fava, L. Ferrero, M. Greco, J. Hu, L. Lavezzi, M. Maggiora, G. Maniscalco, S. Marcello, S. Sosio, S. Spataro, F. Balestra, F. Iazzi, R. Introzzi, A. Lavagno, J. Olave, R. Birsa, F. Bradamante, A. Bressan, A. Martin, H. Calen, W. Ikegami Andersson, T. Johansson, A. Kupsc, P. Marciniewski, M. Papenbrock, J. Pettersson, K. Schönning, M. Wolke, B. Galnander, J. Diaz, V. Pothodi Chackara, A. Chlopik, G. Kesik, D. Melnychuk, B. Slowinski, A. Trzcinski, M. Wojciechowski, S. Wronka, B. Zwieglinski, P. Bühler, J. Marton, D. Steinschaden, K. Suzuki, E. Widmann, J. Zmeskal, K. M. Semenov-Tian-Shansky
Oct. 7, 2016 hep-ex, nucl-ex, physics.ins-det
The exclusive charmonium production process in $\bar{p}p$ annihilation with an associated $\pi^0$ meson $\bar{p}p\to J/\psi\pi^0$ is studied in the framework of QCD collinear factorization. The feasibility of measuring this reaction through the $J/\psi\to e^+e^-$ decay channel with the PANDA (AntiProton ANnihilation at DArmstadt) experiment is investigated. Simulations on signal reconstruction efficiency as well as the background rejection from various sources including the $\bar{p}p\to\pi^+\pi^-\pi^0$ and $\bar{p}p\to J/\psi\pi^0\pi^0$ reactions are performed with PandaRoot, the simulation and analysis software framework of the PANDA experiment. It is shown that the measurement can be done at PANDA with significant constraining power under the assumption of an integrated luminosity attainable in four to five months of data taking at the maximum design luminosity.
Feasibility studies of time-like proton electromagnetic form factors at PANDA at FAIR (1606.01118)
PANDA Collaboration: B. Singh, W. Erni, B. Krusche, M. Steinacher, N. Walford, B. Liu, H. Liu, Z. Liu, X. Shen, C. Wang, J. Zhao, M. Albrecht, T. Erlen, M. Fink, F. Heinsius, T. Held, T. Holtmann, S. Jasper, I. Keshk, H. Koch, B. Kopf, M. Kuhlmann, M. Kümmel, S. Leiber, M. Mikirtychyants, P. Musiol, A. Mustafa, M. Pelizäus, J. Pychy, M. Richter, C. Schnier, T. Schröder, C. Sowa, M. Steinke, T. Triffterer, U. Wiedner, M. Ball, R. Beck, C. Hammann, B. Ketzer, M. Kube, P. Mahlberg, M. Rossbach, C. Schmidt, R. Schmitz, U. Thoma, M. Urban, D. Walther, C. Wendel, A. Wilson, A. Bianconi, M. Bragadireanu, M. Caprini, D. Pantea, B. Patel, W. Czyzycki, M. Domagala, G. Filo, J. Jaworowski, M. Krawczyk, F. Lisowski, E. Lisowski, M. Michałek, P. Poznański, J. Płażek, K. Korcyl, A. Kozela, P. Kulessa, P. Lebiedowicz, K. Pysz, W. Schäfer, A. Szczurek, T. Fiutowski, M. Idzik, B. Mindur, D. Przyborowski, K. Swientek, J. Biernat, B. Kamys, S. Kistryn, G. Korcyl, W. Krzemien, A. Magiera, P. Moskal, A. Pyszniak, Z. Rudy, P. Salabura, J. Smyrski, P. Strzempek, A. Wronska, I. Augustin, R. Böhm, I. Lehmann, D. Nicmorus Marinescu, L. Schmitt, V. Varentsov, M. Al-Turany, A. Belias, H. Deppe, R. Dzhygadlo, A. Ehret, H. Flemming, A. Gerhardt, K. Götzen, A. Gromliuk, L. Gruber, R. Karabowicz, R. Kliemt, M. Krebs, U. Kurilla, D. Lehmann, S. Löchner, J. Lühning, U. Lynen, H. Orth, M. Patsyuk, K. Peters, T. Saito, G. Schepers, C. J. Schmidt, C. Schwarz, J. Schwiening, A. Täschner, M. Traxler, C. Ugur, B. Voss, P. Wieczorek, A. Wilms, M. Zühlsdorf, V. Abazov, G. Alexeev, V. A. Arefiev, V. Astakhov, M. Yu. Barabanov, B. V. Batyunya, Y. Davydov, V. Kh. Dodokhov, A. Efremov, A. Fechtchenko, A. G. Fedunov, A. Galoyan, S. Grigoryan, E. K. Koshurnikov, Y. Yu. Lobanov, V. I. Lobanov, A. F. Makarov, L. V. Malinina, V. Malyshev, A. G. Olshevskiy, E. Perevalova, A. A. Piskun, T. Pocheptsov, G. Pontecorvo, V. Rodionov, Y. Rogov, R. Salmin, A. Samartsev, M. G. Sapozhnikov, G. Shabratova, N. B. Skachkov, A. N. Skachkova, E. A. Strokovsky, M. Suleimanov, R. Teshev, V. Tokmenin, V. Uzhinsky, A. Vodopianov, S. A. Zaporozhets, N. I. Zhuravlev, A. G. Zorin, D. Branford, D. Glazier, D. Watts, M. Böhm, A. Britting, W. Eyrich, A. Lehmann, M. Pfaffinger, F. Uhlig, S. Dobbs, K. Seth, A. Tomaradze, T. Xiao, D. Bettoni, V. Carassiti, A. Cotta Ramusino, P. Dalpiaz, A. Drago, E. Fioravanti, I. Garzia, M. Savrie, V. Akishina, I. Kisel, G. Kozlov, M. Pugach, M. Zyzak, P. Gianotti, C. Guaraldo, V. Lucherini, A. Bersani, G. Bracco, M. Macri, R. F. Parodi, K. Biguenko, K. Brinkmann, V. Di Pietro, S. Diehl, V. Dormenev, P. Drexler, M. Düren, E. Etzelmüller, M. Galuska, E. Gutz, C. Hahn, A. Hayrapetyan, M. Kesselkaul, W. Kühn, T. Kuske, J. S. Lange, Y. Liang, V. Metag, M. Nanova, S. Nazarenko, R. Novotny, T. Quagli, S. Reiter, J. Rieke, C. Rosenbaum, M. Schmidt, R. Schnell, H. Stenzel, U. Thöring, M. Ullrich, M. N. Wagner, T. Wasem, B. Wohlfahrt, H. Zaunick, D. Ireland, G. Rosner, B. Seitz, P.N. Deepak, A. Kulkarni, A. Apostolou, M. Babai, M. Kavatsyuk, P. J. Lemmens, M. Lindemulder, H. Loehner, J. Messchendorp, P. Schakel, H. Smit, M. Tiemens, J. C. van der Weele, R. Veenstra, S. Vejdani, K. Dutta, K. Kalita, A. Kumar, A. Roy, H. Sohlbach, M. Bai, L. Bianchi, M. Büscher, L. Cao, A. Cebulla, R. Dosdall, A. Gillitzer, F. Goldenbaum, D. Grunwald, A. Herten, Q. Hu, G. Kemmerling, H. Kleines, A. Lehrach, R. Nellen, H. Ohm, S. Orfanitski, D. Prasuhn, E. Prencipe, J. Pütz, J. Ritman, S. Schadmand, T. Sefzick, V. Serdyuk, G. Sterzenbach, T. Stockmanns, P. 
Wintz, P. Wüstner, H. Xu, A. Zambanini, S. Li, Z. Li, Z. Sun, H. Xu, V. Rigato, L. Isaksson, P. Achenbach, O. Corell, A. Denig, M. Distler, M. Hoek, A. Karavdina, W. Lauth, Z. Liu, H. Merkel, U. Müller, J. Pochodzalla, S. Sanchez, S. Schlimme, C. Sfienti, M. Thiel, H. Ahmadi, S. Ahmed, S. Bleser, L. Capozza, M. Cardinali, A. Dbeyssi, M. Deiseroth, F. Feldbauer, M. Fritsch, B. Fröhlich, P. Jasinski, D. Kang, D. Khaneft, R. Klasen, H. H. Leithoff, D. Lin, F. Maas, S. Maldaner, M. Marta, M. Michel, M. C. Mora Espí, C. Morales Morales, C. Motzko, F. Nerling, O. Noll, S. Pflüger, A. Pitka, D. Rodríguez Piñeiro, A. Sanchez-Lorente, M. Steinen, R. Valente, T. Weber, M. Zambrana, I. Zimmermann, A. Fedorov, M. Korjik, O. Missevitch, A. Boukharov, O. Malyshev, I. Marishev, V. Balanutsa, P. Balanutsa, V. Chernetsky, A. Demekhin, A. Dolgolenko, P. Fedorets, A. Gerasimov, V. Goryachev, V. Chandratre, V. Datar, D. Dutta, V. Jha, H. Kumawat, A.K. Mohanty, A. Parmar, B. Roy, G. Sonika, C. Fritzsch, S. Grieser, A. Hergemöller, B. Hetz, N. Hüsken, A. Khoukaz, J. P. Wessels, K. Khosonthongkee, C. Kobdaj, A. Limphirat, P. Srisawad, Y. Yan, M. Barnyakov, A. Yu. Barnyakov, K. Beloborodov, A. E. Blinov, V. E. Blinov, V. S. Bobrovnikov, S. Kononov, E. A. Kravchenko, I. A. Kuyanov, K. Martin, A. P. Onuchin, S. Serednyakov, A. Sokolov, Y. Tikhonov, E. Atomssa, R. Kunne, D. Marchand, B. Ramstein, J. van de Wiele, Y. Wang, G. Boca, S. Costanza, P. Genova, P. Montagna, A. Rotondi, V. Abramov, N. Belikov, S. Bukreeva, A. Davidenko, A. Derevschikov, Y. Goncharenko, V. Grishin, V. Kachanov, V. Kormilitsin, A. Levin, Y. Melnik, N. Minaev, V. Mochalov, D. Morozov, L. Nogach, S. Poslavskiy, A. Ryazantsev, S. Ryzhikov, P. Semenov, I. Shein, A. Uzunian, A. Vasiliev, A. Yakutin, E. Tomasi-Gustafsson, U. Roy, B. Yabsley, S. Belostotski, G. Gavrilov, A. Izotov, S. Manaenkov, O. Miklukho, D. Veretennikov, A. Zhdanov, K. Makonyi, M. Preston, P. Tegner, D. Wölbing, T. Bäck, B. Cederwall, A. K. Rai, S. Godre, D. Calvo, S. Coli, P. De Remigis, A. Filippi, G. Giraudo, S. Lusso, G. Mazza, M. Mignone, A. Rivetti, R. Wheadon, F. Balestra, F. Iazzi, R. Introzzi, A. Lavagno, J. Olave, A. Amoroso, M. P. Bussa, L. Busso, F. De Mori, M. Destefanis, L. Fava, L. Ferrero, M. Greco, J. Hu, L. Lavezzi, M. Maggiora, G. Maniscalco, S. Marcello, S. Sosio, S. Spataro, R. Birsa, F. Bradamante, A. Bressan, A. Martin, H. Calen, W. Ikegami Andersson, T. Johansson, A. Kupsc, P. Marciniewski, M. Papenbrock, J. Pettersson, K. Schönning, M. Wolke, B. Galnander, J. Diaz, V. Pothodi Chackara, A. Chlopik, G. Kesik, D. Melnychuk, B. Slowinski, A. Trzcinski, M. Wojciechowski, S. Wronka, B. Zwieglinski, P. Bühler, J. Marton, D. Steinschaden, K. Suzuki, E. Widmann, J. Zmeskal
Sept. 29, 2016 hep-ex, nucl-ex
Simulation results for future measurements of electromagnetic proton form factors at \PANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel $\bar p p \to e^+ e^-$ is studied on the basis of two different but consistent procedures. The suppression of the main background channel, $\textit{i.e.}$ $\bar p p \to \pi^+ \pi^-$, is studied. Furthermore, the background versus signal efficiency, statistical and systematical uncertainties on the extracted proton form factors are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. However, a slightly better precision is achieved in the PandaRoot study in a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
First application of superconducting transition-edge-sensor microcalorimeters to hadronic-atom x-ray spectroscopy (1608.05436)
S. Okada, D. A. Bennett, C. Curceanu, W. B. Doriese, J. W. Fowler, J. Gard, F. P. Gustafsson, T. Hashimoto, R. S. Hayano, S. Hirenzaki, J. P. Hays-Wehle, G. C. Hilton, N. Ikeno, M. Iliescu, S. Ishimoto, K. Itahashi, M. Iwasaki, T. Koike, K. Kuwabara, Y. Ma, J. Marton, H. Noda, G. C. O'Neil, H. Outa, C. D. Reintsema, M. Sato, D. R. Schmidt, H. Shi, K. Suzuki, T. Suzuki, D. S. Swetz, H. Tatsuno, J. Uhlig, J. N. Ullom, E. Widmann, S. Yamada, J. Yamagata-Sekihara, J. Zmeskal
Aug. 18, 2016 nucl-ex, nucl-th, physics.atom-ph, physics.ins-det
High-resolution pionic-atom x-ray spectroscopy was performed with an x-ray spectrometer based on a 240-pixel array of superconducting transition-edge-sensor (TES) microcalorimeters at the piM1 beam line of the Paul Scherrer Institute. X-rays emitted by pionic carbon via the 4f->3d transition and the parallel 4d->3p transition were observed with a full-width-at-half-maximum energy resolution of 6.8 eV at 6.4 keV. Measured x-ray energies are consistent with calculated electromagnetic values which considered the strong-interaction effect assessed via the Seki-Masutani potential for the 3p energy level, and favor the electronic population of two filled 1s electrons in the K-shell. Absolute energy calibration with an uncertainty of 0.1 eV was demonstrated under a high-rate hadron beam condition of 1.45 MHz. This is the first application of a TES spectrometer to hadronic-atom x-ray spectroscopy and is an important milestone towards next-generation high-resolution kaonic-atom x-ray spectroscopy.
Centrality dependence of subthreshold $\phi$ meson production in Ni+Ni collisions at 1.9A GeV (1602.04378)
K. Piasecki, Z. Tymiński, N. Herrmann, R. Averbeck, A. Andronic, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, E. Cordier, P. Crochet, O. Czerwiakowa, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, P. Gasik, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, M. Korolija, R. Kotte, A. Lebedev, Y. Leifels, A. Le Fèvre, J.L. Liu, X. Lopez, A. Mangiarotti, V. Manko, J. Marton, T. Matulewicz, M. Merschmeyer, R. Münzer, D. Pelte, M. Petrovici, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, G. Stoicea, K. Suzuki, P. Wagner, I. Weber, E. Widmann, K. Wiśniewski, Z.G. Xiao, H.S. Xu, I. Yushmanov, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
June 9, 2016 nucl-ex
We analysed the $\phi$ meson production in central Ni+Ni collisions at the beam kinetic energy of 1.93A GeV with the FOPI spectrometer and found the production probability per event of $[8.6 ~\pm~ 1.6 ~(\text{stat}) \pm 1.5 ~(\text{syst})] \times 10^{-4}$. This new data point allows for the first time to inspect the centrality dependence of the subthreshold $\phi$ meson production in heavy-ion collisions. The rise of $\phi$ meson multiplicity per event with mean number of participants can be parameterized by the power function with exponent $\alpha = 1.8 \pm 0.6$. The ratio of $\phi$ to $\text{K}^-$ production yields seems not to depend within the experimental uncertainties on the collision centrality, and the average of measured values was found to be $0.36 \pm 0.05$.
Strange meson production in Al+Al collisions at 1.9A GeV (1512.06988)
P. Gasik, K. Piasecki, N. Herrmann, Y. Leifels, T. Matulewicz, A. Andronic, R. Averbeck, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, P. Crochet, O. Czerwiakowa, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, R. Kotte, A. Lebedev, A. Le Fèvre, J.L. Liu, X. Lopez, V. Manko, J. Marton, R. Münzer, M. Petrovici, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, K. Suzuki, Z. Tymiński, P. Wagner, I. Weber, E. Widmann, K. Wiśniewski, Z.G. Xiao, I. Yushmanov, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
May 17, 2016 nucl-ex
The production of K$^+$, K$^-$ and $\varphi$(1020) mesons is studied in Al+Al collisions at a beam energy of 1.9A GeV which is close or below the production threshold in NN reactions. Inverse slopes, anisotropy parameters, and total emission yields of K$^{\pm}$ mesons are obtained. A comparison of the ratio of kinetic energy distributions of K$^-$ and K$^+$ mesons to the HSD transport model calculations suggests that the inclusion of the in-medium modifications of kaon properties is necessary to reproduce the ratio. The inverse slope and total yield of $\phi$ mesons are deduced. The contribution to K$^-$ production from $\phi$ meson decays is found to be [17 $\pm$ 3 (stat) $^{+2}_{-7}$ (syst)] %. The results are in line with previous K$^{\pm}$ and $\phi$ data obtained for different colliding systems at similar incident beam energies.
$K$-series X-ray yield measurement of kaonic hydrogen atoms in a gaseous target (1603.00094)
M. Bazzi, G. Beer, G. Bellotti, C. Berucci, A.M. Bragadireanu, D. Bosnar, M. Cargnelli, C. Curceanu, A.D. Butt, A. d'Uffizi, C. Fiorini, F. Ghio, C. Guaraldo, R.S. Hayanao, M. Iliescu, T. Ishiwatari, M. Iwasaki, P. Levi Sandri, J. Marton, S. Okada, D. Pietreanu, K. Piscicchia, A. Romero Vidal, E. Sbardella, A. Scordo, H. Shi, D.L. Sirghi, F. Sirghi, H. Tatsuno, O. Vazquez Doce, E. Widmann, J. Zmeskal
April 1, 2016 nucl-ex
We measured the $K$-series X-rays of the $K^{-}p$ exotic atom in the SIDDHARTA experiment with a gaseous hydrogen target of 1.3 g/l, which is about 15 times the $\rho_{\rm STP}$ of hydrogen gas. At this density, the absolute yields of kaonic X-rays, when a negatively charged kaon stopped inside the target, were determined to be 0.012$^{+0.004}_{-0.003}$ for $K_{\alpha}$ and 0.043$^{+0.012}_{-0.011}$ for all the $K$-series transitions $K_{tot}$. These results, together with the KEK E228 experiment results, confirm for the first time a target density dependence of the yield predicted by the cascade models, and provide valuable information to refine the parameters used in the cascade models for the kaonic atoms.
Strong interaction studies with kaonic atoms (1603.08755)
J. Marton, M. Bazzi, G. Beer, C. Berucci, D. Bosnar, A.M. Bragadireanu, M. Cargnelli, A. Clozza, C. Curceanu, A. d'Uffizi, C. Fiorini, F. Ghio, C. Guaraldo, R. Hayano, M. Iliescu, T. Ishiwatari, M. Iwasaki, P. Levi Sandri, S. Okada, D. Pietreanu, K. Piscicchia, T. Ponta, R. Quaglia, A. Romero Vidal, E. Sbardella, A. Scordo, H. Shi, D.L. Sirghi, F. Sirghi, H. Tatsuno, O. Vazquez Doce, E. Widmann, J. Zmeskal
The strong interaction of antikaons with nucleons and nuclei in the low-energy regime represents an active research field connected intrinsically with few-body physics. There are important open questions like the question of antikaon nuclear bound states. A unique and rather direct experimental access to the antikaon-nucleon scattering lengths is provided by precision X-ray spectroscopy of transitions in low-lying states of light kaonic atoms like kaonic hydrogen isotopes. In the SIDDHARTA experiment at the electron-positron collider DAFNE of LNF-INFN we measured the most precise values of the strong interaction observables, i.e. the strong interaction on the 1s ground state of the electromagnetically bound kaonic hydrogen atom leading to a hadronic shift and a hadronic broadening of the 1s state. The SIDDHARTA result triggered new theoretical work which achieved major progress in the understanding of the low-energy strong interaction with strangeness. Antikaon-nucleon scattering lengths have been calculated constrained by the SIDDHARTA data on kaonic hydrogen. For the extraction of the isospin-dependent scattering lengths a measurement of the hadronic shift and width of kaonic deuterium is necessary. Therefore, new X-ray studies with the focus on kaonic deuterium are in preparation (SIDDHARTA2). Many improvements in the experimental setup will allow to measure kaonic deuterium which is challenging due to the anticipated low X-ray yield. Especially important are the data on the X-ray yields of kaonic deuterium extracted from a exploratory experiment within SIDDHARTA.
Structure near $K^-$+$p$+$p$ threshold in the in-flight $^3$He$(K^-,\Lambda p)n$ reaction (1601.06876)
J-PARC E15 Collaboration: Y. Sada, S. Ajimura, M. Bazzi, G. Beer, H. Bhang, M. Bragadireanu, P. Buehler, L. Busso, M. Cargnelli, S. Choi, C. Curceanu, S. Enomoto, D. Faso, H. Fujioka, Y. Fujiwara, T. Fukuda, C. Guaraldo, T. Hashimoto, R. S. Hayano, T. Hiraiwa, M. Iio, M. Iliescu, K. Inoue, Y. Ishiguro, T. Ishikawa, S. Ishimoto, T. Ishiwatari, K. Itahashi, M. Iwai, M. Iwasaki, Y. Kato, S. Kawasaki, P. Kienle, H. Kou, Y. Ma, J. Marton, Y. Matsuda, Y. Mizoi, O. Morra, T. Nagae, H. Noumi, H. Ohnishi, S. Okada, H. Outa, K. Piscicchia, A. Romero Vidal, A. Sakaguchi, F. Sakuma, M. Sato, A. Scordo, M. Sekimoto, H. Shi, D. Sirghi, F. Sirghi, K. Suzuki, S. Suzuki, T. Suzuki, K. Tanida, H. Tatsuno, M. Tokuda, D. Tomono, A. Toyoda, K. Tsukada, O. Vazquez Doce, E. Widmann, B. K. Wuenschek, T. Yamaga, T. Yamazaki, H. Yim, Q. Zhang, J. Zmeskal
To search for an S= -1 di-baryonic state which decays to $\Lambda p$, the $ {\rm{}^3He}(K^-,\Lambda p)n_{missing}$ reaction was studied at 1.0 GeV/$c$. Unobserved neutrons were kinematically identified from the missing mass $M_X$ of the $ {\rm{}^3He}(K^-,\Lambda p)X$ reaction in order to have a large acceptance for the $\Lambda pn$ final state. The observed $\Lambda p n$ events, distributed widely over the kinematically allowed region of the Dalitz plot, establish that the major component comes from a three nucleon absorption process. A concentration of events at a specific neutron kinetic energy was observed in a region of low momentum transfer to the $\Lambda p$. To account for the observed peak structure, the simplest S-wave pole was assumed to exist in the reaction channel, having Breit-Wigner form in energy and with a Gaussian form-factor. A minimum $\chi^2$ method was applied to deduce its mass $M_X\ =$ 2355 $ ^{+ 6}_{ - 8}$ (stat.) $ \pm 12$ (syst.) MeV/c$^2$, and decay-width $\Gamma_X\ = $ 110 $ ^{+ 19}_{ - 17}$ (stat.) $ \pm 27$ (syst.) MeV/c$^2$, respectively. The form factor parameter $Q_X \sim$ 400 MeV/$c$ implies that the range of interaction is about 0.5
VIP 2: Experimental tests of the Pauli Exclusion Principle for electrons (1602.00867)
A. Pichler, S. Bartalucci, M. Bazzi, S. Bertolucci, C. Berucci, M. Bragadireanu, M. Cargnelli, A. Clozza, C. Curceanu, L. De Paolis, S. Di Matteo, A. D'Uffizi, J.-P. Egger, C. Guaraldo, M. Iliescu, T. Ishiwatari, M. Laubenstein, J. Marton, E. Milotti, D. Pietreanu, K. Piscicchia, T. Ponta, E. Sbardella, A. Scordo, H. Shi, D. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez-Doce, E. Widmann, J. Zmeskal
Feb. 2, 2016 physics.ins-det
The Pauli Exclusion Principle (PEP) was famously discovered in 1925 by the Austrian physicist Wolfgang Pauli. Since then, it underwent several experimental tests. Starting in 2006, the VIP (Violation of the Pauli Principle) experiment looked for 2p to 1s X-ray transitions in copper, where 2 electrons are present in the 1s state before the transition happens. These transitions violate the PEP, and the lack of detection of the corresponding X-ray photons led to a preliminary upper limit for the violation of the PEP of 4.7 * 10^(-29). The follow-up experiment VIP 2 is currently in the testing phase and will be transported to its final destination, the underground laboratory of Gran Sasso in Italy, in autumn 2015. Several improvements compared to its predecessor, such as the use of new X-ray detectors and active shielding from background, give rise to the goal of improving the upper limit on the probability for the violation of the Pauli Exclusion Principle by 2 orders of magnitude.
Application of photon detectors in the VIP2 experiment to test the Pauli Exclusion Principle (1602.00898)
The Pauli Exclusion Principle (PEP) was introduced by the Austrian physicist Wolfgang Pauli in 1925. Since then, several experiments have checked its validity. From 2006 until 2010, the VIP (VIolation of the Pauli Principle) experiment took data at the LNGS underground laboratory to test the PEP. This experiment looked for electronic 2p to 1s transitions in copper, where 2 electrons are in the 1s state before the transition happens. These transitions violate the PEP. The lack of detection of X-ray photons coming from these transitions resulted in a preliminary upper limit for the violation of the PEP of $4.7 \times 10^{-29}$. Currently, the successor experiment VIP2 is under preparation. The main improvements are, on one side, the use of Silicon Drift Detectors (SDDs) as X-ray photon detectors. On the other side, active shielding is implemented, which consists of plastic scintillator bars read by Silicon Photomultipliers (SiPMs). The employment of these detectors will improve the upper limit for the violation of the PEP by around 2 orders of magnitude.
Spontaneously emitted X-rays: an experimental signature of the dynamical reduction models (1601.06617)
C. Curceanu, S. Bartalucci, A. Bassi, M. Bazzi, S. Bertolucci, C. Berucci, A. M. Bragadireanu, M. Cargnelli, A. Clozza, L. De Paolis, S. Di Matteo, S. Donadi, A. DUffizi, J-P. Egger, C. Guaraldo, M. Iliescu, T. Ishiwatari, M. Laubenstein, J. Marton, E. Milotti, A. Pichler, D. Pietreanu, K. Piscicchia, T .Ponta, E. Sbardella, A. Scordo, H. Shi, D.L. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, J. Zmeskal
Jan. 25, 2016 quant-ph, physics.ins-det
We present the idea of searching for X-rays as a signature of the mechanism inducing the spontaneous collapse of the wave function. Such a signal is predicted by the continuous spontaneous localization theories, which solve the "measurement problem" by modifying the Schrödinger equation. We will show some encouraging preliminary results and discuss future plans and strategy.
Searches for the Violation of Pauli Exclusion Principle at LNGS in VIP(-2) experiment (1601.05828)
H Shi, S. Bartalucci, S. Bertolucci, C. Berucci, A. M. Bragadireanu, M. Cargnelli, A. Clozza, C. Curceanu, L. De Paolis, S. Di Matteo, A. d'Uffizi, J.-P. Egger, C. Guaraldo, M. Iliescu, T. Ishiwatari, J. Marton, M. Laubenstein, E. Milotti, D. Pietreanu, K. Piscicchia, T. Ponta, A. Romero Vidal, E. Sbardella, A. Scordo, D.L. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, E. Widmann, J. Zmeskal
The VIP (Violation of Pauli exclusion principle) experiment and its follow-up experiment VIP-2 at the Laboratori Nazionali del Gran Sasso (LNGS) search for X-rays from Cu atomic states that are prohibited by the Pauli Exclusion Principle (PEP). The candidate events, if they exist, will originate from the transition of a $2p$ orbit electron to the ground state which is already occupied by two electrons. The present limit on the probability for PEP violation for electron is 4.7 $\times10^{-29}$ set by the VIP experiment. With upgraded detectors for high precision X-ray spectroscopy, the VIP-2 experiment will improve the sensitivity by two orders of magnitude.
Absolute Energy Calibration of X-ray TESs with 0.04 eV Uncertainty at 6.4 keV in a Hadron-Beam Environment (1601.03293)
H. Tatsuno, W.B. Doriese, D.A. Bennett, C. Curceanu, J.W. Fowler, J. Gard, F.P. Gustafsson, T. Hashimoto, R.S. Hayano, J.P. Hays-Wehle, G.C. Hilton, M. Iliescu, S. Ishimoto, K. Itahashi, M. Iwasaki, K. Kuwabara, Y. Ma, J. Marton, H. Noda, G.C. O'Neil, S. Okada, H. Outa, C.D. Reintsema, M. Sato, D.R. Schmidt, H. Shi, K. Suzuki, T. Suzuki, J. Uhlig, J.N. Ullom, E. Widmann, S. Yamada, J. Zmeskal, D.S. Swetz
Jan. 13, 2016 physics.ins-det
A performance evaluation of superconducting transition-edge sensors (TESs) in the environment of a pion beam line at a particle accelerator is presented. Averaged across the 209 functioning sensors in the array, the achieved energy resolution is 5.2 eV FWHM at Co $K_{\alpha}$ (6.9 keV) when the pion beam is off and 7.3 eV at a beam rate of 1.45 MHz. Absolute energy uncertainty of $\pm$0.04 eV is demonstrated for Fe $K_{\alpha}$ (6.4 keV) with in-situ energy calibration obtained from other nearby known x-ray lines. To achieve this small uncertainty, it is essential to consider the non-Gaussian energy response of the TESs and thermal cross-talk pile-up effects due to charged-particle hits in the silicon substrate of the TES array.
Precision X-ray spectroscopy of kaonic atoms as a probe of low-energy kaon-nucleus interaction (1601.02236)
H. Shi, M. Bazzi, G. Beer, G. Bellotti, C. Berucci, A. M. Bragadireanu, D. Bosnar, M. Cargnelli, C. Curceanu, A. D. Butt, A. d'Uffizi, C. Fiorini, F. Ghio, C. Guaraldo, R. S. Hayano, M. Iliescu, T. Ishiwatari, M. Iwasaki, P. Levi Sandri, J. Marton, S. Okada, D. Pietreanu, K. Piscicchia, A. Romero Vidal, E. Sbardella, A. Scordo, D. L. Sirghi, F. Sirghi, H. Tatsuno, O. Vazquez Doce, E. Widmann, J. Zmeskal
Jan. 10, 2016 nucl-ex
In the exotic atoms where one atomic $1s$ electron is replaced by a $K^{-}$, the strong interaction between the $K^{-}$ and the nucleus introduces an energy shift and broadening of the low-lying kaonic atomic levels which are determined by only the electromagnetic interaction. By performing X-ray spectroscopy for Z=1,2 kaonic atoms, the SIDDHARTA experiment determined with high precision the shift and width for the $1s$ state of $K^{-}p$ and the $2p$ state of kaonic helium-3 and kaonic helium-4. These results provided unique information of the kaon-nucleus interaction in the low energy limit.
Shedding New Light on Kaon-Nucleon/Nuclei Interaction and Its Astrophysical Implications with the AMADEUS Experiment at DAFNE (1512.06555)
A. Scordo, M. Bazzi, G. Bellotti, C. Berucci, D. Bosnar, A.M. Bragadireanu, A. Clozza, M. Cargnelli, C. Curceanu, A. Dawood Butt, R. Del Grande, L. Fabbietti, C. Fiorini, F. Ghio, C. Guaraldo, M. Iliescu, P. Levi Sandri, J. Marton, D. Pietreanu, K. Piscicchia, H. Shi, D. Sirghi, F. Sirghi, I. Tucakovic, O. Vazquez Doce, W. Wiedmann, J. Zmeskal
The AMADEUS experiment deals with the investigation of the low-energy kaon-nuclei hadronic interaction at the DA{\Phi}NE collider at LNF-INFN, which is fundamental for answering longstanding questions in the non-perturbative QCD strangeness sector. The antikaon-nucleon potential is investigated by searching for signals from possible bound kaonic clusters, which would open the possibility for the formation of cold dense baryonic matter. The confirmation of this scenario may imply a fundamental role of strangeness in astrophysics. AMADEUS step 0 consisted in the reanalysis of the 2004/2005 KLOE dataset, exploiting K- absorptions in H, 4He, 9Be and 12C in the setup materials. In this paper, together with a review on the multi-nucleon K- absorption and the particle identification procedure, the first results on the {\Sigma}0-p channel will be presented, including a statistical analysis on the possible accommodation of a deeply bound state.
K$^-$ absorption on two nucleons and ppK$^-$ bound state search in the $\Sigma^0$p final state (1511.04496)
O. Vazquez Doce, L. Fabbietti, M. Cargnelli, C. Curceanu, J. Marton, K. Piscicchia, A. Scordo, D. Sirghi, I. Tucakovic, S. Wycech, J. Zmeskal, A. Anastasi, F. Curciarello, E. Czerwinski, W. Krzemien, G. Mandaglio, M. Martini, P. Moskal, V. Patera, E. Perez del Rio, M. Silarski
Nov. 14, 2015 nucl-ex
We report the measurement of K$^-$ absorption processes in the $\Sigma^0$p final state and the first exclusive measurement of the two nucleon absorption (2NA) with the KLOE detector. The 2NA process without further interactions is found to be 12\% of the sum of all other contributing processes, including absorption on three and more nucleons or 2NA followed by final state interactions with the residual nucleons. We also determine the possible contribution of the ppK$^-$ bound state to the $\Sigma^0$p final state. A yield of ppK$^- /\mathrm{K^-_{stop}}$ is found to be $(0.044 \pm 0.009\, stat ^{+ 0.004} _{- 0.005} \,syst) \cdot 10^{-2}$ but its statistical significance based on an F-test is only 1$\sigma$.
Microfluidics and Nanofluidics
May 2015, Volume 18, Issue 5–6, pp 717–738
A review of steric interactions of ions: Why some theories succeed and others fail to account for ion size
Dirk Gillespie
First Online: 08 October 2014
As nanofluidic devices become smaller and their surface charges become larger, the steric interactions of ions in the electrical double layers at the device walls will become more important. The ions' size prevents them from overlapping, and the resulting correlations between the ions can produce oscillations in the density profiles. Because device properties are determined by the structure of these double layers, it is more important than ever that theories correctly include steric interactions between ions. This review analyzes what features a theory must have in order to accurately account for steric interactions. It also reviews several popular theories and compares them against Monte Carlo simulations to gauge their accuracy. Successful theories of steric interactions satisfy the contact density theorem of statistical mechanics and use locally averaged concentrations. Theories that do not satisfy these criteria, especially those that use local concentrations (instead of averaged concentrations) to limit local packing fraction, produce qualitatively incorrect double layer structure. The ion sizes, ion concentrations, and surface charges for which steric effects of monovalent ions are important are also analyzed.
Keywords: Electrical double layer · Steric interactions · Excluded volume effects · Theory
I am grateful to Jan Eijkel for the discussion that inspired this review and to Martin Bazant for his critical readings of the manuscript. I am also grateful to Claudio Berti, Ali Mani, Aditya Khair, and Peter Kekenes-Huskey for their helpful suggestions for the manuscript. Lastly, I would like to thank Roland Roth not only for his many improvements to the manuscript, but most importantly for introducing me to and teaching me about the contact density, the low-density limit, and the hard-rod model.
Appendix 1: Derivation of the contact density theorem for charged walls
Here, a specific version of the contact density theorem for a smooth, charged, hard wall in contact with a fluid of charged, hard spheres is derived. More general versions, for example, where the wall is curved, also exist (Bryk et al. 2003, and references therein).
The contact density theorem (sometimes also called the contact value theorem) is a force balance argument: the force on the wall due to the pressure in the bath must equal the force on the wall exerted by the ions. The latter depends on how the ions interact with the wall. In this case, this is due to the short-ranged steric interactions between the ions and the wall and Coulombic interactions:
$$P = \frac{1}{A}\left( {F^{\text{steric}} + F^{\text{Coulomb}} } \right)$$
where A is the area of the wall.
The steric force of the ions of species i in an infinitesimal slice at x on the wall is \(- \rho_{i} (x){\text{d}}U_{i}^{\text{HW}} (x)/{\text{d}}x.\) Therefore, the total steric force per unit area on the wall is
$$\frac{{F^{\text{steric}} }}{A} = - \sum\limits_{i} {\int_{0}^{\infty } {\frac{{{\text{d}}U_{i}^{\text{HW}} }}{{{\text{d}}x}}\rho_{i} (x){\text{d}}x} } .$$
To evaluate this, we use the relation
$$\exp \left( { - U_{i}^{\text{HW}} (x)/kT} \right) = H(x - R_{i} )$$
where \(H(x)\) is the Heaviside step function (0 for a negative argument and 1 for a positive argument). Taking the derivative of that gives
$$- \frac{{{\text{d}}U_{i}^{\text{HW}} }}{{{\text{d}}x}} = kT\delta (x - R_{i} )\exp \left( {U_{i}^{\text{HW}} (x)/kT} \right)$$
where \(\delta (x)\) is the Dirac delta-function. Therefore,
$$- \int_{0}^{\infty } {\frac{{{\text{d}}U_{i} }}{{{\text{d}}x}}\rho_{i} (x){\text{d}}x} = kT\int_{0}^{\infty } {\delta (x - R_{i} )\exp \left( {U_{i}^{\text{HW}} (x)/kT} \right)\rho_{i} (x){\text{d}}x} = kT\rho_{i} (R_{i}^{ + } )$$
where the superscript + indicates the limit taken from above. Here, we have used the fact that \(\exp \left( {U_{i}^{\text{HW}} (x)/kT} \right)\rho_{i} (x)\) is a continuous function, even though each factor has a discontinuity (Hansen and McDonald 2006). [All the functions in Eq. (7) are continuous except \(U_{i}^{\text{HW}} (x),\) which is what gives the concentration its discontinuity; this is known from thermodynamic arguments (Hansen and McDonald 2006).] Therefore,
$$\frac{{F^{\text{steric}} }}{A} = kT\sum\limits_{i} {\rho_{i} (R_{i} )} .$$
For the Coulombic component, we must calculate the electrostatic force that the ions exert on the surface charge. Consider the ions between x and \(x + {\text{d}}x.\) They may be considered a plane of charge with surface charge
$$\sigma_{\text{ions}} (x) = \frac{e}{\varepsilon }\sum\limits_{i} {z_{i} \rho_{i} (x)} {\text{d}}x$$
that emanates an electric field in the x-direction with magnitude (Sears et al. 1987)
$$E_{\text{ions}} = \frac{{\sigma_{\text{ions}} }}{{2\varepsilon_{0} }}.$$
The electrostatic force between the ion surface charge and a small patch of the wall surface charge with area dA is \(E_{\text{ions}} \sigma {\text{d}}A.\) Integrating this over the entire wall area and then over all distances from the wall gives
$$\begin{aligned} \frac{{F^{\text{Coulomb}} }}{A} & = \sigma \int_{0}^{\infty } {\frac{{\sigma_{\text{ions}} (x)}}{{2\varepsilon_{0} }}{\text{d}}x} \\ & = \frac{\sigma }{{2\varepsilon \varepsilon_{0} }}\int_{0}^{\infty } {e\sum\limits_{i} {z_{i} \rho_{i} (x)} {\text{d}}x} \\ & = - \frac{{\sigma^{2} }}{{2\varepsilon \varepsilon_{0} }} \\ \end{aligned}$$
because the ions in total neutralize the surface charge completely.
Combining Eqs. (28), (33), and (36) gives the contact density theorem for charged systems:
$$kT\sum\limits_{i} {\rho_{i} (R_{i} )} = P + \frac{{\sigma^{2} }}{{2\varepsilon \varepsilon_{0} }}.$$
This theorem is valid until the ions freeze, preventing infinite concentrations.
One important thing to note is that for a theory to be correct, it is necessary, but most definitely not sufficient, to satisfy Eq. (37). For example, two different theories (e.g., DFT and DCA-PB) have two different formulas for the pressure, and so each theory will give different contact densities. This is most obvious in the top row of Fig. 1 where the DCA-PB pressure does not include a steric component while the DFT pressure does. However, each theory approximately satisfies Eq. (37) for its pressure and its contact densities (Gillespie et al. 2005a). This indicates self-consistency within each theory, but does not guarantee the accuracy of the theory (i.e., that it compares well to Monte Carlo simulations). Stated differently, the contact density theorem is a consistency check of a theory. If Eq. (37) is not satisfied, then the theory cannot be correct; if Eq. (37) is satisfied, it may or may not be correct. The results in the main text show that the Bikerman and BMCSL theories fail to satisfy the contact density theorem by a wide margin.
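For concreteness, the following Python sketch shows how Eq. (37) can be used as such a consistency check. The thermodynamic state, the stand-in ideal-gas pressure, and the single contact density in the example are placeholder values chosen only to illustrate the bookkeeping; they are not outputs of any theory discussed in the main text.

```python
import numpy as np

kT = 1.380649e-23 * 298.15      # thermal energy at 298 K, J
eps0 = 8.8541878128e-12         # vacuum permittivity, F/m

def contact_density_mismatch(contact_densities, pressure, sigma_surf, eps_r):
    """Relative mismatch in the charged-wall contact density theorem, Eq. (37):
    kT * sum_i rho_i(R_i) versus P + sigma^2 / (2 * eps_r * eps0).
    contact_densities are the rho_i(R_i) predicted by some theory (1/m^3),
    pressure is that theory's bulk pressure (Pa), and sigma_surf is the wall
    surface charge density (C/m^2)."""
    lhs = kT * np.sum(contact_densities)
    rhs = pressure + sigma_surf**2 / (2.0 * eps_r * eps0)
    return abs(lhs - rhs) / rhs

# Placeholder example: a 1 M 1:1 electrolyte with an ideal-gas pressure and a
# contact density constructed to satisfy the theorem exactly.
rho_bulk = 2 * 0.6022e27                      # total ion number density, 1/m^3
P = rho_bulk * kT                             # stand-in pressure, Pa
sigma_surf, eps_r = -0.1, 78.5                # C/m^2 and relative permittivity
rho_contact = (P + sigma_surf**2 / (2 * eps_r * eps0)) / kT
print(contact_density_mismatch([rho_contact], P, sigma_surf, eps_r))   # ~0.0
```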
Appendix 2: Formulas for the low-density limit
A virial type of expansion (i.e., an expansion in the low-density limit) of the partition function gives that the minimum of the Helmholtz free energy (the grand potential) in the low-density limit is (Roth 2010)
$$\begin{aligned} F & = F^{\text{ideal}} + kT\sum\limits_{i} {\int {\rho_{i} ({\mathbf{r}})\left( {U_{i}^{\text{wall}} ({\mathbf{r}}) - \mu_{i}^{\text{bath}} } \right){\text{d}}{\mathbf{r}}} } \\ & \quad - \frac{1}{2}kT\sum\limits_{i,j} {\int {\int {\rho_{i} ({\mathbf{r}})\rho_{j} ({\mathbf{r}}^{{\prime }} )f_{ij} \left( {\left| {{\mathbf{r}} - {\mathbf{r}}^{{\prime }} } \right|} \right){\text{d}}{\mathbf{r}}^{{\prime }} } {\text{d}}{\mathbf{r}}} } . \\ \end{aligned}$$
(The integral without limits indicates integration over all space.) The ideal gas components are always given by
$$F^{\text{ideal}} = kT\sum\limits_{i} {\int {\rho_{i} ({\mathbf{r}})\left[ {\ln \left( {\varLambda_{i}^{3} \rho_{i} ({\mathbf{r}})} \right) - 1} \right]{\text{d}}{\mathbf{r}}} }$$
where the thermal de Broglie wavelength is given by Eq. (2).
In Eq. (38), \(f_{ij} \left( {\left| {{\mathbf{r}} - {\mathbf{r^{\prime}}}} \right|} \right)\) is the Mayer f-function defined as
$$f_{ij} (r) = \exp \left( { - u_{ij} (r)/kT} \right) - 1$$
for the interaction potential \(u_{ij} (r)\) between ions of species i and j. In the primitive model of ions, the interaction potential is a sum of the hard-sphere potential that prevents ions overlapping and the Coulomb potential:
$$u_{ij} (r) = u_{ij}^{\text{HS}} (r) + z_{i} z_{j} \psi (r)$$
$$u_{ij}^{\text{HS}} (r) = \left\{ {\begin{array}{*{20}l} 0 \hfill & \quad {r > R_{i} + R_{j} } \hfill \\ \infty \hfill & \quad {r \le R_{i} + R_{j} } \hfill \\ \end{array} } \right.$$
$$\psi (r) = \frac{{e^{2} }}{{4\pi \varepsilon \varepsilon_{0} r}}.$$
With Eq. (42),
$$\begin{aligned} f_{ij}^{\text{HS}} (r) & = \exp \left( { - u_{ij}^{\text{HS}} (r)/kT} \right) - 1 \\ & = \left\{ {\begin{array}{*{20}l} 0 \hfill & \quad {r > R_{i} + R_{j} } \hfill \\ { - 1} \hfill & \quad {r \le R_{i} + R_{j} } \hfill \\ \end{array} } \right. \\ & = - H\left( {R_{i} + R_{j} - r} \right). \\ \end{aligned}$$
One can split the Mayer f-function into two additive components:
$$f_{ij} (r) = f_{ij}^{\text{HS}} (r) + f_{ij}^{\text{C}} (r)$$
where \(f_{ij}^{\text{C}} (r)\) is the Mayer f-function for the effective Coulomb interaction potential
$$\psi_{ij}^{\text{C}} (r) = \left\{ {\begin{array}{*{20}l} {z_{i} z_{j} \psi (r)} \hfill &\quad {r > R_{i} + R_{j} } \hfill \\ 0 \hfill & \quad{r \le R_{i} + R_{j} .} \hfill \\ \end{array} } \right.$$
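As a small illustration of these definitions, the sketch below evaluates the hard-sphere Mayer f-function of Eq. (44) and the Mayer f-function of the effective Coulomb potential of Eq. (46) via Eq. (40). Writing the Coulomb part in terms of a Bjerrum length, and the numerical values used (ion radii, the Bjerrum length of room-temperature water), are conveniences assumed here rather than quantities taken from the text.

```python
import numpy as np

def f_hs(r, Ri, Rj):
    """Hard-sphere Mayer f-function, Eq. (44): -1 inside contact, 0 outside."""
    return np.where(r <= Ri + Rj, -1.0, 0.0)

def f_coulomb(r, zi, zj, Ri, Rj, bjerrum):
    """Mayer f-function, Eq. (40), of the effective Coulomb potential of
    Eq. (46), written with a Bjerrum length lambda_B = e^2/(4*pi*eps*eps0*kT)
    so that u^C/kT = zi*zj*lambda_B/r outside contact and 0 inside."""
    beta_u = np.where(r > Ri + Rj, zi * zj * bjerrum / r, 0.0)
    return np.exp(-beta_u) - 1.0

# Monovalent ions of radius 0.15 nm in water (Bjerrum length ~0.71 nm):
r = np.array([0.1, 0.3, 0.5, 1.0, 2.0])     # center-to-center distance, nm
print(f_hs(r, 0.15, 0.15))
print(f_coulomb(r, +1, -1, 0.15, 0.15, 0.71))
```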
To compute the chemical potential, one uses the fact that it is the functional derivative of the Helmholtz free energy:
$$0 = \frac{\delta F}{{\delta \rho_{i} ({\mathbf{r}})}}.$$
[Derivations of this relationship and primers on the mathematics of functional derivatives may be found in the books by Davis (1996) and Hansen and McDonald (2006).] In the low-density limit of Eq. (38), this gives
$$\begin{aligned} \mu_{i}^{\text{bath}} & = \mu_{i}^{\text{ideal}} ({\mathbf{r}}) + U_{i}^{\text{wall}} ({\mathbf{r}}) \\ & \quad - \,kT\sum\limits_{j} {\int {\rho_{j} ({\mathbf{r}}^{{\prime }} )f_{ij} \left( {\left| {{\mathbf{r}} - {\mathbf{r}}^{{\prime }} } \right|} \right){\text{d}}{\mathbf{r}}^{{\prime }} } } \\ \end{aligned}$$
where the ideal gas term is
$$\mu_{i}^{\text{ideal}} ({\mathbf{r}}) = kT\ln \left( {\varLambda_{i}^{3} \rho_{i} ({\mathbf{r}})} \right).$$
Using Eqs. (44) and (45), one can rewrite Eq. (48) as
$$\begin{aligned} \mu_{i}^{\text{bath}} & = \mu_{i}^{\text{ideal}} ({\mathbf{r}}) + U_{i}^{\text{wall}} ({\mathbf{r}}) + \mu_{i}^{\text{HS,LD}} ({\mathbf{r}}) \\ & \quad - \,kT\sum\limits_{j} {\int {\rho_{j} ({\mathbf{r}}^{{\prime }} )f_{ij}^{\text{C}} \left( {\left| {{\mathbf{r}} - {\mathbf{r}}^{{\prime }} } \right|} \right){\text{d}}{\mathbf{r}}^{{\prime }} } } . \\ \end{aligned}$$
where the steric term (superscripted HS for hard sphere) in the low-density (labeled LD) limit is
$$\mu_{i}^{\text{HS,LD}} ({\mathbf{r}}) = kT\sum\limits_{j} {\int {\rho_{j} ({\mathbf{r}}^{{\prime }} )H\left( {R_{i} + R_{j} - \left| {{\mathbf{r}} - {\mathbf{r}}^{{\prime }} } \right|} \right){\text{d}}{\mathbf{r}}^{{\prime }} } } .$$
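For a planar double layer, where the profiles depend only on the distance x from the wall, the three-dimensional integral in Eq. (51) reduces to a one-dimensional integral of \(\rho_{j} (x^{\prime})\) weighted by the circular cross-section of the overlap sphere of radius \(R_{i} + R_{j}\). The sketch below implements that reduction; the grid, the flat bulk profile, and the ion radius are illustrative assumptions used only to check the code against the known bulk value.

```python
import numpy as np

def mu_hs_low_density(x_grid, profiles, radii, i):
    """Low-density steric chemical potential of species i, Eq. (51), in planar
    geometry: the integral of rho_j over the overlap sphere of radius
    R_ij = R_i + R_j becomes a 1D integral weighted by pi*(R_ij^2 - (x-x')^2).
    Returns mu_i^{HS,LD}(x)/kT on x_grid; profiles[j] is rho_j on x_grid."""
    dx = x_grid[1] - x_grid[0]
    mu = np.zeros_like(x_grid)
    for j, rho_j in enumerate(profiles):
        Rij = radii[i] + radii[j]
        for k, x in enumerate(x_grid):
            u = x_grid - x
            weight = np.where(np.abs(u) <= Rij, np.pi * (Rij**2 - u**2), 0.0)
            mu[k] += np.sum(weight * rho_j) * dx
    return mu

# For a uniform bulk profile, the result (away from the grid edges) should
# equal the bulk value rho * (4/3) * pi * (R_i + R_j)^3.
x = np.linspace(0.0, 10.0, 2001)       # nm
rho = np.full_like(x, 0.6)             # number density, 1/nm^3 (about 1 M)
print(mu_hs_low_density(x, [rho], [0.15], i=0)[1000],
      0.6 * 4.0 / 3.0 * np.pi * 0.3**3)
```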
Appendix 3: Formulas for the hard-rod model
A detailed derivation of the exact solution to the hard-rod model is given in Chapter 10 of the book by Davis (1996). Here, only some basic formulas are summarized. Although only a single-species fluid is considered in the main text, the general formulas for mixtures are given here since they are not substantially more cumbersome.
The chemical potential \(\mu_{i}\) of species i is
$$\begin{aligned} (kT)^{ - 1} \mu_{i} (x) & = \ln \left( {a_{i} \rho_{i} (x)} \right) - \ln \left( {1 - h_{i} (x)} \right) \\ & \quad + \,\sum\limits_{k} {\int_{{x - a_{ik} }}^{{x + A_{ik} }} {\frac{{\rho_{k} (y)}}{{1 - h_{k} (y)}}{\text{d}}y} } + \beta U_{i}^{\text{wall}} (x) \\ \end{aligned}$$
where \(\rho_{i} (x)\) is the concentration profile (number of particles per unit length) and
$$h_{i} (x) = \sum\limits_{j} {\int_{{x + A_{ij} }}^{{x + a_{ij} }} {\rho_{j} (z){\text{d}}z} }$$
$$\begin{aligned} a_{ij} & = \frac{1}{2}(a_{i} + a_{j} ) \\ A_{ij} & = \frac{1}{2}(a_{i} - a_{j} ) \\ \end{aligned}$$
for rods of length \(a_{i}\). In the bath region (e.g., far away from the wall) where the concentrations are constant, this reduces to
$$(kT)^{ - 1} \mu_{i}^{\text{bath}} = \ln \left( {a_{i} \rho_{i} } \right) + \ln \left( {\frac{P/kT}{{\sum\nolimits_{j} {\rho_{j} } }}} \right) + a_{i} \frac{P}{kT}$$
where the pressure is given by
$$\frac{P}{kT} = \frac{{\sum\nolimits_{j} {\rho_{j} } }}{{1 - \sum\nolimits_{j} {a_{j} \rho_{j} } }}.$$
This system may be solved numerically as described by Davis (1996). For many external potentials \(U_{i}^{\text{wall}} (x)\), it may also be solved in the following simpler way. Define
$$f_{i} (x) = \frac{{\rho_{i} (x)}}{{1 - h_{i} (x)}}.$$
Since the system is in equilibrium, the left-hand side of Eq. (52) must be constant, with the chemical potential given by Eq. (55). Then, Eq. (52) can be rewritten as a fixed-point problem [i.e., in the form \({\mathbf{x = F(x)}}\)]:
$$f_{i} (x) = \chi_{i} (x)\exp \left( {\sum\limits_{k} {\int_{{x - a_{ik} }}^{{x + A_{ik} }} {f_{k} (y){\text{d}}y} } } \right)$$
$$\chi_{i} (x) = \exp \left( {\frac{{U_{i}^{\text{wall}} (x) - \mu_{i}^{\text{bath}} }}{kT}} \right).$$
On a discretized grid with points \(x_{1} , \ldots ,x_{N} ,\) the unknowns \(\left\{ {f_{i} (x_{\alpha } )} \right\}\) (i.e., all the f's at all the grid points) can be found using a Picard iteration with line search; that is, letting \(\left\{ {f_{i}^{(l)} (x_{\alpha } )} \right\}\) denote the f's after iteration \(l\) and defining
$$f_{i}^{*} (x) = \chi_{i} (x)\exp \left( {\sum\limits_{k} {\int_{{x - a_{ik} }}^{{x + A_{ik} }} {f_{k}^{(l)} (y){\text{d}}y} } } \right),$$
the \(l + 1\) iteration of f's is given by
$$f_{i}^{(l + 1)} (x) = \alpha f_{i}^{*} (x) + \left( {1 - \alpha } \right)f_{i}^{(l)} (x)$$
for some appropriately chosen α (e.g., 0.1), using discretized versions of the integrals in Eq. (60). Once the \(\left\{ {f_{i} (x_{\alpha } )} \right\}\) are determined, rearranging Eq. (57) results in a linear system of equations for the unknowns \(\left\{ {\rho_{i} (x_{\alpha } )} \right\}\) that may be solved with a simple matrix inversion.
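A minimal skeleton of this damped update is sketched below; the map F, which must encode the discretized right-hand side of Eq. (60) for the particular species, grid, and wall potential at hand, is supplied by the caller, and the toy scalar example is included only to show the calling convention.

```python
import numpy as np

def picard_iterate(F, f0, alpha=0.1, tol=1e-10, max_iter=10000):
    """Damped Picard iteration for the fixed-point problem f = F(f), Eq. (61):
    f^(l+1) = alpha * F(f^(l)) + (1 - alpha) * f^(l).
    F maps the array of unknowns {f_i(x_alpha)} to the discretized right-hand
    side of Eq. (60); f0 is the initial guess (e.g., the bath values)."""
    f = np.asarray(f0, dtype=float)
    for _ in range(max_iter):
        f_new = alpha * F(f) + (1.0 - alpha) * f
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    raise RuntimeError("Picard iteration did not converge")

# Toy calling convention: the scalar fixed point of f = exp(-f) (~0.5671).
print(picard_iterate(lambda f: np.exp(-f), np.array([1.0]), alpha=0.5))
```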
Equation (52) is the exact solution without approximations. If, however, one wanted to make the approximation of using the bath formula with the local concentrations for the inhomogeneous case, then
$$\begin{aligned} (kT)^{ - 1} \mu_{i}^{\text{local}} (x) & = \ln \left( {a_{i} \rho_{i} (x)} \right) - \ln \left( {1 - \sum\nolimits_{j} {a_{j} \rho_{j} (x)} } \right) \\ & \quad + \,\frac{{a_{i} \sum\nolimits_{j} {\rho_{j} (x)} }}{{1 - \sum\nolimits_{j} {a_{j} \rho_{j} (x)} }} + (kT)^{ - 1} U_{i}^{\text{wall}} (x). \\ \end{aligned}$$
For the single-species fluid in equilibrium, this becomes
$$(kT)^{ - 1} \mu^{\text{bath}} = \ln \left( {a\rho (x)} \right) + \mu^{\text{HR,local,ex}} (x) + (kT)^{ - 1} U^{\text{wall}} (x).$$
$$(kT)^{ - 1} \mu^{\text{HR,local,ex}} (x) = - \ln \left( {1 - a\rho (x)} \right) + \frac{a\rho (x)}{1 - a\rho (x)}$$
is the excess chemical potential. Note the similarities between Eq. (64) with its local space-filled fraction \(a\rho (x)\) and the excess chemical potentials of the Bikerman and BMCSL theories in Eqs. (15) and (17), respectively.
Equation (63) can be solved in terms of a special function:
$$a\rho (x) = \frac{w(x)}{1 + w(x)}$$
$$w(x) = W\left( {\exp \left( {\frac{{\mu^{\text{bath}} - U^{\text{wall}} (x)}}{kT}} \right)} \right)$$
where W is the Lambert W-function (sometimes also called the "product log" function) that solves the equation \(x = W(x)\exp \left( {W(x)} \right)\) (Corless et al. 1996). Equation (65) is what is used for the black lines in Fig. 7.
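A short Python sketch of Eqs. (63) and (65) is given below. The attractive exponential wall potential and the numerical parameters are hypothetical choices used only to produce a nontrivial profile; scipy's lambertw supplies the Lambert W-function.

```python
import numpy as np
from scipy.special import lambertw

def hard_rod_local_profile(x, U_wall, a, rho_bath, kT=1.0):
    """Packing-fraction profile a*rho(x) of the local (bath-formula) hard-rod
    approximation, Eqs. (63) and (65): a*rho = w/(1+w) with
    w(x) = W(exp((mu_bath - U_wall(x))/kT)).  The bath chemical potential
    follows from evaluating Eq. (63) in the bath, where U_wall = 0."""
    phi_b = a * rho_bath                                    # bath packing fraction
    mu_bath = kT * (np.log(phi_b) - np.log(1.0 - phi_b) + phi_b / (1.0 - phi_b))
    w = lambertw(np.exp((mu_bath - U_wall(x)) / kT)).real
    return w / (1.0 + w)

# Hypothetical attractive exponential wall potential (in units of kT); the
# hard core itself is handled simply by evaluating only for x >= a/2.
a, rho_bath = 1.0, 0.3
U = lambda x: -1.5 * np.exp(-(x - a / 2.0) / 0.5)
x = np.linspace(a / 2.0, 5.0, 200)
phi = hard_rod_local_profile(x, U, a, rho_bath)
print(phi[0], phi[-1])    # enhanced near the wall, tends to 0.3 far away
```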
Details of the simulations are shown in Figs. 1, 2, and 3. The simulations of Lamperski and Kłos (2008) are of an ionic liquid (i.e., low dielectric constant ε, high temperature T, and high bath concentrations). The high bath concentrations make steric effects significant. The simulations of Lamperski and Outhwaite (2008) are of a room temperature electrolyte. The use of hydrated ion sizes makes the ions very large and steric effects significant.
Ai Y, Liu J, Zhang B, Qian S (2011) Ionic current rectification in a conical nanofluidic field effect transistor. Sens Actuators B 157:742–751
Barthel JMG, Krienke H, Kunz W (1998) Physical chemistry of electrolyte solutions: modern aspects. Springer, New York
Bazant MZ, Kilic MS, Storey BD, Ajdari A (2009) Towards an understanding of induced-charge electrokinetics at large applied voltages in concentrated solutions. Adv Colloid Interface Sci 152:48–88
Bazant MZ, Storey BD, Kornyshev AA (2011) Double layer in ionic liquids: overscreening versus crowding. Phys Rev Lett 106:046102
Bhuiyan LB, Outhwaite CW (2004) Comparison of the modified Poisson-Boltzmann theory with recent density functional theory and simulation results in the planar electric double layer. Phys Chem Chem Phys 6:3467–3473
Bikerman JJ (1942) Structure and capacity of electrical double layer. Philos Mag Ser 7(33):384–397
Boublik T (1970) Hard sphere equation of state. J Chem Phys 53:471–472
Branagan SP, Contento NM, Bohn PW (2012) Enhanced mass transport of electroactive species to annular nanoband electrodes embedded in nanocapillary array membranes. J Am Chem Soc 134:8617–8624
Bryk P, Roth R, Mecke KR, Dietrich S (2003) Hard-sphere fluids in contact with curved substrates. Phys Rev E 68:031602
Carnahan NF, Starling KE (1969) Equation of state for nonattracting rigid spheres. J Chem Phys 51:635–636
Contento NM, Branagan SP, Bohn PW (2011) Electrolysis in nanochannels for in situ reagent generation in confined geometries. Lab Chip 11:3634–3641
Corless RM, Gonnet GH, Hare DEG, Jeffrey DJ, Knuth DE (1996) On the Lambert W function. Adv Comput Math 5:329–359
Davis HT (1996) Statistical mechanics of phases, interfaces, and thin films. Wiley-VCH, New York
Diehl A, Tamashiro MN, Barbosa MC, Levin Y (1999) Density-functional theory for attraction between like-charged plates. Phys A 274:433–445
Duan C, Majumdar A (2010) Anomalous ion transport in 2-nm hydrophilic nanochannels. Nat Nanotechnol 5:848–852
Dzubiella J, Swanson JMJ, McCammon JA (2006) Coupling hydrophobicity, dispersion, and electrostatics in continuum solvent models. Phys Rev Lett 96:087802
Evans R (1979) Nature of the liquid-vapor interface and other topics in the statistical mechanics of non-uniform, classical fluids. Adv Phys 28:143–200
Evans R (1992) Density functionals in the theory of nonuniform fluids. In: Henderson D (ed) Fundamentals of inhomogeneous fluids. Marcel Dekker, New York, pp 85–176
Fedorov MV, Kornyshev AA (2008) Ionic liquid near a charged wall: structure and capacitance of electrical double layer. J Phys Chem B 112:11868–11872
Frydel D, Levin Y (2012) A close look into the excluded volume effects within a double layer. J Chem Phys 137:164703
Gillespie D (2008) Energetics of divalent selectivity in a calcium channel: the ryanodine receptor case study. Biophys J 94:1169–1184
Gillespie D (2010) Analytic theory for dilute colloids in a charged slit. J Phys Chem B 114:4302–4309
Gillespie D (2012) High energy conversion efficiency in nanofluidic channels. Nano Lett 12:1410–1416
Gillespie D, Fill M (2008) Intracellular calcium release channels mediate their own countercurrent: the ryanodine receptor case study. Biophys J 95:3706–3714
Gillespie D, Nonner W, Eisenberg RS (2002) Coupling Poisson–Nernst–Planck and density functional theory to calculate ion flux. J Phys Condens Matter 14:12129–12145
Gillespie D, Nonner W, Eisenberg RS (2003) Density functional theory of charged, hard-sphere fluids. Phys Rev E 68:031503
Gillespie D, Valiskó M, Boda D (2005a) Density functional theory of the electrical double layer: the RFD functional. J Phys Condens Matter 17:6609–6626
Gillespie D, Xu L, Wang Y, Meissner G (2005b) (De)constructing the ryanodine receptor: modeling ion permeation and selectivity of the calcium release channel. J Phys Chem B 109:15598–15610
Gillespie D, Giri J, Fill M (2009) Reinterpreting the anomalous mole fraction effect: the ryanodine receptor case study. Biophys J 97:2212–2221
Gillespie D, Khair AS, Bardhan JP, Pennathur S (2011) Efficiently accounting for ion correlations in electrokinetic nanofluidic devices using density functional theory. J Colloid Interface Sci 359:520–529
Gillespie D, Chen H, Fill M (2012) Is ryanodine receptor a calcium or magnesium channel? Roles of K+ and Mg2+ during Ca2+ release. Cell Calcium 51:427–433
Groot RD, Faber NM, van der Eerden JP (1987) Hard sphere fluids near a hard wall and a hard cylinder. Mol Phys 62:861–874
Hansen J-P, McDonald IR (2006) Theory of simple liquids, 3rd edn. Academic Press, New York
Hansen-Goos H, Roth R (2006a) Density functional theory for hard-sphere mixtures: the White Bear version mark II. J Phys Condens Matter 18:8413–8425
Hansen-Goos H, Roth R (2006b) A new generalization of the Carnahan–Starling equation of state to additive mixtures of hard spheres. J Chem Phys 124:154506–154508
Henderson D, Blum L, Lebowitz JL (1979) An exact formula for the contact value of the density profile of a system of charged hard spheres near a charged wall. J Electroanal Chem 102:315–319
Hlushak SP, McCabe C, Cummings PT (2012) Fourier space approach to the classical density functional theory for multi-Yukawa and square-well fluids. J Chem Phys 137:104104
Hlushkou D, Perdue RK, Dhopeshwarkar R, Crooks RM, Tallarek U (2009) Electric field gradient focusing in microchannels with embedded bipolar electrode. Lab Chip 9:1903–1913
Hoffmann J, Gillespie D (2013) Ion correlations in nanofluidic channels: effects of ion size, valence, and concentration on voltage- and pressure-driven currents. Langmuir 29:1303–1317
Huang XT, Gupta C, Pennathur S (2010) A novel fabrication method for centimeter-long surface-micromachined nanochannels. J Micromech Microeng 20:015040
Jiang Z, Stein D (2011) Charge regulation in nanopore ionic field-effect transistors. Phys Rev E 83:031203
Jin X, Aluru NR (2011) Gated transport in nanofluidic devices. Microfluid Nanofluid 11:297–306
Johnson M, Nordholm S (1981) Generalized van der Waals theory. VI. Application to adsorption. J Chem Phys 75:1953–1957
Kierlik E, Rosinberg ML (1990) The role of packing effects at the liquid–solid interface: a model for a surface phase transition. J Phys Condens Matter 2:3081
Kierlik E, Rosinberg ML (1991) Density-functional theory for inhomogeneous fluids: adsorption of binary mixtures. Phys Rev A 44:5025–5037
Knepley MG, Karpeev D, Davidovits S, Eisenberg RS, Gillespie D (2010) An efficient algorithm for classical density functional theory in three dimensions: ionic solutions. J Chem Phys 132:124101
Lamperski S, Kłos J (2008) Grand canonical Monte Carlo investigations of electrical double layer in molten salts. J Chem Phys 129:164503
Lamperski S, Outhwaite CW (2008) Monte-Carlo simulation of mixed electrolytes next to a plane charged surface. J Colloid Interface Sci 328:458–462
Lamperski S, Zydor A (2007) Monte Carlo study of the electrode|solvent primitive model electrolyte interface. Electrochim Acta 52:2429–2436
Lenzi A, Viola F, Bonotto F, Frey J, Napoli M, Pennathur S (2011) Method to determine the effective ζ potential in a microchannel with an embedded gate electrode. Electrophoresis 32:3295–3304
Levin Y (2002) Electrostatic correlations: from plasma to biology. Rep Prog Phys 65:1577
Maleki T, Mohammadi S, Ziaie B (2009) A nanofluidic channel with embedded transverse nanoelectrodes. Nanotechnology 20:105302
Mansoori GA, Carnahan NF, Starling KE, Leland TW (1971) Equilibrium thermodynamic properties of the mixture of hard spheres. J Chem Phys 54:1523–1525
Miedema H, Meter-Arkema A, Wierenga J, Tang J, Eisenberg B, Nonner W, Hektor H, Gillespie D, Meijberg W (2004) Permeation properties of an engineered bacterial OmpF porin containing the EEEE-Locus of Ca2+ channels. Biophys J 87:3137–3147
Miedema H, Vrouenraets M, Wierenga J, Gillespie D, Eisenberg B, Meijberg W, Nonner W (2006) Ca2+ selectivity of a chemically modified OmpF with reduced pore volume. Biophys J 91:4392–4400
Mier-y-Teran L, Suh SH, White HS, Davis HT (1990) A nonlocal free-energy density-functional approximation for the electrical double layer. J Chem Phys 92:5087–5098
Nam S-W, Rooks MJ, Kim K-B, Rossnagel SM (2009) Ionic field effect transistors with sub-10 nm multiple nanopores. Nano Lett 9:2044–2048
Nam S-W, Lee M-H, Lee S-H, Lee D-J, Rossnagel SM, Kim K-B (2010) Sub-10-nm nanochannels by self-sealing and self-limiting atomic layer deposition. Nano Lett 10:3324–3329
Ni H, Anderson CF, Record MT (1999) Quantifying the thermodynamic consequences of cation (M2+, M+) accumulation and anion (X−) exclusion in mixed salt solutions of polyanionic DNA using Monte Carlo and Poisson–Boltzmann calculations of ion–polyion preferential interaction coefficients. J Phys Chem B 103:3489–3504
Nilson RH, Griffiths SK (2006) Influence of atomistic physics on electro-osmotic flow: an analysis based on density functional theory. J Chem Phys 125:164510
Nonner W, Catacuzzeno L, Eisenberg B (2000) Binding and selectivity in L-type calcium channels: a mean spherical approximation. Biophys J 79:1976–1992
Nonner W, Gillespie D, Henderson D, Eisenberg B (2001) Ion accumulation in a biological calcium channel: effects of solvent and confining pressure. J Phys Chem B 105:6427–6436
Noworyta JP, Henderson D, Sokołowski S, Chan K-Y (1998) Hard sphere mixtures near a hard wall. Mol Phys 95:415–424
Ohshima H (2010) Biophysical chemistry of biointerfaces. Wiley, Hoboken, NJ
Outhwaite CW, Bhuiyan LB (1983) An improved modified Poisson–Boltzmann equation in electric-double-layer theory. J Chem Soc Faraday Trans 2(79):707–718
Paik K-H, Liu Y, Tabard-Cossa V, Waugh MJ, Huber DE, Provine J, Howe RT, Dutton RW, Davis RW (2012) Control of DNA capture by nanofluidic transistors. ACS Nano 6:6767–6775
Patra CN (1999) Structure of electric double layers: a simple weighted density functional approach. J Chem Phys 111:9832–9838
Patra CN, Ghosh SK (1994) A nonlocal density functional theory of electric double layer: symmetric electrolytes. J Chem Phys 100:5219–5229
Patra CN, Ghosh SK (2002) Structure of electric double layers: a self-consistent weighted-density-functional approach. J Chem Phys 117:8938–8943
Patra CN, Yethiraj A (1999) Density functional theory for the distribution of small ions around polyions. J Phys Chem B 103:6080–6087
Piruska A, Branagan SP, Minnis AB, Wang Z, Cropek DM, Sweedler JV, Bohn PW (2010) Electrokinetic control of fluid transport in gold-coated nanocapillary array membranes in hybrid nanofluidic–microfluidic devices. Lab Chip 10:1237–1244
Quesada-Pérez M, Martín-Molina A, Hidalgo-Álvarez R (2005) Simulation of electric double layers undergoing charge inversion: mixtures of mono- and multivalent ions. Langmuir 21:9231–9237
Ramirez R, Borgis D (2005) Density functional theory of solvation and its relation to implicit solvent models. J Phys Chem B 109:6754–6763
Rosenfeld Y (1989) Free-energy model for the inhomogeneous hard-sphere fluid mixture and density-functional theory of freezing. Phys Rev Lett 63:980–983
Rosenfeld Y (1993) Free energy model for inhomogeneous fluid mixtures: Yukawa-charged hard spheres, general interactions, and plasmas. J Chem Phys 98:8126–8148
Rosenfeld Y, Schmidt M, Löwen H, Tarazona P (1997) Fundamental-measure free-energy density functional for hard spheres: dimensional crossover and freezing. Phys Rev E 55:4245–4263
Roth R (2010) Fundamental measure theory for hard-sphere mixtures: a review. J Phys Condens Matter 22:063102
Roth R, Dietrich S (2000) Binary hard-sphere fluids near a hard wall. Phys Rev E 62:6926–6936
Roth R, Evans R, Lang A, Kahl G (2002) Fundamental measure theory for hard-sphere mixtures revisited: the White Bear version. J Phys Condens Matter 14:12063–12078
Sears FW, Zemansky MW, Young HD (1987) University physics, 7th edn. Addison-Wesley, Reading, MA
Sharp KA, Honig B (1990) Calculating total electrostatic energies with the nonlinear Poisson–Boltzmann equation. J Phys Chem 94:7684–7692
Sokolowski S, Fischer J (1989) Density functional theory for inhomogeneous fluids. Mol Phys 68:647–657
Tang Z, Scriven LE, Davis HT (1992) A three-component model of the electrical double layer. J Chem Phys 97:494–503
Tarazona P (1984) A density functional theory of melting. Mol Phys 52:81–96
Valiskó M, Boda D, Gillespie D (2007) Selective adsorption of ions with different diameter and valence at highly-charged interfaces. J Phys Chem C 111:15575–15585
van der Wouden EJ, Bomer J, Pennathur S, Eijkel JCT, van den Berg A (2008) Fabrication of a nanofluidic field effect transistor for controlled cavitation in nanochannels. XXII ICTAM, Adelaide, Australia
Wakai C, Oleinikova A, Ott M, Weingärtner H (2005) How polar are ionic liquids? Determination of the static dielectric constant of an imidazolium-based ionic liquid by microwave dielectric spectroscopy. J Phys Chem B 109:17028–17030
Yu Y-X, Wu J (2002) Structures of hard-sphere fluids from a modified fundamental-measure theory. J Chem Phys 117:10156–10164
1. Department of Molecular Biophysics and Physiology, Rush University Medical Center, Chicago, USA
Gillespie, D. Microfluid Nanofluid (2015) 18: 717. https://doi.org/10.1007/s10404-014-1489-5
Accepted 23 September 2014
First Online 08 October 2014
Publisher Name Springer Berlin Heidelberg | CommonCrawl |
How time-inconsistent preferences influence venture capital exit decisions? A new perspective for grandstanding
Yanzhao Li (ORCID: orcid.org/0000-0002-8466-970X)1,
Ju-e Guo1,
Shaolong Sun (ORCID: orcid.org/0000-0002-3196-1459)1 &
Yongwu Li (ORCID: orcid.org/0000-0001-8962-9697)2
Considering that the assumption of time consistency does not adequately reveal the mechanisms of exit decisions of venture capital (VC), this study proposes two kinds of time-inconsistent preferences (i.e., time-flow inconsistency and time-point inconsistency) to advance research in this field. Time-flow inconsistency is in line with the previous time inconsistency literature, while time-point inconsistency is rooted in the VC fund's finite lifespan. Based on the assumption about the strategies guiding future behaviors, we consider four types of venture capitalists: time-consistent, time-point-inconsistent, naïve, and sophisticated venture capitalists, of which the latter three are time-inconsistent. We derive and compare the exit thresholds of these four types of venture capitalists. The main results include: (1) time-inconsistent preferences accelerate the exits of venture capitalists; (2) the closer the VC funds expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits; and (3) future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Our study provides a behavioral explanation for the empirical fact of young VCs' grandstanding.
Venture capital (VC) provides essential capital for the development of start-ups (Cumming 2012; Tavares-Gärtner et al. 2018; Ferreira and Pereira 2021). Additionally, due to its unique mode of operation, VC plays a role in coping with risks, facilitating venture success, and nurturing high-tech industries worldwide, especially in transition economies such as China (Guo and Jiang 2013). According to KPMG's quarterly report on VC trends, "Venture Pulse", Asian and global VC transactions both increased by more than 40% in 2018, reaching records of 93.5 billion and 254.7 billion US dollars, respectively. Among them, Chinese VC transaction volume reached a record 70.5 billion US dollars, an increase of 52.9% over the 46.1 billion US dollars recorded in 2017 (KPMG 2019).
The exit, divestment from the VC's portfolio, is crucial since it achieves the sale of shares and thus determines the VC fund's final payoffs. Consequently, exit payoffs are an important signal of VC funds' quality (Cumming 2010). Particularly, for young and less-prestigious VC funds, exit payoffs may be the only quality signal and directly affect subsequent fundraising (Cumming 2010; Gompers 1996). Therefore, young VCs are more likely to grandstand by pushing firms to go public or sell privately held firms earlier than older VCs (Gompers 1996; Lee and Wahal 2004; Amor and Kooli 2020). Considering that young VCs are more prone to time-inconsistent behavior than older VCs,Footnote 1 this study, for the first time, aims to provide a behavioral explanation for young VC grandstanding from the perspective of decision-makers' time preferences. Specifically, we explore the optimal exit decisions of venture capitalists under time-inconsistent preferences.
Previous studies have investigated various factors influencing VC exit decisions, such as legal institutions and agency problems. For example, Cumming et al. (2006) provide the first cross-country empirical insight into the relationship between legality and VC exits based on a sample of 12 Asia–Pacific countries and regions. Cumming (2008) documents that strong VC control increases the likelihood that start-ups would exit through trade sales rather than through initial public offerings (IPOs). Anderson et al. (2017) report the effects of political ties of VC funds on VC exits and find that political ties facilitate VCs' successful exits via Chinese stock and mergers and acquisitions (M&A) markets. However, these studies rely on the assumption that individuals are perfectly rational. Recently, pioneering research in behavioral finance has discovered many decision biases, thus effectively explaining the anomaly and puzzle phenomenon in reality (Tian 2016). Nevertheless, only a few studies have attempted to incorporate behavioral finance theory into the VC exit decision research. Notably, Bock and Schmidt (2015) first examine the determinants of VC exit behavior after the lockup expiry in IPOs by considering insights from prospect theory. Nevertheless, the intertemporal choice of VC exit decisions and the resulting time-inconsistent preferences have been neglected in previous studies.Footnote 2
Many experimental studies on time preferences suggest that time-inconsistent preferences are more realistic than time-consistent preferences, and that they seriously distort the behavior of decision-makers (Strotz 1955; Thaler 1981; Loewenstein and Prelec 1992). Concretely, time-inconsistent preferences assume that decision-makers' discount rates for payoffs decrease over time. Therefore, decision-makers prefer current payoffs over future payoffs (Laibson 1997; O'Donoghue and Rabin 1999; Grenadier and Wang 2007). By relaxing the assumption of constant discount rates, time-inconsistent preferences provide a new theoretical perspective for accurately describing decision-makers' behavioral choices. As a result, time-inconsistent preferences are widely used in many fields such as investment (Grenadier and Wang 2007; Tian 2016; Luo et al. 2020), consumption (Liu et al. 2020), insurance (Chen et al. 2016) and contract design (Li et al. 2016; Wang et al. 2020).
The time-inconsistent preferences mentioned earlier are caused by individual time preferences, which essentially depend only on the individual's time sensitivity to flow payoffs. Beyond that, the finite lifespan of VC funds, determined by VC's particular organizational structure,Footnote 3 is also a source of time inconsistency among venture capitalists. The finite lifespan of VC funds forces venture capitalists to sell all projects before maturity. Although VC funds always have extension periods to facilitate exits, fund investors (limited partners) observe the delayed exit behavior of venture capitalists and then evaluate the quality of VC funds accordingly (Gompers 1996; Cumming 2010; Amor and Kooli 2020). This pressure gives venture capitalists a lower utility perception of payoffs received after the expiry date. Relevant evidence can be found in previous research on the finite lifespan of VC funds. For example, Kandel et al. (2011) prove that the termination of all unfinished projects at the fund's maturity leads to suboptimal decisions during later stages of investment; they sum up this phenomenon as venture capitalists' myopia induced by the finite lifespan of VC funds. Additionally, Arcot et al. (2015) investigate whether secondary buyouts are value-maximizing or reflect opportunistic behavior, and demonstrate that VC funds under expiration pressure engage more in secondary buyouts. Therefore, it is reasonable to believe that the discount rate of venture capitalists drops rapidly after the VC fund expires, which is also in line with time-inconsistent preferences. Consistent with this, Guo et al. (2018) present a similar argument: for the long-term transit investment problem, the utility to city mayors of transit projects completed during the term differs significantly from that of projects completed outside it.
To distinguish the two kinds of time inconsistencies mentioned, we propose time-flow and time-point inconsistencies to promote understanding of VC exit decisions. As shown in Fig. 1, the exit decision of VC starts with an exit opportunity and ends with the exit exercise. Thus, if venture capitalists exercise the exit option before the expiration, the perceived exit payoffs are affected by time-flow inconsistency. Additionally, if venture capitalists exit during the extension period, both time-flow and time-point inconsistencies influence the perceived exit payoffs. Based on the assumption about the strategies guiding the future behaviors (Grenadier and Wang 2007; Tian 2016), we consider four types of venture capitalists: time-consistent, time-point-inconsistent, naïve, and sophisticated venture capitalists, of which the latter three are time-inconsistent. All time-inconsistent venture capitalists are aware of time-point inconsistency, but naive venture capitalists misunderstand time-flow inconsistency and assume that the future selves caused by time-flow inconsistency act according to preferences of the current self. In contrast, sophisticated venture capitalists know that future selves choose strategies that are optimal for themselves.
The exit decision of VC and its embedding in the duration of VC fund
This study first presents the model setup of venture capitalists' time-inconsistent preferences and develops an optimal VC exit decisionFootnote 4 through trade salesFootnote 5 based on the fact that the VC and the acquiring firm share synergies brought by the M&A. Then, we derive the optimal exit thresholds of the four types of venture capitalists using the well-established real options approach,Footnote 6 considering the uncertainty and option nature of VC exits. Finally, a comparative static analysis and the corresponding model implications are presented. The main results are summarized as follows: (1) time-inconsistent preferences accelerate the exit of venture capitalists, verifying the grandstanding of young VCs; (2) the closer the VC funds expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits; and (3) future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Under the action of these mechanisms, we can observe that (1) all time-inconsistent venture capitalists exit earlier than the time-consistent ones; (2) generally, sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists; and (3) when the degree of time-point inconsistency is much greater than that of time-flow inconsistency, and the exit opportunity is close to the VC fund's expiry date, naive venture capitalists exit later than time-point-inconsistent venture capitalists, and sophisticated venture capitalists exit last among the three defined time-inconsistent venture capitalists.
Our study contributes to the literature in the following ways. First, to the best of our knowledge, this is the first study to consider VC exit decisions under time-inconsistent preferences. Given the vital role that intertemporal choice plays in VC decision-making, our study broadens the theoretical understanding of VC exit decisions. Second, since Gompers (1996) proposed the grandstanding hypothesis of young VCs, researchers have constantly tried various theories and perspectives such as signal games (Grenadier and Malenko 2011), demand side (Butler and Goktan 2013), and multiple agents (Sethuram et al. 2021), to explain this hypothesis. In contrast to prior studies, we are the first to provide a behavioral explanation from the perspective of time preferences. Third, we extend the decision-making framework of time-inconsistent agents established by Grenadier and Wang (2007). Specifically, we propose time-flow and time-point inconsistencies to model individuals' time preferences and the effect of the finite lifespan of VC funds, respectively. This modeling framework is more realistic for VC exit decisions (Kandel et al. 2011; Ferreira and Pereira 2021) and applicable to other intertemporal choice issues with time restrictions.
The remainder of this paper is organized as follows. "Section Model setup" provides the model setup, including the venture capitalist's time-inconsistent preferences and the exit decision via trade sales. "Section Time-consistent and time-inconsistent venture capitalists" derives the optimal exit timing for time-consistent and time-inconsistent venture capitalists. "Section Model implications" presents the comparative static analysis and model implications. Finally, "Section Conclusions" concludes the paper.
Model setup
Venture capitalists' time-inconsistent preferences
Following Chen et al. (2016), Harris and Laibson (2013), and Grenadier and Wang (2007), we describe the time-inconsistent preferences of venture capitalists using a continuous-time quasi-hyperbolic discount function. The venture capitalist is described as a finite number of selves with random lifespans. Each self represents the venture capitalist at the current stage, exercising the decision while taking into account the utility of future selves' exercise decisions. As shown in Fig. 2, we call \(t_{0}\) the starting time, which signals the birth of self 0 and means that an exit option emerges for the venture capitalist to sell its share of the invested start-up. Let \(T_{L}\) denote the duration from \(t_{0}\) to the VC fund's expiry date. We assume that \(T_{L}\) is exponentially distributed with parameter \(\lambda_{L}\). Let \(t_{n}\) be the birth time of self \(n\) and the death time of self \(n - 1\) (\(n = 1,2,3,...,L - 1,L\)). The lifespan \(T_{n} = t_{n + 1} - t_{n}\) (excluding \(T_{L}\)) of self \(n\) is assumed to be exponentially distributed with the parameter \(\lambda_{f}\). If instead the VC fund's expiry date arrives before the next self generated by time-flow inconsistency, the next self is changed to self \(L\), generated by time-point inconsistency. The duration of self \(L - 1\) is assumed to be exponentially distributed with parameter \(\lambda_{p}\). We note that \(1/\lambda\) represents the expected inter-arrival time of selves because \(\lambda\) is the arrival intensity of the Poisson process; thus, we have \(E\left[ {(L - 1)/\lambda_{f} + 1/\lambda_{p} } \right] = E\left[ {1/\lambda_{L} } \right]\).
Two kinds of time inconsistencies and the resulting multiple selves
We use \(D_{n} \left( {t,s} \right)\) to denote the inter-temporal discount function of self \(n\), giving self \(n\)'s discounted value at time \(t\) of one dollar received at the future time \(s\). For payoffs obtained within the duration of self \(n\), self \(n\) uses a standard discount function \(e^{{ - \rho \left( {s - t} \right)}}\), with a constant discount rate \(\rho > 0\). For payoffs obtained after the death of the current self, self \(n\) uses the discount function \(\delta e^{{ - \rho \left( {s - t} \right)}}\), that is, the standard discount function multiplied by a reduction factor \(\delta\). After the arrival of self \(n + 1\), the venture capitalist uses the updated discount function \(D_{n + 1} \left( {t,s} \right)\) for evaluation. To distinguish the impact of time-flow and time-point inconsistencies, we define different reduction factors: \(\delta_{f}\) for self \(n\) (\(n = 0\sim L - 2\)) and \(\delta_{p}\) for self \(L - 1\), where \(\delta_{f} > \delta_{p}\) because time-point inconsistency reduces the value of later payoffs more.
Then the inter-temporal discount function is given by
$$D_{n} \left( {t,s} \right) = \left\{ {\begin{array}{*{20}l} {e^{{ - \rho \left( {s - t} \right)}} ,} \hfill & \quad {{\text{if}}\;s \in \left[ {t_{n} ,t_{n + 1} } \right),} \hfill \\ {\delta e^{{ - \rho \left( {s - t} \right)}} ,} \hfill & \quad {{\text{if}}\;s \in \left[ {t_{n + 1} ,\infty } \right),} \hfill \\ \end{array} } \right.$$
for \(s > t\) and \(t \in \left[ {t_{n} ,t_{n + 1} } \right)\).
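As a simple numerical illustration of Eq. (1), the sketch below evaluates the current self's discount function for payoffs arriving before and after the birth of the next self. The discount rate and the reduction factor (which stands for \(\delta_{f}\) or \(\delta_{p}\), depending on which self arrives next) are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def discount(t, s, t_next, rho=0.05, delta=0.8):
    """Quasi-hyperbolic discount function of the current self, Eq. (1):
    ordinary exponential discounting e^{-rho(s-t)} for payoffs arriving before
    the next self is born at t_next, and the same factor multiplied by the
    reduction factor delta for payoffs arriving at or after t_next."""
    base = np.exp(-rho * (s - t))
    return np.where(s < t_next, base, delta * base)

# One dollar received 1, 3, or 10 years from now, with the next self born in 2 years:
print(discount(0.0, np.array([1.0, 3.0, 10.0]), t_next=2.0))
```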
The exit decision via trade sale
Consider that the venture capitalist has an opportunity to exit the invested start-up by trade sales and the invested start-up has uncertain prospects for development. Let \(P_{t}^{T}\) denote the start-up profit at time \(t\). We suppose that the profit is given by a geometric Brownian motion:
$$dP_{t}^{T} = \alpha P_{t}^{T} dt + \sigma P_{t}^{T} dB_{t} , \, t \ge 0,$$
where \(dB_{t}\) is the increment of a standard Wiener process, \(\alpha\) is the expected growth rate of the profit, and \(\sigma\) is the profit volatility.
Following Thijssen (2008), we assume that the profit \(P_{t}^{T}\) consists of a deterministic part, denoted by \(Q^{T}\), and a stochastic component, denoted by \(X_{t}\). The stochastic shock is assumed to be multiplicative, that is, \(P_{t}^{T} = Q^{T} X_{t}\). Similarly, for the acquiring and merged firms, there are \(P_{t}^{A} = Q^{A} X_{t}\) and \(P_{t}^{M} = Q^{M} X_{t}\). The deterministic component is regarded as a result of competition in the product market. The stochastic component indicates uncertainty. Therefore, the stochastic component follows a geometric Brownian motion:
$$dX_{t} = \alpha X_{t} dt + \sigma X_{t} dB_{t} , \, t \ge 0, \, X_{0} = x.$$
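For intuition about the uncertainty driving the exit decision, the following sketch simulates the stochastic component \(X_{t}\) of Eq. (3) with the exact log-normal updating scheme of a geometric Brownian motion; the drift, volatility, and horizon are illustrative values, not calibrated ones.

```python
import numpy as np

def simulate_gbm(x0, alpha, sigma, T, n_steps, n_paths, seed=0):
    """Simulate paths of the shock X_t in Eq. (3) with the exact scheme
    X_{t+dt} = X_t * exp((alpha - sigma^2/2)*dt + sigma*sqrt(dt)*Z)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (alpha - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return x0 * np.exp(np.cumsum(log_increments, axis=1))

# Illustrative values: the mean terminal level should be close to exp(alpha*T).
paths = simulate_gbm(x0=1.0, alpha=0.02, sigma=0.25, T=5.0, n_steps=60, n_paths=20000)
print(paths[:, -1].mean(), np.exp(0.02 * 5.0))
```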
The discount rate of the acquiring firm is assumed to be the risk-free rate in this M&A. In addition, the payment of dividends, provided by the invested firm, is far less than the payoff from selling the shares held in the invested firm for the VC. Therefore, the exit can be regarded as the only way to obtain lump-sum payoffs. Thus, even though the profit of the invested firm is given in flows over time, the venture capitalist's time preference does not affect the expected present value of the profit flow generated by the invested firm.
Gao et al. (2013) highlight that the acquiring firm could expand its business more efficiently by achieving economies of scale and scope. In this study, this positive effect is characterized as a synergy, that is, the deterministic profit generated by the M&A is larger than the sum of the deterministic profits of the constituent firms, which is \(Q^{M} > Q^{A} + Q^{T}\).
The value of the acquiring firm before the M&A is as follows:
$$V^{A} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {Q^{A} X_{t} } \right)dt = \frac{{Q^{A} x}}{\rho - \alpha }} .$$
The value of the merged firm after the M&A is as follows:
$$V^{M} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {Q^{M} X_{t} } \right)dt = \frac{{Q^{M} x}}{\rho - \alpha } > } \frac{{Q^{A} x}}{\rho - \alpha } = V^{A} \left( x \right).$$
The value of the start-up's shares held by the VC before the M&A is as follows:
$$V_{VC}^{T} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {\phi Q^{T} X_{t} } \right)} dt = \phi \frac{{Q^{T} x}}{\rho - \alpha },$$
where \(\phi\) is the VC's share in the start-up.
We assume that the VC uses the participating convertible preferred (PCP) stock to invest,Footnote 7 which brings the highest payoffs to the VC in the M&A (Arcot 2014). The exit payoff obtained by the VC is \(P_{VC} \left( x \right)\) as follows:
$$P_{VC} \left( x \right) = d + \phi \left[ {P\left( x \right) - d} \right],$$
where \(P\left( x \right)\) is the value of cash or cash equivalents paid by the acquiring company to purchase the entire equity of the start-up and \(d\) is the preferential fixed claim in the M&A. Nonparticipating convertible preferred stock or common stock corresponds to the special case \(d = 0\) and is therefore covered.
The VC and the acquiring firm negotiate the merger price to obtain a Pareto effective synergistic value distribution. The merger price is determined using the Nash bargaining game equilibrium. We suppose that the negotiation ability of the venture capitalist is \(\beta_{VC}\), and that of the acquiring firm is \(\beta_{A} = 1 - \beta_{VC}\). Following Alvarez and Stenbacka (2006), the merger price is the solution to the optimization problem below:
$${\text{sup}}_{{p^{*} }} \left[ {P_{VC} \left( x \right) - V_{VC}^{T} \left( x \right)} \right]^{{\beta_{VC} }} \left[ {V^{M} \left( x \right) - P\left( x \right) - V^{A} \left( x \right)} \right]^{{\beta_{A} }} ,$$
where \(P_{VC} \left( x \right) - V_{VC}^{T} \left( x \right)\) and \(V^{M} \left( x \right) - P\left( x \right) - V^{A} \left( x \right)\) are the value-added payoffs obtained by the VC and the acquiring firm through the M&A, respectively. Therefore, we find that the Nash bargaining solution is given by
$$P^{*} \left( x \right) = \frac{{Q^{T} x}}{\rho - \alpha } - \left( {1 - \beta_{VC} } \right)\frac{1 - \phi }{\phi }d + \beta_{VC} \left[ {\frac{{\left( {Q^{M} - Q^{A} - Q^{T} } \right)x}}{\rho - \alpha }} \right].$$
The exit payoff obtained by the VC is as follows:
$$\begin{aligned} P_{VC}^{*} \left( x \right) & = \frac{{\phi Q^{T} x}}{\rho - \alpha } + \beta_{VC} \left( {1 - \phi } \right)d + \phi \beta_{VC} \left[ {\frac{{\left( {Q^{M} - Q^{A} - Q^{T} } \right)x}}{\rho - \alpha }} \right] \\ & = V_{VC}^{T} \left( x \right) + \beta_{VC} \left( {1 - \phi } \right)d + \phi \beta_{VC} \Delta V\left( x \right). \\ \end{aligned}$$
The above formula shows that the VC payoffs in trade sales consist of three parts. The first part \(V_{VC}^{T} \left( x \right)\) is the value of the shares held by the VC when the invested start-up maintains an independent operation, and the second part \(\beta_{VC} \left( {1 - \phi } \right)d\) is the profit from the priority settlement in the M&A. The last part \(\phi \beta_{VC} \Delta V\left( x \right)\) is the synergistic benefits shared by the VC.
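A direct translation of Eq. (10) into code makes this three-part decomposition explicit; the parameter values in the example are illustrative only.

```python
def vc_exit_payoff(x, rho, alpha, phi, beta_vc, d, QT, QA, QM):
    """VC payoff from a trade sale at shock level x, Eq. (10): the value of the
    stake under independent operation, the preferential fixed claim from the
    priority settlement, and the VC's share of the synergy."""
    stake = phi * QT * x / (rho - alpha)                        # V_VC^T(x)
    preference = beta_vc * (1.0 - phi) * d
    synergy = phi * beta_vc * (QM - QA - QT) * x / (rho - alpha)
    return stake + preference + synergy

# Illustrative (uncalibrated) parameter values:
print(vc_exit_payoff(x=1.0, rho=0.08, alpha=0.02, phi=0.3, beta_vc=0.5,
                     d=1.0, QT=5.0, QA=8.0, QM=16.0))
```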
The VC's payoff from a trade sale is driven by a stochastic process affected by market uncertainty. Therefore, it is necessary to choose the optimal exit threshold to maximize the VC payoffs (Li et al. 2017). We set \(C\) as the cost of exit for the VC. The objective is to maximize the expected value of the VC exit payoffs:
$$\mathop {\sup }\limits_{\tau \ge t} E_{t} \left[ {D_{n} \left( {t,\tau } \right)\left( {P_{VC}^{*} \left( {X_{\tau } } \right) - C} \right)} \right],$$
where \(E_{t} \left[ \cdot \right]\) denotes the expectation operator.
Time-consistent and time-inconsistent venture capitalists
This section discusses the exit decisions of time-consistent and the three types of time-inconsistent venture capitalists. The time-consistent case is the benchmark model. The three defined time-inconsistent cases allow us to explore the net effect of time-point inconsistency and the complex superposition effect of two kinds of time-inconsistent preferences. This modeling setup can provide a systematic insight into the comprehensive impact of time-inconsistent preferences on VC exit decisions.
According to the model setup of venture capitalists' time-inconsistent preferences in "Section Venture capitalists' time-inconsistent preferences", we present the three defined time-inconsistent venture capitalists' decision selves and the interarrival times in Fig. 3. Obviously, \(\lambda_{L} < \lambda_{pN} < \lambda_{pS}\). To compare the exit thresholds of self 0 from \(t_{0}\), we follow the standard backward derivation procedure, starting with self \(L\) and then moving back to self 0. To ensure the same duration left before the expiry date, we have \(E\left[ {1/\lambda_{L} } \right] = E\left[ {1/\lambda_{f} + 1/\lambda_{pN} } \right] = E\left[ {(L - 1)/\lambda_{f} + 1/\lambda_{pS} } \right]\).
The exit decision comparison of the three defined time-inconsistent venture capitalists
Time-consistent venture capitalists
Let \(F\left( x \right)\) denote the time-consistent venture capitalist's exit opportunity value function and \(x^{*}\) be the optimal exit threshold. Then, according to the continuous-time Bellman equation \(\rho Fdt = E\left( {dF} \right)\), \(F\left( x \right)\) satisfies the ordinary differential equation below (see Dixit and Pindyck (1994) for details):
$$\frac{1}{2}\sigma^{2} x^{2} F^{\prime\prime}\left( x \right) + \alpha xF^{\prime}\left( x \right) - \rho F\left( x \right) = 0.$$
Equation (12) is solved using the following value-matching and smooth-pasting conditions:
$$F\left( {x^{*} } \right) = P_{VC}^{*} \left( {x^{*} } \right) - C,\quad F^{{\prime }} \left( {x^{*} } \right) = \left[ {P_{VC}^{*} \left( {x^{*} } \right) - C} \right]^{{\prime }} .$$
The condition implied by the stochastic process is \(F\left( 0 \right) = 0\), and we note that the general solution is \(F\left( x \right) = Ax^{{\beta_{1} }}\), where \(\beta_{1} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{2\rho }{{\sigma^{2} }}} > 1\).
The exit threshold \(x^{*}\) is given by
$$x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }},$$
where \(\eta = \frac{{\phi \left[ {Q^{T} + \beta_{VC} \left( {Q^{M} - Q^{A} - Q^{T} } \right)} \right]}}{\rho - \alpha }\) and \(\theta = C - \beta_{VC} d\left( {1 - \phi } \right)\).
Equation (13) reveals that when the exit cost \(C\) increases, \(x^{*}\) increases (\(\frac{{\partial x^{*} }}{\partial C} > 0\)). In turn, when the venture capitalist's negotiation ability \(\beta_{VC}\) increases or the fixed income \(d\) increases, \(x^{*}\) decreases (\(\frac{{\partial x^{*} }}{{\partial \beta_{VC} }} < 0{\text{ and }}\frac{{\partial x^{*} }}{\partial d} < 0\)).
The option value \(F\left( x \right)\) before exiting is given by
$$F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
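The closed-form threshold and option value above are straightforward to evaluate numerically. The sketch below does so for a set of purely illustrative parameter values; none of the numbers are calibrated to data, and the exit cost is chosen so that \(\theta > 0\).

```python
import numpy as np

def beta1(rho, alpha, sigma):
    """Positive root of the fundamental quadratic associated with Eq. (12)."""
    a = alpha / sigma**2
    return 0.5 - a + np.sqrt((a - 0.5)**2 + 2.0 * rho / sigma**2)

def time_consistent_exit(rho, alpha, sigma, phi, beta_vc, d, C, QT, QA, QM):
    """Exit threshold x* of Eq. (13) and the option value F(x) for the
    time-consistent venture capitalist."""
    b1 = beta1(rho, alpha, sigma)
    eta = phi * (QT + beta_vc * (QM - QA - QT)) / (rho - alpha)
    theta = C - beta_vc * d * (1.0 - phi)
    x_star = b1 * theta / ((b1 - 1.0) * eta)
    F = lambda x: (x / x_star)**b1 * (eta * x_star - theta)
    return x_star, F, b1, eta, theta

# Illustrative (uncalibrated) parameter values:
x_star, F, b1, eta, theta = time_consistent_exit(
    rho=0.08, alpha=0.02, sigma=0.3, phi=0.3, beta_vc=0.5,
    d=1.0, C=10.0, QT=5.0, QA=8.0, QM=16.0)
print(x_star, F(0.5 * x_star))
```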
Time-point-inconsistent venture capitalists
The assumption about the time-point-inconsistent venture capitalist is natural for the following reasons. First, venture capitalists cannot ignore the impact of time-point inconsistency due to the importance of VC funds' finite lifespan. Second, the most critical advantage of VC is its ability to serve as long-term committed patient capital to help start-ups achieve powerful sustained compounding of growth (Klingler-Vidra 2016; Arundale 2020). Hence, venture capitalists may selectively ignore time-flow inconsistency out of overconfident beliefs in their ability to commit (Grenadier and Wang 2007).
This optimization problem, which solves for the exit threshold of self 0, is transformed into a two-stage optimization problem solved by backward induction. Self \(L\) faces the same problem as the time-consistent venture capitalist. Then we analyze the exit timing selection of self 0. Let \(G\left( x \right)\) and \(x_{G}\) denote the value function and exercise threshold for self 0, respectively. Drawing on the continuation value function proposed by Grenadier and Wang (2007), \(G\left( x \right)\) solves the differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} G^{\prime\prime}\left( x \right) + \alpha xG^{\prime}\left( x \right) - \rho G\left( x \right) + \lambda_{L} \left[ {F^{c} \left( x \right) - G\left( x \right)} \right] = 0,$$
where \(F^{c} \left( x \right)\) is self 0's continuation value function upon the arrival of self \(L\), which occurs with intensity \(\lambda_{L}\), and \(F^{c} \left( x \right) = \delta_{p} F\left( x \right)\). Equation (15) is solved using the following value-matching and smooth-pasting conditions:
$$G\left( {x_{G} } \right) = P_{VC}^{*} \left( {x_{G} } \right) - C = \eta x_{G} - \theta ,\quad G^{{\prime }} \left( {x_{G} } \right) = \left[ {P_{VC}^{*} \left( {x_{G} } \right) - C} \right]^{{\prime }} = \eta .$$
After standard calculations, we obtain the exit exercise threshold and option value for the time-point-inconsistent venture capitalist.
Proposition 1
For the time-point-inconsistent venture capitalist, the value of the exit option is as follows:
$$G\left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{2} - \beta_{1} }}\left( {x^{*} - x_{G} } \right)\left( {\frac{x}{{x_{G} }}} \right)^{{\beta_{2} }} + \delta_{p} F\left( x \right),$$
where \(\beta_{2} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{L} } \right)}}{{\sigma^{2} }}} > \beta_{1}\) and \(x_{G}\) is the optimal exit threshold given by
$$x_{G} = \frac{1}{{\eta \left( {\beta_{2} - 1} \right)}}\left[ {\beta_{2} \theta + \left( {\beta_{2} - \beta_{1} } \right)\delta_{p} F\left( {x_{G} } \right)} \right].$$
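Because \(x_{G}\) enters both sides of its defining equation in Proposition 1 through \(F\left( {x_{G} } \right)\), the threshold has to be computed numerically. The sketch below uses plain fixed-point iteration, assuming the base parameters of "Model implications" together with \(\delta_{p} = 0.3\) and \(\lambda_{L} = 1\) (the values used in Fig. 4a); it is an illustration, not the authors' code.

```python
import math

# Base parameters from "Model implications"; delta_p and lambda_L as in Fig. 4a (assumed)
alpha, sigma, rho = 0.02, 0.2, 0.06
Q_M, Q_A, Q_T = 1.7, 1.0, 0.5
beta_VC, phi, d, C = 0.2, 0.4, 0.4, 10.0
delta_p, lam_L = 0.3, 1.0

def beta_root(lam):
    """Positive root of (1/2)*sigma^2*b*(b-1) + alpha*b - (rho + lam) = 0."""
    return 0.5 - alpha / sigma**2 + math.sqrt((alpha / sigma**2 - 0.5)**2 + 2 * (rho + lam) / sigma**2)

beta1, beta2 = beta_root(0.0), beta_root(lam_L)
eta = phi * (Q_T + beta_VC * (Q_M - Q_A - Q_T)) / (rho - alpha)
theta = C - beta_VC * d * (1 - phi)
x_star = beta1 * theta / ((beta1 - 1) * eta)

def F(x):
    """Time-consistent option value before exit."""
    return (x / x_star) ** beta1 * (eta * x_star - theta)

def rhs(x):
    """Right-hand side of the implicit equation for x_G in Proposition 1."""
    return (beta2 * theta + (beta2 - beta1) * delta_p * F(x)) / (eta * (beta2 - 1))

x_G = theta / eta            # start at the break-even point eta*x - theta = 0
for _ in range(500):         # plain fixed-point iteration; converges for these parameter values
    x_G = rhs(x_G)

print(f"x* = {x_star:.3f}, x_G = {x_G:.3f}")  # x_G < x*: earlier exit under time-point inconsistency
```

Re-running the loop with \(\delta_{p}\) close to 1 drives \(x_{G}\) toward \(x^{*}\), which matches the pattern discussed for Fig. 4a below.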
Naïve venture capitalists
The naïve venture capitalist realizes time-point inconsistency but misunderstands the decision criterion of future selves generated by time-flow inconsistency. In our model, this type of venture capitalist believes that self \(n\) \(\left( {n = 1, \ldots ,L - 1} \right)\) will adopt strategies consistent with the current self (self 0). In other words, the discount function \(D_{n} \left( {t,s} \right)\) \(\left( {n = 1, \ldots ,L - 1} \right)\) is never updated and always equals \(D_{0} \left( {t,s} \right)\). The exit decision is thus a three-stage optimization problem solved by backward induction.
First, let us consider the optimization problem from the perspective of self \(L\). As mentioned above, self \(L\) faces the same problem as the time-consistent venture capitalist. Let \(N_{L} \left( x \right)\) and \(x_{N,L}\) denote self \(L\)'s value function and exercise threshold, respectively.
$$N_{L} \left( x \right) = F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
$$x_{N,L} = x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }}.$$
Then, self 1 faces the case in which there is only self \(L\) in the future. This situation is similar to that of the time-point-inconsistent venture capitalist. The only difference is in the arrival intensity of self \(L\). Self 1's value function \(N_{1} \left( x \right)\) and exercise threshold \(x_{N,1}\) are given in Eqs. (20) and (21).
$$N_{1} \left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{3} - \beta_{1} }}\left( {x^{*} - x_{N,1} } \right)\left( {\frac{x}{{x_{N,1} }}} \right)^{{\beta_{3} }} + \delta_{p} F\left( x \right),$$
$$x_{N,1} = \frac{1}{{\eta \left( {\beta_{3} - 1} \right)}}\left[ {\beta_{3} \theta + \left( {\beta_{3} - \beta_{1} } \right)\delta_{p} F\left( {x_{N,1} } \right)} \right],$$
where \(\beta_{3} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{pN} } \right)}}{{\sigma^{2} }}} > \beta_{2} > \beta_{1}\).
Next, self 0 decides their exercise threshold \(x_{N,0}\), considering their future selves' exercise thresholds. The continuation value function \(N_{1}^{c} \left( x \right)\) of self 0 is calculated as follows. If self 1 is alive when their threshold \(x_{N,1}\) is reached, then the exit option is exercised, and its payoff to self 0 is \(\delta_{f} \left[ {P_{VC}^{*} \left( {x_{N,1} } \right) - C} \right]\). However, if self 1 dies and self \(L\) arrives before \(x_{N,1}\) is reached, then self 0's continuation value \(N_{1}^{c} \left( x \right)\) changes into self 1's continuation value \(F^{c} \left( x \right)\). Thus, \(N_{1}^{c} \left( x \right)\) solves the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} N_{1}^{c\prime\prime } \left( x \right) + \alpha xN_{1}^{c\prime } \left( x \right) - \rho N_{1}^{c} \left( x \right) + \lambda_{pN} \left[ {F^{c} \left( x \right) - N_{1}^{c} \left( x \right)} \right] = 0,$$
where \(F^{c} \left( x \right) = \delta_{p} F\left( x \right)\), the value-matching condition is given by
$$N_{1}^{c} \left( {x_{N,1} } \right) = \delta_{f} \left[ {P_{VC}^{*} \left( {x_{N,1} } \right) - C} \right] = \delta_{f} \left( {\eta x_{N,1} - \theta } \right).$$
The value-matching condition ensures the continuity of the continuation value function. Note that solving for \(N_{1}^{c} \left( x \right)\) requires only a boundary condition. To simplify the expression, we define \(Y = x_{N,1}^{{ - \beta_{3} }} \left[ {\delta_{f} \left( {\eta x_{N,1} - \theta } \right) - \delta_{p} F\left( {x_{N,1} } \right)} \right]\). Self 0's continuation value function is then \(N_{1}^{c} \left( x \right) = Yx^{{\beta_{3} }} + \delta_{p} F\left( x \right)\).
Self 0 maximizes their value function \(N_{0} \left( x \right)\) by taking the continuation value function \(N_{1}^{c} \left( x \right)\) as given and choosing the exit threshold \(x_{N,0}\). Thus, \(N_{0} \left( x \right)\) solves the differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} N_{0}^{\prime \prime } \left( x \right) + \alpha xN_{0}^{\prime } \left( x \right) - \rho N_{0} \left( x \right) + \lambda_{f} \left[ {N_{1}^{c} \left( x \right) - N_{0} \left( x \right)} \right] = 0.$$
It is solved by using the value-matching and smooth-pasting conditions:
$$N_{0} \left( {x_{N,0} } \right) = P_{VC}^{*} \left( {x_{N,0} } \right) - C = \eta x_{N,0} - \theta ,\quad N_{0}^{{\prime }} \left( {x_{N,0} } \right) = \left[ {P_{VC}^{*} \left( {x_{N,0} } \right) - C} \right]^{{\prime }} = \eta .$$
We assume that the general solution of \(N_{0} \left( x \right)\) takes the form below and verify this conjecture in "Appendix 1". We then discuss the two cases in turn.
$$N_{0} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} \ne \lambda_{pN} , \hfill \\ \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} ,\quad if \, \lambda_{f} = \lambda_{pN} . \hfill \\ \end{gathered} \right.$$
If \(\lambda_{f} \ne \lambda_{pN}\), we note that \(\beta_{3} \ne \beta_{4}\). The value of the naïve venture capitalist's exit option is given by
$$N_{0} \left( x \right) = \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} ,$$
where \(\varepsilon = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pN} }}\), \(U_{0} = x_{N,0}^{{ - \beta_{4} }} \left[ {\eta x_{N,0} - \theta - \delta_{p} F\left( {x_{N,0} } \right) - \varepsilon Yx_{N,0}^{{\beta_{3} }} } \right]\), \(\beta_{4} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{f} } \right)}}{{\sigma^{2} }}}\), and the exit threshold \(x_{N,0}\) is the solution to Eq. (26).
$$x_{N,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{N,0} } \right) + \frac{{\beta_{4} - \beta_{3} }}{{\eta \left( {\beta_{4} - 1} \right)}}\varepsilon Yx_{N,0}^{{\beta_{3} }} .$$
If \(\lambda_{f} = \lambda_{pN}\), we note that \(\beta_{3} = \beta_{4}\) (hereinafter referred to as \(\beta_{4}\)). The value of the naïve venture capitalist's exit option is given by
$$N_{0} \left( x \right) = \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} ,$$
where \(R_{1} = - \frac{{\lambda_{f} Y}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}\), \(R_{0} = x_{N,0}^{{ - \beta_{4} }} \left[ {\eta x_{N,0} - \theta - \delta_{p} F\left( {x_{N,0} } \right) - R_{1} x_{N,0}^{{\beta_{4} }} \log x_{N,0} } \right]\), and the exit threshold \(x_{N,0}\) is the solution to Eq. (28).
$$x_{N,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{N,0} } \right) - \frac{{R_{1} x_{N,0}^{{\beta_{4} }} }}{{\eta \left( {\beta_{4} - 1} \right)}}.$$
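The naïve venture capitalist's thresholds are likewise implicit: Eq. (21) pins down \(x_{N,1}\), which determines \(Y\) and \(R_{1}\), and Eq. (28) then pins down \(x_{N,0}\). The sketch below illustrates this chain for the special case \(\lambda_{f} = \lambda_{pN}\), assuming the base parameters of "Model implications" together with \(\delta_{f} = 0.7\), \(\delta_{p} = 0.3\), and \(\lambda_{f} = \lambda_{pN} = 1\) (the values used in Fig. 5); fixed-point iteration is assumed to converge for these values, and the code is an illustration rather than the authors' implementation.

```python
import math

# Base parameters from "Model implications"; delta_f, delta_p and intensities as in Fig. 5 (assumed)
alpha, sigma, rho = 0.02, 0.2, 0.06
Q_M, Q_A, Q_T = 1.7, 1.0, 0.5
beta_VC, phi, d, C = 0.2, 0.4, 0.4, 10.0
delta_f, delta_p = 0.7, 0.3
lam_f = lam_pN = 1.0                      # special case lambda_f = lambda_pN, so beta3 = beta4

def beta_root(lam):
    """Positive root of (1/2)*sigma^2*b*(b-1) + alpha*b - (rho + lam) = 0."""
    return 0.5 - alpha / sigma**2 + math.sqrt((alpha / sigma**2 - 0.5)**2 + 2 * (rho + lam) / sigma**2)

beta1, beta4 = beta_root(0.0), beta_root(lam_f)
eta = phi * (Q_T + beta_VC * (Q_M - Q_A - Q_T)) / (rho - alpha)
theta = C - beta_VC * d * (1 - phi)
x_star = beta1 * theta / ((beta1 - 1) * eta)

def F(x):
    """Time-consistent option value before exit."""
    return (x / x_star) ** beta1 * (eta * x_star - theta)

def solve_fixed_point(rhs, x0, n_iter=500):
    """Plain fixed-point iteration; assumed to converge for these parameter values."""
    x = x0
    for _ in range(n_iter):
        x = rhs(x)
    return x

# Self 1's threshold, Eq. (21), with beta3 replaced by beta4
x_N1 = solve_fixed_point(
    lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x)) / (eta * (beta4 - 1)), theta / eta)

Y = x_N1 ** (-beta4) * (delta_f * (eta * x_N1 - theta) - delta_p * F(x_N1))
R1 = -lam_f * Y / (alpha + 0.5 * sigma**2 * (2 * beta4 - 1))

# Self 0's threshold, Eq. (28)
x_N0 = solve_fixed_point(
    lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x) - R1 * x**beta4) / (eta * (beta4 - 1)),
    theta / eta)

print(f"x* = {x_star:.3f}, x_N1 = {x_N1:.3f}, x_N0 = {x_N0:.3f}")
```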
Sophisticated venture capitalists
The sophisticated venture capitalist foresees both time-point and time-flow inconsistencies, as shown in Fig. 3. This type of venture capitalist clearly and correctly understands that all future selves will adopt strategies based on their own interests. Therefore, we start with self \(L\), the last self, and derive the value function, continuation value function, and exit threshold backward for each self until self 0.
Similar to the naïve venture capitalist, we give self \(L\)'s value function \(S_{L} \left( x \right)\) and exercise threshold \(x_{S,L}\) directly.
$$S_{L} \left( x \right) = F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
$$x_{S,L} = x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }}.$$
Self \(L - 1\) faces a case similar to that of self 1 of the naïve venture capitalist. Let \(S_{L - 1} \left( x \right)\) and \(x_{S,L - 1}\) denote self \(L - 1\)'s value function and exercise threshold, respectively. Note that the arrival intensity of the last self is now \(\lambda_{pS}\).
$$S_{L - 1} \left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{5} - \beta_{1} }}\left( {x^{*} - x_{S,L - 1} } \right)\left( {\frac{x}{{x_{S,L - 1} }}} \right)^{{\beta_{5} }} + \delta_{p} F\left( x \right),$$
$$x_{S,L - 1} = \frac{1}{{\eta \left( {\beta_{5} - 1} \right)}}\left[ {\beta_{5} \theta + \left( {\beta_{5} - \beta_{1} } \right)\delta_{p} F\left( {x_{S,L - 1} } \right)} \right],$$
where \(\beta_{5} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{pS} } \right)}}{{\sigma^{2} }}}\). In general, \(\beta_{4} \ge \beta_{5}\).
We assume that the general solution of \(S_{L - 2} \left( x \right)\) satisfying the corresponding differential equation takes the form below, and we solve for the exit threshold \(x_{S,L - 2}\) using the value-matching and smooth-pasting conditions.
$$S_{L - 2} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varphi Zx^{{\beta_{5} }} + U_{L - 2,0} x^{{\beta_{4} }}, \quad if \, \lambda_{f} \ne \lambda_{pS} , \hfill \\ \delta_{p} F\left( x \right) + R_{L - 2,1} x^{{\beta_{4} }} \log x + R_{L - 2,0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} = \lambda_{pS} . \hfill \\ \end{gathered} \right.$$
The proof for the general solution of \(S_{L - 2} \left( x \right)\) is similar to the case of the naïve venture capitalist, so it is not repeated here.
(1) If \(\lambda_{f} \ne \lambda_{pS}\), we note that \(\beta_{4} \ne \beta_{5}\). Self \(L - 2\)'s exercise threshold \(x_{S,L - 2}\) is the solution to Eq. (34).
$$x_{S,L - 2} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,L - 2} } \right) + \frac{{\beta_{4} - \beta_{5} }}{{\eta \left( {\beta_{4} - 1} \right)}}\varphi Zx_{S,L - 2}^{{\beta_{5} }} ,$$
where \(\varphi { = }\frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pS} }}\) and \(Z = x_{S,L - 2}^{{ - \beta_{5} }} \left[ {\delta_{f} \left( {\eta x_{S,L - 1} - \theta } \right) - \delta_{p} F\left( {x_{S,L - 1} } \right)} \right]\).
More generally, for \(n \le L - 3\), self \(n\)'s continuation value function \(S_{n + 1}^{c} \left( x \right)\) satisfies the differential equation below:
$$\frac{1}{2}\sigma^{2} x^{2} S_{n + 1}^{c\prime \prime} \left( x \right) + \alpha xS_{n + 1}^{c\prime} \left( x \right) - \rho S_{n + 1}^{c} \left( x \right) + \lambda_{f} \left[ {S_{n + 2}^{c} \left( x \right) - S_{n + 1}^{c} \left( x \right)} \right] = 0.$$
The value-matching condition is given by
$$S_{n + 1}^{c} \left( {x_{S,n + 1} } \right) = \delta_{f} \left[ {P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C} \right] = \delta_{f} \left( {\eta x_{S,n + 1} - \theta } \right).$$
The solutions for the continuation value functions \(S_{n + 1}^{c} \left( x \right)\) are presented in "Appendix 2", and \(S_{n + 1}^{c} \left( x \right)\) is given by
$$S_{n + 1}^{c} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - n - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - n - 3} {W_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
where the formula for \(W_{n + 1,i}\) is detailed in "Appendix 2".
Self \(n + 1\)'s value function \(S_{n + 1} \left( x \right)\) satisfies the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{n + 1}^{\prime \prime } \left( x \right) + \alpha xS_{n + 1}^{\prime } \left( x \right) - \rho S_{n + 1} \left( x \right) + \lambda_{f} \left[ {S_{n + 2}^{c} \left( x \right) - S_{n + 1} \left( x \right)} \right] = 0.$$
$$S_{n + 1} \left( {x_{S,n + 1} } \right) = P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C = \eta x_{S,n + 1} - \theta ,\quad S_{n + 1}^{{\prime }} \left( {x_{S,n + 1} } \right) = \left[ {P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C} \right]^{{\prime }} = \eta .$$
The solutions for the value functions \(S_{n + 1} \left( x \right)\) are presented in "Appendix 3", and \(S_{n + 1} \left( x \right)\) is given by
$$S_{n + 1} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - n - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - n - 3} {U_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
where the formula for \(U_{n + 1,i}\) is detailed in "Appendix 3".
(2) If \(\lambda_{f} = \lambda_{pS}\), we note that \(\beta_{4} = \beta_{5}\) (hereinafter referred to as \(\beta_{4}\)). Self \(L - 2\)'s exercise threshold \(x_{S,L - 2}\) is the solution to Eq. (39).
$$x_{S,L - 2} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,L - 2} } \right) - \frac{{R_{L - 2,1} }}{{\eta \left( {\beta_{4} - 1} \right)}}x_{S,L - 2}^{{\beta_{4} }} ,$$
where \(R_{L - 2,1} = - \frac{{\lambda_{f} Z}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}\).
More generally, for \(n \le L - 3\), self \(n\)'s continuation value function \(S_{n + 1}^{c} \left( x \right)\) is given by
$$S_{n + 1}^{c} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - n - 2} {V_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
Self \(n + 1\)'s value function \(S_{n + 1} \left( x \right)\) is given by
$$S_{n + 1} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - n - 2} {R_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
The solutions for the continuation value function \(S_{n + 1}^{c} \left( x \right)\) and value function \(S_{n + 1} \left( x \right)\) are similar to those in the case \(\lambda_{f} \ne \lambda_{pS}\), so they are not repeated here.
If \(\lambda_{f} \ne \lambda_{pS}\), we note that \(\beta_{4} \ne \beta_{5}\). The value of the sophisticated venture capitalist's exit option is given by
$$S_{0} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - 1} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - 2} {U_{0,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
and the exit threshold \(x_{S,0}\) is as follows:
$$\begin{aligned} x_{S,0} &= \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,0} } \right) + \frac{{\beta_{4} - \beta_{5} }}{{\eta \left( {\beta_{4} - 1} \right)}}\varphi^{L - 1} Zx_{S,0}^{{\beta_{5} }}\\&\quad - \frac{1}{{\eta \left( {\beta_{4} - 1} \right)}}\sum\limits_{k = 1}^{L - 2} {kU_{0,k} \left( {\log x_{S,0} } \right)^{k - 1} x_{S,0}^{{\beta_{4} }} } .\end{aligned}$$
If \(\lambda_{f} = \lambda_{pS}\), we note that \(\beta_{4} = \beta_{5}\) (hereinafter referred to as \(\beta_{4}\)). The value of the sophisticated venture capitalist's exit option is given by
$$S_{0} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - 1} {R_{0,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
where the derivation of \(R_{0,i}\) is similar to that of \(W_{0,i}\) (see "Appendix 2" for details), and the exit threshold \(x_{S,0}\) is as follows:
$$x_{S,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,0} } \right) - \frac{1}{{\eta \left( {\beta_{4} - 1} \right)}}\sum\limits_{k = 1}^{L - 1} {kR_{0,k} \left( {\log x_{S,0} } \right)^{k - 1} x_{S,0}^{{\beta_{4} }} } .$$
Model implications
This section compares the exit thresholds of the four types of venture capitalists by numerically examining the properties of the model solutions. The base parameter values are set as \(\alpha = 0.02\), \(\sigma = 0.2\), and \(\rho = 0.06\) according to Dixit and Pindyck (1994), ensuring \(\rho > \alpha\) for convergence; \(Q^{M} = 1.7\), \(Q^{A} = 1\), and \(Q^{T} = 0.5\) based on Thijssen (2008), ensuring \(Q^{M} > Q^{A} + Q^{T}\) for a positive synergy; the VC's negotiation ability \(\beta_{VC} = 0.2\) (which can lie anywhere between 0 and 1); the VC's shareholding \(\phi = 0.4\) (which can also lie anywhere between 0 and 1); the VC's preferential fixed claim \(d = 0.4\); and the exit cost \(C = 10\).
Comparison of time-consistent and time-point-inconsistent venture capitalists
Figure 4 shows how the exit thresholds \(x^{*}\) of time-consistent venture capitalists and \(x_{G}\) of time-point-inconsistent venture capitalists change with parameters \(\delta_{p}\) and \(\lambda_{L}\), respectively. We separately set \(\lambda_{L} = 1\) in Fig. 4a and \(\delta_{p} = 0.3\) in Fig. 4b. Our results show that time-point-inconsistent venture capitalists exit earlier than time-consistent venture capitalists, in that \(x_{G} < x^{*}\). We also observe that as the reduction factor \(\delta_{p}\) increases from 0 to 1 or the arrival intensity \(\lambda_{L}\) decreases from 1 to 0, the degree of time inconsistency gradually decreases, and the exit threshold \(x_{G}\) gradually approaches \(x^{*}\).
Exit thresholds change with respect to the reduction factor \(\delta_{p}\) and arrival intensity \(\lambda_{L}\)
The economic intuition for these findings is as follows. The exit decision of the time-point-inconsistent venture capitalist is determined by the current and the future selves. The VC fund's expiration can be viewed as the death of the current self and the birth of the future self, meaning that the future self takes over the right to make exit decisions. Considering that the payoffs obtained when the future self exercises the exit option are further discounted by an extra reduction factor, the current self prefers to exercise the option itself. As a result, time-inconsistent preferences accelerate the exit of venture capitalists. Venture capitalists managing young and less-prestigious VC funds are more sensitive to fund expiration and more prone to time-inconsistent preferences than those in charge of older VC funds. Therefore, we conclude that young VCs exit earlier than older VCs. This conclusion is also supported by recent empirical evidence (Amor and Kooli 2020; Sethuram et al. 2021).
Comparison of the three defined time-inconsistent venture capitalists
Figures 5 and 6 demonstrate how the exit thresholds \(x_{G}\) of time-point-inconsistent venture capitalists and \(x_{N,0}\) of naive venture capitalists change with the parameter \(\delta_{p}\) under \(\lambda_{f} = \lambda_{pN}\) and \(\lambda_{f} \ne \lambda_{pN}\), respectively. To ensure \(\delta_{f} \ge \delta_{p}\), we set \(\delta_{f} = 0.7\) and \(\delta_{p}\) from 0 to 0.7. To model the different durations left before the expiry date, we examine the exit thresholds of the above two types of venture capitalists by adjusting \(\lambda_{pN}\) while keeping \(\lambda_{f} = 1\) all the time. The time-point-inconsistent venture capitalist is directly faced with the pressure of the VC fund's expiration, while the naïve venture capitalist is also faced with the pressure of time-flow inconsistency in addition to the VC fund's expiration.
Exit thresholds change with the reduction factor \(\delta_{p}\) under \(\lambda_{f} = \lambda_{pN} = 1\)
Exit thresholds change with the reduction factor \(\delta_{p}\) under \(\lambda_{f} \ne \lambda_{pN}\). The parameter values are \(\lambda_{f} = 1\), a \(\lambda_{pN} = 0.8\), b \(\lambda_{pN} = 0.5\), c \(\lambda_{pN} = 0.2\), and d \(\lambda_{pN} = 0.05\)
Let us begin with the special case of \(\lambda_{f} = \lambda_{pN}\). Figure 5 shows that \(x_{G} > x_{N,0}\) at \(\delta_{p} > 0.18\) and \(x_{G} < x_{N,0}\) at \(\delta_{p} < 0.18\). Thus, we denote the intersection point \(\delta_{p} = 0.18\) by \(\delta_{pe}\). We find that when \(\delta_{p} = \delta_{f} = 0.7\) (i.e., when we do not distinguish between the two kinds of time inconsistency), our model reduces to the case of Grenadier and Wang (2007), and the exit threshold of the time-point-inconsistent venture capitalist is strictly greater than that of the naïve venture capitalist. Nevertheless, when \(\delta_{p} < \delta_{pe}\), the naïve venture capitalist abnormally exits later than the time-point-inconsistent venture capitalist. In this case, the time-point-inconsistent venture capitalist faces powerful but distant pressure from the VC fund's expiration. The naïve venture capitalist faces the same expiration pressure but, in the first place, only a low degree of pressure from time-flow inconsistency. For the naïve venture capitalist, the future self caused by time-flow inconsistency may act as a buffer against the detrimental effect, thus weakening the impact of time-point inconsistency. We further test this inference by comparing the exit thresholds of naïve and sophisticated venture capitalists.
Now let us analyze the more general cases of \(\lambda_{f} \ne \lambda_{pN}\). Figure 6a–d show the cases where \(\lambda_{pN}\) = 0.8, 0.5, 0.2, and 0.05, respectively. We find that as \(\lambda_{pN}\) decreases, both \(x_{G}\) and \(x_{N,0}\) increase, while the increase in \(x_{G}\) is greater than that in \(x_{N,0}\). Once \(\lambda_{pN}\) falls below a certain value, \(x_{G}\) exceeds \(x_{N,0}\) over the whole range of \(\delta_{p}\). These findings suggest that the farther away the expiry date is, the more the two types of venture capitalists delay their exit. However, considering that we adjust \(\lambda_{pN}\) while keeping \(\lambda_{f}\) unchanged, the exit threshold of the time-point-inconsistent venture capitalist is affected more than that of the naïve venture capitalist. The intuition is straightforward. As the duration left before the expiry date increases, the impact of the VC fund's expiration decreases, making the effect of time-flow inconsistency more prominent. In addition, it is worth noting that the case of \(\lambda_{f} \gg \lambda_{pN}\), that is, the situation shown in Fig. 6, is the most common in reality. After all, the future self caused by individual time preferences arrives earlier than the VC fund's expiry date. Therefore, the naïve venture capitalist typically exits earlier than the time-point-inconsistent venture capitalist.
Figure 7 shows how the exit thresholds \(x_{G}\), \(x_{N,0}\) and \(x_{S,0}\) change with the parameter \(\delta_{p}\) under different numbers of selves generated by time-flow inconsistency. We set \(\lambda_{f} = \lambda_{pS} = 0.5\) to facilitate the calculation and \(\delta_{f} = 0.7\) as in the above cases. Hence, the number of selves \(L + 1\) reflects the different durations left before the expiry date. Due to the complicated derivation, we only provide the three cases where \(L + 1\) = 4, 5, and 6, respectively.
Exit thresholds change with the reduction factor \(\delta_{p}\). The parameter values are a \(L + 1 = 4\); b \(L + 1 = 5\); c \(L + 1 = 6\)
We denote the intersection of curves \(x_{G}\) and \(x_{S,0}\) as \(\delta_{pG}\) and the intersection of curves \(x_{N,0}\) and \(x_{S,0}\) as \(\delta_{ps}\), respectively. As shown in Fig. 7a, when \(\delta_{p} > \delta_{ps}\), and especially when \(\delta_{p} = \delta_{f} = 0.7\), our model again reduces to the case of Grenadier and Wang (2007), and we find that sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists. When \(\delta_{p} < \delta_{pG}\) and \(\delta_{pG} < \delta_{p} < \delta_{ps}\), we observe that \(x_{S,0} > x_{G} > x_{N,0}\) and \(x_{G} > x_{S,0} > x_{N,0}\), respectively. These results confirm the inference proposed in our analysis of Fig. 5 that the future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. In particular, sophisticated venture capitalists exit later than naïve venture capitalists because they are fully aware of the multiple future selves caused by time-flow inconsistency, which further weakens the effect of time-point inconsistency.
Moreover, as the number of selves \(L + 1\) increases, the growth in \(x_{G}\), \(x_{N,0}\), and \(x_{S,0}\) becomes successively smaller across the three types. This result indicates that a gradual increase in the number of future selves enhances the dominant effect of time-flow inconsistency on the sophisticated venture capitalist, thereby prompting the current self to make the exit decision earlier than the naïve venture capitalist. Thus, it is foreseeable that when the number of future selves reaches a specific threshold, the sophisticated venture capitalist exits earlier than the naïve venture capitalist, who in turn exits earlier than the time-point-inconsistent venture capitalist. This finding is also in line with real-world practice because the arrival intensity of time-flow inconsistency is usually much greater than that of time-point inconsistency.
This study merges time-inconsistent preferences and VC exit decisions under uncertainty. In transition economies such as China, many VC funds have been set up and put into operation. For young and less-prestigious VC funds, obtaining significant payoffs and exiting successfully before the VC fund's expiration could effectively signal the high quality of the VC fund and guarantee follow-up fundraising. Consequently, venture capitalists are faced with two opposite decision drivers: waiting for the optimal exit and exiting early to gain reputational value. In this study, the time-inconsistent preferences of venture capitalists are incorporated into the optimal exit option framework to model the above VC exit decisions accurately.
Considering that both individual time preferences and the finite lifespan of VC funds contribute to the time-inconsistent preferences of venture capitalists, two kinds of time inconsistencies (i.e., time-flow and time-point inconsistencies) are proposed. Based on the assumed time inconsistencies described above, we derive and compare the exit thresholds of the four types of venture capitalists. The main findings of the numerical experiments are as follows. First, time-inconsistent preferences accelerate the exit of venture capitalists, and thus all types of time-inconsistent venture capitalists exit earlier than time-consistent ones. Our model is the first to provide a behavioral explanation for the empirical facts of young VCs' grandstanding from the perspective of decision-makers' time preferences. Second, the closer the VC funds' expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits. Third, future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Our findings explain both the standard and the abnormal phenomena. Generally, sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists. In some special situations, when the degree of time-point inconsistency is much greater than that of time-flow inconsistency and the exit opportunity is close to the VC fund's expiry date, naïve venture capitalists exit later than time-point-inconsistent venture capitalists, and sophisticated venture capitalists exit last among the three defined time-inconsistent venture capitalists.
Two possible extensions to this study are introduced below. First, our study follows previous literature and assumes that the discount reduction factor \(\delta_{f}\) is fixed for each self. Nevertheless, a more realistic assumption would be that the closer the expiration date, the smaller \(\delta_{f}\) is. Second, it is assumed that venture capitalists retain control over their capital exit from the invested firm. Although venture capital contracts do in reality contain provisions giving venture capitalists this right (see Footnote 8), entrepreneurs lose the private benefits of control and hence may oppose trade sales. Therefore, the model could be extended to account for the game between the entrepreneur, the venture capitalist, and the acquiring firm.
Young VCs rely more on individual decisions, whereas older VCs have well-established decision rules and processes. Grenadier and Wang (2007) argue that individuals or small private partnerships are more prone to time inconsistency than firms, because the organizational structure and professional management of firms may mitigate or remove the time inconsistency from the firms' decisions.
Yoon (2020) praises time inconsistency in intertemporal choice as one of the most influential findings in the judgment and decision literature.
The limited partnership has been the main organizational form of VC funds for the past three decades. It is designed with a finite lifespan: about 8–10 years, with an extension of 2–3 years allowed to facilitate exit (Gompers and Lerner, 1999; Kandel et al., 2011).
We note that Ferreira and Pereira (2021) also define a similar model. Their model follows the exit dynamic process that is generally consistent with ours, except that they set the premium unilaterally requested by VC, while we determine it by sharing the synergies brought by M&A.
Trade sales (or M&As) have been the most common exit route for venture capitalists (Bienz and Walz, 2010; Félix et al., 2014; Ferreira and Pereira, 2021).
For the foundation of real options approach, see Dixit and Pindyck (1994). For recent applications of real options in venture capital decision making, see Tavares-Gärtner et al. (2018), and Ferreira and Pereira (2021).
Kaplan and Strömberg (2003) report that nearly 80% of all venture contracts use convertible preferred stock and that in nearly half of those cases the stock is participating.
For example, drag-along rights and tag-along rights, see Cumming (2010) and Bienz and Walz (2010).
Alvarez LHR, Stenbacka R (2006) Takeover timing, implementation uncertainty, and embedded divestment options. Rev Financ 10:417–441. https://doi.org/10.1007/s10679-006-9002-y
Amor SB, Kooli M (2020) Do M&A exits have the same effect on venture capital reputation than IPO exits? J Bank Financ 111:105704. https://doi.org/10.1016/j.jbankfin.2019.105704
Anderson HD, Chi J, Wang Q (2017) Political ties and VC exits: evidence from China. China Econ Rev 44:48–66. https://doi.org/10.1016/j.chieco.2017.03.007
Arcot S (2014) Participating convertible preferred stock in venture capital exits. J Bus Ventur 29:72–87. https://doi.org/10.1016/j.jbusvent.2013.06.001
Arcot S, Fluck Z, Gaspar J-M et al (2015) Fund managers under pressure: rationale and determinants of secondary buyouts. J Financ Econ 115:102–135. https://doi.org/10.1016/j.jfineco.2014.08.002
Arundale K (2020) Syndication and cross-border collaboration by venture capital firms in Europe and the USA: a comparative study. Ventur Cap 22:355–376. https://doi.org/10.1080/13691066.2020.1847414
Bienz C, Walz U (2010) Venture capital exit rights. J Econ Manag Strategy 19:1071–1116. https://doi.org/10.1111/j.1530-9134.2010.00278.x
Bock C, Schmidt M (2015) Should I stay, or should I go?—How fund dynamics influence venture capital exit decisions. Rev Financ Econ 27:68–82. https://doi.org/10.1016/j.rfe.2015.09.002
Butler AW, Goktan MS (2013) On the role of inexperienced venture capitalists in taking companies public. J Corp Financ 22:299–319. https://doi.org/10.1016/j.jcorpfin.2013.06.004
Chen S, Wang X, Deng Y et al (2016) Optimal dividend-financing strategies in a dual risk model with time-inconsistent preferences. Insur Math Econ 67:27–37. https://doi.org/10.1016/j.insmatheco.2015.11.005
Cumming D (2008) Contracts and exits in venture capital finance. Rev Financ Stud 21:1947–1982. https://doi.org/10.1093/rfs/hhn072
Cumming D (2010) Venture capital: investment strategies, structures, and policies. Wiley, Hoboken
Cumming D (2012) The Oxford handbook of venture capital. Oxford University Press, Oxford
Cumming D, Fleming G, Schwienbacher A (2006) Legality and venture capital exits. J Corp Financ 12:214–245. https://doi.org/10.1016/j.jcorpfin.2004.12.004
Dixit AK, Pindyck RS (1994) Investment under uncertainty. Princeton University Press, Princeton
Félix EGS, Pires CP, Gulamhussen MA (2014) The exit decision in the European venture capital market. Quant Financ 14:1115–1130. https://doi.org/10.1080/14697688.2012.714903
Ferreira RM, Pereira PJ (2021) A dynamic model for venture capitalists' entry–exit investment decisions. Eur J Oper Res 290:779–789. https://doi.org/10.1016/j.ejor.2020.08.014
Gao X, Ritter JR, Zhu Z (2013) Where have all the IPOs gone? J Financ Quant Anal 48:1663–1692. https://doi.org/10.1017/S0022109014000015
Gompers P, Lerner J (1999) An analysis of compensation in the U.S. venture capital partnership. J Financ Econ 51:3–44. https://doi.org/10.1016/S0304-405X(98)00042-7
Gompers PA (1996) Grandstanding in the venture capital industry. J Financ Econ 42:133–156. https://doi.org/10.1016/0304-405X(96)00874-4
Grenadier SR, Malenko A (2011) Real options signaling games with applications to corporate finance. Rev Financ Stud 24:3993–4036. https://doi.org/10.1093/rfs/hhr071
Grenadier SR, Wang N (2007) Investment under uncertainty and time-inconsistent preferences. J Financ Econ 84:2–39. https://doi.org/10.1016/j.jfineco.2006.01.002
Guo D, Jiang K (2013) Venture capital investment and the performance of entrepreneurial firms: Evidence from China. J Corp Financ 22:375–395. https://doi.org/10.1016/j.jcorpfin.2013.07.001
Guo Q-W, Chen S, Schonfeld P et al (2018) How time-inconsistent preferences affect investment timing for rail transit. Transp Res Pt B-Methodol 118:172–192. https://doi.org/10.1016/j.trb.2018.10.009
Harris C, Laibson D (2013) Instantaneous gratification. Q J Econ 128:205–248. https://doi.org/10.1093/qje/qjs051
Kandel E, Leshchinskii D, Yuklea H (2011) VC funds: aging brings myopia. J Financ Quant Anal 46:431–457. https://doi.org/10.1017/s0022109010000840
Kaplan SN, Strömberg P (2003) Financial contracting theory meets the real world: an empirical analysis of venture capital contracts. Rev Econ Stud 70:281–315. https://doi.org/10.1111/1467-937X.00245
Klingler-Vidra R (2016) When venture capital is patient capital: seed funding as a source of patient capital for high-growth companies. Socio-Econ Rev 14:691–708. https://doi.org/10.1093/ser/mww022
KPMG (2019) Venture Pulse Q4 2018. Available at: https://assets.kpmg/content/dam/kpmg/xx/pdf/2019/01/kpmg-venture-pulse-q4-2018.pdf
Laibson D (1997) Golden eggs and hyperbolic discounting. Q J Econ 112:443–478. https://doi.org/10.1162/003355397555253
Lee PM, Wahal S (2004) Grandstanding, certification and the underpricing of venture capital backed IPOs. J Financ Econ 73:375–407. https://doi.org/10.1016/j.jfineco.2003.09.003
Li H, Mu C, Yang J (2016) Optimal contract theory with time-inconsistent preferences. Econ Model 52:519–530. https://doi.org/10.1016/j.econmod.2015.09.032
Li X, Wu X, Zhou W (2017) Optimal stopping investment in a logarithmic utility-based portfolio selection problem. Financ Innov 3:28. https://doi.org/10.1186/s40854-017-0080-y
Liu L, Niu Y, Wang Y et al (2020) Optimal consumption with time-inconsistent preferences. Econ Theory 70:785–815. https://doi.org/10.1007/s00199-019-01228-1
Loewenstein G, Prelec D (1992) Anomalies in intertemporal choice: evidence and an interpretation. Q J Econ 107:573–597. https://doi.org/10.2307/2118482
Luo P, Tian Y, Yang Z (2020) Real option duopolies with quasi-hyperbolic discounting. J Econ Dyn Control 111:103829. https://doi.org/10.1016/j.jedc.2019.103829
O'Donoghue T, Rabin M (1999) Doing it now or later. Am Econ Rev 89:103–124. https://doi.org/10.1257/aer.89.1.103
Sethuram S, Taussig M, Gaur A (2021) A multiple agency view of venture capital investment duration: the roles of institutions, foreignness, and alliances. Glob Strateg J. https://doi.org/10.1002/gsj.1402
Strotz RH (1955) Myopia and inconsistency in dynamic utility maximization. Rev Econ Stud 23:165–180. https://doi.org/10.2307/2295722
Tavares-Gärtner M, Pereira PJ, Brandão E (2018) Heterogeneous beliefs and optimal ownership in entrepreneurial financing decisions. Quant Financ 18:1947–1958. https://doi.org/10.1080/14697688.2018.1432882
Thaler R (1981) Some empirical evidence on dynamic inconsistency. Econ Lett 8:201–207. https://doi.org/10.1016/0165-1765(81)90067-7
Thijssen JJJ (2008) Optimal and strategic timing of mergers and acquisitions motivated by synergies and risk diversification. J Econ Dyn Control 32:1701–1720. https://doi.org/10.1016/j.jedc.2007.06.016
Tian Y (2016) Optimal capital structure and investment decisions under time-inconsistent preferences. J Econ Dyn Control 65:83–104. https://doi.org/10.1016/j.jedc.2016.02.001
Wang Y, Huang W, Liu B et al (2020) Optimal effort in the principal-agent problem with time-inconsistent preferences. N Am Econ Financ 52:100909. https://doi.org/10.1016/j.najef.2019.01.006
Yoon H (2020) Impatience and time inconsistency in discounting models. Manag Sci 66:5850–5860. https://doi.org/10.1287/mnsc.2019.3496
The authors thank the editor and the reviewers for invaluable comments and suggestions, which have improved the quality of this paper immensely.
This paper was supported by the Major Program of the National Social Science Foundation of China under Grant No. 17ZDA083, the National Natural Science Foundation of China under Grant No. 71932002, and the Natural Science Foundation of Beijing Municipality under Grant No. 9192001.
School of Management, Xi'an Jiaotong University, No. 28, Xianning West Road, Beilin District, Xi'an, 710049, China
Yanzhao Li, Ju-e Guo & Shaolong Sun
College of Economics and Management, Beijing University of Technology, No. 100, Ping Le Yuan, Chaoyang District, Beijing, 100124, China
Yongwu Li
Yanzhao Li
Ju-e Guo
Shaolong Sun
YZ.L and J.G conceived of the presented idea. YZ.L participated in the design of the study and drafted the manuscript. J.G supervised the findings. S.S participated in project administration. YW.L participated in the design of the study and writing the manuscript. All the authors provided critical feedback and helped shape the research, analysis, and manuscript. All the authors read and approved the manuscript.
Authors' information
Yanzhao Li received bachelor degree of Engineering and bachelor degree of Economics from Xi'an Jiaotong University, China, in 2017. He is currently pursuing the Ph.D. degree in management science and engineering at School of Management, Xi'an Jiaotong University, China. His research interests include venture capital, behavioral finance, and data analysis. He has published one paper in Tourism Economics.
Ju-e Guo received her Ph.D. degree in School of Management, Xi'an Jiaotong University, China, in 2001. She is currently a professor at School of Management, Xi'an Jiaotong University, China and executive deputy director of Research Center of Chinese Management. She has published more than 30 papers in international journals, such as Energy Economics and International Journal of Production Economics. Her research interests include investment and financing decision, risk management, and energy policy.
Shaolong Sun received his Ph.D. degree in Management Science and Engineering at the Institute of Systems Science, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, China, in 2019. He is currently an associate professor at School of Management, Xi'an Jiaotong University, China. He has published over 20 papers in leading journals including Tourism Management, Energy Economics, and IEEE Transactions on Cybernetics. His research interests include big data mining, machine learning, and economic and financial forecasting.
Yongwu Li received his Ph.D. degree in Probability and Mathematical Statistics from Lanzhou University, Lanzhou, China, in 2014. Currently, he is an associate professor in the College of Economics and Management, Beijing University of Technology. He has published more than 20 papers in international journals, such as IEEE Systems Journal, Insurance: Mathematics and Economics, Journal of Optimization Theory and Applications, and Operations Research Letters. His research interests include financial engineering, mathematic finance, and the applications of stochastic control and optimization methods in operations research.
Correspondence to Yongwu Li.
The proof for the general solution of \(N_{0} \left( x \right)\).
The general solution we assume is given by
$$N_{0} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} \ne \lambda_{pN} , \hfill \\ \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} = \lambda_{pN} . \hfill \\ \end{gathered} \right.$$
Take the first and second derivatives of \(N_{0} \left( x \right)\) and substitute them into the differential equation for \(N_{0} \left( x \right)\) given in the main text; we then match coefficients term by term.
(1) If \(\lambda_{f} \ne \lambda_{pN}\), we note that \(\beta_{3} \ne \beta_{4}\).
For the \(x^{{\beta_{1} }}\) term, we have
$$\delta_{p} F\left( x \right)\left[ {\frac{1}{2}\sigma^{2} \beta_{1} \left( {\beta_{1} - 1} \right) + \alpha \beta_{1} - \rho } \right] = 0.$$
For the \(x^{{\beta_{3} }}\) term, we have
$$\frac{1}{2}\sigma^{2} \beta_{3} \left( {\beta_{3} - 1} \right)x^{{\beta_{3} }} \varepsilon Y + \alpha \beta_{3} x^{{\beta_{3} }} \varepsilon Y - \rho x^{{\beta_{3} }} \varepsilon Y + \lambda_{f} \left[ {x^{{\beta_{3} }} Y - x^{{\beta_{3} }} \varepsilon Y} \right] = 0.$$
Simplifying Eq. (49), we obtain
$$\varepsilon = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pN} }}.$$
For the \(x^{{\beta_{4} }}\) term, we have
$$U_{0} x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \left( {\rho + \lambda_{f} } \right)} \right] = 0.$$
(2) If \(\lambda_{f} = \lambda_{pN}\), we note that \(\beta_{3} = \beta_{4}\).
For the \(x^{{\beta_{4} }} \log x\) term, we have
$$\begin{aligned} & R_{1} x^{{\beta_{4} }} \log x\left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \rho - \lambda_{f} } \right] \\ & \quad + x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} R_{1} \left( {2\beta_{4} - 1} \right) + \alpha R_{1} + \lambda_{f} Y} \right] = 0. \\ \end{aligned}$$
Setting the coefficient of the \(x^{{\beta_{4} }}\) term to zero, we have
$$\frac{1}{2}\sigma^{2} R_{1} \left( {2\beta_{4} - 1} \right) + \alpha R_{1} + \lambda_{f} Y = 0.$$
We obtain
$$R_{1} = - \frac{{\lambda_{f} Y}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}.$$
$$R_{0} x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \left( {\rho + \lambda_{f} } \right)} \right] = 0.$$
Solving for the continuation value function \(S_{n + 1}^{c} \left( x \right)\).
Let \(n = N - \left( {j + 1} \right)\), for \(j = 2,3,...,N - 1\). Then \(S_{n + 1}^{c} \left( x \right) = S_{N - j}^{c} \left( x \right)\). We assume the general solution of \(S_{N - j}^{c} \left( x \right)\) is given in Eq. (57).
$$S_{N - j}^{c} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 1} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 2} {W_{N - j,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
Further, it can be concluded that
$$S_{N - j + 1}^{c} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 3} {W_{{N - \left( {j - 1} \right),i}} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
We substitute the general solutions of \(S_{N - j}^{c} \left( x \right)\) and \(S_{N - j + 1}^{c} \left( x \right)\), and the first and second derivatives of \(S_{N - j}^{c} \left( x \right)\) into the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{N - j}^{c\prime \prime } \left( x \right) + \alpha xS_{N - j}^{c\prime } \left( x \right) - \rho S_{N - j}^{c} \left( x \right) + \lambda_{f} \left[ {S_{N - j + 1}^{c} \left( x \right) - S_{N - j}^{c} \left( x \right)} \right] = 0.$$
For the \(x^{{\beta_{1} }}\) term, Eq. (59) always holds.
For the \(x^{{\beta_{5} }}\) term, we have
$$\frac{1}{2}\sigma^{2} \beta_{5} \left( {\beta_{5} - 1} \right)x^{{\beta_{5} }} \varphi^{j - 1} Z + \alpha \beta_{5} x^{{\beta_{5} }} \varphi^{j - 1} Z - \rho x^{{\beta_{5} }} \varphi^{j - 1} Z + \lambda_{f} \left[ {x^{{\beta_{5} }} \varphi^{j - 2} Z - x^{{\beta_{5} }} \varphi^{j - 1} Z} \right] = 0.$$
Simplifying, we obtain
$$\varphi = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pS} }}.$$
For the \(x^{{\beta_{4} }}\) term, we set the coefficients for each \(x^{{\beta_{4} }} \left( {\log x} \right)^{k}\) term to 0 and then we obtain
$$\begin{aligned} & \frac{{\sigma^{2} }}{2}\left[ {\left( {2\beta_{4} - 1} \right)\left( {k + 1} \right)W_{N - j,k + 1} + \left( {k + 2} \right)\left( {k + 1} \right)W_{N - j,k + 2} } \right] \\ & \quad + \alpha \left( {k + 1} \right)W_{N - j,k + 1} + \lambda_{f} W_{N - j + 1,k} = 0,\quad for\,\quad k = 1,2, \ldots ,j - 1. \\ \end{aligned}$$
We define \(\mu = \frac{{ - \beta_{4} }}{{\sigma^{2} \beta_{4}^{2} /2 + \rho + \lambda_{f} }}\). The coefficient \(W_{N - j,k}\) is given by
$$W_{N - j,k} = \frac{{\lambda_{f} }}{k}\left[ {\mu \sum\limits_{n = 0}^{j - k - 2} {\left( {\frac{{\sigma^{2} \mu }}{2}} \right)^{n + 1} W_{N - j + 1,k + n} \prod_{m = 0}^{n} \left( {k + m} \right) + \mu W_{N - j + 1,k - 1} } } \right],$$
for \(k = 1,2,...,j - 2\). And \(W_{N - j,0}\) is solved using the value-matching condition:
$$\begin{aligned} W_{N - j,0} & = x_{S,N - j}^{{ - \beta_{4} }} \left[ {\delta_{f} \left( {\eta x_{S,N - j} - \theta } \right) - \delta_{p} \left( {\frac{{x_{S,N - j} }}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) - \varphi^{j - 1} Zx_{S,N - j}^{{\beta_{5} }} } \right] \\ & \quad - \sum\limits_{i = 1}^{j - 2} {W_{N - j,i} \left( {\log x_{S,N - j} } \right)^{i} } . \\ \end{aligned}$$
Solving for the value function \(S_{n + 1} \left( x \right)\)
Let \(n = N - \left( {j + 1} \right)\), for \(j = 2,3,...,N - 1\). Then \(S_{n + 1} \left( x \right) = S_{N - j} \left( x \right)\). We assume the general solution of \(S_{N - j} \left( x \right)\) is given in Eq. (65).
$$S_{N - j} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 1} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 2} {U_{N - j,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
We substitute the general solutions of \(S_{N - j} \left( x \right)\) and \(S_{N - j + 1} \left( x \right)\), and the first and second derivatives of \(S_{N - j} \left( x \right)\) into the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{N - j}^{\prime \prime } \left( x \right) + \alpha xS_{N - j}^{\prime } \left( x \right) - \rho S_{N - j} \left( x \right) + \lambda_{f} \left[ {S_{N - j + 1}^{c} \left( x \right) - S_{N - j} \left( x \right)} \right] = 0.$$
For the \(x^{{\beta_{1} }}\) and \(x^{{\beta_{5} }}\) terms, we reach the same conclusions as in "Appendix 2".
$$\begin{aligned} & \frac{{\sigma^{2} }}{2}\left[ {\left( {2\beta_{4} - 1} \right)\left( {k + 1} \right)U_{N - j,k + 1} + \left( {k + 2} \right)\left( {k + 1} \right)U_{N - j,k + 2} } \right] \\ & \quad + \alpha \left( {k + 1} \right)U_{N - j,k + 1} + \lambda_{f} W_{N - j + 1,k} = 0,\quad {\text{for}}\,\quad k = 1,2, \ldots ,j - 2. \\ \end{aligned}$$
The coefficient \(U_{N - j,k}\) is given by
$$U_{N - j,k} = W_{N - j,k} ,$$
for \(k = 1,2, \ldots ,j - 1\). And \(U_{N - j,0}\) is solved using the value-matching condition:
$$\begin{aligned} U_{N - j,0} & = x_{S,N - j}^{{ - \beta_{4} }} \left[ {\eta x_{S,N - j} - \theta - \delta_{p} \left( {\frac{{x_{S,N - j} }}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) - \varphi^{j - 1} Zx_{S,N - j}^{{\beta_{5} }} } \right] \\ & \quad - \sum\limits_{i = 1}^{j - 2} {U_{N - j,i} \left( {\log x_{S,N - j} } \right)^{i} } . \\ \end{aligned}$$
Li, Y., Guo, Je., Sun, S. et al. How time-inconsistent preferences influence venture capital exit decisions? A new perspective for grandstanding. Financ Innov 8, 1 (2022). https://doi.org/10.1186/s40854-021-00305-6
Time-inconsistent preferences
Exit decisions
Publications of Domokos, M.
Cziszter K, Domokos M. The Noether number for the groups with a cyclic subgroup of index two. Journal of Algebra. 2014;399:546-60.
The exact degree bound for the generators of rings of polynomial invariants is determined for the finite, non-cyclic groups having a cyclic subgroup of index two. It is proved that the Noether number of these groups equals one half the order of the group plus 1 or 2.
Cziszter K, Domokos M. On the generalized Davenport constant and the Noether number.. 2012.
Domokos M. Covariants and the no-name lemma. Journal of Lie Theory. 2008;18(4):839-49.
Domokos M. Typical separating invariants. Transformation Groups. 2007;12(1):49-63.
Domokos M. A Quantum Homogeneous Space of Nilpotent Matrices. Letters in Mathematical Physics. 2005;72(1):39-50.
Domokos M, Frenkel PE. Mod 2 indecomposable orthogonal invariants. Advances in Mathematics. 2005;192(1):209-17.
Domokos M, Lenagan TH. Quantized trace rings. Quarterly Journal of Mathematics. 2005;56(4):507-23.
Domokos M, Frenkel PE. On orthogonal invariants in characteristic 2. Journal of Algebra. 2004;274(2):662.
Domokos M, Lenagan TH. Representation rings of quantum groups. Journal of Algebra. 2004;282(1):103-28.
Domokos M. Matrix invariants and the failure of Weyl's theorem. In: Giambruno A, Regev A, Zaicev M, editors. Polynomial Identities and Combinatorial Methods. Vol 235. New York: Dekker; 2003. p. 215-36. (Lecture Notes in Pure and Applied Mathematics; vol 235).
Conference publication.
Domokos M. On the dimension of faithful modules over finite dimensional basic algebras. Linear Algebra and Its Applications. 2003;365:155-7.
Domokos M, Fioresi R, Lenagan TH. Orbits for the adjoint coaction on quantum matrices. Journal of Geometry & Physics. 2003;47(4):447.
Domokos M, Lenagan TH. Conjugation coinvariants of quantum matrices. Bulletin of the London Mathematical Society. 2003;35(1):117-27.
Domokos M, Lenagan TH. Weakly multiplicative coactions of quantized function algebras. Journal of Pure & Applied Algebra. 2003;183(1-3):45.
Domokos M. Relative invariants for representations of finite dimensional algebras. Manuscripta Mathematica. 2002;108(1):123-33.
Domokos M. Finite generating system of matrix invariants. Mathematica Pannonica. 2002;13(2):175-81.
Domokos M, Kuzmin SG, Zubkov AN. Rings of matrix invariants in positive characteristic. Journal of Pure & Applied Algebra. 2002;176(1):61.
Domokos M, Lenzing H. Moduli Spaces for Representations of Concealed-Canonical Algebras. Journal of Algebra. 2002;251(1):371.
Domokos M, Zubkov AN. Semisimple Representations of Quivers in Characteristic p. Algebras and Representation Theory. 2002;5(3):305-17.
We prove that the results of Le Bruyn and Procesi on the varieties parameterizing semisimple representations of quivers hold over an algebraically closed base field of arbitrary characteristic.
Domokos M. Invariant theory of algebra representations. In: Roggenkamp KW, Stefanescu M, editors. Algebra : representation theory. Vol 28. Dordrecht: Kluwer; 2001. p. 47-61. (NATO Science Series II. Mathematics, Physics and Chemistry; vol 28).
Proceedings of the NATO Advanced Study Institute on Algebra – Representation Theory, Constanta, Romania, 2 – 12 August 2000
Domokos M, Drensky V. Gröbner Bases for the Rings of Special Orthogonal and 2×2 Matrix Invariants. Journal of Algebra. 2001;243(2):706-16.
We present a Gröbner basis for the ideal of relations among the standard generators of the algebra of invariants of the special orthogonal group acting on k-tuples of vectors. The cases of SO3 and SO4 are interpreted in terms of the algebras of invariants and semi-invariants of k-tuples of 2×2 matrices. In particular, we present in an explicit form a Gröbner basis for the 2×2 matrix invariants. Finally we use a Sagbi basis to show that the algebra of SO2 invariants is a Koszul algebra.
Domokos M, Zubkov AN. Semi-invariants of quivers as determinants. Transformation Groups. 2001;6(1):9-24.
Semi-invariants of quivers as determinants
A representation of a quiver is given by a collection of matrices. Semi-invariants of quivers can be constructed by taking admissible partial polarizations of the determinant of matrices containing sums of matrix components of the representation and the identity matrix as blocks. We prove that these determinantal semi-invariants span the space of all semi-invariants for any quiver and any infinite base field. In the course of the proof we show that one can reduce the study of generating semi-invariants to the case when the quiver has no oriented paths of length greater than one.
Domokos M. Poincaré series of semi-invariants of 2x2 matrices. Linear Algebra and its Applications. 2000;310(1-3):183-94.
Domokos M. Relative invariants of 3x3 matrix triples. Linear and Multilinear Algebra. 2000;47(2):175-90.
Relative invariants of 3x3 matrix triples
We present primary and secondary generators for the algebra of polynomial invariants of the direct product of two copies of the special linear group SL3 acting naturally on triples of 3 x 3 matrices over a field of characteristic zero. We handle also the analogous problem for triples and quadruples of 2 x 2 matrices.
Domokos M, Hegedűs P. Noether's bound for polynomial invariants of finite groups. Archiv der Mathematik. 2000;74(3):161-7.
Domokos M, Lenzing H. Invariant theory of canonical algebras. Journal of Algebra. 2000;228(2):738-62.
Invariant theory of canonical algebras
Summary: "Based on the first fundamental theorem of classical invariant theory we present a reduction technique for computing relative invariants for quivers with relations. This is applied to the invariant theory of canonical algebras and yields an explicit construction of the moduli spaces (together with the quotient morphisms from the corresponding representation spaces) for families of modules with a fixed dimension vector belonging to the central sincere separating subcategory. By means of a tilting process we extend these results to the invariant theory of concealed-canonical algebras, thus covering the cases of tame hereditary, tame concealed, and tubular algebras, respectively. Our approach yields, in particular, a uniform treatment of an essential part of the invariant theory of extended Dynkin quivers, a topic popular over the years, but stretches far beyond since concealed-canonical algebras of tubular or wild representation type are also covered."
Domokos M. Gröbner bases of certain determinantal ideals. Beiträge zur Algebra und Geometrie. 1999;40(2):479-93.
Domokos M. Polynomial ideals and identities of matrices. In: Drensky VS, Giambruno A, Sehgal SK, editors. Methods in ring theory : proceedings of the Trento conference. Vol 198. New York: Dekker; 1998. p. 83-95. (Lecture notes in pure and applied mathematics; vol 198).
Domokos M. Invariants of quivers and wreath products. Communications in Algebra. 1998;26(9):2807-19.
Domokos M. Cayley-Hamilton theorem for 2x2 matrices over the Grassmann algebra Ring theory (Miskolc, 1996). Journal of Pure and Applied Algebra. 1998;133(1-2):69-81.
Cayley-Hamilton theorem for 2x2 matrices over the Grassmann algebra Ring theory (Miskolc, 1996)
It is shown that the characteristic polynomial of matrices over a Lie nilpotent ring introduced recently by Szigeti is invariant with respect to the conjugation action of the general linear group. Explicit generators of the corresponding algebra of invariants in the case of 2 × 2 matrices over an algebra over a field of characteristic zero satisfying the identity [[x, y], z] = 0 are described. In this case the coefficients of the characteristic polynomial are expressed by traces of powers of the matrix, yielding a compact form of the Cayley-Hamilton equation of 2 × 2 matrices over the Grassmann algebra.
Domokos M, Drensky V. A Hilbert-Nagata theorem in noncommutative invariant theory. Transactions of the American Mathematical Society. 1998;350(7):2797-811.
A Hilbert-Nagata theorem in noncommutative invariant theory
Nagata gave a fundamental sufficient condition on group actions on finitely generated commutative algebras for finite generation of the subalgebra of invariants. In this paper we consider groups acting on noncommutative algebras over a field of characteristic zero. We characterize all the T-ideals of the free associative algebra such that the algebra of invariants in the corresponding relatively free algebra is finitely generated for any group action from the class of Nagata. In particular, in the case of unitary algebras this condition is equivalent to the nilpotency of the algebra in Lie sense. As a consequence we extend the Hilbert-Nagata theorem on finite generation of the algebra of invariants to any finitely generated associative algebra which is Lie nilpotent. We also prove that the Hilbert series of the algebra of invariants of a group acting on a relatively free algebra with a non-matrix polynomial identity is rational, if the action satisfies the condition of Nagata.
Domokos M, Popov A. On the degree of nilpotency of the radical of relatively free algebras. Mathematica Pannonica. 1997;8(1):11-6.
Domokos M. Criteria for vanishing of Eulerian polynomials on nxn matrices. Linear Algebra and its Applications. 1996;234:181-95.
Criteria for vanishing of Eulerian polynomials on nxn matrices
Eulerian polynomial identities on n × n matrices were introduced by Szigeti, Tuza and Révész. Here we prove that some Eulerian polynomials are contained in the T-ideal generated by polynomials corresponding to graphs with less vertices. As a by-product we obtain a generalization of a graph-theoretic result of Swan. Then we give a complete description of Eulerian graphs with two vertices such that the corresponding identity is satisfied by the n × n matrix ring over a unitary commutative ring of characteristic 0.
Domokos M. Relatively free invariant algebras of finite reflection groups. Transactions of the American Mathematical Society. 1996;348(6):2217-34.
Relatively free invariant algebras of finite reflection groups
Let G be a finite subgroup of $GL_n(K)$ (K is a field of characteristic 0 and n ≥ 2) acting by linear substitution on a relatively free algebra $K\langle x_1, \ldots, x_n\rangle/I$ of a variety of unitary associative algebras. The algebra of invariants is relatively free if and only if G is a pseudo-reflection group and I contains the polynomial $[[x_2, x_1], x_1]$.
Domokos M. Correction to "On algebras satisfying symmetric identities". Archiv der Mathematik. 1995;64(6):552.
Domokos M. A Generalization of a theorem of Chang. Communications in Algebra. 1995;23(12):4333-42.
A Generalization of a theorem of Chang
Szigeti, Tuza and Révész have developed a method in [6] to obtain polynomial identities for the n×n matrix ring over a commutative ring starting from directed Eulerian graphs. These polynomials are called Eulerian. In the first part of this paper we show some polynomials that are in the T-ideal generated by a certain set of Eulerian polynomials, hence we get some identities of the n×n matrices. This result is a generalization of a theorem of Chang [1]. After that, using this theorem, we show that any Eulerian identity arising from a graph which has d-fold multiple edges follows from the standard identity of degree d.
Domokos M. New identities for 3 × 3 Matrices. Linear and Multilinear Algebra. 1995;38(3):207-13.
New identities for 3 × 3 Matrices
Eulerian polynomial identities on n×n matrices were introduced by Szigeti, Tuza and Révész in [12]. In this paper we exhibit two Eulerian identities on 3×3 matrices which are not consequences of the earlier known identities.
Domokos M. On algebras satisfying symmetric identities. Archiv der Mathematik. 1994;63(5):407-13.
Domokos M. Goldie's theorems for involution rings. Communications in Algebra. 1994;22(2):371-80.
Goldie's theorems for involution rings
In this paper we shall formulate necessary and sufficient conditions for a semiprime ring to be both left and right Goldie in terms of the symmetric notion of biideal instead of one-sided ideals. We use this result to characterize semiprime Goldie rings with involution by conditions consistent with the notion of involution rings, that is, in terms of ascending chain condition for annihilator *-biideals and of maximum condition for *-biideal direct sums.
Domokos M. Eulerian Polynomial Identities and Algebras Satisfying a Standard Identity. Journal of Algebra. 1994;169(3):913-28.
Eulerian Polynomial Identities and Algebras Satisfying a Standard Identity
Using the trivial observation that one can get polynomial identities on R from the ones of M_k(R), we derive from the Amitsur-Levitzki theorem a subset of the identities on n × n matrices, obtained recently by Szigeti, Tuza, and Révész starting from directed Eulerian graphs, which generate the same T-ideal of the free algebra. After that we show that by this method we get a generating set of the T-ideal of identities on the 2 × 2 matrix ring over a field of characteristic 0. We reformulate a problem on algebras satisfying a standard identity in terms of Eulerian identities and use this equivalence in both directions. We apply a result of Braun to Eulerian identities on 3 × 3 matrices, and we give a simpler example which answers the question investigated by Braun. Finally we give an upper estimation on the minimal degree of the standard identity which is satisfied by the matrix algebra over an algebra satisfying some standard identity.
negation of a statement example
Let's look at some examples of negation, first as it is used in logic and mathematics, and then as it appears in everyday English.

A statement (a closed sentence) is a sentence that is either true (T) or false (F), but not both. To help remember this definition, think of a computer, which is either on or off, but not both. A closed sentence may have different truth values at different times ("Today is Monday" is true on Mondays and false otherwise). An open sentence, by contrast, contains a variable and becomes true or false depending on the value that replaces the variable; for example, an open sentence in the variable x might be true when x is replaced by 4 but false when x is replaced by any other number. Mathematical writing also contains many implicitly quantified statements: in algebra, the predicate "If x > 2 then x² > 4" is interpreted to mean the same as "for all real numbers x, if x > 2 then x² > 4."

The negation of a statement p is "not p", symbolized by "~p". A statement and its negation have opposite truth values: if p is the true statement "The number 9 is odd", then ~p must be false. The negation of ~p is ~(~p), which is simply p. A truth table helps us find all possible truth values of a statement and its negation; under negation, what was true becomes false, and what was false becomes true.

Negating a quantified statement reverses the quantifier. The negation of "All goats are mammals" is "Some goats aren't mammals"; in general, "some are not" is the negation of "all are", and conversely the negation of "some are not" is "all are". Likewise, the negation of "There exists an x such that f(x) > 0" is "For all x, f(x) ≤ 0". The statement "Every prime is odd" is strong because it is telling us about all primes, so we expect its negation to be weak, and indeed it is: it merely tells us that at least one prime is not odd.

For compound statements, negation works as follows. The negation of "I will play golf and I will mow the lawn" is "I will not play golf or I will not mow the lawn". The negation of "I am going to the movies on Saturday or Sunday" is "I am not going on Saturday and I am not going on Sunday". The negation of an implication "if A then B" is "A and not B": the negation of "If I run fast, then I get tired" is "I run fast and I do not get tired". To see why, suppose a friend claims "If I behold a rainbow in the sky, then my heart leaps up"; the only case in which you would call your friend a liar is when the first part is true and the second part is false.

In grammar, negation states that a fact is not true, using negative words, phrases, or clauses: "Rick is not here." "Peter has no books." "Maria is not a professional singer." "The coffee shop is not yet open for another batch of service crew." The verb be and auxiliaries combine with not, often as contractions (is not / isn't, are not / aren't). Negation can also be expressed with prefixes such as un-, ir-, non-, pre-, anti-, il-, and im-, as in "John is not uncontrollable by his family member, though he is a special child." In modern English, double negatives are grammatically wrong and should be avoided: "He cannot go anywhere without informing me", not "He cannot go nowhere without informing me". Some languages negate by conjugating the verb; Japanese, for example, adds the suffix -nai, as in taberu ("eat") and tabenai ("do not eat").
Recent questions tagged differential
What is a Green's theorem?
What is a partial differential equation?
What is a differential?
What is a differential equation?
Find the general solution of each of the following equations:
What is the inverse function of \(g(x)=\left(\frac{1}{2}\right)^x\)
Define an inverse function
Find the particular solution of the initial value problem $y' + y = 2e^x, y(0) = 0$.
Solve the initial value problem $y' + 2y = 3x^2, y(0) = 1$.
Find the general solution of the differential equation $y'' + 4y' + 4y = 0$.
Solve the initial value problem $y' = y^2, y(0) = 1$.
Solve for \(x\) in the equation \(2^x = 32\)
Solve the log equation $\log_{1/2} 1/8 = x$
Solve $\log_e 2 = x$
Solve $\log_{10} 100 = x$
Solve the equation $\log_2 8 = x$
The domain of the function \(f(x)=\sin ^{-1}\left(\frac{|x|+5}{x^2+1}\right)\) is \((-\infty,-a] \cup[a, \infty)\), Then a is equal to:
If \(|x|<1, |y|<1\) and \(x \neq y\), then the sum to infinity of the series \((x+y)+(x^2+xy+y^2)+(x^3+x^2y+xy^2+y^3)+\ldots\) is:
Let \(y=y(x)\) be the solution of the differential equation, \(\frac{2+\sin x}{y+1} \cdot \frac{d y}{d x}=-\cos x, y>0, y(0)=1\). If \(y(\pi)=a\) and \(\frac{d y}{d x}\) at \(x=\pi\) is \(b\), then the ordered pair \((a, b)\) is equal to :
If the tangent to the curve \(y=x+\sin y\) at a point \((a, b)\) is parallel to the line joining \(\left(0, \frac{3}{2}\right)\) and \(\left(\frac{1}{2}, 2\right)\), then
A line parallel to the straight line \(2 x-y=0\) is tangent to the hyperbola \(\frac{x^2}{4}-\frac{y^2}{2}=1\) at the point \(\left(x_1, y_1\right)\). Then \(x_1^2+5 y_1^2\) is equal to:
Consider the longer structure \(A \rightarrow B \leftarrow C \leftarrow D \rightarrow E\). What do you expect about marginal/conditional (in)dependence of A and E? Explain.
Consider the longer structure \(A \leftarrow B \leftarrow C \rightarrow D\). What do you expect about marginal/conditional (in)dependence of A and D? Explain.
Solve the following equation by inspection: \(7 x=91\)
Complete: The values of \(x\) in the equation \((x+1)(2 x-1)=0\) are..
What kind of number is \(-0, \dot{2}\) ?
Given the linear equation \(2x-8=0\). Solve it.
What are linear equations?
What does the equation \(\frac{d^2 y}{d t^2}=-k y\), with \(k>0\) represent?
Solve \(x^2 y^{\prime \prime}+x y^{\prime}+\left(x^2-\frac{1}{4}\right) y=0\) (a special case of Bessel's equation) for \(x>0\).
Prove \(|u-v| \geq|| u|-| v||\).
Find a solution to the equation \(x^2-2 x-2007 y^2=0\) in positive integers.
Find two complex numbers in the form \(a+\mathrm{i} b\), where \(a, b \in \mathbb{R}, a \neq 0, b \neq 0\), for which \(f(z)=0\).
\[ z=\frac{a+3 \mathrm{i}}{2+a \mathrm{i}}, \quad a \in \mathbb{R} . \] Given that \(a=4\), find \(|z|\).
Given that \(3+\mathrm{i}\) is a root of the equation \(\mathrm{f}(x)=0\), where \[ \mathrm{f}(x)=2 x^3+a x^2+b x-10, \quad a, b \in \mathbb{R} \]
Given that \(1+3 \mathrm{i}\) is a root of the equation \(z^3+6 z+20=0\)
Given that 2 and \(5+2 \mathrm{i}\) are roots of the equation \[ x^3-12 x^2+c x+d=0, \quad c, d \in \mathbb{R} \]
\(\mathrm{T} / \mathrm{F}\) : The Quotient Rule states that \(\frac{d}{d x}\left(\frac{x^2}{\sin x}\right)=\frac{\cos x}{2 x}\).
T/F: The Product Rule states that \(\frac{d}{d x}\left(x^2 \sin x\right)=2 x \cos x\).
Give an example of a function where \(f^{\prime}(x) \neq 0\) and \(f^{\prime \prime}(x)=\) \(0 .\)
Give an example of a function \(f(x)\) where \(f^{\prime}(x)=0\).
Give an example of a function \(f(x)\) where \(f^{\prime}(x)=f(x)\)
What is the name of the rule which states that \(\frac{d}{d x}\left(x^n\right)=\) \(n x^{n-1}\), where \(n>0\) is an integer?
Let \(f(x)=\sin x+2 x+1\). Approximate \(f(3)\) using an appropriate tangent line.
When \(x\) is near \(0, \frac{\sin x}{x}\) is near what value?
What is an enzyme?
What is the key difference between a stochastic differential equation and a partial differential equation?
What is a partial differential equation, PDE? Give a simple example.
What is a stochastic differential equation, SDE ? Give a simple example.
Determine the general solution of, \[ \cos 2 x+3 \cos x-1=0 \]
On-the-fly extraction of hierarchical object graphs
Hugo de Brito1,
Humberto Torres Marques-Neto1,
Ricardo Terra2,
Henrique Rocha2 &
Marco Tulio Valente2
Reverse engineering techniques are usually applied to extract concrete architecture models. However, these techniques typically extract models that reveal only static architectures, such as class diagrams. On the other hand, the extraction of dynamic architecture models is particularly useful for an initial understanding of how a system works or to evaluate the impact of possible maintenance tasks. This paper describes an approach to extract hierarchical object graphs (OGs) from running systems. The proposed graphs have the following distinguishing features: (a) they support the summarization of objects into domains, (b) they support the complete spectrum of relations and entities that are common in object-oriented systems, (c) they support multithreaded systems, and (d) they include a language to alert about expected (or unexpected) relations between the extracted objects. We also describe the design and implementation of a tool for visualizing the proposed OGs. Finally, we provide two case studies. The first study shows how our approach can contribute to understanding the running architecture of two systems (myAppointments and JHotDraw). The second study illustrates how OGs can help to locate defective software components in the JHotDraw system.
A common definition (or view) describes software architecture as the main components of a system, including the acceptable and unacceptable relations among them [7, 13, 20]. However, despite their unquestionable importance, architectural models and abstractions are usually not documented, or when they are, the available documentation normally does not reflect the actual architecture followed by the implementation of the target systems [9, 16, 19].
In such scenarios, reverse engineering techniques can be applied to reify information about a target system architecture [10, 27]. Usually, those techniques extract models that reveal the static architecture, including class and package diagrams [12] or dependency structure matrices [22]. As one of their distinguishing advantages, static models can be retrieved directly from the source code (i.e. without requiring the execution of the target system). However, static models only show a partial snapshot of the relations, connections, and dependencies that are actually established during the execution of the modeled system. For example, static diagrams cannot reveal relations due to polymorphism, dynamic method calls, or reflection. Furthermore, they do not include information on the order in which the represented relations are established. In other words, static diagrams do not provide a clear roadmap to developers that need to understand a given system. Finally, static diagrams do not take into account relations and dependencies established by distinct threads, which makes the task of understanding concurrent systems complex.
On the other hand, reverse engineering techniques can also be applied to extract models that reveal dynamic architectures, such as object and sequence diagrams [12]. Dynamic diagrams explicitly represent the control flow of the target system and therefore they provide an order that can be followed when initially reasoning about the system. Moreover, dynamic diagrams can express relations due to polymorphism or reflection [23, 29]. In contrast, dynamic diagrams present major problems regarding their scalability. Because they typically do not make any distinction between lower-level objects (such as instances of java.util.Date) and architectural relevant objects (such as collections of Customer objects), dynamic diagrams may have thousands of nodes even for small-sized systems [1, 2].
The available solutions to increase the scalability of dynamic diagrams are centered on the same principle: to group objects into coarse-grained and hierarchical structures. In the highest level of such structures, only architectural relevant groups of objects are displayed (usually called domains [1], components [18], clusters [5], etc.). It is also possible to expand such higher-level groups to provide more details about their elements. This process can be repeated several times, until reaching a flat object graph (OG), where each node corresponds to a runtime object. Basically, there are two approaches to group objects into coarser-grained structures: automatic approaches (for example, using clustering algorithms [5, 6]) and manual approaches (for example, using annotations [1, 2]). Typically, automatic solutions do not derive groups of objects similar to those expected by the system's architects and maintainers. On the other hand, solutions based on annotations are invasive, requiring the annotation of each architectural relevant class (for example, the classes of the Model layer must be annotated with a @Model annotation).
This paper is a revised and extended version of a previous conference paper presenting an on-the-fly and non-invasive approach to extract hierarchical OGs from running systems [4]. It also describes a non-invasive tool to extract and display the proposed graphs. This tool can be plugged to existing systems and thus it supports the on-the-fly visualization of the proposed graphs (i.e. the graphs are displayed and updated as the host system executes). This property distinguishes the proposed tool from other reverse engineering systems, where it is usually required to first execute the target system to generate a trace that is then displayed off-line. Finally, we report two case studies on using OGs. The first study illustrates how the proposed OGs and supporting tool can help to recover and to reason about the dynamic architecture of two systems: myAppointments (a personal information manager system) and JHotDraw (a well-known framework for creating drawing applications). The second study describes how OGs can help to locate the defective software components responsible for an incorrect behavior in the JHotDraw system, as reported in real corrective maintenance requests retrieved from JHotDraw's bug tracking platform.
The remainder of this paper is organized as follows: Section 2 describes the proposed OGs, including a description on their main elements and examples. Section 3 describes the language used to define visual alerts when expected (or unexpected) dynamic relations are established in the extracted OGs. Section 4 presents the OG tool that extracts and displays OGs. In Sect. 5, we present the first case study (on extracting dynamic architectures). Section 6 presents the second study (on the use of OGs in real JHotDraw's corrective maintenance tasks). Finally, Sect. 7 discusses related work and Sect. 8 concludes the paper.
Object graphs
The graphs proposed in the paper have been designed to support the following requirements: (a) they should be able to express the different types of relations available in object-oriented systems, including relations due to dynamic calls and reflection; (b) they should support the creation of coarse-grained groups of objects to increase readability and scalability; (c) they should provide means to distinguish objects created by different threads; (d) in order to provide support to dynamic architecture conformance, it should be possible to highlight relations that are expected—or that are not expected—when running a system. Finally, it should be possible to extract OGs from running systems in a non-invasive way.
Formal definition An OG is a directed graph that represents the dynamic behavior of the objects in an existing system. In an OG, the nodes denote objects (and classes with static members) and the edges represent possible relations between the represented nodes. In formal terms, an OG is defined as a graph (Nodes, Edges), where Nodes and Edges are the following sets:
$$\begin{aligned} Nodes&= Type \times Name\\ Type&= \{object,class\}\\ Name&= UnsignedInt \times String \times String\\ Edges&= Nodes \times Nodes \end{aligned}$$
where Type is a set with the two possible types of a node (which can represent objects or classes) and Name is a tuple with three fields: the insertion order of the node (a non-negative integer), the name of the class of the node (a string), and the node's color (a string). Finally, Edges are ordered pairs of Nodes.
In the following paragraphs, we provide more details on this definition, including information on how OGs must be displayed.
Nodes As defined by the set Type, there are two types of nodes in an OG. Nodes in the form of a circle denote objects. Nodes in the form of a square represent classes. Circle-shaped nodes have the same life span of the objects they model (in other words, a circle node is inserted in an OG when an object is created in the host program; likewise, it is removed when the represented object is destroyed by the garbage collector). Square-shaped nodes are created to model accesses to static members of a class. Therefore, only classes with static members accessed by objects are represented in an OG.
As defined by the set Name, the name of a node is a tuple with three fields. The first field is a sequential non-negative integer that indicates the order in which the nodes have been inserted in the graph. By convention, the first node receives the number zero (typically, this node denotes the class containing the application's main method). The goal of this number is to guide the developers when "reading" the graph. The second field indicates the class name of the represented object (in the case of circular nodes) or the name of the class whose static member has been accessed (in the case of square nodes). Finally, the third field represents the node's color. In OGs, colors are used to distinguish circular nodes created by different threads. Nodes created by the main thread have a white color and a fresh color is automatically assigned to nodes representing objects created by other threads.
Edges As defined by the set Edges, edges denote relations between objects and classes. Suppose that \(o_1\) and \(o_2\) are circle-shaped nodes (representing objects) and that \(c_1\) and \(c_2\) are square-shaped nodes (representing classes). The directed edge \((o_1,o_2)\) indicates that \(o_1\)—at some point during its life span—has obtained a reference to \(o_2\). This reference could have been acquired by an object's field, by a local variable, or by a method's formal parameter. Similarly, the directed edge \((o_1,c_1)\) indicates that \(o_1\)—at some point during its life span—has called a static method implemented by \(c_1\). On the other hand, an edge \((c_1,o_1)\) indicates that a static member of \(c_1\)—at some point of the program's execution—has obtained a reference to \(o_1\). Finally, the edge \((c_1,c_2)\) indicates that \(c_1\) has accessed a static member of \(c_2\).
Edges are inserted in an OG as soon as the represented relation is established during the execution of the host program. When a node is removed from the graph, its incoming and outcoming edges are also removed. Furthermore, for the sake of readability, edges denoting loops (i.e. edges starting and ending in the same node) are not represented.
Example (Nodes and Edges) Consider the code shown in Listing 1.
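Listing 1 is not reproduced in this version of the text. The sketch below is a plausible reconstruction assembled from the descriptions of Listing 1 and Listing 4 given in this section: the names Main, Invoice, Product, load, and listProducts come from the text, whereas the static field name invoice, the visibility modifiers, and the exact statements are assumptions; line positions only roughly match the line numbers cited in the following paragraph.

import java.util.ArrayList;
import java.util.List;

public class Main {
    static Invoice invoice;                      // static field referencing the Invoice

    public static void main(String[] args) {
        invoice = new Invoice();                 // Main creates an Invoice...
        invoice.load();                          // ...and calls its load method
    }
}

class Invoice {
    private List<Product> listProducts;

    void load() {
        listProducts = new ArrayList<Product>(); // load() creates an ArrayList,
        Product p = new Product();               // creates a Product,
        listProducts.add(p);                     // and adds the Product to the list
    }
}

class Product { }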
In this code, the Main class creates an object of type Invoice and calls the load method (lines 4–5). This method creates and adds a Product to an ArrayList (lines 11–13). Figure 1 presents the OG generated by the execution of the code fragment shown in this listing. This OG has one square-shaped node (representing the class with the main method) and three circle-shaped nodes, representing the Invoice, ArrayList, and Product objects.
Fig. 1 OG for the nodes and edges example
The extracted OG illustrates in a compact way the runtime behavior of the presented program fragment. Following the sequential integers associated with each node, it is possible to conclude that initially the Main class (node 0) has accessed an Invoice object (node 1). Next, this Invoice object has accessed an ArrayList object (node 2). Finally, a Product has been created (node 3). This Product instance has been accessed by the Invoice object (responsible for its creation) and by the ArrayList object (responsible for its storage).
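For concreteness, the graph in Fig. 1 can be read as an instance of the formal definition given above (white being the color used for nodes of the main thread and, for brevity, each node in the Edges set being abbreviated by its insertion order):

$$\begin{aligned} Nodes&= \{(class,(0,Main,white)),\ (object,(1,Invoice,white)),\\ &\qquad (object,(2,ArrayList,white)),\ (object,(3,Product,white))\}\\ Edges&= \{(0,1),\ (1,2),\ (1,3),\ (2,3)\} \end{aligned}$$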
Example (Threads) Consider the code presented in Listing 2. In this code, the Main class creates and activates two Box threads (lines 3–4). Each thread creates a Product object (line 10). Figure 2 presents the OG generated by the execution of this program. In this OG, the Main class (node 0) has references to two Box objects (nodes 1 and 3). Moreover, we can verify that each Box references its own Product object (nodes 2 and 4). More importantly, nodes denoting Product objects have different colors, because they have been created by different threads.

Fig. 2 OG for the threads example (the names of the colors are only illustrative)
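Listing 2 is likewise not reproduced here; the following sketch is a plausible reconstruction based on the description above. The class names Main, Box, and Product come from the text; the field name product and the use of Thread subclassing are assumptions, and line positions only roughly match the cited line numbers.

public class Main {
    public static void main(String[] args) {
        Box b1 = new Box();          // two Box threads are created...
        Box b2 = new Box();
        b1.start();                  // ...and activated
        b2.start();
    }
}

class Box extends Thread {
    private Product product;

    @Override
    public void run() {
        product = new Product();     // each thread creates its own Product
    }
}

class Product { }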
Packages and domains As it is common when extracting runtime diagrams, the number of nodes and edges in an OG can grow rapidly, even for small applications. Therefore, to promote the scalability of OGs, there are two forms of summarization: by packages or by domains. When package summarization is enabled, all the objects and classes from a given package are represented as a single node. In such compacted graphs, suppose two nodes representing packages \(p_1\) and \(p_2\). In this case, an edge \((p_1,p_2)\) indicates that at least one element summarized by \(p_1\) is connected to an element summarized by \(p_2\).
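As a small illustration of package summarization, suppose (hypothetically) that in the program of Listing 1 the class Main lives in a package app, Invoice and Product live in a package model, and ArrayList comes from java.util. The four nodes of Fig. 1 would then collapse into three package nodes, and the edges of the compacted graph would be app → model (Main accesses the Invoice), model → java.util (the Invoice accesses the ArrayList), and java.util → model (the ArrayList references a Product). The Invoice-to-Product edge becomes a loop inside the model node and, since loops are not represented, it disappears.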
The second form of summarization is by domain. Basically, in the particular context of this paper, a domain is a group of nodes explicitly defined by developers using the following syntax:
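(The syntax display itself is missing from this version; judging from the explanation that follows, a domain definition has roughly the shape below, where the use of a colon as separator is an assumption.)

    <name>: <classes>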
where \(\mathtt{<name>}\) is the domain name and \(\mathtt{<classes>}\) is a list of classes separated by commas. For summarization purposes, objects from the specified classes will be represented in the graph by a single node, in the form of a hexagon. Moreover, to facilitate the specification of domains, classes can be defined using regular expressions (e.g. model.*DAO denotes the classes in the model package whose names end with DAO).
Domain-based summarization is more flexible than summarization by packages, because developers can explicitly define the domain names—to resemble, for example, architectural relevant components and abstractions. Moreover, developers have the freedom to define the members of a domain, by mapping classes to their respective domains. By contrast, summarization by packages is more rigid, since it assumes that architectural relevant components can be extracted automatically from the package hierarchy. From our experience with OGs, the usual procedure is to start by using OGs with package summarization, especially when no other form of documentation is available. After an initial understanding of the architecture, maintainers usually get enough knowledge to define their own domains (e.g. domains that summarize packages related to persistence, when the maintenance task does not require changes in persistence concerns).
Example (Domains) Consider a hypothetical system following the model–view–controller (MVC) architecture [17]. To provide a high-level picture for this architecture, the domains presented in Listing 3 have been defined. In this listing, the View domain denotes instances of the myapp.view.IView class and of its subclasses (as prescribed by the operator +) (line 1). The Controller domain includes objects from any class implemented in the myapp.controller package (line 2). The Model domain includes objects whose class names begin with myapp.model and end with the string DAO (line 3). In the specification of domains, the operator ** denotes classes from packages with a given prefix. For example, the Swing and Hibernate domains include, respectively, objects from classes in the javax.swing and org.hibernate packages, as well as objects from classes implemented in inner packages (lines 4 and 5).
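Listing 3 itself is not reproduced in this version. Based on the line-by-line description above, a plausible reconstruction of the five domain definitions is the following (the class patterns are taken from the text, but the concrete declaration syntax, in particular the colon, is an assumption):

    View: myapp.view.IView+
    Controller: myapp.controller.*
    Model: myapp.model.*DAO
    Swing: javax.swing.**
    Hibernate: org.hibernate.**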
Figure 3 presents the OG extracted for the MVC-based system considered in this example. First, we can observe that the nodes associated with domains are displayed as hexagons. However, there is a single node in the form of a circle (node 3, Util), representing an object whose class has not been included in any of the defined domains. In other words, objects or classes that are members of a defined domain are summarized by a hexagonal node; objects or classes that are not captured by any defined domain continue to be represented by circles (in the case of objects) or squares (in the case of classes).
Fig. 3 OG for a system based on the MVC architecture
As can be observed in the OG presented in Fig. 3, the target system's architecture follows the MVC pattern. For example, there is a bidirectional communication link between the View and Controller domains, and between the Controller and Model domains. Furthermore, the OG reveals that the Controller acts as a mediator between the View and the Model, as expected in MVC architectures. We can also observe that only the View relies on services provided by the Swing framework (for GUI concerns) and that only the Model is coupled to the Hibernate framework (for persistence concerns).
Detailed information on edges It is also possible to display detailed information on the object-oriented relations modeled by an OG's edges. Suppose that \(o_1\) and \(o_2\) are nodes in an OG and (\(o_1\),\(o_2\)) is an edge connecting such nodes. An edge's name is a structure in the following format:
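The format display is not reproduced in this version. From the field descriptions below, an edge's name concatenates its members roughly as follows (the exact separators are assumptions):

    Edge_Order. [O1_Order] Location(Suffix) O2_Service(Suffix)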
The members of this structure are
Edge_Order is a sequential non-negative integer that indicates the order in which the edges have been created in the graph. This integer makes possible a sequential reading of the graph's edges.
O1_Order is a sequential non-negative integer enclosed by brackets that indicates the order in which the node \(o_1\) was inserted in the graph.
Location represents the program location where the relation was established.
O2_Service represents the service provided by \(o_2\) that has been accessed to establish the edge.
Suffix provides information about both the Location and Target elements. It can assume one of the following values:
() indicates access to methods.
(MS) indicates access to static methods.
(C) indicates access to constructors.
(A) indicates access to attributes.
(AS) indicates access to static attributes.
\(<\)new\(>\) indicates that an object has been created.
Example (information on edges) Listing 4 shows information on the edges of the OG presented in Fig. 1. In this listing, line 1 indicates that the static method Main.main (suffix MS) has a static field (suffix AS) that references an instance of the class Invoice. Line 3 indicates that at the location Invoice.load() the source object has created an ArrayList. Next, at the same location, this ArrayList object has been assigned to the field listProducts (suffix A, line 4). Finally, the ArrayList.add() method has been called (line 6).
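Listing 4 is not reproduced here. A plausible partial reconstruction, consistent with the edge-name format sketched above and with the four lines described in this paragraph, is shown below; lines 2 and 5 are omitted because their content is not described in the text, and both the exact layout and the field name invoice are assumptions:

    1. [0] Main.main(MS) invoice(AS)
    2. ...
    3. [1] Invoice.load() ArrayList(C) <new>
    4. [1] Invoice.load() listProducts(A)
    5. ...
    6. [1] Invoice.load() ArrayList.add()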
Alert language
To provide support for dynamic architecture conformance using OGs, we have defined a small language to trigger visual alerts when expected (or unexpected) relations are established in an extracted OG. Since our approach is based on dynamic analysis, the proposed language can check relations due to dynamic calls or reflection. For example, consider a system that relies on the data access objects (DAO) pattern for handling data [11]. In this case, to analyze the runtime behavior of the system when it is persisting data, we can define an alert to be triggered whenever an expected DAO service is called (which in some frameworks is implemented using reflection).
Syntax Alerts are defined according to the following grammar:
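A possible rendering of the grammar, consistent with the conventions explained next, is sketched below (the production rules are an approximation and omit the repetition construct mentioned in the explanation; they are not the original listing):

<alert>    ::= alert [!] <domain> <relation> [!] <domain>
<relation> ::= depend | access | create
<domain>   ::= <string> | *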
In this grammar, non-terminal symbols are written between angle brackets (e.g. <domain>). Brackets denote optional symbols (e.g. [!], indicating that ! is optional). Braces indicate that the delimited element may have zero or more repetitions (e.g. {<domain>}). Terminal symbols are written without special delimiters (e.g. alert, access, etc.). The non-terminal <string> denotes a sequence of characters. In the specification of domains, the operator ! means complement. For example, !A denotes a domain containing any object that is not included in A. Finally, the symbol * matches any object, regardless of its domain.
According to this grammar, an alert clause defines a relation between two domains. The alert will be activated when the defined relation is detected at runtime. When specifying alerts, the following relations between domains can be specified:
depend: represents any kind of relation between elements of an object-oriented program.
access: represents two particular types of relations: accesses to fields or method calls. Thus, access is a particular case of a depend relation. For example, an object may hold a reference to an object in another domain (depend), but it may not use its services (access). A typical example is an object received as an argument in a Facade method and just passed on to another method behind the Facade (i.e. the Facade does not call any method or access any field of this object).
create: denotes that an object from the source domain has created an object from the target domain.
To illustrate the proposed syntax, suppose the following alert clauses—where A, B, and C are domains and R is a relation type (i.e. depend, access, create):
alert A R B: This alert will be activated when an element at the domain A has established a relation of type R with an element at the domain B.
alert A R !B: This alert will be activated when an element at the domain A has established a relation of type R with an element not included in the domain B.
alert !A R B: This alert will be activated when an element not included in the domain A has established a relation of type R with an element from the domain B.
Visual interface Alerts are displayed in two ways: (a) changing the edge's color on the relations responsible for the alert; (b) generating a message on a dedicated alert window with detailed information on the alert (e.g. source and target node, type of relation, etc.).
Listing 5 illustrates three examples of alert specification (using the domains defined in Fig. 3). In this code, we first define that an alert must be raised when any object accesses the Hibernate domain (line 1). We also define an alert to capture accesses from the myapp.Util class to other classes (line 2). This alert checks whether utility classes are self-contained (i.e. whether they only provide services to client domains). Finally, we define an alert to check whether DAOImpl objects are created only by their respective Factory class (line 3).
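Using the alert syntax sketched above, the three alerts just described could look roughly as follows (the Util, Factory, and DAOImpl domain names are assumed shorthand for the corresponding classes; this is a sketch, not the original Listing 5):

alert * access Hibernate
alert Util access *
alert !Factory create DAOImpl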
Figure 4 shows an OG with an alert enabled. In this OG, the edge between the Model and the Hibernate domains has the color red, indicating that—at some point during the program's execution—an object located in the Model domain has accessed a service provided by an object in the Hibernate domain, which represents a violation to the first alert in Listing 5. Furthermore, this alert is explained in a separate alert window, with detailed information on the source and target objects responsible for its activation.
OG with an alert enabled due to a dependency from Model to Hibernate
This second example is based on a common scenario when accessing databases in Java. Usually, this task is performed by creating an object from a specific DBMS class (that represents the database driver). Typically, the qualified name of this class is stored in a text file or is directly hard-coded in the program, as illustrated in Listing 6 (line 1). More specifically, this example relies on the Java reflection API to open a connection to the HSQLDB database manager system.
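A plausible Java sketch of the scenario described for Listing 6 is given below; the driver class name, JDBC URL, and credentials are illustrative assumptions, not the original listing:

import java.sql.Connection;
import java.sql.DriverManager;

public class DB {
    public Connection open() throws Exception {
        String driver = "org.hsqldb.jdbcDriver";   // hard-coded driver class name (line 1 in the original listing)
        Class.forName(driver).newInstance();        // driver loaded and instantiated via the Java reflection API
        return DriverManager.getConnection("jdbc:hsqldb:hsql://localhost/mydb", "sa", "");
    }
}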
As usual, changing the DBMS without a prior detailed analysis can raise several problems. For instance, SQL statements that contain HSQLDB-specific instructions will stop working. To avoid this problem, we can define an alert to monitor DBMS changes, as illustrated in Listing 7. This definition raises an alert when the DB class—responsible for the DBMS connection—creates an object that is neither an HSQLDB driver nor part of the Java SQL API.
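Assuming a domain grouping the HSQLDB and Java SQL classes has been defined (the domain name and the ability to list two package patterns in one domain are assumptions), such an alert could be written roughly as:

Persistence: org.hsqldb.**, java.sql.**
alert DB create !Persistence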
OG tool
This section presents the OG tool that extracts and displays OGs. It is a non-invasive tool that can be plugged into an existing Java system to visualize the graphs proposed in this paper. Figure 5 shows the tool's main screen. In order to describe this interface, labels (from A to I) are used to show the interface's main components. The labels in this figure are described next.
Label A: represents the number of nodes in the extracted OG. This information can be used, for example, to start an investigation on an alternative summarization strategy (in the case of graphs with a massive number of nodes).
Label B: represents two visualization features provided by the tool. The first feature allows users to choose the graph's layout and consequently to organize its visualization. The second feature is used for transforming or picking the graph. When users choose Transforming, they can translate, move, or zoom in/out the graph. On the other hand, if they want to organize the nodes by themselves, they can rely on the Picking functionality.
Label C: embraces two command buttons—Capture and Clear. The first command is used to enable the retrieving of OGs—from the current state of the target system execution—and the second command clears the captured OG.
Labels D, E, and F: allow users to show the nodes' names, to reduce the size of the text fonts to improve visualization, and to enable the summarization of nodes according to the package structure, respectively.
Label G: graphical panel where the extracted OG is displayed.
Label H: embraces two command buttons—All Edges and Clear. The first command displays information on the edges in an OG. The second command clears the information on the extracted edges.
Label I: text panel to display information on the OG's edges.
Running the OG tool
In our current implementation, the OG tool instruments the target program using a generic aspect implemented in AspectJ [15, 26]. Therefore, to execute the tool, users must first run the AspectJ weaver to instrument the target code with the aspects provided by the tool's implementation. After this preliminary instrumentation phase, the target system can be executed as usual. During its execution, the target system behaves exactly as prescribed by the original code, with the exception of the OG tool's interface (Fig. 5), which is shown in a separate window.
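Purely as an illustration of this instrumentation idea, a minimal AspectJ aspect that observes the calls issued by the target system could look like the sketch below; all names are hypothetical and this is not the tool's actual generic aspect:

public aspect ObjectGraphProbe {
    // Matches every method call and constructor call issued by the target system,
    // excluding the probe itself so that the instrumentation is not instrumented.
    pointcut monitored(): (call(* *(..)) || call(*.new(..))) && !within(ObjectGraphProbe);

    after(): monitored() {
        // The static part of the join point identifies the program location of the
        // call site and the accessed service, i.e., the Location and O2_Service
        // elements of an OG edge; a real tool would also track the caller and callee objects.
        String location = thisJoinPointStaticPart.getSourceLocation().toString();
        String service  = thisJoinPointStaticPart.getSignature().toLongString();
        System.out.println(location + " -> " + service);
    }
}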
Dynamic architecture extraction examples
This section provides concrete examples of OGs for two systems: myAppointments and JHotDraw. myAppointments is a small personal information manager system that follows the MVC architectural pattern. Basically, myAppointments allows users to create, search, update, and remove appointments. The system was originally designed to illustrate the application of static architecture conformance techniques [19]. The second system, JHotDraw, is a well-known framework for the creation of drawing applications.
myAppointments
Suppose that one of myAppointments' developers needs to apply a change to the modules of the system responsible for removing appointments. Suppose also that the developer does not have deep knowledge of these modules (for example, he has recently started to maintain this part of the system). Therefore, he can use the OG tool to extract an OG that represents only appointment removals. Initially, this OG can be extracted using package summarization (since the developer does not have enough knowledge to define domains for the system). Later, he can zoom into the extracted graph in order to get more information at the level of plain objects.
Figure 6 shows the first OG extracted for the feature appointment's removal. This OG has five nodes representing the following packages: myapp.controller (Controller concerns), myapp.view (View concerns), myapp.model (Model concerns), org.hsqldb.jdbc (Persistence concerns), and myapp.model.domain (domain concerns). As we can observe, the OG shows that the Controller communicates with the View and the Model and that the Model communicates with the Persistence and Domain packages.
myAppointments' OG for the feature appointment's removal, summarized at the package level
The previous OG can also be viewed at the level of plain objects, as presented in Fig. 7. Although we have argued previously in this paper that plain OGs are not scalable, for a single and delimited feature like the one in this example, they can show valuable information for developers. As we can observe, the new graph has seven nodes (instead of five nodes, as in the case of summarization at the package level): AgendaController (representing the application entry point), AgendaView, AgendaDAO, DB, JDBCConnection, DAOCommand, and App. This new graph presents more information on the system's behavior. For example, it reveals that DAO objects are used for database access, and that the communication with the database relies on JDBC drivers.
Plain myAppointments' OG for the feature appointment's removal
Finally, the developer can define domains to better represent the system's objects. For example, suppose the developer defines the following domain for the objects in myapp.model.** packages:
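Following the domain syntax assumed earlier, this definition would read roughly as:

Model: myapp.model.**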
Figure 8 shows the OG with this domain enabled. In this third OG, the nodes associated with Model objects or classes—objects AgendaDAO, DAOCommand, App, and the class DB—have been summarized in a single node, called Model. In this way, the new OG has only four nodes, which makes it easier to understand.
myAppointments' OG, with the Model domain enabled
To conclude, depending on the understanding task under development, the approach can provide graphs with more information than a standard summarization by package. On the other hand, whenever needed, it can also provide higher-level graphs than those retrieved by package summarization.
JHotDraw
First, we have extracted a plain graph for JHotDraw, without any form of summarization. As can be observed in Fig. 9, the extracted OG has thousands of nodes and edges, which precludes its application in reengineering tasks (Footnote 1).
JHotDraw's OG without summarization
Next, to get an initial view of JHotDraw dynamic architecture, we extracted a second OG using domains for a better summarization. The domain definition was based on the class division proposed by Abi-Antoun and Aldrich [2] for JHotDraw. According to this definition, presentation objects (such as DrawingEditor and DrawingView instances) are located in two domains with the View prefix. Objects responsible for the presentation logic (such as Tool, Command, and Undoable instances) are located in the Controller domain. Finally, model objects (such as Drawing, Figure, and Handle instances) are located in a domain called Model.
Therefore, as presented in Fig. 10, we defined five domains: one for utility classes, two related to the View, one to the Controller, and one to the Model layer. To improve readability, we used a feature of the OG tool that draws bidirectional edges between nodes that communicate in both directions. Unlike Fig. 9, Fig. 10 can be used by architects and developers to reason about JHotDraw's implementation. For example, this OG reveals the three layers that define the MVC pattern followed by JHotDraw's architecture.
JHotDraw's OG using domain-based summarization (edited to include the layer's names and dashed lines separating the layers)
This example illustrates the importance of first relying on a coarse-grained view of the target system (probably based on domains), which helps to build a first understanding of the system's main components and relations. After retrieving this first view, architects and maintainers can, for example, zoom into particular components to study their internal elements and relations.
Case study: corrective maintenance tasks
Using JHotDraw as our target system, we designed a study to illustrate how OGs support corrective maintenance tasks. Since our tool provides on-the-fly visualization of OGs, it can be used to retrieve OGs that describe the run-time configuration of the objects in the target system just before or after a given failure has been observed. We claim that such OGs provide valuable information to locate the static components (i.e. classes and methods) that generated the reported failure.
To support our claim, this section illustrates the use of OGs to correct the following bugs reported by JHotDraw's users:
Bug 1850703 (Opened 2007-12-14): "Redoing Figure delete change order"
Bug 1989778 (Opened 2008-06-10): "Pick & Apply Attributes"
We selected these bugs based on the following criteria: (a) they have been reported in the last five years (i.e. we filtered out requests older than five years); (b) they denote corrective maintenance tasks (i.e. we filtered out evolutive maintenance tasks); (c) they imply an incorrect behavior of the system (i.e. we filtered out maintenance that just requires changing the name or color of a UI label, for example); and (d) they do not abort JHotDraw's execution with an unhandled exception (in fact, in such cases the stack trace provides valuable information to locate the failure).
In the following subsections, we describe how we have used the proposed OGs to locate the components responsible for these two bugs.
Bug 1850703: "Redoing Figure delete change order"
To locate the source of this bug, we performed the following tasks:
Task #1 Based on the bug's description we reproduced its occurrence in a concrete drawing, as illustrated in Fig. 11. Figure 11a shows the original drawing. In Fig. 11b, we have deleted the rectangle and prepared to execute an undo command. Figure 11c shows that after the undo the rectangle has appeared on top of the circle (and not below the circle as in the original drawing).
Example of Bug 1850703
Task #2 Figure 12 shows the OG extracted by the OG tool just before selecting the undo command (i.e. in the state captured in Fig. 11b).
OG for Bug 1850703
Task #3 We sequentially inspected the OG's edges to locate methods possibly related to the "depth" of a figure in a drawing. As illustrated in Fig. 12, a call to the method AnimationDecorator.getZValue() (node 6) coming from a BouncingDraw object (node 5) caught our attention (since the suffix ZValue suggests the depth of a figure in the current drawing). In fact, by retrieving JHotDraw's code where this bug has been fixed, it was possible to confirm that the changes were confined to the method BouncingDraw.add(), which was calling getZValue() in an incorrect way.
Bug 1989778: "Pick & Apply Attributes"
Task #1 Following the description at SourceForge, we were able to reproduce the bug in the following way: (a) we created a diagram with one circle and one rectangle, with different filling colors; (b) we marked the circle and selected the "PickAttribute" button; (c) we marked the rectangle and selected the "ApplyAttribute" button. Contrary to the normal behavior, the rectangle's color did not change (in fact, the change was only applied after we unmarked the box).
Task #2 Figure 13 shows the OG extracted by our supporting tool just after selecting the "ApplyAttribute" command.
Task #3 In the extracted OG, the existence of an object of the class ApplyAttributeAction initially attracted our attention (node 0, Fig. 13). After discovering this object, we carefully inspected its outgoing edges and were drawn to an edge to an object of the class RectangleFigure (node 6), since in our example we were applying the selected attributes to a rectangle. Finally, by inspecting the calls responsible for this edge—listed in a lower panel of the OG tool window—we discovered a call to a method named setAttribute(). In fact, by retrieving JHotDraw's code where this bug has been fixed, it was possible to confirm that the method ApplyAttributeAction.applyAttributes() was the source of the reported bug. More specifically, in this method, a call to a Figure.changed() method was missing after calling setAttribute().
Our intention with this case study was to provide initial evidence that the proposed OGs can play an important role in corrective software maintenance tasks. In particular, the study showed that OGs can be a more effective tool for locating defective program components than, for example, traditional debuggers. Debuggers usually require maintainers to have prior knowledge of the source code in order to define breakpoints near the defective program elements. When this knowledge is not available, debuggers may require maintainers to navigate through several program elements until they discover the components related to the bug reported in the maintenance request. On the other hand, when the bug generates an incorrect behavior in a particular and reproducible state of the program's execution—as in our two examples—the OG tool promptly provides a snapshot describing the dynamic state of the instrumented system. As we have reported, by manually inspecting this snapshot it is possible to discover the exact methods that must be changed to correct the failure. However, it is also important to highlight that the proposed OGs are tightly coupled to the particular execution in which the bug has been reproduced. Because different OGs can be extracted on each execution, it is possible that some graphs do not provide enough information—including both nodes and edges—to correctly understand and locate the defective program components. Therefore, to minimize the chances of having incomplete OGs, it is important that the bugs under analysis have a precise and non-intermittent behavior.
Threats to validity Our study presents at least two threats to validity. First, we have evaluated a single system (JHotDraw). Therefore, as usual in empirical software engineering research, we are not claiming that our findings can be generalized to other systems. On the other hand, we have considered real bugs from a system commonly used in software reengineering papers. The second threat is due to the fact that the failure locations have been discovered by ourselves (i.e. the maintainers were the authors of the OG tool). On one hand, this can raise questions about the reproducibility of our findings when the OG tool is used by maintainers who do not have the same expertise in our approach. On the other hand, although we are experts in OGs, we had no knowledge about JHotDraw's architecture, source code, or even its main functionalities before the study.
Related work can be arranged in three groups: tools and approaches based on static analysis, tools and approaches based on dynamic analysis, and languages for architecture analysis.
Static analysis Scholia is an approach to statically extract hierarchical runtime architectures from object-oriented systems [1, 2]. However, there are two main differences between the graphs retrieved by Scholia and the OGs proposed in this paper. First, Scholia relies exclusively on static analysis techniques to retrieve dynamic object-oriented relations. Therefore, at best, the relations retrieved by Scholia represent an approximation of the concrete relations established in a particular execution of the target system. For example, Scholia cannot capture information about the cardinality of a relation (e.g. the approach can indicate that a collection is composed of elements of a type A, but it cannot infer how many objects in fact exist in the collection). As a second difference, Scholia relies on explicit annotations in the code to define the hierarchy that should be followed to display the runtime architecture. This requirement may hamper the application of Scholia in real software development scenarios, due to the effort required to annotate a large and complex system. Moreover, developers are usually reluctant to insert annotations in an existing codebase, to avoid the well-known maintenance problems that characterize this technique (a phenomenon usually referred to as annotation hell [21]). Finally, Scholia also provides support for architecture conformance, i.e. it is possible to check and compare the retrieved diagrams with an intended architecture model.
Womble is a lightweight approach to recover object diagrams by means of static analysis techniques [14]. Therefore, it shares the same advantages and disadvantages as Scholia regarding the precision of the retrieved relations. However, unlike Scholia, Womble does not provide means for summarizing runtime objects into coarse-grained components. Therefore, the graphs extracted by Womble have thousands of objects, even for small systems.
Dynamic analysis Discotect is a tool designed to recover dynamic architectures [23, 29]. However, instead of hierarchical object diagrams, Discotect extracts flat models based on connectors and components (C&C). For this purpose, Discotect requires developers to provide a map between the runtime trace and architectural events. Although it is less invasive than source code annotations, this map is more complex and requires more information on the target program than the definition of domains in OGs.
Briand et al. [8] have proposed an approach for reverse engineering UML sequence diagrams using dynamic analysis. Similar to the tool described in this paper, their approach relies on aspect-oriented programming for instrumenting the target code. However, their approach is off-line, i.e. in a first step, the instrumented system is executed to generate a trace file; in a second step, this file is processed off-line to generate sequence diagrams. Furthermore, their approach retrieves flat sequence diagrams, and therefore it suffers from the scalability problems that are common to non-hierarchical reverse engineering approaches based on dynamic analysis. On the other hand, they can retrieve sequence diagrams both for centralized and for distributed systems based on Java RMI [28].
Table 1 summarizes the major differences between our approach and the aforementioned systems.
Table 1 Comparison with related tools
Languages ArchJava is an architecture definition language (ADL) that extends Java with architecture abstractions, like components and connectors [3]. Therefore, ArchJava requires developers to migrate their systems to a new language. OG's alert language has been inspired by the dependency constraint language (DCL) [24, 25]. Basically, DCL allows developers to define acceptable and unacceptable dependencies according to a system's designed architecture. Once defined, such constraints are verified by a conformance tool integrated into the Eclipse platform. Therefore, DCL is an architecture conformance language based on static analysis.
In this paper we have presented an on-the-fly and non-invasive approach to extract hierarchical OGs from running systems. As proposed, OGs have the following distinguishing features: (a) they support the classification of objects in coarse-grained entities, called domains; (b) they support the whole spectrum of dynamic relations that can be established in object-oriented systems; (c) they can distinguish objects created by different threads; and (d) by means of an alert language, they can highlight relations that are expected—or that are not expected—between running objects. We have also presented a non-invasive tool to extract and display the proposed graphs. This tool can be woven into an existing system and therefore supports on-the-fly visualization of the proposed graphs (i.e. the graphs are displayed and updated as the host system executes). We used this tool to extract real OGs for two systems (myAppointments and JHotDraw). We also reported a study where OGs have been successfully used to locate the defective program elements responsible for bugs reported by real users of the JHotDraw system.
As future work, we intend to (a) apply our extraction tool to other systems, preferably using professional software maintainers as subjects; (b) implement the OG tool as an Eclipse plugin; (c) evaluate the performance overhead introduced by the instrumentation of the code using aspects; and (d) investigate the benefits of combining our approach with static analysis based techniques, for example, to avoid the extraction of OGs with incomplete sets of nodes or edges.
Footnote 1: More specifically, this graph has 9,950 nodes and 37,976 edges. Despite this fact, it was retrieved promptly after JHotDraw started. Indeed, we have not observed any significant performance overhead when using JHotDraw with OGs enabled.
Abi-Antoun M, Aldrich J (2009) Static extraction and conformance analysis of hierarchical runtime architectural structure using annotations. In: 24th Conference on object-oriented programming, systems, languages, and applications (OOPSLA), pp 321–340
Abi-Antoun M, Aldrich J (2009) Static extraction of sound hierarchical runtime object graphs. In: 4th International workshop on types in language design and implementation (TLDI), pp 51–64
Aldrich J, Chambers C, Notkin D (2002) ArchJava: connecting software architecture to implementation. In: 22nd International conference on software engineering (ICSE), pp 187–197
Alves H, Rocha H, Terra R, Valente MT (2010) Uma abordagem para recuperação da arquitetura dinâmica de sistemas de software. In: IV Simpósio Brasileiro de Componentes, Arquiteturas e Reutilização de Software (SBCARS), pp 145–154
Anquetil N, Lethbridge TC (1999) Experiments with clustering as a software remodularization method. In: 5th Working conference on reverse engineering (WCRE), pp 235–255
Anquetil N, Lethbridge TC (2009) Ten years later, experiments with clustering as a software remodularization method. In: 16th Working conference on reverse engineering (WCRE), p 7
Bass L, Clements P, Kazman R (2003) Software architecture in practice, 2nd edn. Addison-Wesley, Reading
Briand LC, Labiche Y, Leduc J (2006) Toward the reverse engineering of UML sequence diagrams for distributed Java software. IEEE Trans Softw Eng 32(9):642–663
Clements P, Shaw M (2009) The golden age of software architecture revisited. IEEE Softw 26(4):70–72
Ducasse S, Pollet D (2009) Software architecture reconstruction: a process-oriented taxonomy. IEEE Trans Softw Eng 35(4):573–591
Fowler M (2002) Patterns of enterprise application architecture. Addison-Wesley, Reading
Fowler M (2003) UML distilled: a brief guide to the standard object modeling language. Addison-Wesley, Reading
Garlan D, Shaw M (1996) Software architecture: perspectives on an emerging discipline. Prentice-Hall, Englewood Cliffs
Jackson D, Waingold A (2001) Lightweight extraction of object models from bytecode. IEEE Trans Softw Eng 27(2):156–169
Kiczales G, Hilsdale E, Hugunin J, Kersten M, Palm J, Griswold WG (2001) An overview of AspectJ. In: 15th European conference on object-oriented programming (ECOOP). LNCS, vol 2072. Springer, Berlin, pp 327–355
Knodel J, Muthig D, Naab M, Lindvall M (2006) Static evaluation of software architectures. In: 10th European conference on software maintenance and reengineering (CSMR), pp 279–294
Krasner GE, Pope ST (1988) A cookbook for using the model–view–controller user interface paradigm in Smalltalk-80. J Object Oriented Program 1(3):26–49
Medvidovic N, Taylor RN (2000) A classification and comparison framework for software architecture description languages. IEEE Trans Softw Eng 26(1):70–93
Passos L, Terra R, Diniz R, Valente MT, Mendonça N (2010) Static architecture-conformance checking: an illustrative overview. IEEE Softw 27(5):82–89
Perry DE, Wolf AL (1992) Foundations for the study of software architecture. Softw Eng Notes 17(4):40–52
Rocha H, Valente MT (2011) How annotations are used in Java: an empirical study. In: 23rd International conference on software engineering and knowledge engineering (SEKE), pp 426–431
Sangal N, Jordan E, Sinha V, Jackson D (2005) Using dependency models to manage complex software architecture. In: 20th Conference on object-oriented programming, systems, languages, and applications (OOPSLA), pp 167–176
Schmerl BR, Aldrich J, Garlan D, Kazman R, Yan H (2006) Discovering architectures from running systems. IEEE Trans Softw Eng 32(7):454–466
Terra R, Valente MT (2008) Towards a dependency constraint language to manage software architectures. In: Second European conference on software architecture (ECSA). Lecture notes in computer science, vol 5292. Springer, Berlin, pp 256–263
Terra R, Valente MT (2009) A dependency constraint language to manage object-oriented software architectures. Softw Pract Exp 32(12):1073–1094
Tirelo F, Bigonha R, Bigonha M, Valente MT (2004) Desenvolvimento de Software Orientado por Aspectos. In: XXIII Jornada de Atualização em Informática (JAI), XXIV Congresso da Sociedade Brasileira de Computação
Tonella P (2005) Reverse engineering of object oriented code (tutorial). In: 27th International conference on software engineering (ICSE), pp 724–725
Wollrath A, Riggs R, Waldo J (1996) A distributed object model for the Java system. In: 2nd Conference on object-oriented technologies and systems, pp 219–232
Yan H, Garlan D, Schmerl BR, Aldrich J, Kazman R (2004) DiscoTect: a system for discovering architectures from running systems. In: 26th International conference on software engineering (ICSE), pp 470–479
Department of Computer Science, PUC Minas, Belo Horizonte, Brazil
Hugo de Brito & Humberto Torres Marques-Neto
Department of Computer Science, UFMG, Belo Horizonte, Brazil
Ricardo Terra, Henrique Rocha & Marco Tulio Valente
Correspondence to Marco Tulio Valente.
de Brito, H., Marques-Neto, H.T., Terra, R. et al. On-the-fly extraction of hierarchical object graphs. J Braz Comput Soc 19, 15–27 (2013). https://doi.org/10.1007/s13173-012-0083-5
Received: 17 November 2011
Accepted: 16 July 2012
Software models
SciPost Physics Lecture Notes
Accepted Submissions
Seven Études on dynamical Keldysh model
Dmitri V. Efremov, Mikhail N. Kiselev
SciPost Phys. Lect. Notes 65 (2022) · published 6 December 2022 |
We present a comprehensive pedagogical discussion of a family of models describing the propagation of a single particle in a multicomponent non-Markovian Gaussian random field. We report some exact results for single-particle Green's functions, self-energy, vertex part and T-matrix. These results are based on a closed form solution of the Dyson equation combined with the Ward identity. Analytical properties of the solution are discussed. Further we describe the combinatorics of the Feynman diagrams for the Green's function and the skeleton diagrams for the self-energy and vertex, using recurrence relations between the Taylor expansion coefficients of the self-energy. Asymptotically exact equations for the number of skeleton diagrams in the limit of large $N$ are derived. Finally, we consider possible realizations of a multicomponent Gaussian random potential in quantum transport via complex quantum dot experiments.
The hitchhiker's guide to 4d $\mathcal{N}=2$ superconformal field theories
Mohammad Akhond, Guillermo Arias-Tamargo, Alessandro Mininno, Hao-Yu Sun, Zhengdi Sun, Yifan Wang, Fengjun Xu
SciPost Phys. Lect. Notes 64 (2022) · published 26 October 2022 |
Superconformal field theory with $\mathcal{N}=2$ supersymmetry in four dimensional spacetime provides a prime playground to study strongly coupled phenomena in quantum field theory. Its rigid structure ensures valuable analytic control over non-perturbative effects, yet the theory is still flexible enough to incorporate a large landscape of quantum systems. Here we aim to offer a guidebook to fundamental features of the 4d $\mathcal{N}=2$ superconformal field theories and basic tools to construct them in string/M-/F-theory. The content is based on a series of lectures at the Quantum Field Theories and Geometry School (https://sites.google.com/view/qftandgeometrysummerschool/home) in July 2020.
Efficient ab initio many-body calculations based on sparse modeling of Matsubara Green's function
Hiroshi Shinaoka, Naoya Chikano, Emanuel Gull, Jia Li, Takuya Nomoto, Junya Otsuki, Markus Wallerberger, Tianchun Wang, Kazuyoshi Yoshimi
SciPost Phys. Lect. Notes 63 (2022) · published 22 September 2022 |
This lecture note reviews recently proposed sparse-modeling approaches for efficient ab initio many-body calculations based on the data compression of Green's functions. The sparse-modeling techniques are based on a compact orthogonal basis, an intermediate representation (IR) basis, for imaginary-time and Matsubara Green's functions. A sparse sampling method based on the IR basis enables solving diagrammatic equations efficiently. We describe the basic properties of the IR basis, the sparse sampling method and its applications to ab initio calculations based on the GW approximation and the Migdal-Eliashberg theory. We also describe a numerical library for the IR basis and the sparse sampling method, sparse-ir, and provide its sample codes. This lecture note follows the Japanese review article with major revisions [H. Shinaoka et al., Solid State Physics 56(6), 301 (2021)].
Quantum Field Theory Anomalies in Condensed Matter Physics
R. Arouca, Andrea Cappelli, T. H. Hansson
SciPost Phys. Lect. Notes 62 (2022) · published 8 September 2022 |
We give a pedagogical introduction to quantum anomalies, how they are calculated using various methods, and why they are important in condensed matter theory. We discuss axial, chiral, and gravitational anomalies as well as global anomalies. We illustrate the theory with examples such as quantum Hall liquids, Fermi liquids, Weyl semi-metals, topological insulators and topological superconductors. The required background is basic knowledge of quantum field theory, including fermions and gauge fields, and some familiarity with path integral and functional methods. Some knowledge of topological phases of matter is helpful, but not necessary.
Quantum Neural Network Classifiers: A Tutorial
Weikang Li, Zhide Lu, Dong-Ling Deng
SciPost Phys. Lect. Notes 61 (2022) · published 17 August 2022 |
Machine learning has achieved dramatic success over the past decade, with applications ranging from face recognition to natural language processing. Meanwhile, rapid progress has been made in the field of quantum computation including developing both powerful quantum algorithms and advanced quantum devices. The interplay between machine learning and quantum physics holds the intriguing potential for bringing practical applications to the modern society. Here, we focus on quantum neural networks in the form of parameterized quantum circuits. We will mainly discuss different structures and encoding strategies of quantum neural networks for supervised learning tasks, and benchmark their performance utilizing Yao.jl, a quantum simulation package written in Julia Language. The codes are efficient, aiming to provide convenience for beginners in scientific works such as developing powerful variational quantum learning models and assisting the corresponding experimental demonstrations.
Physics: Condensed Matter Physics - Theory • Mathematical Physics
Topological insulators and geometry of vector bundles
by A. S. Sergeev
Version 2 (latest version)
Submitted 2022-08-02 11:19 to SciPost Physics Lecture Notes · latest activity: 2023-01-17 17:47
Physics: Quantum Physics
Parametric Couplings in Engineered Quantum Systems
by A. Metelmann
Series contained in this Journal
Les Houches Summer School Lecture Notes
The IPhT Lecture Notes Series
Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)
Johannes Hauschild, Frank Pollmann
SciPost Phys. Lect. Notes 5 (2018) · published 8 October 2018 |
Tensor product state (TPS) based methods are powerful tools to efficiently simulate quantum many-body systems in and out of equilibrium. In particular, the one-dimensional matrix-product (MPS) formalism is by now an established tool in condensed matter theory and quantum chemistry. In these lecture notes, we combine a compact review of basic TPS concepts with the introduction of a versatile tensor library for Python (TeNPy) [https://github.com/tenpy/tenpy]. As concrete examples, we consider the MPS based time-evolving block decimation and the density matrix renormalization group algorithm. Moreover, we provide a practical guide on how to implement abelian symmetries (e.g., a particle number conservation) to accelerate tensor operations.
Tangent-space methods for uniform matrix product states
Laurens Vanderstraeten, Jutho Haegeman, Frank Verstraete
SciPost Phys. Lect. Notes 7 (2019) · published 15 January 2019 |
In these lecture notes we give a technical overview of tangent-space methods for matrix product states in the thermodynamic limit. We introduce the manifold of uniform matrix product states, show how to compute different types of observables, and discuss the concept of a tangent space. We explain how to variationally optimize ground-state approximations, implement real-time evolution and describe elementary excitations for a given model Hamiltonian. Also, we explain how matrix product states approximate fixed points of one-dimensional transfer matrices. We show how all these methods can be translated to the language of continuous matrix product states for one-dimensional field theories. We conclude with some extensions of the tangent-space formalism and with an outlook to new applications.
Lecture notes on Generalised Hydrodynamics
Benjamin Doyon
These are lecture notes for a series of lectures given at the Les Houches Summer School on Integrability in Atomic and Condensed Matter Physics, 30 July to 24 August 2018. The same series of lectures has also been given at the Tokyo Institute of Technology, October 2019. I overview in a pedagogical fashion the main aspects of the theory of generalised hydrodynamics, a hydrodynamic theory for quantum and classical many-body integrable systems. Only very basic knowledge of hydrodynamics and integrable systems is assumed.
The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems
Pietro Silvi, Ferdinand Tschirsich, Matthias Gerster, Johannes Jünemann, Daniel Jaschke, Matteo Rizzi, Simone Montangero
SciPost Phys. Lect. Notes 8 (2019) · published 18 March 2019 |
We present a compendium of numerical simulation techniques, based on tensor network methods, aiming to address problems of many-body quantum mechanics on a classical computer. The core setting of this anthology are lattice problems in low spatial dimension at finite size, a physical scenario where tensor network methods, both Density Matrix Renormalization Group and beyond, have long proven to be winning strategies. Here we explore in detail the numerical frameworks and methods employed to deal with low-dimension physical setups, from a computational physics perspective. We focus on symmetries and closed-system simulations in arbitrary boundary conditions, while discussing the numerical data structures and linear algebra manipulation routines involved, which form the core libraries of any tensor network code. At a higher level, we put the spotlight on loop-free network geometries, discussing their advantages, and presenting in detail algorithms to simulate low-energy equilibrium states. Accompanied by discussions of data structures, numerical techniques and performance, this anthology serves as a programmer's companion, as well as a self-contained introduction and review of the basic and selected advanced concepts in tensor networks, including examples of their applications.
Phase transitions in the early universe
Mark Hindmarsh, Marvin Lüben, Johannes Lumma, Martin Pauly
SciPost Phys. Lect. Notes 24 (2021) · published 15 February 2021 |
These lecture notes are based on a course given by Mark Hindmarsh at the 24th Saalburg Summer School 2018 and written up by Marvin Lüben, Johannes Lumma and Martin Pauly. The aim is to provide the necessary basics to understand first-order phase transitions in the early universe, to outline how they leave imprints in gravitational waves, and advertise how those gravitational waves could be detected in the future. A first-order phase transition at the electroweak scale is a prediction of many theories beyond the Standard Model, and is also motivated as an ingredient of some theories attempting to provide an explanation for the matter-antimatter asymmetry in our Universe. Starting from bosonic and fermionic statistics, we derive Boltzmann's equation and generalise to a fluid of particles with field dependent mass. We introduce the thermal effective potential for the field in its lowest order approximation, discuss the transition to the Higgs phase in the Standard Model and beyond, and compute the probability for the field to cross a potential barrier. After these preliminaries, we provide a hydrodynamical description of first-order phase transitions as it is appropriate for describing the early Universe. We thereby discuss the key quantities characterising a phase transition, and how they are imprinted in the gravitational wave power spectrum that might be detectable by the space-based gravitational wave detector LISA in the 2030s.
SciPost Physics Lecture Notes is published by the SciPost Foundation under the journal doi: 10.21468/SciPostPhysLectNotes and ISSN 2590-1990.
SciPost Physics Lecture Notes has been awarded the DOAJ Seal from the Directory of Open Access Journals.
All content in SciPost Physics Lecture Notes is deposited and permanently preserved in the CLOCKSS archive.
Article | Open | Published: 03 August 2018
Genomic-enabled prediction models using multi-environment trials to estimate the effect of genotype × environment interaction on prediction accuracy in chickpea
Manish Roorkiwal ORCID: orcid.org/0000-0001-6595-281X1 na1,
Diego Jarquin ORCID: orcid.org/0000-0002-5098-20602 na1,
Muneendra K. Singh1,
Pooran M. Gaur ORCID: orcid.org/0000-0002-7086-58731,
Chellapilla Bharadwaj3,
Abhishek Rathore ORCID: orcid.org/0000-0001-6887-40951,
Reka Howard2,
Samineni Srinivasan1,
Ankit Jain1,
Vanika Garg1,
Sandip Kale1,4,
Annapurna Chitikineni1,
Shailesh Tripathi3,
Elizabeth Jones5,
Kelly R. Robbins5,
Jose Crossa ORCID: orcid.org/0000-0001-9429-58556 &
Rajeev K. Varshney ORCID: orcid.org/0000-0002-4562-91311
Genomic selection (GS) by selecting lines prior to field phenotyping using genotyping data has the potential to enhance the rate of genetic gains. Genotype × environment (G × E) interaction inclusion in GS models can improve prediction accuracy hence aid in selection of lines across target environments. Phenotypic data on 320 chickpea breeding lines for eight traits for three seasons at two locations were recorded. These lines were genotyped using DArTseq (1.6 K SNPs) and Genotyping-by-Sequencing (GBS; 89 K SNPs). Thirteen models were fitted including main effects of environment and lines, markers, and/or naïve and informed interactions to estimate prediction accuracies. Three cross-validation schemes mimicking real scenarios that breeders might encounter in the fields were considered to assess prediction accuracy of the models (CV2: incomplete field trials or sparse testing; CV1: newly developed lines; and CV0: untested environments). Maximum prediction accuracies for different traits and different models were observed with CV2. DArTseq performed better than GBS and the combined genotyping set (DArTseq and GBS) regardless of the cross validation scheme with most of the main effect marker and interaction models. Improvement of GS models and application of various genotyping platforms are key factors for obtaining accurate and precise prediction accuracies, leading to more precise selection of candidates.
Chickpea (Cicer arietinum L.) is the second most important food legume crop, with a genome size of ~740 Mb1. Its high protein content and nutritional value make it important for human consumption as well as animal feed2. Moreover, chickpea has an important role in the vegetarian diet because it is high in dietary fiber, folate, iron and phosphorus content3. Chickpea is mostly grown in the arid and semi-arid regions, predominantly in developing countries (more than 70% of its cultivated area), and is a major source of livelihood for resource-poor farmers living in South Asia and Sub-Saharan Africa4. Chickpea suits crop rotation programs very well as it has the capacity to fix soil N2 using the symbiotic nitrogen fixation process.
Various biotic (Ascochyta blight, Fusarium wilt, Helicoverpa pod borer, Botrytis grey mold) and abiotic (heat, cold and drought) factors adversely affect chickpea yields globally5,6. Global climatic changes, including erratic rainfall, are leading to droughts of various intensities in most of the chickpea growing regions, thereby severely affecting chickpea production. Considering the impact of these stresses on yield, it is very important to develop improved varieties that not only sustain but also enhance chickpea production under these adverse conditions.
As per one estimate, food production needs to increase by 70%, with extreme pressure on developing countries to double their produce, in order to achieve nutritional security for an estimated world population of 9.1 billion by 2050 (FAO). To cope with elevated food demand and declining crop productivity, breeding efforts combined with genomic approaches, popularly known as genomics-assisted breeding (GAB)7, hold the potential to enhance the rate of genetic gains. Until a few years ago, chickpea was considered an orphan crop due to the scarcity of genomic resources, and therefore little effort to deploy GAB for chickpea improvement could be initiated. However, recent advances in next generation sequencing (NGS) technology have brought down the genotyping cost significantly, enabling the generation of huge amounts of genomic resources in much less time and at significantly decreased cost. Using NGS technology, the draft genome of chickpea was completed, and a large number of marker resources were made available1. In addition to the draft genome, several large-scale re-sequencing efforts using NGS-based whole genome re-sequencing have generated millions of markers that can be deployed in GAB for chickpea improvement8,9. This vast amount of information has enabled researchers and breeders to design improved strategies for the development of improved chickpea varieties. The current average chickpea productivity is less than 1 t/ha, and GAB approaches hold the potential to increase this significantly10. Improved chickpea lines with higher yield under rainfed conditions have been developed using marker-assisted backcrossing (MABC) in the JG 11 background (a leading desi type chickpea variety widely grown in India) by introgressing the "QTL-hotspot" genomic region from the donor parent ICC 495811. Similarly, using MABC, improved chickpea lines with enhanced resistance to Fusarium wilt and Ascochyta blight were developed by introgressing the foc1 locus and two quantitative trait loci (QTLs), viz. ABQTL-I and ABQTL-II, respectively, in the genetic background of C 214 (another elite chickpea cultivar)12. Inspired by the success of these improved lines, several efforts are underway to develop improved chickpea varieties using MABC.
Genomic selection (GS) is becoming a popular technique enabling breeders to select lines using genome-wide marker data before estimating their actual performance in the field. GS eliminates multiple rounds of phenotypic selection using marker data and thereby contributes to an enhanced rate of annual genetic gain per unit of time and cost13. In GS, individuals with genotypic and phenotypic information are used to model relationships between the phenotype and genotype of observed lines, and the model then enables the prediction of phenotypes for unobserved lines using their marker profile. GS uses the genome-wide marker profile for estimating the performance of lines based on the genomic estimated breeding value (GEBV), offering superiority to marker assisted selection (MAS)14, where only markers that are above a specific significance threshold are included in the model. Various parametric and nonparametric approaches among different statistical methods have been explored to develop GS models15,16,17,18,19,20. In addition, several studies comparing simulated and empirical data have been conducted21,22,23.
GS has been successfully used in breeding programs, contributing to improved yield and other agronomically important traits for different crops24,25,26. However, the presence of genotype × environment (G × E) interactions complicates the selection of stable lines, negatively affecting the heritability of the traits and the response to selection. The G × E interaction is expressed as a change in the performance ranks of a set of lines from one environmental condition to another. Hence, accounting for and modeling the G × E interaction in genomic prediction models could help breeders to select lines with optimal overall performance across environments as well as in specific target environments.
The productivity and the nitrogen content of chickpea have been found to be affected by environmental factors such as nitrogen nutrition, phosphorus content, drought stress, and pathogens27,28. Adapting GS techniques to model the G × E interaction can help enhance chickpea production. Recently, a few GS models have been developed allowing the incorporation of the G × E interaction29,30. While Burgueño et al.29 account for the G × E using structured covariances to model relationships among environments, Jarquin et al.30 allow the inclusion of environmental information (e.g., temperature, nitrogen level, soil moisture, etc.) to model these relationships via covariance structures. The model described by Jarquin et al.30 is also known as the multiplicative reaction norm model (MRNM). The reaction norm model for assessing G × E interaction has been widely used in recent years, as it decomposes the total phenotypic variance into genotype, environment, and G × E components that are used in the various prediction models. Jarquin et al.30 demonstrated the use of these models for assessing prediction accuracy with genomic main effects and G × E interaction and showed that including the interaction in the model substantially increases prediction accuracy in wheat trials that include sets of environmental covariables.
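For reference, the core of this reaction-norm decomposition can be sketched as follows (a sketch in our own notation of the formulation in Jarquin et al.30, not a reproduction of the specific models fitted in this study):

$$ y_{ij} = \mu + E_i + L_j + g_j + gE_{ij} + \varepsilon_{ij}, $$

where \(E_i\) and \(L_j\) are the environment and line main effects, \(g \sim N(0, \sigma_g^2 \mathbf{G})\) is the genomic main effect with \(\mathbf{G} = \mathbf{X}\mathbf{X}'/p\) computed from \(p\) centered and standardized markers, and the interaction term is modeled via the Hadamard (cell-by-cell) product of covariance structures, \(gE \sim N\{0, \sigma_{gE}^2 [\mathbf{Z}_g \mathbf{G} \mathbf{Z}_g'] \circ [\mathbf{Z}_E \mathbf{Z}_E']\}\), with \(\mathbf{Z}_g\) and \(\mathbf{Z}_E\) denoting the incidence matrices for lines and environments.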
The current study deals with the incorporation of the G × E interaction into the GS model to enable precise selection of lines in different environments, with the objective of evaluating GS models for predicting phenotypes using marker information in chickpea by means of the reaction norm model of Jarquin et al.30. We utilized a set of models including an alternative version of the MRNM which does not require the environmental information, but the identification number of the tested environments has to be specified. We evaluated the accuracy of predictions on a trial basis for different site-by-year-management combinations. The main objectives were to compare genomic-enabled prediction accuracy of thirteen different GS models for eight traits and three cross-validation (CV) scenarios mimicking prediction problems that breeders might face in the field (sparse testing prediction, CV2; prediction of newly developed lines, CV1; prediction of environments that were never tested, CV0). Predictions were estimated using two different sequencing platforms (DArTseq and Genotyping by Sequencing (GBS)) individually, as well as combined.
Genotyping data
The DArTseq approach resulted in 1,568 SNPs, and GBS resulted in 88,845 SNPs. As described by Roorkiwal et al.10, the estimated polymorphism information content (PIC) for DArTseq varied from 0.01 to 0.38 across the genotypes, with a mean PIC value of 0.19. High-throughput sequencing (GBS) on the HiSeq 2500 platform resulted in 196 million reads producing 721,860 total tags with a minimum tag count of 10 and an alignment rate of 83.89%. Filtered sequencing reads were then analyzed for SNP identification using the TASSEL-GBS pipeline. As a result, 88,845 SNPs were identified, with the maximum number of SNPs on CaLG04 (15,146, 17.05%) and the minimum on CaLG08 (5,379, 6.05%). The estimated PIC for the GBS SNPs varied from 0.01 to 0.5 across the genotypes, with a mean PIC value of 0.3.
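For a marker with allele frequencies \(p_i\), PIC is commonly computed either as the expected heterozygosity or with the Botstein et al. correction term (the two variants differ in their attainable maximum for a biallelic SNP, 0.5 versus 0.375; which variant was used is not stated in this excerpt):

$$ \mathrm{PIC} = 1 - \sum_{i=1}^{k} p_i^2 \qquad \text{or} \qquad \mathrm{PIC} = 1 - \sum_{i=1}^{k} p_i^2 - \sum_{i=1}^{k-1}\sum_{j=i+1}^{k} 2\,p_i^2 p_j^2 . $$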
Comparison of performance of different GS models across different traits
Performance of each model varied across the eight traits and the different random cross-validation schemes (CV0, CV1 and CV2); thus none of the models was found to be clearly superior to the others. However, focusing on one trait at a time, some interesting patterns could be identified.
In terms of assessing the prediction accuracy of a model based on the correlation between observed and predicted values, the most difficult prediction problem is CV0 (predicting all the lines in one environment), followed by CV1 (predicting a certain % of unobserved lines in all environments), and then CV2 (predicting some % of lines that were observed in some environments but not observed in other environments). When comparing prediction accuracies obtained by implementing different models, the E + L model had the lowest accuracy for most of the traits when the CV1 and CV2 schemes were implemented, except for trait SY, for which the implementation of CV1 with model E + L + G1 + LE produced the lowest prediction accuracy (0.087, Table 1), whereas the E + L model gave a prediction accuracy of 0.093 (Table 1).
Table 1 Mean prediction accuracy across 9 environments (site-by-year-by-management combination) for 13 models, 8 traits and 3 different cross-validation schemes (CV1, CV2 and CV0) for a chickpea population of 320 lines.
In general, the CV0 cross-validation scheme produced highly variable prediction accuracy for models including or not including the G × E interaction. For example, the best predictive models, found for DF (correlation of 0.477) and 100-SDW (correlation of 0.633) when predicting, on average, one environment with the other 8 environments, comprised the main effects of E, L, G3 and G2. For traits PH and BM, a relatively low correlation was found for model E + L + G2 + LE (0.378 and 0.207, respectively), although model E + L + G2 + G1E + LE had a correlation of 0.376 for PH. For trait DM, the best predictive model was the interaction model E + L + G2E + LE (0.253), closely followed by model E + L + G1 + LE (0.252). Predicting one entire environment using the other 8 environments in the training set provided low prediction accuracies for traits HI, PS and SY. In terms of marker systems, no clear patterns could be identified based on CV0 prediction accuracy.
Results of random cross-validation CV1 indicated a clearer pattern in terms of model prediction accuracy (including G × E interaction) and marker systems (G2). For traits PH and BM, the best two predictive models were E + L + G2 + G2E and E + L + G2 + G2E + LE, with prediction accuracies of 0.388 and 0.389, respectively, for PH, and 0.260 and 0.257, respectively, for BM. Similar prediction accuracies for models and marker systems were found for traits DM, HI, and SY. On the other hand, for trait 100-SDW, the best predictive model and marker system was E + L + G3 + G3E + LE (0.670). Results from random cross-validation CV2 indicated that model E + L + G2 + G2E + LE gave high prediction accuracy for traits PH, BM, DM, HI, PS, and SY, and model E + L + G3 + G3E + LE was the best model for traits DF and 100-SDW. For traits HI and PS, the model E + L + G2 + G2E showed the highest performance (0.172 and 0.151, respectively).
In the case of PH, the model with both naïve and informed interactions produced the highest prediction accuracies with both the CV1 and CV2 schemes. For PS, the informed interaction model produced the highest prediction accuracy with the CV1 scheme, and the naïve interaction model produced the highest prediction accuracy when CV2 was implemented. With the main effect model, all traits produced their lowest prediction accuracies under the CV1 and CV2 schemes, except SY, which did not produce its lowest prediction accuracy with CV1. In contrast, under the CV0 scheme only PS produced its lowest prediction accuracy with the main effect model, whereas the remaining traits showed their lowest prediction accuracies with other GS models. For instance, the main effect model extended with the naïve interaction (E + L + G + LE) produced the lowest prediction accuracy for HI with the CV0 scheme. Similarly, for PH, BM, DM, 100-SDW and SY, the main effect model extended with the informed interaction produced the lowest prediction accuracies, and the main effect model extended with both the naïve and informed interactions produced the lowest prediction accuracy for DF.
When comparing CV0, CV1, and CV2 for the different traits and GS models, it was observed that for five traits (PH, DF, DM, HI, 100-SDW) the maximum prediction accuracy always occurred with CV2, and for the remaining three traits (BM, PS, SY) it occurred with CV1; in all cases either model E + L + G + GE or E + L + G + GE + LE accounted for the maximum prediction accuracy. CV1 resulted in prediction accuracies close to zero for some models and traits, whereas CV0 and CV2 did not. Across all three CV schemes, the main effect model gave its lowest prediction accuracies under the CV1 scheme for all traits except SY.
Comparison of genotyping platforms on prediction accuracies
Application of the different genotyping platforms, viz. GBS (G1), DArTseq (G2) and GBS combined with DArTseq (G3), had a clear impact on the prediction accuracies. DArTseq performed consistently better than the GBS platform among the models E + L + G, E + L + G + LE, E + L + G + GE and E + L + G + LE + GE when CV1 and CV2 were implemented, whereas certain models for certain traits performed best with G1 in comparison to G2 and G3 when the CV0 scheme was implemented. For instance, for BM and DF, G1 produced the best prediction accuracies in E + L + G + LE + GE in comparison to G2 and G3. GBS produced the lowest prediction accuracies among most of the interaction models; it was either DArTseq (in most cases) or GBS combined with DArTseq that accounted for the highest prediction accuracies (Table 1).
Impact of environment on prediction accuracies
When comparing the impact of different environments (year-by-location combinations) on prediction accuracies, trends were consistent among the different cross-validation schemes, except for model E + L, which has essentially no prediction accuracy under the CV1 scheme. Even though the models performed differently across environments, for most of the traits certain environments could be identified that consistently resulted in the highest prediction accuracy. The prediction accuracies varied depending on the cross-validation scheme. For instance, the highest prediction accuracies were obtained for DF in environment ICRISAT12 regardless of the CV scheme used (with the exception of the E + L model when the CV1 scheme was used). Similarly, in the case of SY, the predictions were best for environment IARI12.
When the CV0 scheme was implemented it was hard to identify a superior model for the eight traits. For instance, with CV0 the model E + L + G2 + LE produced the highest prediction accuracies for four traits (PH, BM, PS, and SY), but the differences in prediction accuracy between models were not significant. However, for all traits other than DF, the model that resulted in the highest prediction accuracy included the G2 term.
For CV1, the model E + L + G2 + G2E had the highest prediction accuracy for four traits (BM, DM, HI and SY), the model E + L + G2 + G2E + LE had the highest prediction accuracy for two traits (PH and PS), and the model E + L + G3 + G3E had the highest prediction accuracy for the remaining two traits (DF and 100-SDW).
For CV2, the model E + L + G2 + G2E + LE produced the highest prediction accuracy for four traits (PH, BM, DM, and SY). For most traits, using the DArTseq data resulted in higher prediction accuracies than using the GBS data or the combination of GBS and DArTseq data.
The mean accuracy for the eight traits under the three CV schemes (CV0, CV1, and CV2) varied significantly (Figs 1–3). Within each panel, variation in prediction accuracies among the models and environments can be observed. There was higher variation in mean prediction accuracy among the methods and environments when CV1 and CV2 were implemented than for the CV0 scheme, and the variation was highest for CV1. For SY, when the CV0 scheme was used there was no significant difference between the models, and except for environment IARI12 all environments performed similarly in terms of prediction accuracy. The mean accuracy varied between 0 and 0.2 for most of the environments. Adding extra terms to the model did not improve the accuracy when CV0 was implemented. On implementation of the CV1 scheme, a significant improvement in prediction accuracy could be observed upon inclusion of the marker information and the interaction terms, compared to the simple main effect (E + L) model. Environment IARI12 performed best in terms of prediction accuracy for SY. However, there was no significant overall improvement in prediction accuracy for trait SY in most of the environments for CV1 compared to CV0.
Figure 1. Prediction accuracy on a trial basis (within environment) of a chickpea population comprising 320 genotypes tested in 9 environments for nine models and eight traits under the CV0 scheme (prediction of unobserved/new environments).
Figure 2. Prediction accuracy on a trial basis (within environment) of a chickpea population comprising 320 genotypes tested in 9 environments for nine models and eight traits under the CV1 scheme (prediction of unobserved/new genotypes).
Figure 3. Prediction accuracy on a trial basis (within environment) of a chickpea population comprising 320 genotypes tested in 9 environments for nine models and eight traits under the CV2 scheme (incomplete field trials - prediction of observed genotypes in observed environments).
The mean prediction accuracy improved for CV2 for the last six models, i.e., models including informed interactions, and both naïve and informed interactions. For most of the environments the mean accuracy was between 0 and 0.2, and environment IARI12 performed best in terms of prediction accuracy. A significant difference was observed among CV0, CV1, and CV2 for SY: CV1 and CV2 had significantly higher prediction accuracies than CV0 for models with the interaction terms GE and LE, and there was no significant difference between the models using DArTseq versus GBS data.
When comparing all of the other traits, no significant increase was observed for any model, but the prediction accuracy was higher for some environments (Fig. 1). However, we could not identify any specific environment that consistently showed the highest prediction accuracy across the traits. For traits DF and 100-SDW, environments IARI-Latesown14, ICRISAT-Irrigated14, and IARI-Normal14 showed lower mean accuracy values than the rest of the environments across all models with all the CV schemes (Fig. 2). Most traits showed a similar pattern to SY for CV1 when the models were compared, but for some environments the prediction accuracy improved significantly, reaching 0.9 (e.g., 100-SDW in ICRISAT-Rain14). When CV2 was implemented (Fig. 3), for DF and 100-SDW the environments were clustered into two groups based on their prediction accuracies: environments IARI-Latesown14, ICRISAT-Irrigated14, and IARI-Normal14 performed better than all of the other environments for these two traits.
Conventional breeding coupled with genomic tools has evolved into modern breeding approaches that offer precise selection of genotypes in the endeavor to develop superior lines. Traditionally, breeding programs undertook line selection based on breeding values, taking into account the pedigree and the heritability of the trait (considering only the phenotyping data)31. However, conventional methods have several pitfalls for handling complex traits, including the cost, labour and effort of accurate phenotyping. Advances in NGS technology have significantly reduced genotyping costs, resulting in the generation of large amounts of genotyping data, and this has drawn wide interest from researchers in livestock and plant breeding32. Availability of genotyping data, especially information about the genomic regions governing traits, aids precision in selection. Molecular breeding approaches such as MAS, MABC, and marker-assisted recurrent selection (MARS) have been successfully deployed in many crop plants, including legumes, for trait improvement33. However, these approaches are only successful for traits with simple genetic behavior, whereas addressing complex traits governed by an extensive number of small- and large-effect QTLs with MABC and MARS remains problematic. GS is another modern breeding approach that performs selection using genome-wide marker data and has the potential to address complex traits. GS allows prediction of the performance of individuals using genome-wide marker data instead of the limited number of large-effect markers used in traditional MAS approaches15,20.
GS has been proven to outperform phenotype-based selection in terms of cost as well as in enhancing the rate of genetic gain. GS shortens the breeding cycle by predicting breeding values without field evaluation, thereby saving a large amount of resources34,35,36. The higher accuracy of genotyping compared to phenotyping enhances the accuracy of predicted breeding values, making the selection process more accurate and precise. Moreover, high-quality phenotyping facilities, when integrated with advanced high-throughput genotyping platforms, hold the potential to enhance prediction further. A large number of markers enhances the precision of GS; hence population size37, marker type and number38,39, and the statistical model21 are some of the critical factors that determine the success of GS experiments.
GS efforts are being initiated for enhancing the rate of genetic gain in livestock and in various crops including legumes16,17,35,40,41,42,43. Advances in sequencing technology have revolutionized chickpea genomics to such an extent that a crop that used to fall into the orphan category in terms of marker availability has now become a genome-resource-rich crop33. These large genomic resources make GS a well-suited molecular breeding approach for chickpea. In order to deploy markers in chickpea breeding using GS approaches, efforts were made to standardize GS models for yield and yield-related traits using a set of 320 elite chickpea lines10. The present study targeted eight yield-related traits of agronomic importance to estimate the effect of different genotyping methods as well as the effect of the environment on prediction accuracy. Results from the current study validated those from our previous study: two major clusters were observed in the dendrogram when GBS data were used, similar to the two major clusters observed previously using silicoDArT and DArTseq data10. Based on the outcome of our previous study, desi and kabuli lines were considered as a single set for calculating the prediction accuracies.
GBS has been a cost- and time-effective genotyping method for generating high-density genotyping data for crop plants. GBS offers significant advantages over other genotyping methods and has been successfully used for high-density genetic mapping in chickpea44 and in GS-based crop improvement efforts in other species36,45. However, due to the high rate of missing data, the applicability of GBS for crop improvement is restricted; the missing data must be imputed, which can affect the prediction accuracies. DArT (Diversity Array Technology) has been very useful in delineating genetic diversity in chickpea46, and it has been used for initiating GS efforts in chickpea10. Three different genotyping configurations (GBS, DArTseq, and combined genotyping data from DArTseq and GBS) were used in the present study to estimate the prediction accuracy for thirteen different statistical models.
Multiple variables, ranging from environmental to genomic components, have an impact on the genetic gain of crop plants, and GS offers an opportunity to consider multiple variables simultaneously, resulting in enhanced prediction accuracies47,48. Thus, different genotyping platforms and selection models with different interaction components, viz. naïve and informed interactions, were assessed in the current study. Higher prediction accuracies were obtained with models where only DArTseq data (G2) were considered, in comparison to models considering GBS data or the combined GBS-DArTseq data, for traits PH, BM, DM, PS and SY. A possible reason for this observation is the large amount of missing data in GBS. Small deviations from this pattern were observed for trait 100-SDW, for which the model with the naïve and informed interactions resulted in higher prediction accuracies using G3 than using G2 for both the CV1 and CV2 schemes. Similarly, for PS, when CV0 was implemented with the model including the naïve and informed interactions, prediction accuracies based on G3 were higher than those based on G2. In the case of HI, CV0 consistently produced higher prediction accuracies for G3-based models than for G2-based models regardless of the interaction terms, whereas for CV1 and CV2 the reverse pattern was observed. No clear pattern in prediction accuracies was observed for DM.
Combining advanced high-throughput genotyping platforms for generating genotyping data, and integrating these data with multi-location phenotyping data, can help in dissecting complex quantitative traits and further allows assessing the impact of the interaction of genotype with changing climatic conditions. Inclusion of various environmental variables further strengthens the possibility of making GS models more accurate49 and allows predicting the performance of the test population under environmental conditions that have not been sampled for the genotype.
Three different cross-validation schemes were used in the current study, based on the prediction problems that plant breeders often encounter: (1) prediction of untested genotypes (CV1); (2) prediction of genotypes tested in some environments but not in others (CV2); and (3) prediction of tested genotypes in new environments (CV0). As in previous studies, prediction of untested genotypes (CV1) resulted in lower prediction accuracies for most of the traits and GS models in comparison to the other two cross-validation schemes. This could be due to the absence of information from correlated environments, which is available in the other schemes. As suggested earlier, the choice of cross-validation scheme will also have an impact on the rate of genetic gain. For instance, CV1 may allow the selection of new lines without field testing, but will also result in poorer predictive value, which may in turn affect the rate of genetic gain29.
Considering the findings of the current study, there is a need to deploy models that take into consideration the impact of different environmental conditions at multiple locations over multiple years. Models considering G × E effects may thus further improve the prediction accuracies50,51. Inclusion of the G × E effect in GS models holds true potential to enhance GS in practice.
Phenotypic data
A set of 320 elite chickpea breeding lines, including both desi and kabuli seed types, from the International Chickpea Screening Nursery (ICSN) of ICRISAT was used in this study (as described in Roorkiwal et al.10). These lines were extensively phenotyped for three seasons (2012-13, 2013-14 and 2014-15) at two different geographical locations, namely ICRISAT, Patancheru (17°31′48.00″N 78°16′12.00″E) and IARI, New Delhi (28.6374°N, 77.1629°E), in India. Phenotypic data on eight traits (100 Seed Weight (100-SDW), Biomass (BM), Days to 50% Flowering (DF), Days to Maturity (DM), Harvest Index (HI), Plant Height (PH), number of Plant Stand (PS), and Seed Yield (SY)) for these 320 lines under different water regimes (normal-rainfed, irrigated and late sown) were used for analysis (Fig. 4). Environments were defined as the location-by-year-by-water-management combination, and 9 different combinations were observed (Table 2).
Figure 4. Graphical representation of phenotypic data on eight traits (100 Seed Weight (100-SDW), Biomass (BM), Days to 50% Flowering (DF), Days to Maturity (DM), Harvest Index (HI), Plant Height (PH), number of Plant Stand (PS), and Seed Yield (SY)) analyzed for three seasons at IARI, New Delhi and ICRISAT, Patancheru.
Table 2 Trials/environments resulting from the year-by-location-by-management combinations.
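To make the environment definition concrete, the sketch below builds the location-by-year-by-management factor for a toy phenotype table in R; the column names, the toy trial design and the trait values are hypothetical and only illustrate the bookkeeping, not the actual data.

# Toy phenotype records; "env" is the location-by-year-by-management combination
set.seed(1)
lines  <- sprintf("ICSN_%03d", 1:20)                     # 20 toy lines (320 in the study)
trials <- expand.grid(location   = c("ICRISAT", "IARI"),
                      year       = c("2013", "2014"),
                      management = c("rainfed", "latesown"),
                      stringsAsFactors = FALSE)          # 8 toy trials (9 in the study)
pheno <- merge(data.frame(line = lines), trials)         # every line observed in every trial
pheno$env <- with(pheno, interaction(location, year, management, drop = TRUE))
pheno$SY  <- rnorm(nrow(pheno), mean = 1.5, sd = 0.3)    # toy seed-yield records
table(pheno$env)                                         # one level per trial/environment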
Genotyping and SNP calling
High-quality genomic DNA was isolated from leaves collected from 15-day-old seedlings using a high-throughput mini-DNA extraction method52. DNA quality and quantity were assessed using a spectrophotometer (Shimadzu UV160A, Japan). All 320 lines with high-quality DNA were selected for sequencing using the GBS approach as described by Elshire et al.53. GBS libraries for all 320 lines were prepared by digesting genomic DNA with the ApeKI endonuclease (recognition site: G/CWGC). T4 DNA ligase was used to ligate uniquely barcoded adaptors to the digested DNA fragments. Equal proportions of adaptor-ligated DNA fragments from each sample were pooled for GBS library construction; the libraries were amplified, purified to remove excess adapters, and then sequenced on the HiSeq 2500 platform (Illumina Inc., San Diego, CA, USA). The reads obtained were analyzed using the TASSEL-GBS pipeline implemented in TASSEL 4.054. Sequence reads were first de-multiplexed based on the sample barcodes and trimmed to the first 64 bases starting from the enzyme cut site using in-house perl scripts. Sequence reads containing an 'N' within the first 64 bases were not considered. Reads with more than 50% low-quality base pairs (Phred score < 5) were discarded, and the filtered data were used for SNP calling. The remaining good-quality reads (called tags) were aligned against the draft genome sequence (CaGAv1.0) of chickpea1 using the Bowtie 2 software55. The alignment file was processed for SNP calling and genotyping using the GBS analysis pipeline. An allele was considered only if it was supported by a minimum tag count of 10. The identified SNPs were further filtered to remove missing data, and the filtered SNPs were used for further analyses.
In addition to GBS, DArTseq data on the 320 lines described by Roorkiwal et al.10 were also used for analysis. In summary, the marker data were used in three configurations: (1) GBS data with ~88 K SNPs, denoted G1; (2) DArTseq data with ~1.6 K SNPs, denoted G2; and (3) GBS data combined with DArTseq data, denoted G3.
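The three configurations can be assembled by aligning the two dosage matrices on the line identifiers and, for G3, concatenating their columns. A hedged R sketch follows, continuing the toy lines vector from the sketch above; the matrix sizes and marker names are hypothetical.

# Toy marker matrices with lines as rows (88,845 GBS and 1,568 DArTseq SNPs in the study)
set.seed(2)
M_gbs  <- matrix(rbinom(length(lines) * 300, 2, 0.3), nrow = length(lines),
                 dimnames = list(lines, paste0("gbs_", 1:300)))
M_dart <- matrix(rbinom(length(lines) * 60, 2, 0.3), nrow = length(lines),
                 dimnames = list(lines, paste0("dart_", 1:60)))

M_G1 <- M_gbs                                    # GBS only
M_G2 <- M_dart                                   # DArTseq only
M_G3 <- cbind(M_G1, M_G2[rownames(M_G1), ])      # combined, rows aligned by line name
dim(M_G3)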
Variants of the reaction norm model (MRNM) of Jarquín et al.30 were used for the predictions. A total of thirteen models were fitted: four included only main effects, three included naïve interactions between genotypes and environments (with no marker data involved in the interaction component), and the remaining six included marker information in the interactions. The genomic models used a genomic relationship matrix based on the GBS data, the DArTseq data, or both, to establish the relationships among pairs of genotypes and to allow borrowing of information among lines. Conceptually, the models can be described as follows: a basic model (E + L) including the main effects of environments (E) and lines (L); a model (E + L + G) also including the main effects of markers (G); a naïve (genotype × environment) interaction model (E + L + G + LE); and an informed (marker × environment) interaction model (E + L + G + LE + GE). As described above, only the genotyping platform (GBS, DArTseq, or both) was varied for the models that included the genomic component. Further details of all the models are given below.
Main effects models
Main effects of environments and lines (E + L)
The phenotypic response (yij) under the random baseline model is defined as
$${y}_{ij}=\mu +{E}_{i}+{L}_{j}+{e}_{ij}\qquad (1)$$
where μ is the overall mean, Ei is the random effect of the ith environment, Lj is the random effect of the jth line, and eij is the random error term. All random effects follow independent and identically distributed (iid) multivariate normal distributions such that \({E}_{i} \sim N(0,{\boldsymbol{I}}{\sigma }_{E}^{2})\), \({L}_{j} \sim N(0,{\boldsymbol{I}}{\sigma }_{L}^{2})\), and \({e}_{ij} \sim N(0,{\boldsymbol{I}}{\sigma }_{e}^{2})\), where \({\sigma }_{E}^{2}\), \({\sigma }_{L}^{2}\), and \({\sigma }_{e}^{2}\) are the environment, line, and residual variances, respectively. The baseline model (1) could also have included the line × environment interaction \(E{L}_{ij} \sim N(0,{\boldsymbol{I}}{\sigma }_{EL}^{2})\), where \({\sigma }_{EL}^{2}\) is the line × environment interaction variance.
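A minimal sketch of how this baseline E + L model could be specified with the BGLR package named under Computational tools is given below; it continues the toy pheno table from the earlier sketch, treats environments and lines as random effects through their incidence matrices, and uses short chains only for illustration (the ETA labels and chain lengths are assumptions, not the settings used in the study).

# Baseline model (1): y = mu + E + L + e, with E and L as random effects
library(BGLR)

Z_E <- model.matrix(~ factor(env)  - 1, data = pheno)   # incidence matrix for environments
Z_L <- model.matrix(~ factor(line) - 1, data = pheno)   # incidence matrix for lines

ETA_EL <- list(E = list(X = Z_E, model = "BRR"),         # Gaussian (ridge-type) random effects
               L = list(X = Z_L, model = "BRR"))

fm_EL <- BGLR(y = pheno$SY, ETA = ETA_EL,
              nIter = 6000, burnIn = 1000, verbose = FALSE)
head(fm_EL$yHat)                                         # fitted values mu + E_i + L_j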
In the model above, the random effect of the line (Lj) can be replaced by gj, which is an approximation of the genetic value of the jth line obtained from the genomic relationship matrix [alternatively, the effect of the line (Lj) can be replaced by aj, the additive effect obtained from pedigree information]. In the models described below, we use gj as well as its interaction with the environment \(\,{E}_{i}(g{E}_{ij})\). Full descriptions of the different reaction norm models can be found in Jarquín et al.30. Below we give a brief description of the different models that were fitted using genomic information.
Models including the main effects of GBS (E + L + G1), DArTseq markers (E + L + G2) and both DArTseq markers and GBS SNPs (E + L + G3)
These models were fitted by adding the genomic random effect of the line gj to the previous model described by equation (1). This was an approximation of the genetic value of the jth line, and is defined by the regression of marker covariates \({g}_{j}=\sum _{m=1}^{p}{x}_{jm}{b}_{m}\), where xjm is the genotype of the jth line at the mth marker position (either from G1, G2 or G3), and bm is the effect of the mth marker assuming that \({b}_{m}\mathop{ \sim }\limits^{IID}N(0,{\sigma }_{b}^{2})\) (m = 1, …, p), with \({\sigma }_{b}^{2}\) being the common variance of the marker effects. The vector g = (g1, …, gj)′ contains the genomic values of all the lines and by properties of the multivariate normal distribution it follows a multivariate normal density with zero mean and covariance matrix \(Cov({\boldsymbol{g}})={\boldsymbol{G}}{\sigma }_{g}^{2}\), where G is the genomic relationship matrix, and \({\sigma }_{g}^{2}\propto {\sigma }_{b}^{2}\) is proportional to the genomic variance. The model with the environmental effect, line effect, and genomic effect could be written as
$${y}_{ij}=\mu +{E}_{i}+{L}_{j}+{g}_{j}+{e}_{ij}\qquad (2)$$
where gj is a random variable that allows borrowing information between lines through genomic information. Specifically, vector g = (g1, …, gj)′ has the genomic value of the lines and it is assumed to follow a multivariate normal distribution such that \({\boldsymbol{g}} \sim N(0,{\boldsymbol{G}}{\sigma }_{g}^{2})\,\,\)where \({\sigma }_{g}^{2}\,\)is the genetic variance of the lines and G = \(\frac{{\boldsymbol{X}}{{\boldsymbol{X}}}^{{\boldsymbol{^{\prime} }}}}{p}\), with X as the centered and standardized matrix of molecular markers where p is the number of markers. The parameterization of this component is also known as the Genomic Best Linear Unbiased Predictor (GBLUP) model56,57. The random effects g = (g1, …, gj)′ are correlated such that model (2) allows exchanging information across lines.
Note that the term gj accounts for the additive genetic effects and approximates the true genetic value of line Lj; the main effect of the lines also includes non-additive effects that are not accounted for by the gj obtained from the linear-kernel GBLUP. When the phenotype being modeled is controlled only by additive genetic effects, Lj can be dropped from the model. Here we chose to keep Lj to account for any non-additive genetic effects influencing the phenotypes being modeled.
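The genomic relationship matrix G = XX'/p in the GBLUP parameterization can be computed directly from a centered and standardized marker matrix. Below is a hedged R sketch continuing the toy objects above; the scaling follows the formula in the text, and the E + L + G model is then fitted by passing the observation-level kernel Z_L G Z_L' to BGLR as an RKHS term. This illustrates the construction, not the study's exact script.

# Genomic relationship matrix from the DArTseq-style toy markers (G = XX'/p)
M <- M_G2[, apply(M_G2, 2, var) > 0]          # drop monomorphic markers before scaling
X <- scale(M, center = TRUE, scale = TRUE)    # centered and standardized marker matrix
G <- tcrossprod(X) / ncol(X)                  # lines x lines relationship matrix

# Expand G to the observation level and add it to the baseline model as an RKHS kernel
lev  <- levels(factor(pheno$line))            # column order of Z_L
K_g  <- Z_L %*% G[lev, lev] %*% t(Z_L)        # Z_L G Z_L'
ETA_ELG <- c(ETA_EL, list(G = list(K = K_g, model = "RKHS")))
fm_ELG  <- BGLR(y = pheno$SY, ETA = ETA_ELG,
                nIter = 6000, burnIn = 1000, verbose = FALSE)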
Main effects and interaction models
Models E + L + G1, E + L + G2 and E + L + G3 extended with naïve interactions (E + L + G1 + LE, E + L + G2 + LE and E + L + G3 + LE)
Models E + L + G1 + LE, E + L + G2 + LE and E + L + G3 + LE are similar to models E + L + G1, E + L + G2 and E + L + G3, respectively, but include the interaction between the jth line and the ith environment, ELij. The model with the interaction can be written as an extension of model (2):
$${y}_{ij}=\mu +{E}_{i}+{L}_{j}+{g}_{j}+E{L}_{ij}+{e}_{ij}\qquad (3)$$
where the term ELij denotes the interaction of the jth line and the ith environment and the other terms are previously defined. The interaction term is assumed to have a normal distribution such that \({\boldsymbol{EL}} \sim N(0,({{\boldsymbol{Z}}}_{{\boldsymbol{L}}}{\boldsymbol{I}}{{\boldsymbol{Z}}}_{{\boldsymbol{L}}}^{{\boldsymbol{^{\prime} }}})^\circ ({{\boldsymbol{Z}}}_{{\boldsymbol{E}}}{{\boldsymbol{Z}}}_{{\boldsymbol{E}}}^{{\boldsymbol{^{\prime} }}}){\sigma }_{EL}^{2})\), where ZL and ZE are the incidence matrices for lines and environments, respectively, \({\sigma }_{EL}^{2}\) is the variance component of the interaction term EL, and ° denotes the Hadamard or Schur product (element-by-element product) of two matrices.
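The covariance structure of this naïve interaction is simply the Hadamard product of the line and environment kernels, which can be built from the incidence matrices defined earlier. The sketch below, again using the toy objects and BGLR, is one way such a term could be added; it illustrates the construction rather than reproducing the study's script.

# Naive line x environment interaction: (Z_L I Z_L') Hadamard (Z_E Z_E')
K_L  <- tcrossprod(Z_L)                       # Z_L Z_L'
K_E  <- tcrossprod(Z_E)                       # Z_E Z_E'
K_LE <- K_L * K_E                             # element-wise (Hadamard/Schur) product

ETA_LE <- c(ETA_ELG, list(LE = list(K = K_LE, model = "RKHS")))
fm_LE  <- BGLR(y = pheno$SY, ETA = ETA_LE,
               nIter = 6000, burnIn = 1000, verbose = FALSE)   # model E + L + G + LE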
Models E + L + G1, E + L + G2 and E + L + G3 with informed interaction between markers and environments (E + L + G1 + G1E, E + L + G2 + G2E and E + L + G3 + G3E)
Models E + L + G1 + G1E, E + L + G2 + G2E and E + L + G3 + G3E are extensions of models E + L + G1, E + L + G2 and E + L + G3, respectively, in which a random interaction term is added between the random effect of the ith environment (Ei) and the genomic component (gj) of the jth line, denoted Egij. The model can be written as
$${y}_{ij}=\mu +{E}_{i}+{L}_{j}+{g}_{j}+E{g}_{ij}+{e}_{ij}\qquad (4)$$
where \({\boldsymbol{Eg}} \sim N(0,({{\boldsymbol{Z}}}_{{\boldsymbol{g}}}{\boldsymbol{G}}{{\boldsymbol{Z}}}_{{\boldsymbol{g}}}^{{\boldsymbol{^{\prime} }}})^\circ ({{\boldsymbol{Z}}}_{{\boldsymbol{E}}}{{\boldsymbol{Z}}}_{{\boldsymbol{E}}}^{{\boldsymbol{^{\prime} }}}){\sigma }_{Eg}^{2})\) conceptually represents the interaction between each genomic marker and each environment, Zg is the incidence matrix for the effects of the genomic values g, and \({\sigma }_{Eg}^{2}\) is the variance component of Eg. Matrix ZE is the incidence matrix for the environments. As previously indicated the genomic matrix G is used to account for the genomic main effects and for the genotype × environment interaction effect, which could be either derived from marker systems G1, G2 or G3.
Models E + L + G1, E + L + G2 and E + L + G3 with both naïve and informed interactions (E + L + G1 + G1E + LE, E + L + G2 + G2E + LE and E + L + G3 + G3E + LE)
Models E + L + G1 + G1E + LE, E + L + G2 + G2E + LE and E + L + G3 + G3E + LE are extensions of models E + L + G1, E + L + G2 and E + L + G3, respectively, and they include the interaction between the environments and lines denoted by ELij, and the interaction between environments and the genomic values denoted by Egij. The model including the two interaction terms can be written as
$${y}_{ij}=\mu +{E}_{i}+{g}_{j}+{L}_{j}+E{L}_{ij}+E{g}_{ij}+{e}_{ij}\qquad (5)$$
where all terms have been defined previously. In this model, Egij approximates the effect of ELij, and the quality of this approximation will depend, among other factors, on the degree of linkage disequilibrium between the markers (or haplotypes) and the QTLs of the traits under study, as well as on the density and distribution of the markers and/or haplotypes in the genome.
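Putting the pieces together, the informed (marker × environment) interaction uses the observation-level genomic kernel in place of the line kernel in the Hadamard product, and the full model then carries both interaction terms. The following R sketch shows one possible specification with BGLR, continuing the toy objects above; it is a sketch of the construction, not the exact script used in the study.

# Informed interaction kernel: (Z_L G Z_L') Hadamard (Z_E Z_E'), with Z_g = Z_L here
K_gE <- K_g * K_E

# Full model E + L + G + LE + GE
ETA_full <- list(E  = list(X = Z_E,  model = "BRR"),
                 L  = list(X = Z_L,  model = "BRR"),
                 G  = list(K = K_g,  model = "RKHS"),
                 LE = list(K = K_LE, model = "RKHS"),
                 GE = list(K = K_gE, model = "RKHS"))
fm_full <- BGLR(y = pheno$SY, ETA = ETA_full,
                nIter = 6000, burnIn = 1000, verbose = FALSE)
fm_full$varE                                   # residual variance estimate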
Prediction assessment by cross-validation
Three different random CV schemes were used in the present study. The first cross-validation scheme (CV1) evaluates the prediction accuracy of models when a set of lines has not been assessed in any of the environments (prediction of newly developed lines)29. The second cross-validation scheme (CV2) evaluates the prediction accuracy of models when some lines have been evaluated in some environments but not in others (sparse testing). For the CV2 scheme, information from related lines and correlated environments is used, and the prediction assessment benefits from borrowing information from lines within an environment, from lines across environments, and from correlated environments29. The third cross-validation scheme (CV0) predicts an unobserved environment using the remaining environments as the training set (prediction of untested environments via a leave-one-out system). Predictability is measured using the Pearson correlation coefficient between the observed phenotype and the predicted genomic breeding value.
In both CV1 and CV2, a five-fold CV scheme was used to generate the training and testing sets, and the prediction accuracy was assessed for each testing set. For the CV1 approach, lines were divided into five folds such that approximately 20% of the lines were in each group and all phenotypes from the same line appeared in the same group; thus, when a genotype is not observed in all environments, the groups do not necessarily have the same sample size.
For CV2, phenotypes were randomly divided into five subsets, with 80% of the records assigned to the training set and 20% to the testing set. Four subsets were combined to form the training set, and the remaining subset was used as the validation set. The permutation of the five subsets led to five possible training and validation data sets. This procedure was repeated 20 times, giving a total of 100 runs for each trait-environment combination in each population. The same partitions were used for the analysis with all the GS models. Prediction accuracy was assessed as the average of the correlations between the phenotypes and the GEBVs over the 100 runs, calculated in each population for each trait-environment combination.
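The fold construction is the main difference between CV1 and CV2: in CV1 whole lines are masked, while in CV2 individual records are masked. A hedged R sketch of one replicate of each scheme is shown below, continuing the toy objects; BGLR treats NA phenotypes as unobserved and returns predictions for them in yHat, which is how the masking is exploited here. Fold labels and the helper function are hypothetical.

set.seed(3)
n <- nrow(pheno)

# CV1: assign each LINE to one of five folds, so all its records are masked together
fold_by_line <- sample(rep(1:5, length.out = length(lines)))
cv1_fold     <- fold_by_line[match(pheno$line, lines)]

# CV2: assign individual RECORDS to folds (sparse testing)
cv2_fold <- sample(rep(1:5, length.out = n))

accuracy <- function(fold_id) {
  acc <- numeric(5)
  for (k in 1:5) {
    y_train <- pheno$SY
    y_train[fold_id == k] <- NA                      # mask the test fold
    fm  <- BGLR(y = y_train, ETA = ETA_full,
                nIter = 6000, burnIn = 1000, verbose = FALSE)
    tst <- fold_id == k
    acc[k] <- cor(pheno$SY[tst], fm$yHat[tst])       # Pearson correlation on masked records
  }
  mean(acc)
}

accuracy(cv1_fold)   # one CV1 replicate; the study repeats the partitioning 20 times
accuracy(cv2_fold)   # one CV2 replicate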
For CV0, simulating the prediction of an unobserved set of environmental conditions, a leave-one-environment-out strategy was adopted: each environment was predicted using the remaining environments. Since no random process is involved in assigning folds, the correlation between predicted and observed values within each environment was computed only once.
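CV0 therefore needs no random fold assignment: each environment is held out in turn and predicted from the remaining environments. A minimal R sketch follows, continuing the toy objects; again it illustrates the scheme rather than reproducing the study's script.

# CV0: leave-one-environment-out
envs    <- levels(pheno$env)
acc_cv0 <- setNames(numeric(length(envs)), envs)

for (e in envs) {
  y_train <- pheno$SY
  y_train[pheno$env == e] <- NA                      # hold out one entire environment
  fm  <- BGLR(y = y_train, ETA = ETA_full,
              nIter = 6000, burnIn = 1000, verbose = FALSE)
  idx <- pheno$env == e
  acc_cv0[e] <- cor(pheno$SY[idx], fm$yHat[idx])     # computed once per environment
}
acc_cv0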
Computational tools for analysis
The Bayesian Generalized Linear Regression (BGLR) R package22,58,59 was used for fitting the described GS models. The package handles pedigree data in parametric and semiparametric contexts, allowing different random effects with user-defined covariance matrices. The scripts used are similar to those provided in Pérez-Rodríguez et al.59.
Varshney, R. K. et al. Draft genome sequence of chickpea (Cicer arietinum L.) provides a resource for trait improvement. Nat. Biotechnol. 31, 240–246 (2013).
Khatoon, N. & Prakash, J. Nutritional quality of microwave-cooked and pressure-cooked legumes. Int. J. Food Sci. Nutr. 55, 441–448 (2004).
Jukanti, A. K., Gaur, P. M., Gowda, C. L. & Chibbar, R. N. Nutritional quality and health benefits of chickpea (Cicer arietinum L.): a review. Br. J. Nutr. 108, S11–26 (2012).
Croser, J. S., Ahmad, F., Clarke, H. J. & Siddique, K. H. M. Utilisation of wild Cicer in chickpea improvement - progress, constraints, and prospects. Crop Pasture Sci. 54, 429–444 (2003).
Singh, U. Nutritional quality of chickpea (Cicer arietinum L.): current status and future research needs. Plant Foods Human Nut. 35, 339–351 (1985).
Singh, K. B. Chickpea (Cicer arietinum L.). Field Crops Res. 53, 161–170 (1997).
Varshney, R. K., Graner, A. & Sorrells, M. E. Genomics-assisted breeding for crop improvement. Trends Plant Sci. 10, 621–630 (2005).
Thudi, M. et al. Recent breeding programs enhanced genetic diversity in both desi and kabuli varieties of chickpea (Cicer arietinum L.). Sci. Rep. 6, 38636 (2016).
Thudi, M. et al. Whole genome re-sequencing reveals genome-wide variations among parental lines of 16 mapping populations in chickpea (Cicer arietinum L.). BMC Plant Biol. 16, 10 (2016).
Roorkiwal, M. et al. Genome-enabled prediction models for yield related traits in chickpea. Front Plant Sci. 7, 1666 (2016).
Varshney, R. K. et al. Fast-track introgression of "QTL-hotspot" for root traits and other drought tolerance traits in JG 11, an elite and leading variety of chickpea. The Plant Genome 6, 3 (2013).
Varshney, R. K. et al. Marker-assisted backcrossing to introgress resistance to fusarium wilt race 1 and ascochyta blight in C 214, an elite cultivar of chickpea. The Plant Genome 7, 1 (2014).
Desta, Z. A. & Ortiz, R. Genomic selection: genome-wide prediction in plant improvement. Trends Plant Sci. 19, 592–601 (2014).
Bernardo, R. & Yu, J. Prospects for genome-wide selection for quantitative traits in maize. Crop Sci. 47, 1082–1090 (2007).
Meuwissen, T. H. E., Hayes, B. J. & Goddard, M. E. Prediction of total genetic value using genome wide dense marker maps. Genetics 157, 1819–1829 (2001).
de los Campos, G., Gianola, D. & Rosa, G. J. M. Reproducing kernel Hilbert spaces regression: a general framework for genetic evaluation. J. Anim. Sci. 87, 1883–1887 (2009).
de los Campos, G. et al. Predicting quantitative traits with regression models for dense molecular markers and pedigree. Genetics 182, 375–385 (2009).
Crossa, J. et al. Prediction of genetic values of quantitative traits in plant breeding using pedigree and molecular markers. Genetics 186, 713–724 (2010).
Crossa, J. et al. Genomic selection in plant breeding: Methods, models, and perspectives. Trends Plant Sci. 22, 961–975 (2017).
Jannink, J.-L., Lorenz, A. J. & Iwata, H. Genomic selection in plant breeding: from theory to practice. Brief. Funct. Genomics 9, 166–177 (2010).
Heslot, N., Yang, H. P., Sorrells, M. E. & Jannink, J. L. Genomic selection in plant breeding: a comparison of models. Crop Sci. 52, 146–160 (2012).
de los Campos, G., Hickey, J. M., Pong-Wong, R., Daetwyler, H. D. & Calus, M. P. Whole-genome regression and prediction methods applied to plant and animal breeding. Genetics 193, 327–345 (2013).
Howard, R., Carriquiry, A. L. & Beavis, W. D. Parametric and nonparametric statistical methods for genomic selection of traits with additive and epistatic genetic architectures. G3 (Bethesda) 4, 1027–1046 (2014).
Windhausen, V. S. et al. Effectiveness of genomic prediction of maize hybrid performance in different breeding populations and environments. G3 (Bethesda) 2, 1427–1436 (2012).
Xu, S., Zhu, D. & Zhang, Q. Predicting hybrid performance in rice using genomic best linear unbiased prediction. Proc. Nat. Acad. Sci. 111, 12456–12461 (2014).
Zhao, Y., Mette, M. F., Gowda, M., Longin, C. F. & Reif, J. C. Bridging the gap between marker-assisted and genomic selection of heading time and plant height in hybrid wheat. Heredity 112, 638–645 (2014).
Mishra, U. S., Sirothia, P. & Bhadoria, U. S. Effects of phosphorus nutrition on growth and yield of chickpea (Cicer arietinum L.) under rain fed conditions. Int. J. Agri. Stat. Sci. 5, 85–88 (2009).
Bampidis, V. A. & Christodoulou, V. Chickpeas (Cicer arietinum L.) in animal nutrition: A review. Animal Feed Sci. Tech. 168, 1–20 (2011).
Burgueño, J., de los Campos, G., Weigel, K. & Crossa, J. Genomic prediction of breeding values when modeling genotype × environment interaction using pedigree and dense molecular markers. Crop Sci. 52, 707 (2012).
Jarquín, D. et al. A reaction norm model for genomic selection using high-dimensional genomic and environmental data. Theor. Appl. Genet. 127, 595–607 (2014).
Hayes, B. J., Lewin, H. A. & Goddard, M. E. The future of livestock breeding: genomic selection for efficiency, reduced emissions intensity, and adaptation. Trends Genet. 29, 206–214 (2013).
Pérez-Enciso, M., Rincón, J. C. & Legarra, A. Sequence-vs. chip-assisted genomic selection: accurate biological information is advised. Genet. Select. Evol. 47, 43 (2015).
Varshney, R. K. Exciting journey of 10 years from genomes to fields and markets: Some success stories of genomics-assisted breeding in chickpea, pigeonpea and groundnut. Plant Sci. 242, 98–107 (2016).
Heffner, E. L., Sorrells, M. E. & Jannink, J. L. Genomic selection for crop improvement. Crop Sci. 49, 1–12 (2009).
Heffner, E. L., Lorenz, A. J., Jannink, J. L. & Sorrells, M. E. Plant breeding with genomic selection: gain per unit time and cost. Crop Sci. 50, 1681–1690 (2010).
Isidro, J. et al. Training set optimization under population structure in genomic selection. Theor. Appl. Genet. 128, 145–158 (2015).
Daetwyler, H. D., Villanueva, B. & Woolliams, J. A. Accuracy of predicting the genetic risk of disease using a genome-wide approach. PloS One 3, e3395 (2008).
Chen, X. & Sullivan, P. F. Single nucleotide polymorphism genotyping: biochemistry, protocol, cost and throughput. Pharmacogenomics J. 3, 77–96 (2003).
Poland, J. & Rife, T. W. Genotyping-by-sequencing for plant breeding and genetics. The Plant Genome 5, 92–102 (2012).
Hayes, B. & Goddard, M. Genome-wide association and genomic selection in animal breeding. Genome 53, 876–883 (2010).
Goddard, M. E., Hayes, B. J. & Meuwissen, T. H. Genomic selection in livestock populations. Genet. Res. 92, 413–421 (2010).
Gorjanc, G., Hickey, J. M., Cleveland, M. A. & Houston, R. D. Potential of genotyping-by-sequencing for genomic selection in livestock populations. Genet. Select. Evol. 47, 12 (2015).
Jain, A., Roorkiwal, M., Pandey, M. & Varshney, R. K. Current status and prospects of genomic selection in legumes. In: Genomic Selection for Crop Improvement, R. K. Varshney et al. (eds), Springer International Publishing (2017).
Jaganathan, D. et al. Genotyping-by-sequencing based intra-specific genetic map refines a "QTL-hotspot" region for drought tolerance in chickpea. Mol. Gen. Genomics 290, 559–571 (2015).
Huang, Y. F., Poland, J. A., Wight, C. P., Jackson, E. W. & Tinker, N. A. Using genotyping-by-sequencing (GBS) for genomic discovery in cultivated oat. PLoS One 9, e102448 (2014).
Thudi, M. et al. Novel SSR markers from BAC-end sequences, DArT arrays and a comprehensive genetic map with 1,291 marker loci for chickpea (Cicer arietinum L.). PLoS One 6, e27275 (2011).
Bassi, F. M., Bentley, A. R., Charmet, G., Ortiz, R. & Crossa, J. Breeding schemes for the implementation of genomic selection in wheat (Triticum spp.). Plant Sci. 242, 23–36 (2016).
Bhat, J. A. et al. Genomic selection in the era of next generation sequencing for complex traits in plant breeding. Front. Genet. 7, 221 (2016).
Pierre, C. S. et al. Genomic prediction models for grain yield of spring bread wheat in diverse agro-ecological zones. Sci Rep. 6, 27312 (2016).
Jonas, E. & de Koning, D. J. Does genomic selection have a future in plant breeding? Trends Biotechnol. 31, 497–504 (2013).
Oakey, H. et al. Genomic selection in multi-environment crop trials. G3 (Bethesda) 6, 1313–1326 (2016).
Cuc, L. M. et al. Isolation and characterization of novel microsatellite markers and their application for diversity assessment in cultivated groundnut (Arachis hypogaea). BMC Plant Biol. 8, 55 (2008).
Elshire, R. J. et al. A robust, simple genotyping-by-sequencing (GBS) approach for high diversity species. PloS One 6, e19379 (2011).
Bradbury, P. J. et al. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics 23, 2633–2635 (2007).
Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 9, 357–359 (2012).
VanRaden, P. M. Genomic measures of relationship and inbreeding. Interbull Bull. 37, 33 (2007).
VanRaden, P. M. Efficient methods to compute genomic predictions. J. Dairy Sci. 91, 4414–4423 (2008).
Pérez-Rodríguez, P. & de los Campos, G. Genome-wide regression & prediction with the BGLR statistical package. Genetics 198, 483–495, https://doi.org/10.1534/genetics.114.164442 (2014).
Pérez-Rodríguez, P. et al. A pedigree-based reaction norm model for prediction of cotton yield in multi-environment trials. Crop Sci. 55, 1143–1151 (2015).
The authors are thankful to the Bill & Melinda Gates Foundation (Tropical Legumes III [OPP124589]; Genomic Open-source Breeding Informatics Initiative (GOBII)) and the Department of Agriculture, Cooperation & Farmers Welfare (DAC&FW), Government of India, for financial assistance. The work reported in this article was undertaken as a part of the CGIAR Research Program on Grain Legumes and Dryland Cereals (GLDC). ICRISAT is a member of the CGIAR.
Manish Roorkiwal and Diego Jarquin contributed equally to this work.
Affiliations
International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), Hyderabad, India: Manish Roorkiwal, Muneendra K. Singh, Pooran M. Gaur, Abhishek Rathore, Samineni Srinivasan, Ankit Jain, Vanika Garg, Sandip Kale, Annapurna Chitikineni & Rajeev K. Varshney
University of Nebraska-Lincoln, Lincoln, NE, 68583, USA: Diego Jarquin & Reka Howard
Indian Agricultural Research Institute (IARI), Delhi, India: Chellapilla Bharadwaj & Shailesh Tripathi
IPK-Gatersleben, D-06466, Gatersleben, Germany: Sandip Kale
Cornell University, Ithaca, NY 14850, USA: Kelly R. Robbins
International Maize and Wheat Improvement Center (CIMMYT), Mexico, Mexico: Jose Crossa
Authors: Manish Roorkiwal, Diego Jarquin, Muneendra K. Singh, Pooran M. Gaur, Chellapilla Bharadwaj, Abhishek Rathore, Reka Howard, Samineni Srinivasan, Ankit Jain, Vanika Garg, Sandip Kale, Annapurna Chitikineni, Shailesh Tripathi, Elizabeth Jones, Kelly R. Robbins, Jose Crossa & Rajeev K. Varshney
M.R. and D.J. performed the genotyping and phenotyping data analysis and compiled the results. M.K.S., P.M.G., C.B., S.S. and S.T. recorded the phenotyping data, and M.R. and A.C. generated the genotyping data. A.R., R.H., A.J., V.G., S.K., E.J. and K.R.R. performed different analyses related to genomic prediction. M.R., D.J., J.C. and R.K.V. interpreted the results and wrote the manuscript; R.K.V. conceived, designed and supervised the study.
Corresponding authors
Correspondence to Jose Crossa or Rajeev K. Varshney.
Adiabatic preparation of Multipartite GHZ states via Rydberg ground-state blockade
Dong-Xiao Li,1,2 Tai-Yu Zheng,1,2,3 and Xiao-Qiang Shao1,2,*
1Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, China
2Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China
[email protected]
*Corresponding author: [email protected]
Dong-Xiao Li, Tai-Yu Zheng, and Xiao-Qiang Shao, "Adiabatic preparation of Multipartite GHZ states via Rydberg ground-state blockade," Opt. Express 27, 20874-20885 (2019)
Multipartite GHZ states are useful resources for quantum information processing. Here we put forward a scalable way to adiabatically prepare multipartite GHZ states in a chain of Rydberg atoms. Building on the ground-state blockade effect of Rydberg atoms and the stimulated Raman adiabatic passage (STIRAP), we suppress the adverse effect of atomic spontaneous emission and obtain the multipartite GHZ states with high fidelity without requirements on the operational time. After investigating the feasibility of the proposal, we show that a 3-qubit GHZ state can be generated over a wide range of relevant parameters and that a fidelity above $98\%$ is achievable with current experimental technologies.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Fig. 1. The total scheme for preparing the $N$-qubit GHZ state consists of $N-1$ steps, which can be divided into two groups: Step 1 and Step $n~(n=2,3,\ldots,N-1)$. In Step 1, only the first two atoms are addressed individually, each by three lasers. In Step $n$, only the $n$-th atom is coupled to one laser and the $(n+1)$-th atom to three lasers.
Fig. 2. (a) The pulse shapes used to prepare the 3-qubit GHZ state. For Step 1 ($\Omega_c t\in [0,6000]$), we set $t_c=6000/\Omega_c$. For Step 2 ($\Omega_c t\in (6000,14000]$), we set $t_c=8000/\Omega_c$. (b) The pulse shapes used to prepare the 4-qubit GHZ state. For Step 1 ($\Omega_c t\in [0,6000]$), we set $t_c=6000/\Omega_c$. For Steps 2 and 3 ($\Omega_c t\in (6000,14000]$ and $(14000,22000]$), we set $t_c=8000/\Omega_c$. (c) and (d) The fidelity of the $3$- and $4$-qubit states governed by the original Hamiltonian and the effective Hamiltonian, where the fidelity of the state $\rho_i=|i\rangle\langle i|$ is defined as $F=\mathrm{Tr}\sqrt{\rho_i^{1/2}\rho(t)\rho_i^{1/2}}$ and $\rho(t)$ is the density matrix of the system at time $t$. The other relevant parameters are chosen as $\Omega_a=\Omega_b=\sqrt{0.05}\,\Omega_c$, $\Delta_p=20\Omega_c$, $\Delta_r=20\Omega_c$, $T=0.15t_c$, and $\tau=0.1t_c$.
Fig. 3. The dynamical evolution of the fidelity of the 3-qubit GHZ state for different $\delta$. For Step 1 and Step 2, we set $t_c=6000/\Omega_c$ and $t_c=8000/\Omega_c$, respectively. The other relevant parameters are $\Omega_{a}=\Omega_b=\sqrt{2}\,\Omega_c$, $\Delta_p=800\Omega_c$, $\Delta_r=20\Omega_c$, $T=0.15t_c$, and $\tau=0.1t_c$.
Fig. 4. The dynamical evolution of the fidelity of the 3-qubit GHZ state for different $\Delta_r$. For Step 1 and Step 2, we set $t_c=6000/\Omega_c$ and $t_c=8000/\Omega_c$, respectively. The other relevant parameters are $\Omega_{a}=\Omega_b=\sqrt{2}\,\Omega_c$, $\Delta_p=800\Omega_c$, $\delta=0$, $T=0.15t_c$, and $\tau=0.1t_c$.
Fig. 5. The fidelity of the $3$-qubit GHZ state governed by the original master equation. The decay rates are $\gamma_p=3\Omega_c$ and $\gamma=0.001\Omega_c$. The other relevant parameters are the same as those of Fig. 2(c).
(1) $$\Omega_a(t) = \Omega_a \exp\left[-\frac{(t - t_c/2 - \tau)^2}{T^2}\right],$$
(2) $$\Omega_b(t) = \Omega_b \exp\left[-\frac{(t - t_c/2 + \tau)^2}{T^2}\right],$$
(3) $$H_{I1}(t) = \sum_{j=1,2} \Omega_a(t)\,|p\rangle_j\langle 0|\,e^{-i\Delta_p t} + \Omega_b(t)\,|p\rangle_j\langle 1|\,e^{-i\Delta_p t} + \Omega_c\,|r\rangle_j\langle 1|\,e^{-i\Delta_r t} + \mathrm{H.c.} + \sum_{\alpha\neq\beta} U_{\alpha\beta}\,|rr\rangle_{\alpha\beta}\langle rr|,$$
(4) $$H_{I1}(t) = \sum_{j=1,2} \Omega_a(t)\,|p\rangle_j\langle 0| + \Omega_b(t)\,|p\rangle_j\langle 1| + \Omega_c\,|r\rangle_j\langle 1| + \mathrm{H.c.} - \Delta_p\,|p\rangle_j\langle p| - \Delta_r\,|r\rangle_j\langle r| + \sum_{k=1}^{N-1} U_{k,k+1}\,|rr\rangle_{k,k+1}\langle rr|.$$
(5) $$H_{I1}(t) = 2\Omega_a(t)\,|\psi_1\rangle_{12}\langle 00| + \Omega_b(t)\,|\psi_1\rangle_{12}\langle \psi_0| + \Omega_a(t)\,|\psi_2\rangle_{12}\langle \psi_0| + 2\Omega_b(t)\,|\psi_2\rangle_{12}\langle 11| + \Omega\,|11\rangle_{12}\langle rr| + \mathrm{H.c.} - \Delta_p\,|\psi_1\rangle_{12}\langle \psi_1| - \Delta_p\,|\psi_2\rangle_{12}\langle \psi_2|,$$
(6) $$H_{I1}(t) = 2\Omega_a(t)\,|\psi_1\rangle_{12}\langle 00| + \Omega_b(t)\,|\psi_1\rangle_{12}\langle \psi_0| + \Omega_a(t)\,|\psi_2\rangle_{12}\langle \psi_0| + \Omega_b(t)\,|\psi_2\rangle_{12}\bigl(\langle +| + \langle -|\bigr) + \mathrm{H.c.} + \Omega\bigl(|+\rangle_{12}\langle +| - |-\rangle_{12}\langle -|\bigr) - \Delta_p\,|\psi_1\rangle_{12}\langle \psi_1| - \Delta_p\,|\psi_2\rangle_{12}\langle \psi_2|,$$
(7) $$H_{\mathrm{eff}}^{1}(t) = 2\Omega_a(t)\,|\psi_1\rangle_{12}\langle 00| + \Omega_b(t)\,|\psi_1\rangle_{12}\langle \psi_0| + \mathrm{H.c.} - \Delta_p\,|\psi_1\rangle_{12}\langle \psi_1|.$$
(8) $$|\Phi\rangle_{12} = \cos[\Theta(t)]\,|00\rangle_{12} - \sin[\Theta(t)]\,|\psi_0\rangle_{12},$$
(9) $$\lim_{t\to -\infty}\cos[\Theta(t)] = 1, \qquad \lim_{t\to +\infty}\cos[\Theta(t)] = 0.$$
(10) $$H_{I}^{n}(t) = \sum_{j=n,n+1}\bigl(\Omega_c\,|r\rangle_j\langle 1| + \mathrm{H.c.} - \Delta_r\,|r\rangle_j\langle r|\bigr) + \Omega_a(t)\,|p\rangle_{n+1}\langle 0| + \Omega_b(t)\,|p\rangle_{n+1}\langle 1| + \mathrm{H.c.} - \Delta_p\,|p\rangle_{n+1}\langle p| + U_{n,n+1}\,|rr\rangle_{n,n+1}\langle rr|,$$
(11) $$H_{\mathrm{eff}}^{n}(t) = \Omega_a(t)\,|0p\rangle_{n,n+1}\langle 00| + \Omega_b(t)\,|0p\rangle_{n,n+1}\langle 01| + \mathrm{H.c.} - \Delta_p\,|0p\rangle_{n,n+1}\langle 0p|.$$
(12) $$|\Phi\rangle_{n,n+1} = \cos[\Theta'(t)]\,|00\rangle_{n,n+1} - \sin[\Theta'(t)]\,|01\rangle_{n,n+1}.$$
(13) $$H_{I1}(t) = 2\Omega_a(t)\,|\psi_1\rangle_{12}\langle 00| + \Omega_b(t)\,|\psi_1\rangle_{12}\langle \psi_0| + \Omega_a(t)\,|\psi_2\rangle_{12}\langle \psi_0| + 2\Omega_b(t)\,|\psi_2\rangle_{12}\langle 11| + \Omega\,|11\rangle_{12}\langle rr| + \mathrm{H.c.} - \Delta_p\,|\psi_1\rangle_{12}\langle \psi_1| - \Delta_p\,|\psi_2\rangle_{12}\langle \psi_2| + \delta\,|rr\rangle_{12}\langle rr|.$$
(14) $$H_{I1}(t) = 2\Omega_a(t)\,|\psi_1\rangle_{12}\langle 00| + \Omega_b(t)\,|\psi_1\rangle_{12}\langle \psi_0| + \Omega_a(t)\,|\psi_2\rangle_{12}\langle \psi_0| + \Omega_b(t)\,|\psi_2\rangle_{12}\bigl(\sin\theta\,\langle \tilde{+}| - \cos\theta\,\langle \tilde{-}|\bigr) + \mathrm{H.c.} + \tilde{\Omega}_{+}\,|\tilde{+}\rangle_{12}\langle \tilde{+}| + \tilde{\Omega}_{-}\,|\tilde{-}\rangle_{12}\langle \tilde{-}| - \Delta_p\,|\psi_1\rangle_{12}\langle \psi_1| - \Delta_p\,|\psi_2\rangle_{12}\langle \psi_2|.$$
(15) $$\dot{\rho} = -i\bigl[H_{I}^{j}(t),\rho\bigr] + \mathcal{L}_j\rho + \mathcal{L}_{j+1}\rho,$$
(16) $$\mathcal{L}_j\rho = \sum_{k=1}^{3} L_j^{k}\rho L_j^{k\dagger} - \frac{1}{2}\bigl(L_j^{k\dagger}L_j^{k}\rho + \rho L_j^{k\dagger}L_j^{k}\bigr),$$
(17) $$L_j^{1} = \sqrt{\frac{\gamma_p}{2}}\,|0\rangle_j\langle p|,$$
(19) $$L_j^{3} = \sqrt{\gamma}\,|1\rangle_j\langle r|.$$
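The two Gaussian pulses in Eqs. (1)-(2) and the dark-state limits in Eq. (9) can be checked numerically. The short sketch below does this; the parameter values are taken from the Fig. 2 caption, and the mixing-angle relation tan Θ(t) = 2Ω_a(t)/Ω_b(t) is an assumption read off the couplings in Eq. (7), not a statement from the source.

```python
import numpy as np

# Minimal numerical sketch of the pulse shapes in Eqs. (1)-(2) and the
# dark-state limits in Eq. (9).  Parameter values follow the Fig. 2 caption
# (Omega_a = Omega_b = sqrt(0.05)*Omega_c, T = 0.15*t_c, tau = 0.1*t_c,
# t_c = 6000/Omega_c).  The relation tan(Theta) = 2*Omega_a(t)/Omega_b(t) is
# an assumption read off the couplings in Eq. (7), not taken from the source.
Omega_c = 1.0                       # work in units of Omega_c
t_c = 6000.0 / Omega_c
Omega_a0 = Omega_b0 = np.sqrt(0.05) * Omega_c
T, tau = 0.15 * t_c, 0.10 * t_c

def Omega_a(t):                     # Eq. (1)
    return Omega_a0 * np.exp(-((t - t_c / 2 - tau) ** 2) / T ** 2)

def Omega_b(t):                     # Eq. (2)
    return Omega_b0 * np.exp(-((t - t_c / 2 + tau) ** 2) / T ** 2)

t = np.linspace(0.0, t_c, 2001)
Theta = np.arctan2(2 * Omega_a(t), Omega_b(t))   # assumed mixing angle

# cos(Theta) should move from ~1 at the start to ~0 at the end, cf. Eq. (9).
print(f"cos(Theta) at t = 0   : {np.cos(Theta[0]):.4f}")
print(f"cos(Theta) at t = t_c : {np.cos(Theta[-1]):.4f}")
```

With τ > 0 the Ω_b pulse precedes the Ω_a pulse, so cos Θ sweeps from 1 to 0, consistent with the limits in Eq. (9).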
01.12.2020 | Original Article | Issue 1/2020 | Open Access
Dynamic Stiffness Analysis and Experimental Verification of Axial Magnetic Bearing Based on Air Gap Flux Variation in Magnetically Suspended Molecular Pump
Chinese Journal of Mechanical Engineering, Issue 1/2020
Jinji Sun, Wanting Wei, Jiqiang Tang, Chun-E Wang
1 Introduction
The molecular pump is a kind of high-end scientific instrument for obtaining a high-vacuum environment, used in devices such as cyclotrons, lasers, mass spectrometers, and gyro equipment [ 1 – 3 ]. Compared with traditional bearings, the magnetic bearing has broad prospects in industrial applications [ 4 – 6 ] because of its special advantages, such as non-contact operation, no friction, low power consumption, low maintenance cost, dynamic controllability, and active control of rotor dynamic imbalance. Using magnetic bearings as the support unit in a magnetically suspended molecular pump (MSMP) allows oil-free and wear-free operation of the molecular pump, quiet running, and minimal vibration, which is especially suitable for the semiconductor industry, such as ultrahigh-vacuum applications [ 7 – 9 ].
In most magnetically suspended molecular pumps, the support unit consists of radial active magnetic bearings (RAMBs) and an axial magnetic bearing (AMB). The magnetic bearings, as the supporting parts, are used to maintain stable suspension. In practice, the magnetic bearings are always subject to time-varying forces, and the coil currents are changed to adjust the bearing force in response to the applied load. These time-varying currents and air gaps cause flux variations in the magnetic path and hence induce eddy currents, which not only result in a power loss of the system but also decrease the magnitude of the bearing force and stiffness. This affects the dynamic performance and stability of the whole system [ 10 – 12 ].
Several studies have developed analytical models for magnetic bearings from different aspects. Kucera et al. [ 13 ] presented an analytical method for axial bearings considering eddy currents, in which the magnetic field solutions for a semi-infinite plate were used to approximate the flux distributions; the analytical solutions for the flux, the impedance, and the force of an axial bearing can then be derived. The traditional magnetic circuit method was improved by introducing a frequency-dependent effective reluctance [ 14 – 16 ]: the cylindrical magnetic actuator (which has the same working principle as the axial magnetic bearing) was divided into several elements, the frequency-dependent reluctance of each element was obtained from the flux distribution, and the dynamic force and stiffness of the axial magnetic bearing were then derived. Building on these studies, Sun et al. [ 17 ] extended the model to the magnetic actuator with a center hole common in rotating machinery and considered factors such as saturation, leakage, and fringing flux; a simple magnetic circuit model including eddy currents was presented, from which the stiffness can be derived analytically. A magnetic circuit calculation model was proposed for the core eddy-current loss caused by the axial high-frequency vibration of the axial magnetic bearing [ 18 , 19 ], and the dynamic air-gap flux and electromagnetic force were analyzed. By analytical and finite element methods, segmented AMBs were developed to reduce the effect of eddy currents [ 20 , 21 ]; the results demonstrate that segmentation of the stator yields dramatic improvements in actuator dynamic performance. Considering the driving method of excitation, an analytical model for a solid-core AMB including the eddy-current effect under voltage drive, based on the magnetic circuit method, was presented in Ref. [ 22 ], which indicates that the voltage drive can effectively reduce the dynamic performance.
Moreover, the measurement of dynamic stiffness is especially important. Ref. [ 23 ] proposed an axial displacement stiffness measurement method for an axial passive magnetic bearing and verified its correctness in a magnetically suspended control moment gyro. In Ref. [ 24 ], a new stiffness measurement method for a repulsive passive axial magnetic bearing with a Halbach magnet array was put forward. Ref. [ 25 ] proposed a new stiffness measurement method for a magnetically suspended flywheel to measure the current stiffness and displacement stiffness of a permanent-magnet-biased radial magnetic bearing. Refs. [ 26 , 27 ] described a detailed stiffness measurement method for a radial hybrid magnetic bearing. The above studies on stiffness measurement treat the stiffness of the magnetic bearing as a constant. However, some researchers have found that the stiffness of a magnetic bearing decreases significantly as the field frequency increases, which influences the stability of the radial magnetic bearing system [ 28 , 29 ]. Sun et al. [ 17 ] carried out an experiment on the dynamic stiffness including eddy currents, but the specific experimental method was not described.
In conclusion, early models of solid magnetic actuators assumed that the flux density in the air gap was homogeneous because no eddy current is produced in the air gap. However, the flux density varies with axial position along the air gap. Neglecting the flux variation across the air gap at the analytical stage yields an inaccurate stiffness result, and an inaccurate analytical model results in biased feedback for the control system and can even worsen the control performance of the whole system. This variation of flux density across the air gap is an important aspect of actuator behavior that must be captured in high-fidelity models. Furthermore, in order to verify the correctness of the analytical dynamic stiffness model, it is necessary to measure the actual dynamic stiffness.
In this paper, considering the dynamic situation in which a time-varying force leads to time-varying currents and air gap at a specific frequency, a dynamic stiffness model of the axial magnetic bearing including the eddy-current effect and the variation of flux density across the air gap is first built up by analyzing the dynamic equivalent magnetic circuit model. Then, a dynamic stiffness measurement method is applied to the magnetically suspended molecular pump to verify the validity of the theoretical results.
2 Magnetic Circuit Model of Axial Magnetic Bearing for MSMP
The magnetically suspended molecular pump model is shown in Figure 1; it is mainly composed of one rotor shaft, two RAMBs, one axial active magnetic bearing, one high-speed BLDCM, two integrated displacement sensors, and stator blades. The configuration of the AMB is pictured in Figure 2; the stator and rotor cores are made of silicon steel, and the corresponding parameters are described in Table 1.
Magnetically suspended molecular pump model
Configuration of AMBs
Table 1. Main parameters of AMB
Rotor outer diameter, r_0 (mm)
Inner magnetic conducting ring outer diameter
Inner magnetic conducting ring inner diameter
Outer magnetic conducting ring inner diameter
Outer magnetic conducting ring outer diameter
Air gap length
Thrust plate thickness, h_1 (mm)
Axial length of winding slot
Axial length of outside magnetic conducting ring
Number of turns, N (turns/pole)
Bias current, I_0 (A)
Conductivity, σ (S/m): 7.46 × 10^6
The traditional static magnetic circuit theory has been widely used in the design of an AMB. The magnetic circuit model consists of the magnetomotive force (MMF), determined by the coil turns and currents, and the reluctances of the inner and outer air gaps. The flux and bearing force can be derived conveniently once the air-gap reluctance and the applied MMF are known. However, when the load force of the magnetic bearing varies at a specific frequency, the time-varying force leads to time-varying currents and air gap, which generate eddy currents in the rotor and stator. This situation, in which the time-varying force leads to time-varying currents and air gap, is what this paper refers to as the dynamic characteristics. In this case, the reluctance of the iron core varies under the influence of the eddy-current effect. Following the effective-reluctance analysis method of Ref. [ 10 ], the eddy current induced by the alternating magnetic field in the solid iron core is represented as an equivalent eddy-current reluctance, and the air-gap flux and electromagnetic force can then be calculated with the equivalent magnetic circuit method.
In order to obtain the effective magnetic reluctance of the iron core, the magnetic path of the axial magnetic bearing is divided into several parts (ignoring magnetic leakage), as shown in Figure 3. Region 1 includes a transition region at the top of the inner magnetic pole, the air gap corresponding to the inner magnetic pole, and a transition region in the thrust disk corresponding to the inner magnetic pole. Region 3 includes a transition region at the top of the outer magnetic pole, the air gap corresponding to the outer magnetic pole, and a transition region in the corresponding part of the thrust disk. In regions 1 and 3, the magnetic field lines in the transition regions of the thrust disk are distributed in the radial direction, and the field lines in the air gaps are parallel to the axial direction. Region 2 is the part of the thrust disk other than regions 1 and 3. Region 5 is the portion of the stator corresponding to region 2; the field lines in regions 2 and 5 are all parallel to the radial direction. Region 4 is the portion of the inner magnetically conductive ring from which region 1 is removed, and region 6 is the corresponding portion of the outer magnetically conductive ring.
Axial magnetic bearing area division
The corresponding equivalent reluctance is calculated for each part of the core, and the equivalent reluctance expressions are simplified by an ad hoc approximation and by Taylor-series and Padé approximations, respectively. Finally, the equivalent reluctance of each region can be obtained. According to Ref. [ 10 ], the effective reluctance expressions of the elements are:
$$R_{1}^{0} = \frac{g}{{\pi \mu_{0} (r_{2}^{2} - r_{1}^{2} )}},$$
$$R_{2} = \frac{{\ln (r_{3} /r_{2} )}}{{2\pi h_{1} \mu_{r} \mu_{0} }} + \frac{{\ln (r_{3} /r_{2} )}}{{2\pi }}\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt s ,$$
$$R_{4} = \frac{{h_{2} }}{{\pi \mu_{r} \mu_{0} (r_{2}^{2} - r_{1}^{2} )}} + \frac{{h_{2} }}{{2\pi r_{2} }}\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt s ,$$
$$R_{5} = \frac{{\ln (r_{3} /r_{2} )}}{{2\pi h_{3} \mu_{r} \mu_{0} }} + \frac{{\ln (r_{3} /r_{2} )}}{2\pi }\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt s ,$$
$$R_{6} = \frac{{h_{2} }}{{\pi \mu_{r} \mu_{0} (r_{4}^{2} - r_{3}^{2} )}} + \frac{{h_{2} }}{{2\pi r_{3} }}\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt s .$$
Each dynamic equivalent reluctance can be expressed as:
$$R_{j} = R_{j}^{0} + R_{j}^{e} = R_{j}^{0} + c_{j} \sqrt s ,\quad j = 1, \ldots ,6.$$
\(R_{j}^{0}\) is the static reluctance without eddy current, \(R_{j}^{e} = c_{j} \sqrt s\) is the dynamic reluctance including eddy current.
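As a quick numerical illustration of the frequency dependence implied by $R_j = R_j^0 + c_j\sqrt{s}$ with $s = j\omega$, the sketch below evaluates one such effective reluctance over a range of excitation frequencies. The values of $R_j^0$ and $c_j$ used here are placeholders, not the values that follow from the Table 1 geometry and the expressions above.

```python
import numpy as np

# Sketch of one dynamic effective reluctance R_j(s) = R_j^0 + c_j*sqrt(s)
# with s = j*omega.  R0 and c are placeholder values only; the real ones
# follow from the Table 1 geometry and the expressions for R_1 ... R_6 above.
def dynamic_reluctance(omega, R0, c):
    s = 1j * omega
    return R0 + c * np.sqrt(s)

R0, c = 1.0e6, 2.0e3   # placeholder static reluctance (1/H) and eddy coefficient
for f in (0, 10, 100, 1000):   # excitation frequency in Hz
    R = dynamic_reluctance(2 * np.pi * f, R0, c)
    print(f"f = {f:5d} Hz   |R| = {abs(R):.3e}   phase = {np.degrees(np.angle(R)):5.1f} deg")
```

Both the magnitude and the phase of the effective reluctance grow with the square root of frequency, which is what ultimately attenuates the dynamic force and stiffness derived below.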
Although no eddy current is produced in the air gap itself, the eddy-current effect in the iron core causes the flux density in solid actuators to vary across the air gap. According to Ref. [ 9 ], if there is no flux leakage, the magnetic flux of the magnetic bearing is:
$$\phi = \frac{{\mu_{0} NAI}}{{2x + \frac{{l_{fe} }}{{\mu_{r} }}\frac{\gamma }{\tanh \left( \gamma \right)}}} = \mu_{0} NAI\frac{1}{2g}\frac{1}{{1 + \frac{1}{{a_{fe} }}\frac{\gamma }{\tanh \left( \gamma \right)}}},$$
$$a_{fe} = \mu_{r} \frac{2g}{{l_{fe} }},$$
$$\gamma = h_{1} \sqrt {s\mu \sigma } ,$$
where \(l_{fe}\) is the length of the solid core. The dynamic reluctance of the inner and outer air gaps including the eddy current is:
$$R_{(1,3)}^{e} = \frac{NI}{\phi } = \frac{{2g\left[ {1 + \frac{1}{{a_{fe} }}\frac{\gamma }{\tanh \left( \gamma \right)}} \right]}}{{\mu_{0} A_{(n,w)} }},$$
where \(A_n\) and \(A_w\) are the cross-sectional areas of the inner and outer air gaps, respectively.
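The factor $1 + \frac{1}{a_{fe}}\frac{\gamma}{\tanh(\gamma)}$ in the air-gap reluctance expression above is what carries the flux variation across the air gap into the model. The sketch below evaluates it versus frequency; the conductivity is the value given in Table 1, while the relative permeability and the lengths are assumed placeholder values, and $\mu$ in the definition of $\gamma$ is taken as $\mu_r\mu_0$.

```python
import numpy as np

# Sketch of the eddy-current correction factor 1 + (1/a_fe)*gamma/tanh(gamma)
# appearing in the air-gap reluctance expression above, which carries the flux
# variation across the air gap.  sigma is the value given in Table 1; mu_r and
# the lengths g, h1, l_fe are assumed placeholders, and mu is taken as mu_r*mu_0.
mu0 = 4 * np.pi * 1e-7
mu_r = 2000.0        # assumed relative permeability of the silicon steel
sigma = 7.46e6       # S/m, from Table 1
g = 0.5e-3           # assumed air-gap length (m)
h1 = 6e-3            # assumed thrust-plate thickness (m)
l_fe = 60e-3         # assumed iron-path length (m)
a_fe = mu_r * 2 * g / l_fe          # definition of a_fe above

def correction(omega):
    gamma = h1 * np.sqrt(1j * omega * mu_r * mu0 * sigma)   # gamma = h1*sqrt(s*mu*sigma)
    return 1 + (1 / a_fe) * gamma / np.tanh(gamma)

for f in (1, 10, 100, 1000):   # Hz
    print(f"f = {f:5d} Hz   |factor| = {abs(correction(2 * np.pi * f)):.3f}")
```

At low frequency the factor tends to $1 + 1/a_{fe}$ (the air gap plus the static iron-path contribution), so the reluctance reduces to its static value; at higher frequencies it grows roughly as the square root of frequency.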
Considering the flux density variation across the air gap, the equivalent magnetic circuit diagram of this axial electromagnetic magnetic bearing is shown in Figure 4.
Dynamic equivalent magnetic circuit model of AMB
Here the internal air-gap magnetic flux ϕ_n and the external air-gap magnetic flux ϕ_w are equal. The dynamic total reluctance based on the flux variation can be expressed as:
$$R_{t} = \sum\limits_{j = 1}^{6} {R_{j} = } \sum\limits_{j = 1}^{6} {R_{j}^{0} + \sum\limits_{j = 1}^{6} {R_{j}^{e} } } .$$
The static total reluctance can be expressed as:
$$R_{t}^{0} = \sum\limits_{j = 1}^{6} {R_{j}^{0} } .$$
3 Analysis of Dynamic Stiffness of AMB for MSMP
According to the above analysis, considering the influence of the eddy-current effect including the flux density variation across the air gap, it can be seen from Figure 4 that when the thrust disk is stably suspended in the center position of the magnetic bearing, the internal and external air gap magnetic fluxes are equal, where ϕ w is the magnetic flux at the outer magnetic pole air gap and ϕ n is the magnetic flux at the inner magnetic pole air gap.
$$\phi_{n} = \phi_{w} = \frac{{NI_{0} }}{{R_{t} }}.$$
The force of the single-sided magnetic bearing (one side of the thrust plate) is:
$$\begin{aligned} F & = \frac{{\phi_{n}^{2} }}{{2\mu_{0} A_{n} }} + \frac{{\phi_{w}^{2} }}{{2\mu_{0} A_{w} }} = {{N^{2} I_{0}^{2} \left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)} \mathord{\left/ {\vphantom {{N^{2} I_{0}^{2} \left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)} {}}} \right. \kern-0pt} {}} \hfill \\ & 2\mu_{0} \left[ {R_{t}^{0} + \left( {\frac{{\ln (r_{3} /r_{2} )}}{\pi }{ + }\frac{{h_{2} }}{{2\pi r_{2} }}{ + }\frac{{h_{2} }}{{2\pi r_{3} }}} \right)\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt {jw} } \right. \hfill \\&\quad \left. { + }{\frac{{ 2g\left[ {1 + \frac{1}{{a_{fe} }}\frac{{h_{1} \sqrt {jw\mu \sigma } }}{{\tanh \left( {h_{1} \sqrt {jw\mu \sigma } } \right)}}} \right]}}{{\mu_{0} }}\left( {\frac{ 1}{{A_{n} }}{ + }\frac{ 1}{{A_{w} }}} \right)} \right]^{2}. \hfill \\ \end{aligned}$$
The relationship between the dynamic force and frequency is shown in Figure 5, which illustrates that the amplitude of the dynamic force is attenuated as the frequency increases.
Dynamic axial force value at different frequency
3.1 Dynamic Current Stiffness
The bias current \(I_{0}\) and the differential alternating control current \(i_{c} = ie^{j\omega t}\) are applied to the coils above and below the thrust disk, namely,
$$I = I_{0} \pm ie^{j\omega t} .$$
The magnetic fluxes on the above and below sides of the thrust plate are:
$$\left\{ \begin{aligned} \phi_{a} = \phi_{0} + \phi_{i} , \hfill \\ \phi_{b} = \phi_{0} - \phi_{i} , \hfill \\ \end{aligned} \right.$$
where \(\phi_{0}\) is the static bias magnetic flux generated by the bias current, \(\phi_{i}\) is the dynamic magnetic flux generated by the coils considering the eddy-current effect including the flux density variation across the air gap, expressed as:
$$\begin{aligned} \phi_{0} & = \frac{{NI_{0} }}{{R_{t}^{0} }} = {{\pi NI_{0} } \mathord{\left/ {\vphantom {{\pi NI_{0} } {\left[ {\frac{g}{{\mu_{0} (r_{2}^{2} - r_{1}^{2} )}}{ + }\frac{{\ln (r_{3} /r_{2} )}}{{2h_{1} \mu_{r} \mu_{0} }}{ + }\frac{g}{{\mu_{0} (r_{4}^{2} - r_{3}^{2} )}}} \right.}}} \right. \kern-0pt} {\left[ {\frac{g}{{\mu_{0} (r_{2}^{2} - r_{1}^{2} )}}{ + }\frac{{\ln (r_{3} /r_{2} )}}{{2h_{1} \mu_{r} \mu_{0} }}{ + }\frac{g}{{\mu_{0} (r_{4}^{2} - r_{3}^{2} )}}} \right.}} \\ & \quad { + }\left. {\frac{{h_{2} }}{{\mu_{r} \mu_{0} (r_{2}^{2} - r_{1}^{2} )}}{ + }\frac{{\ln (r_{3} /r_{2} )}}{{2h_{3} \mu_{r} \mu_{0} }}{ + }\frac{{h_{2} }}{{\mu_{r} \mu_{0} (r_{4}^{2} - r_{3}^{2} )}}} \right], \\ \end{aligned}$$
$$\begin{aligned} \phi_{i} &= \frac{{Nie^{jwt} }}{{R_{t} }} = {{Nie^{jwt} } \mathord{\left/ {\vphantom {{Nie^{jwt} } {\left[ {R_{t}^{0} + \left( {\frac{{\ln (r_{3} /r_{2} )}}{\pi }{ + }\frac{{h_{2} }}{{2\pi r_{2} }}{ + }\frac{{h_{2} }}{{2\pi r_{3} }}} \right)} \right.}}} \right. \kern-0pt} {\left[ {R_{t}^{0} + \left( {\frac{{\ln (r_{3} /r_{2} )}}{\pi }{ + }\frac{{h_{2} }}{{2\pi r_{2} }}{ + }\frac{{h_{2} }}{{2\pi r_{3} }}} \right)} \right.}} \hfill \\&\quad \left. { \times \sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt {jw} { + }\frac{{2g\left[ {1 + \frac{1}{{a_{fe} }}\frac{{h_{1} \sqrt {jw\mu \sigma } }}{{\tanh \left( {h_{1} \sqrt {jw\mu \sigma } } \right)}}} \right]}}{{\mu_{0} }}\left( {\frac{ 1}{{A_{n} }}{ + }\frac{ 1}{{A_{w} }}} \right)} \right]. \hfill \\ \end{aligned}$$
The resultant force of the above and below sides of the magnetic bearing can be obtained as:
$$\begin{aligned} F & = \frac{{(\phi_{0} + \phi_{i} )^{2} }}{{2\mu_{0} A_{n} }} + \frac{{(\phi_{0} + \phi_{i} )^{2} }}{{2\mu_{0} A_{w} }} - \frac{{(\phi_{0} - \phi_{i} )^{2} }}{{2\mu_{0} A_{n} }} \\ & \quad - \frac{{(\phi_{0} - \phi_{i} )^{2} }}{{2\mu_{0} A_{w} }} = \frac{{2\phi_{0} \phi_{i} }}{{\mu_{0} A_{n} }} + \frac{{2\phi_{0} \phi_{i} }}{{\mu_{0} A_{w} }}. \\ \end{aligned}$$
Current stiffness is calculated as:
$$\begin{aligned} k_{i} (\omega ) & = \left. {\frac{\partial F}{\partial i}} \right|_{{i_{c} = 0}} = \frac{{2N^{2} I_{0} }}{{\mu_{0} R_{t}^{o} R_{t} }}\left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right) = {{2N^{2} I_{0} \left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)} \mathord{\left/ {\vphantom {{2N^{2} I_{0} \left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)} {}}} \right. \kern-0pt} {}} \\ & \left[ {\mu_{0} R_{t}^{o} \left( {R_{t}^{0} + \left( {\frac{{\ln (r_{3} /r_{2} )}}{\pi }{ + }\frac{{h_{2} }}{{2\pi r_{2} }}{ + }\frac{{h_{2} }}{{2\pi r_{3} }}} \right)\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt {jw} } \right.} \right. \\ & \quad + \left. {\frac{{2g\left[ {1 + \frac{1}{{a_{fe} }}\frac{{h_{1} \sqrt {jw\mu \sigma } }}{{\tanh \left( {h_{1} \sqrt {jw\mu \sigma } } \right)}}} \right]}}{{\mu_{0} }}\left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)} \right]. \\ \end{aligned}$$
The relationship between the dynamic current stiffness and frequency is shown in Figure 6, which illustrates that the amplitude of the current stiffness is attenuated as the frequency increases.
Dynamic current stiffness at different frequency
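From the expression for k_i(ω) above, the attenuation relative to the static value reduces to k_i(ω)/k_i(0) = R_t^0/R_t(ω), and the displacement stiffness expression in Section 3.2 gives the same ratio. The sketch below evaluates this ratio with the whole eddy-current contribution lumped into a single placeholder coefficient, so the numbers are illustrative only.

```python
import numpy as np

# Illustrative attenuation of the current stiffness: from the k_i(omega)
# expression above, k_i(omega)/k_i(0) = R_t^0 / R_t(omega).  Here the total
# reluctance is modelled as R_t(omega) = R_t^0 + C*sqrt(j*omega), with the
# whole eddy-current (and air-gap correction) contribution lumped into one
# placeholder coefficient C.
R_t0 = 2.0e6   # placeholder static total reluctance (1/H)
C = 1.5e4      # placeholder lumped eddy-current coefficient

def ki_ratio(omega):
    R_t = R_t0 + C * np.sqrt(1j * omega)
    return abs(R_t0 / R_t)

for f in (0, 50, 200, 500, 1000):   # Hz
    print(f"f = {f:5d} Hz   k_i(f)/k_i(0) = {ki_ratio(2 * np.pi * f):.3f}")
```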
3.2 Dynamic Displacement Stiffness
With the bias current in the coil kept unchanged, and assuming that the rotor undergoes a small harmonic vibration \(g_{z} e^{j\omega t}\) about the center position, the upper and lower axial air gaps can be expressed as:
$$z = g \pm g_{z} e^{j\omega t} .$$
In this case, the air gap reluctance of above and below magnetic poles is expressed as:
$$R_{ag} = R_{g0} + R_{gz} e^{j\omega t} ,$$
$$R_{bg} = R_{g0} - R_{gz} e^{j\omega t} ,$$
where \(R_{g0}\) is the static air gap reluctance, expressed as \(R_{g0} = \frac{g}{{\mu_{0} }}\left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)\), and \(R_{gz}\) is the dynamic air gap reluctance generated by the magnetic pole vibration, expressed as \(R_{gz} = \frac{{g_{z} }}{{\mu_{0} }}\left( {\frac{1}{{A_{n} }} + \frac{1}{{A_{w} }}} \right)\). Ignoring the magnetic flux leakage, the magnetic fluxes of the upper and lower air gaps can be expressed as:
$$\begin{aligned} \phi_{a} = \frac{{NI_{0} }}{{R_{t} + R_{gz} e^{jwt} }}, \hfill \\ \phi_{b} = \frac{{NI_{0} }}{{R_{t} - R_{gz} e^{jwt} }}. \hfill \\ \end{aligned}$$
Since \(g_z\) is much smaller than g, \(R_{gz}\) is much smaller than \(R_{t}\); Eq. (25) can therefore be expanded in a Taylor series, and keeping only the first two terms gives:
$$\begin{aligned} \phi_{a} \approx \phi_{0} + \phi_{nz} , \hfill \\ \phi_{b} \approx \phi_{0} - \phi_{nz} , \hfill \\ \end{aligned}$$
$$\phi_{0} = \frac{{NI_{0} }}{{R_{t}^{0} }},$$
$$\phi_{nz} = - \frac{{NI_{0} R_{gz} }}{{R_{t}^{0} R_{t} }}e^{jwt} .$$
In the same way as current stiffness, the formula for calculating the displacement stiffness considering the eddy-current effect including the flux density variation across the air gap (the resultant force on both sides of the thrust disk) is:
$$\begin{aligned} k_{z} (\omega ) &= \left. {\frac{{\partial F_{z} }}{\partial z}} \right|_{{g_{z} = 0}} = - \frac{{2N^{2} I_{0}^{2} }}{{\mu_{0}^{2} \left( {R_{t}^{0} } \right)^{2} R_{t} }}\left( {\frac{1}{{A_{n}^{2} }} + \frac{1}{{A_{w}^{2} }}} \right) \hfill \\& {{ = - 2N^{2} I_{0}^{2} \left( {\frac{1}{{A_{n}^{2} }} + \frac{1}{{A_{w}^{2} }}} \right)} \mathord{\left/ {\vphantom {{ = - 2N^{2} I_{0}^{2} \left( {\frac{1}{{A_{n}^{2} }} + \frac{1}{{A_{w}^{2} }}} \right)} {\left[ {\mu_{0}^{2} \left( {R_{t}^{0} } \right)^{2} \left( {R_{t}^{0} } \right.} \right.}}} \right. \kern-0pt} {\left[ {\mu_{0}^{2} \left( {R_{t}^{0} } \right)^{2} \left( {R_{t}^{0} } \right.} \right.}} \hfill \\ &\quad+ \left( {\frac{{\ln (r_{3} /r_{2} )}}{\pi }{ + }\frac{{h_{2} }}{{2\pi r_{2} }}{ + }\frac{{h_{2} }}{{2\pi r_{3} }}} \right)\sqrt {\frac{\sigma }{{\mu_{r} \mu_{0} }}} \sqrt {jw} \hfill \\ &\quad\left. {{ + }\frac{{2g\left[ {1 + \frac{1}{{a_{fe} }}\frac{{h_{1} \sqrt {jw\mu \sigma } }}{{\tanh \left( {h_{1} \sqrt {jw\mu \sigma } } \right)}}} \right]}}{{\mu_{0} }}\left( {\frac{1}{{A_{n}^{2} }} + \frac{1}{{A_{w}^{2} }}} \right)} \right], \hfill \\ \end{aligned}$$
where N represents the number of turns, \(I_{0}\) is the bias current.
The relationship between the dynamic displacement stiffness and frequency is shown in Figure 7, which illustrates that the amplitude of the displacement stiffness is attenuated as the frequency increases.
Dynamic displacement stiffness at different frequency
4 Dynamic Stiffness Measurement Method
The stiffness obtained from the analytical model may not satisfy the requirements of actual magnetic bearing design and application, so it is important to measure the actual stiffness of the magnetic bearing. This section adopts a magnetic bearing stiffness test method that can measure the current stiffness and displacement stiffness of the magnetic bearing rotor at a certain speed and can be used in the molecular pump system to test the dynamic stiffness of the axial magnetic bearing. The method proceeds as follows: first adjust the reference value of the displacement sensor, then stabilize the rotor at a certain position with the magnetic bearing itself and inject a sinusoidal signal into the bearing currents to excite the closed-loop system, and finally measure the control current of the magnetic bearing at the corresponding position and calculate the stiffness.
When the magnetic bearing rotor system is stably suspended, a sinusoidal signal is injected into the bearing currents to represent the time-varying currents and air gap at a specific frequency; the electromagnetic force of the magnetic bearing is then a frequency-dependent dynamic force. In this paper, gravity is taken as the bearing load of the magnetic bearing. Because of the internal-rotor structure of the molecular pump, it is difficult to install a force sensor, and the practical means of measuring force are limited; in addition, the frequency-dependent dynamic force is much smaller than gravity. Therefore, gravity is used as the equivalent of the dynamic electromagnetic force. The measured channel should be aligned with the direction of gravity during the test. Because the mechanical structure and mass distribution of the whole system are known, the electromagnetic force of the measured magnetic bearing channel can be expressed as a function of gravity by force analysis.
$$F \approx f(mg),$$
where F is the electromagnetic force of the measured channel of the magnetic bearing and m is the mass of the rotor of the magnetic bearing system.
Because the electromagnetic force can be linearized near the center of the magnetic bearing, it can be expressed as:
$$i(\omega ) = - \frac{{k_{h} (\omega )}}{{k_{i} (\omega )}}(h - h_{0} ) + \frac{f(mg)}{{k_{i} (\omega )}},$$
where \(k_{i} (\omega )\) and \(k_{h} (\omega )\) are the current stiffness and displacement stiffness, \(i(\omega )\) is the control current, h is the rotor displacement, and \(h_0\) is the magnetic center position of the magnetic bearing to be tested.
When the rotor system runs at a specific speed, the rotor is adjusted to a position \(h_1\) near the magnetic center and the corresponding control current is measured as \(i_{1} (\omega )\); the current stiffness and displacement stiffness can then be calculated by Eqs. (32) and (33). In the magnetic bearing control system, the displacement of the rotor is measured directly by an eddy-current displacement sensor and compared with the displacement reference signal set in the control program, and a PID operation is then performed to make the rotor suspend stably at the reference position. The reference signal can be adjusted online by the control program, so at a given frequency the rotor position can be changed by adjusting the displacement reference signal online. The displacement reference value 2048 is used as the central position for adjusting the stable suspension position of the rotor. The adjustment range of the displacement reference signal is 1600-2500, and the corresponding rotor suspension position is −0.04 to 0.04 mm.
$$k_{i} (\omega ) = \frac{f(mg)}{{i_{0} (\omega )}},$$
$$k_{h} (\omega ) = \frac{{f\left( {mg} \right) - k_{i} (\omega ) \cdot (i_{1} (\omega ) - i_{0} (\omega ))}}{{h_{1} - h_{0} }}.$$
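The two relations above translate directly into a small helper for processing the measured data. The sketch below implements Eqs. (32) and (33) as printed; the numerical values in the example call are invented for illustration and are not the measured values reported later.

```python
# Sketch of the stiffness evaluation in Eqs. (32)-(33): the equivalent gravity
# load f(mg), the control current i0 measured at the magnetic centre and the
# control current i1 measured at an offset position h1 give the current and
# displacement stiffness at one excitation frequency.
def measured_stiffness(f_mg, i0, i1, h1, h0=0.0):
    k_i = f_mg / i0                                # Eq. (32), N/A
    k_h = (f_mg - k_i * (i1 - i0)) / (h1 - h0)     # Eq. (33), N/m
    return k_i, k_h

# Purely illustrative numbers: a ~3 kg rotor (f ~ 29.4 N), i0 = 0.20 A at the
# centre and i1 = 0.215 A at h1 = -0.02 mm.
k_i, k_h = measured_stiffness(f_mg=29.4, i0=0.20, i1=0.215, h1=-0.02e-3)
print(f"k_i = {k_i:.1f} N/A,  k_h = {k_h:.3e} N/m")
```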
The proposed method measures the dynamic current stiffness and the dynamic displacement stiffness by adjusting the reference value of the displacement sensor experimentally. The process includes two stages.
The first stage is to determine the magnetic center.
The rotor is suspended stably perpendicular to the direction of gravity at 0 Hz.
The suspended position of rotor is adjusted until the measured current values in the two channels, z+ and z− channel, are the same. Thus, the position of the magnetic center is determined.
The second stage is to obtain the dynamic current stiffness and the dynamic displacement stiffness.
Adjust the z channel along the direction of gravity and make the rotor suspend at a specific revolving speed. A sinusoidal signal is injected into the bearing currents to excite the closed-loop system.
The corresponding control current \(i_{0} (\omega )\) of the magnetic bearing rotor at the magnetic center position is recorded.
The rotor is adjusted to a position h 1 near the magnetic center, and the corresponding control current is recorded as \(i_{1} (\omega )\).
The current stiffness \(k_{i} (\omega )\) and the displacement stiffness \(k_{h} (\omega )\) at a specific revolving speed can be obtained by Eqs. ( 32) and ( 33).
The course of experiment can be represented as a flow diagram shown in Figure 8.
Flow diagram of dynamic stiffness measurement experiment
5 Measurement and Experiment
In order to verify the feasibility of the experimental method proposed in this paper, the dynamic stiffness test of the AMB is carried out on an MSMP platform with a rated speed of 500 Hz, shown in Figure 9. The measured AMB prototype is presented in Figure 10.
Measurement test system
Prototyped AMB
5.1 Dynamic Current Stiffness Measurement and Results
According to the dynamic stiffness measurement method mentioned in the last chapter, the experiment is carried out in the MSMP, and the corresponding course is as follows.
The z-axis of the MSMP is kept parallel to the floor, and the magnetic center is determined according to the method described in the last section.
To measure the dynamic current stiffness, the z-axis of the MSMP is oriented perpendicular to the floor, so the resultant load on the AMB is the gravity of the rotor.
A sinusoidal signal is injected into the bearing currents to excite the closed-loop system. The exciting signal, bearing currents, and rotor displacement are measured simultaneously, and the open-loop frequency response is obtained. The corresponding dynamic current stiffness at each specific frequency can then be obtained through force analysis of the MSMP system and Eq. (32).
The analytical dynamic current stiffness obtained when the air-gap flux variation is considered is compared with the result obtained without considering it in Figure 11, shown as the curves 'Analytical Method' (without the air-gap flux variation) and 'Improved Analytical Method' (with the air-gap flux variation). Compared with the previous analytical method, the dynamic current stiffness obtained from the improved model decreases by 1.8%. The measured dynamic current stiffness at the corresponding frequencies is shown as the scatter plot in Figure 11. Following the traditional static measurement method, the static current stiffness at zero frequency is 160 N/A; as the frequency increases, the dynamic stiffness measurement data show the dynamic current stiffness gradually decreasing to 127.8 N/A, so the deviation of the dynamic current stiffness from the static value grows from 0% to 20.1%. For the measured dynamic current stiffness, a least-squares regression with a cubic polynomial was used to fit the stiffness-frequency curve. The fitted curve of the measured dynamic current stiffness versus frequency, compared with the analytical results, is presented as the curve 'Measured' in Figure 11. The maximum error between the measured values and the analytical values considering the air-gap flux variation is 1.3%. The experimental results obtained with the proposed measurement method agree better with the analytical model that considers the air-gap flux variation.
Measured dynamic current stiffness at different frequency compared with analytical results
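The cubic least-squares fit mentioned above can be reproduced with a few lines of NumPy. In the sketch below only the two endpoint stiffness values (160 N/A at zero frequency and 127.8 N/A at the highest measured frequency) are taken from the text; the intermediate points and their frequencies are invented placeholders.

```python
import numpy as np

# Cubic least-squares fit of measured dynamic current stiffness vs. frequency.
# Only the endpoints (160 and 127.8 N/A) come from the text; the rest are
# placeholder values for illustration.
freq = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])        # Hz
k_i_meas = np.array([160.0, 152.0, 145.0, 138.0, 132.0, 127.8])  # N/A

coeffs = np.polyfit(freq, k_i_meas, deg=3)   # cubic fit
k_i_fit = np.poly1d(coeffs)

print("fit coefficients (highest power first):", coeffs)
print("fitted k_i at 250 Hz:", round(float(k_i_fit(250.0)), 1), "N/A")
```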
5.2 Dynamic Displacement Stiffness Measurement and Results
To measure the dynamic displacement stiffness, the rotor is held at a certain non-zero position and the control current is recorded at different frequencies, from which the displacement stiffness at the corresponding frequency can be determined. In this experiment, the corresponding course is:
The rotor offset from the magnetic center is adjusted to − 0.02 mm.
The dynamic displacement stiffness can be obtained from the recorded control current values at different frequencies according to Eq. (33).
The analytical dynamic displacement stiffness obtained when the air-gap flux variation is considered is compared with the result obtained without considering it in Figure 12, shown as the curves 'Analytical Method' (without the air-gap flux variation) and 'Improved Analytical Method' (with the air-gap flux variation). Compared with the previous analytical method, the dynamic displacement stiffness obtained from the improved model decreases by 1.67%. The measured dynamic displacement stiffness at the corresponding frequencies is shown as the scatter plot in Figure 12. According to the static measurement method, the static displacement stiffness at zero frequency is 1.2 × 10^5 N/m; using the dynamic stiffness measurement method, the dynamic displacement stiffness gradually decreases to 0.93 × 10^5 N/m as the frequency increases, so the deviation of the dynamic displacement stiffness from the static value grows from 0% to 22.5%. For the measured dynamic displacement stiffness, a least-squares regression with a cubic polynomial was used to fit the stiffness-frequency curve. The fitted curve of the measured dynamic displacement stiffness versus frequency, compared with the analytical results, is presented as the curve 'Measured' in Figure 12. The experimental results obtained with the proposed measurement method are consistent with the analytical model, with a maximum error of 2.1%.
It can be seen that there is a certain error between the theoretical dynamic stiffness and the experimental results. The source of the error mainly includes the following aspects:
Dynamic stiffness testing is performed in the MSMP, and there is inevitably interference from the motor. During the measurement, the electromagnetic field generated by the motor is superimposed on the electromagnetic field of the magnetic bearing. As the revolving speed increases, the field generated by the motor changes due to the eddy-current effect. This means that the interference of the motor with the magnetic bearing is very complicated, and it is difficult to eliminate this interference analytically.
In the theoretical analysis, some idealized assumptions lead to the error, such as assuming that the material is linear, ignoring magnetic flux leakage, magnetic saturation, and hysteresis.
In the dynamic stiffness measurement method, the electromagnetic force of the magnetic bearing is a frequency-dependent dynamic force, but gravity is used as its approximate equivalent, which causes the measured force to be smaller than the actual dynamic force. As a result, the measured dynamic current stiffness at high frequency is smaller than the analytical results.
6 Conclusions
This paper first builds up the dynamic stiffness model of the axial magnetic bearing including the eddy-current effect and the variation of flux density across the air gap, and then adopts a dynamic stiffness measurement method for the AMB in the MSMP system to verify the theoretical results. The dynamic stiffness obtained from the improved model is clearly closer to the experimental fitting curve than the previous analytical result that does not consider the air-gap flux variation. Compared with the static stiffness obtained from the traditional static measurement method, the deviations of the dynamic current stiffness and displacement stiffness obtained from the dynamic stiffness measurement method grow from 0% to 20.1% and from 0% to 22.5%, respectively, as the frequency increases. The validity of the measurement method has been verified by the experiment performed on the prototyped MSMP. Comparing the experimental values to the theoretical values considering the air-gap flux variation, the maximum error of the current stiffness is 1.3% and the maximum error of the displacement stiffness is 2.1%.
The research in this paper advances the study of the dynamic properties of the AMB and provides an evaluation standard for the dynamic performance improvement and structure optimization of the AMB.
The authors declare no competing financial interests.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
[1] Z Y Huang, B C Han, Y Le. Modeling method of the modal analysis for turbomolecular pump rotor blades. Vacuum, 2017, 144: 145-151.
[2] K Mao, G Liu. An improved braking control method for the magnetically levitated TMP with a fast transient response. Vacuum, 2018, 148: 312-318.
[3] Y S Sun. Molecular pump patent preliminary analysis and summary of several key technologies. Modern Manufacturing Technology & Equipment, 2017. (in Chinese)
[4] Z Z Su, D Wang, J Q Chen, et al. Improving operational performance of magnetically suspended flywheel with PM-biased magnetic bearings using adaptive resonant controller and nonlinear compensation method. IEEE Transactions on Magnetics, 2016, 52(7): 1-4.
[5] B Han, Q Xu, Q Yuan. Multiobjective optimization of a combined radial-axial magnetic bearing for magnetically suspended compressor. IEEE Transactions on Industrial Electronics, 2016, 63(4): 2284-2293.
[6] J T Ju, X B Liu, Z G Xu, et al. Comparison of three-pole AMB and HMB for magnetic suspended molecular pump. Modern Physics Letters B, 2018, 32(5): 18400742.
[7] G Liu, K Mao. Investigation of the rotor temperature of a turbo-molecular pump with different motor drive methods. Vacuum, 2017, 146: 252-258.
[8] B C Han, Z Y Huang, Y Le. Design aspects of a large scale turbomolecular pump with active magnetic bearings. Vacuum, 2017, 142: 96-105.
[9] Z Y Huang, B C Han, Y Le. Multidisciplinary design strategies for turbomolecular pumps with ultrahigh vacuum performance. IEEE Transactions on Industrial Electronics, 2019: 1-1.
[10] Z Y Yin, Y W Cai, W J Wang, et al. Analysis and experiment of eddy current loss in radial magnetic bearings with solid rotor. Matec Web of Conferences, 2018, 198(6): 04002.
[11] Y Le, J J Sun, B C Han. Modeling and design of 3-DOF magnetic bearing for high-speed motor including eddy-current effects and leakage effects. IEEE Transactions on Industrial Electronics, 2016, 63(6): 3656-3665.
[12] J J Sun, H Zhou, Z Y Ju. Dynamic stiffness analysis and measurement of radial active magnetic bearing in magnetically suspended molecular pump. Scientific Reports, 2020, 10(1).
[13] L Kucera, M Ahrens. A model for axial magnetic bearings including eddy currents. The 3rd Intl. Symposium on Magnetic Suspension Technology, 1996.
[14] L Zhu, C R Knospe, E H Maslen. Analytic model for a nonlaminated cylindrical magnetic actuator including eddy currents. IEEE Transactions on Magnetics, 2005, 41(4): 1248-1258.
[15] L Zhu, C R Knospe. Modeling of nonlaminated electromagnetic suspension systems. IEEE/ASME Transactions on Mechatronics, 2010, 15(1): 59-69.
[16] R Seifert, K Röbenack, W Hofmann. Rational approximation of the analytical model of nonlaminated cylindrical magnetic actuators for flux estimation and control. IEEE Transactions on Magnetics, 2019, 55(12): 1-16.
[17] Y Sun, Y S Ho, L Yu. Dynamic stiffnesses of active magnetic thrust bearing including eddy-current effects. IEEE Transactions on Magnetics, 2009, 45(1): 139-149.
[18] X F Hu, G Liu, J J Sun, et al. Analysis on eddy current loss for axial magnetic bearings. Bearing, 2013(03): 22-27.
[19] Y Le, K Wang. Design and optimization method of magnetic bearing for high-speed motor considering eddy current effects. IEEE/ASME Transactions on Mechatronics, 2016, 21(4): 2061-2072.
[20] Z W Whitlow, R L Fittro, C R Knospe. Dynamic performance of segmented active magnetic thrust bearings. IEEE Transactions on Magnetics, 2016, 52(11): 1-11.
[21] C H H M Custers, J W Jansen, M C Van Beurden, et al. 3D harmonic modeling of eddy currents in segmented conducting structures. Compel, 2019, 38(1): 2-23.
[22] L Zhou, L Li. Modeling and identification of a solid-core active magnetic bearing including eddy currents. IEEE/ASME Transactions on Mechatronics, 2016, 21(6): 2784-2792.
[23] J J Sun, C E Wang, Y Le. Research on a novel high stiffness axial passive magnetic bearing for DGMSCMG. Journal of Magnetism and Magnetic Materials, 2016, 412: 147-155.
[24] J Sun, D Chen, Y Ren. Stiffness measurement method of repulsive passive magnetic bearing in SGMSCMG. IEEE Transactions on Instrumentation and Measurement, 2013, 62(11): 2960-2965.
[25] J J Sun, G C Bai, L Yang. Stiffness measurement of permanent magnet biased radial magnetic bearing in MSFW. Journal of Dynamic Systems, Measurement, and Control, 2015, 137(9): 94505.
[26] J J Sun, G C Bai, L J Li. Stiffness measurement of radial hybrid magnetic bearing in MSFW. Transactions of the Institute of Measurement and Control, 2015, 37(8): 991-998.
[27] R Z Zhu, W Xu, C Y Ye, et al. Novel heteropolar radial hybrid magnetic bearing with low rotor core loss. IEEE Transactions on Magnetics, 2017, (99): 1.
[28] Y Le, J C Fang, B C Han, et al. Dynamic circuit model of a radial magnetic bearing with permanent magnet bias and laminated cores. International Journal of Applied Electromagnetics and Mechanics, 2014, 46(1): 43-60.
[29] J J Sun, H Zhou, X Ma, et al. Study on PID tuning strategy based on dynamic stiffness for radial active magnetic bearing. ISA Transactions, 2018.
2.2: Functions and Function Notation
[ "article:topic", "independent variable", "dependent variable", "horizontal line test", "one-to-one function", "domain", "function", "range", "vertical line test", "input", "output", "relation", "license:ccby", "showtoc:no", "transcluded:yes", "authorname:openstaxjabramson", "source[1]-math-15053" ]
https://math.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fmath.libretexts.org%2FCourses%2FMission_College%2FMat_1_College_Algebra_(Carr)%2F02%253A_Functions%2F2.02%253A_Functions_and_Function_Notation
Determining Whether a Relation Represents a Function
Using Function Notation
Representing Functions Using Tables
Finding Input and Output Values of a Function
Evaluation of Functions in Algebraic Forms
Evaluating Functions Expressed in Formulas
Evaluating a Function Given in Tabular Form
Finding Function Values from a Graph
Determining Whether a Function is One-to-One
Using the Vertical Line Test
Using the Horizontal Line Test
Identifying Basic Toolkit Functions
Key Equations
Determine whether a relation represents a function.
Find the value of a function.
Determine whether a function is one-to-one.
Use the vertical line test to identify functions.
Graph the functions listed in the library of functions.
A jetliner changes altitude as its distance from the starting point of a flight increases. The weight of a growing child increases with time. In each case, one quantity depends on another. There is a relationship between the two quantities that we can describe, analyze, and use to make predictions. In this section, we will analyze such relationships.
A relation is a set of ordered pairs. The set of the first components of each ordered pair is called the domain and the set of the second components of each ordered pair is called the range. Consider the following set of ordered pairs. The first numbers in each pair are the first five natural numbers. The second number in each pair is twice that of the first.
\[\{(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)\}\tag{1.1.1}\]
The domain is \(\{1, 2, 3, 4, 5\}\). The range is \(\{2, 4, 6, 8, 10\}\).
Note that each value in the domain is also known as an input value, or independent variable, and is often labeled with the lowercase letter \(x\). Each value in the range is also known as an output value, or dependent variable, and is often labeled with the lowercase letter \(y\).
A function \(f\) is a relation that assigns a single value in the range to each value in the domain. In other words, no \(x\)-values are repeated. For our example that relates the first five natural numbers to numbers double their values, this relation is a function because each element in the domain, {1, 2, 3, 4, 5}, is paired with exactly one element in the range, \(\{2, 4, 6, 8, 10\}\).
Now let's consider the set of ordered pairs that relates the terms "even" and "odd" to the first five natural numbers. It would appear as
\[\mathrm{\{(odd, 1), (even, 2), (odd, 3), (even, 4), (odd, 5)\}} \tag{1.1.2}\]
Notice that each element in the domain, {even, odd}, is not paired with exactly one element in the range, \(\{1, 2, 3, 4, 5\}\). For example, the term "odd" corresponds to three values from the range, \(\{1, 3, 5\}\), and the term "even" corresponds to two values from the range, \(\{2, 4\}\). This violates the definition of a function, so this relation is not a function.
Figure \(\PageIndex{1}\) compares relations that are functions and not functions.
Figure \(\PageIndex{1}\): (a) This relationship is a function because each input is associated with a single output. Note that input \(q\) and \(r\) both give output \(n\). (b) This relationship is also a function. In this case, each input is associated with a single output. (c) This relationship is not a function because input \(q\) is associated with two different outputs.
A function is a relation in which each possible input value leads to exactly one output value. We say "the output is a function of the input."
The input values make up the domain, and the output values make up the range.
How To: Given a relationship between two quantities, determine whether the relationship is a function
Identify the input values.
Identify the output values.
If each input value leads to only one output value, classify the relationship as a function. If any input value leads to two or more outputs, do not classify the relationship as a function.
Example \(\PageIndex{1}\): Determining If Menu Price Lists Are Functions
The coffee shop menu, shown in Figure \(\PageIndex{2}\) consists of items and their prices.
Is price a function of the item?
Is the item a function of the price?
Figure \(\PageIndex{2}\): A menu of donut prices from a coffee shop where a plain donut is $1.49 and a jelly donut and chocolate donut are $1.99.
Let's begin by considering the input as the items on the menu. The output values are then the prices. See Figure \(\PageIndex{3}\).
Each item on the menu has only one price, so the price is a function of the item.
Two items on the menu have the same price. If we consider the prices to be the input values and the items to be the output, then the same input value could have more than one output associated with it. See Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\): Association of the prices to the donuts.
Therefore, the item is not a function of price.
Example \(\PageIndex{2}\): Determining If Class Grade Rules Are Functions
In a particular math class, the overall percent grade corresponds to a grade point average. Is grade point average a function of the percent grade? Is the percent grade a function of the grade point average? Table \(\PageIndex{1}\) shows a possible rule for assigning grade points.
Table \(\PageIndex{1}\): Class grade points.
Percent grade 0–56 57–61 62–66 67–71 72–77 78–86 87–91 92–100
Grade point average 0.0 1.0 1.5 2.0 2.5 3.0 3.5 4.0
For any percent grade earned, there is an associated grade point average, so the grade point average is a function of the percent grade. In other words, if we input the percent grade, the output is a specific grade point average.
In the grading system given, there is a range of percent grades that correspond to the same grade point average. For example, students who receive a grade point average of 3.0 could have a variety of percent grades ranging from 78 all the way to 86. Thus, percent grade is not a function of grade point average.
Table \(\PageIndex{2}\) lists the five greatest baseball players of all time in order of rank.
Table \(\PageIndex{2}\): Five greatest baseball players.
Babe Ruth 1
Willie Mays 2
Ty Cobb 3
Walter Johnson 4
Hank Aaron 5
Is the rank a function of the player name?
Is the player name a function of the rank?
yes. (Note: If two players had been tied for, say, 4th place, then the name would not have been a function of rank.)
Once we determine that a relationship is a function, we need to display and define the functional relationships so that we can understand and use them, and sometimes also so that we can program them into computers. There are various ways of representing functions. A standard function notation is one representation that facilitates working with functions.
To represent "height is a function of age," we start by identifying the descriptive variables \(h\) for height and \(a\) for age. The letters \(f\), \(g\),and \(h\) are often used to represent functions just as we use \(x\), \(y\),and \(z\) to represent numbers and \(A\), \(B\), and \(C\) to represent sets.
\[\begin{array}{ll} h \text{ is } f \text{ of }a \;\;\;\;\;\; & \text{We name the function }f \text{; height is a function of age.} \\ h=f(a) & \text{We use parentheses to indicate the function input.} \\ f(a) & \text{We name the function }f \text{ ; the expression is read as " }f \text{ of }a \text{."}\end{array}\]
Remember, we can use any letter to name the function; the notation \(h(a)\) shows us that \(h\) depends on \(a\). The value \(a\) must be put into the function \(h\) to get a result. The parentheses indicate that age is input into the function; they do not indicate multiplication.
We can also give an algebraic expression as the input to a function. For example \(f(a+b)\) means "first add \(a\) and \(b\), and the result is the input for the function \(f\)." The operations must be performed in this order to obtain the correct result.
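For instance, with the illustrative rule \(f(x)=x^2\) (chosen here only to make the point; it is not tied to any particular example in this section),

\[f(a+b)=(a+b)^2=a^2+2ab+b^2,\]

which is generally different from \(f(a)+f(b)=a^2+b^2\).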
The notation \(y=f(x)\) defines a function named \(f\). This is read as "\(y\) is a function of \(x\)." The letter \(x\) represents the input value, or independent variable. The letter \(y\), or \(f(x)\), represents the output value, or dependent variable.
Example \(\PageIndex{3}\): Using Function Notation for Days in a Month
Use function notation to represent a function whose input is the name of a month and output is the number of days in that month.
The number of days in a month is a function of the name of the month, so if we name the function \(f\), we write \(\text{days}=f(\text{month})\) or \(d=f(m)\). The name of the month is the input to a "rule" that associates a specific number (the output) with each input.
Figure \(\PageIndex{5}\): The function \(31 = f(January)\) where 31 is the output, f is the rule, and January is the input.
For example, \(f(\text{March})=31\), because March has 31 days. The notation \(d=f(m)\) reminds us that the number of days, \(d\) (the output), is dependent on the name of the month, \(m\) (the input).
Note that the inputs to a function do not have to be numbers; function inputs can be names of people, labels of geometric objects, or any other element that determines some kind of output. However, most of the functions we will work with in this book will have numbers as inputs and outputs.
Example \(\PageIndex{3B}\): Interpreting Function Notation
A function \(N=f(y)\) gives the number of police officers, \(N\), in a town in year \(y\). What does \(f(2005)=300\) represent?
When we read \(f(2005)=300\), we see that the input year is 2005. The value for the output, the number of police officers \((N)\), is 300. Remember, \(N=f(y)\). The statement \(f(2005)=300\) tells us that in the year 2005 there were 300 police officers in the town.
Use function notation to express the weight of a pig in pounds as a function of its age in days \(d\).
\(w=f(d)\)
Instead of a notation such as \(y=f(x)\), could we use the same symbol for the output as for the function, such as \(y=y(x)\), meaning "\(y\) is a function of \(x\)?"
Yes, this is often done, especially in applied subjects that use higher math, such as physics and engineering. However, in exploring math itself we like to maintain a distinction between a function such as \(f\), which is a rule or procedure, and the output y we get by applying \(f\) to a particular input \(x\). This is why we usually use notation such as \(y=f(x),P=W(d)\), and so on.
A common method of representing functions is in the form of a table. The table rows or columns display the corresponding input and output values. In some cases, these values represent all we know about the relationship; other times, the table provides a few select examples from a more complete relationship.
Table \(\PageIndex{3}\) lists the input number of each month (\(\text{January}=1\), \(\text{February}=2\), and so on) and the output value of the number of days in that month. This information represents all we know about the months and days for a given year (that is not a leap year). Note that, in this table, we define a days-in-a-month function \(f\) where \(D=f(m)\) identifies months by an integer rather than by name.
Table \(\PageIndex{3}\): Months and number of days per month.
Month number, \(m\) (input) 1 2 3 4 5 6 7 8 9 10 11 12
Days in month, \(D\) (output) 31 28 31 30 31 30 31 31 30 31 30 31
Table \(\PageIndex{4}\) defines a function \(Q=g(n)\). Remember, this notation tells us that \(g\) is the name of the function that takes the input \(n\) and gives the output \(Q\).
Table \(\PageIndex{4}\): Function \(Q=g(n)\)
\(n\) 1 2 3 4 5
\(Q\) 8 6 7 6 8
Table \(\PageIndex{5}\) displays the age of children in years and their corresponding heights. This table displays just some of the data available for the heights and ages of children. We can see right away that this table does not represent a function because the same input value, 5 years, has two different output values, 40 in. and 42 in.
Table \(\PageIndex{5}\): Age of children and their corresponding heights.
Age in years, \(a\) (input)
Height in inches, \(h\) (output) 40 42 44 47 50 52 54
How To: Given a table of input and output values, determine whether the table represents a function
Identify the input and output values.
Check to see if each input value is paired with only one output value. If so, the table represents a function.
Example \(\PageIndex{5}\): Identifying Tables that Represent Functions
Which table, Table \(\PageIndex{6}\), Table \(\PageIndex{7}\), or Table \(\PageIndex{8}\), represents a function (if any)?
Table \(\PageIndex{6}\) and Table \(\PageIndex{7}\) define functions. In both, each input value corresponds to exactly one output value. Table \(\PageIndex{8}\) does not define a function because the input value of 5 corresponds to two different output values.
When a table represents a function, corresponding input and output values can also be specified using function notation.
The function represented by Table \(\PageIndex{6}\) can be represented by writing
\[f(2)=1\text{, }f(5)=3\text{, and }f(8)=6 \nonumber\]
Similarly, the statements
\[g(−3)=5\text{, }g(0)=1\text{, and }g(4)=5 \nonumber\]
represent the function in Table \(\PageIndex{7}\).
Table \(\PageIndex{8}\) cannot be expressed in a similar way because it does not represent a function.
Does Table \(\PageIndex{9}\) represent a function?
When we know an input value and want to determine the corresponding output value for a function, we evaluate the function. Evaluating will always produce one result because each input value of a function corresponds to exactly one output value.
When we know an output value and want to determine the input values that would produce that output value, we set the output equal to the function's formula and solve for the input. Solving can produce more than one solution because different input values can produce the same output value.
When we have a function in formula form, it is usually a simple matter to evaluate the function. For example, the function \(f(x)=5−3x^2\) can be evaluated by squaring the input value, multiplying by 3, and then subtracting the product from 5.
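For example, evaluating at the arbitrarily chosen input \(x=2\) follows exactly those steps:

\[f(2)=5−3(2)^2=5−3\cdot 4=5−12=−7.\]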
How To: Given the formula for a function, evaluate.
Replace the input variable in the formula with the value provided.
Calculate the result.
Example \(\PageIndex{6A}\): Evaluating Functions at Specific Values
1. Evaluate \(f(x)=x^2+3x−4\) at
\(2\)
\(a\)
\(a+h\)
Evaluate \(\frac{f(a+h)−f(a)}{h}\)
Replace the x in the function with each specified value.
a. Because the input value is a number, 2, we can use simple algebra to simplify.
\[\begin{align*}f(2)&=2^2+3(2)−4\\&=4+6−4\\ &=6\end{align*}\]
b. In this case, the input value is a letter so we cannot simplify the answer any further.
\[f(a)=a^2+3a−4\nonumber\]
c. With an input value of \(a+h\), we must use the distributive property.
\[\begin{align*}f(a+h)&=(a+h)^2+3(a+h)−4\\&=a^2+2ah+h^2+3a+3h−4 \end{align*}\]
d. In this case, we apply the input values to the function more than once, and then perform algebraic operations on the result. We already found that
\[f(a+h)=a^2+2ah+h^2+3a+3h−4\nonumber\]
and we know that
\[f(a)=a^2+3a−4 \nonumber\]
Now we combine the results and simplify.
\[\begin{align*}\dfrac{f(a+h)−f(a)}{h}&=\dfrac{(a^2+2ah+h^2+3a+3h−4)−(a^2+3a−4)}{h}\\ &=\dfrac{(2ah+h^2+3h)}{h} \\ &=\dfrac{h(2a+h+3)}{h} & &\text{Factor out h.}\\ &=2a+h+3 & & \text{Simplify.}\end{align*}\]
Example \(\PageIndex{6B}\): Evaluating Functions
Given the function \(h(p)=p^2+2p\), evaluate \(h(4)\).
To evaluate \(h(4)\), we substitute the value 4 for the input variable p in the given function.
\[\begin{align*}h(p)&=p^2+2p\\h(4)&=(4)^2+2(4)\\ &=16+8\\&=24\end{align*}\]
Therefore, for an input of 4, we have an output of 24.
Given the function \(g(m)=\sqrt{m−4}\), evaluate \(g(5)\).
\(g(5)=1\)
Example \(\PageIndex{7}\): Solving Functions
Given the function \(h(p)=p^2+2p\), solve for \(h(p)=3\).
\[\begin{array}{rl} h(p)=3\\p^2+2p=3 & \text{Substitute the original function}\\ p^2+2p−3=0 & \text{Subtract 3 from each side.}\\(p+3)(p−1)=0&\text{Factor.}\end{array} \nonumber \]
If \((p+3)(p−1)=0\), either \((p+3)=0\) or \((p−1)=0\) (or both of them equal \(0\)). We will set each factor equal to \(0\) and solve for \(p\) in each case.
\[(p+3)=0,\; p=−3 \nonumber \]
\[(p−1)=0,\, p=1 \nonumber\]
This gives us two solutions. The output \(h(p)=3\) when the input is either \(p=1\) or \(p=−3\). We can also verify by graphing as in Figure \(\PageIndex{6}\). The graph verifies that \(h(1)=h(−3)=3\) and \(h(4)=24\).
Figure \(\PageIndex{6}\): Graph of \(h(p)=p^2+2p\)
Given the function \(g(m)=\sqrt{m−4}\), solve \(g(m)=2\).
\(m=8\)
Some functions are defined by mathematical rules or procedures expressed in equation form. If it is possible to express the function output with a formula involving the input quantity, then we can define a function in algebraic form. For example, the equation \(2n+6p=12\) expresses a functional relationship between \(n\) and \(p\). We can rewrite it to decide if \(p\) is a function of \(n\).
How to: Given a function in equation form, write its algebraic formula.
Solve the equation to isolate the output variable on one side of the equal sign, with the other side as an expression that involves only the input variable.
Use all the usual algebraic methods for solving equations, such as adding or subtracting the same quantity to or from both sides, or multiplying or dividing both sides of the equation by the same quantity.
Example \(\PageIndex{8A}\): Finding an Equation of a Function
Express the relationship \(2n+6p=12\) as a function \(p=f(n)\), if possible.
To express the relationship in this form, we need to be able to write the relationship where \(p\) is a function of \(n\), which means writing it as \(p=[\text{expression involving }n]\).
\[\begin{align*}2n+6p&=12 \\ 6p&=12−2n && \text{Subtract 2n from both sides.} \\ p&=\dfrac{12−2n}{6} & &\text{Divide both sides by 6 and simplify.} \\ p&=\frac{12}{6}−\frac{2n}{6} \\ p&=2−\frac{1}{3}n\end{align*}\]
Therefore, \(p\) as a function of \(n\) is written as
\[p=f(n)=2−\frac{1}{3}n \nonumber\]
It is important to note that not every relationship expressed by an equation can also be expressed as a function with a formula.
Example \(\PageIndex{8B}\): Expressing the Equation of a Circle as a Function
Does the equation \(x^2+y^2=1\) represent a function with \(x\) as input and \(y\) as output? If so, express the relationship as a function \(y=f(x)\).
First we subtract \(x^2\) from both sides.
\[y^2=1−x^2 \nonumber\]
We now try to solve for \(y\) in this equation.
\[y=\pm\sqrt{1−x^2} \nonumber\]
\[\text{so, }y=\sqrt{1−x^2}\;\text{and}\;y = −\sqrt{1−x^2} \nonumber\]
We get two outputs corresponding to the same input, so this relationship cannot be represented as a single function \(y=f(x)\).
If \(x−8y^3=0\), express \(y\) as a function of \(x\).
\(y=f(x)=\dfrac{\sqrt[3]{x}}{2}\)
Are there relationships expressed by an equation that do represent a function but which still cannot be represented by an algebraic formula?
Yes, this can happen. For example, given the equation \(x=y+2^y\), if we want to express y as a function of x, there is no simple algebraic formula involving only \(x\) that equals \(y\). However, each \(x\) does determine a unique value for \(y\), and there are mathematical procedures by which \(y\) can be found to any desired accuracy. In this case, we say that the equation gives an implicit (implied) rule for \(y\) as a function of \(x\), even though the formula cannot be written explicitly.
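As a concrete illustration of such a numerical procedure, here is a minimal bisection sketch in Python (the function name, the bracketing interval, and the tolerance are all assumptions made for this illustration, not part of the original discussion):

def y_of_x(x, lo=-50.0, hi=50.0, tol=1e-12):
    # g(y) = y + 2**y is strictly increasing, so bisection on [lo, hi]
    # closes in on the unique y with g(y) = x, provided x lies in that range.
    g = lambda y: y + 2 ** y
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < x:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

print(y_of_x(3))  # approximately 1.0, since 1 + 2**1 = 3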
As we saw above, we can represent functions in tables. Conversely, we can use information in tables to write functions, and we can evaluate functions using the tables. For example, how well do our pets recall the fond memories we share with them? There is an urban legend that a goldfish has a memory of 3 seconds, but this is just a myth. Goldfish can remember up to 3 months, while the beta fish has a memory of up to 5 months. And while a puppy's memory span is no longer than 30 seconds, the adult dog can remember for 5 minutes. This is meager compared to a cat, whose memory span lasts for 16 hours.
The function that relates the type of pet to the duration of its memory span is more easily visualized with the use of a table (Table \(\PageIndex{10}\)).
Pet Memory span in hours
Puppy 0.008
Adult Dog 0.083
Cat 16
Goldfish 2160
Beta Fish 3600
At times, evaluating a function in table form may be more useful than using equations. Here let us call the function \(P\). The domain of the function is the type of pet and the range is a real number representing the number of hours the pet's memory span lasts. We can evaluate the function \(P\) at the input value of "goldfish." We would write \(P(goldfish)=2160\). Notice that, to evaluate the function in table form, we identify the input value and the corresponding output value from the pertinent row of the table. The tabular form for function P seems ideally suited to this function, more so than writing it in paragraph or function form.
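If one wanted to mirror this tabular function on a computer, a lookup table is the natural structure. A minimal sketch (the dictionary simply transcribes the table above; the variable names are invented for this illustration):

memory_span_hours = {
    "puppy": 0.008,
    "adult dog": 0.083,
    "cat": 16,
    "goldfish": 2160,
    "beta fish": 3600,
}

# Evaluating P(goldfish) amounts to a single lookup.
print(memory_span_hours["goldfish"])  # 2160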
How To: Given a function represented by a table, identify specific output and input values
1. Find the given input in the row (or column) of input values.
2. Identify the corresponding output value paired with that input value.
3. Find the given output values in the row (or column) of output values, noting every time that output value appears.
4. Identify the input value(s) corresponding to the given output value.
Example \(\PageIndex{9}\): Evaluating and Solving a Tabular Function
Using Table \(\PageIndex{11}\),
a. Evaluate \(g(3)\).
b. Solve \(g(n)=6\).
\(n\) 1 2 3 4 5
\(g(n)\) 8 6 7 6 8
a. Evaluating \(g(3)\) means determining the output value of the function \(g\) for the input value of \(n=3\). The table output value corresponding to \(n=3\) is 7, so \(g(3)=7\).
b. Solving \(g(n)=6\) means identifying the input values, \(n\), that produce an output value of 6. Table \(\PageIndex{12}\) shows two solutions: 2 and 4.
When we input 2 into the function \(g\), our output is 6. When we input 4 into the function \(g\), our output is also 6.
Using Table \(\PageIndex{12}\), evaluate \(g(1)\).
Evaluating a function using a graph also requires finding the corresponding output value for a given input value, only in this case, we find the output value by looking at the graph. Solving a function equation using a graph requires finding all instances of the given output value on the graph and observing the corresponding input value(s).
Example \(\PageIndex{10}\): Reading Function Values from a Graph
Given the graph in Figure \(\PageIndex{7}\),
Evaluate \(f(2)\).
Solve \(f(x)=4\).
Figure \(\PageIndex{7}\): Graph of a positive parabola centered at \((1, 0)\).
To evaluate \(f(2)\), locate the point on the curve where \(x=2\), then read the y-coordinate of that point. The point has coordinates \((2,1)\), so \(f(2)=1\). See Figure \(\PageIndex{8}\).
Figure \(\PageIndex{8}\): Graph of a positive parabola centered at \((1, 0)\) with the labeled point \((2, 1)\) where \(f(2) =1\).
To solve \(f(x)=4\), we find the output value 4 on the vertical axis. Moving horizontally along the line \(y=4\), we locate two points of the curve with output value 4: \((−1,4)\) and \((3,4)\). These points represent the two solutions to \(f(x)=4\): −1 or 3. This means \(f(−1)=4\) and \(f(3)=4\), or when the input is −1 or 3, the output is 4. See Figure \(\PageIndex{9}\).
Figure \(\PageIndex{9}\): Graph of an upward-facing parabola with a vertex at \((1,0)\) and labeled points at \((-1, 4)\) and \((3,4)\). A line at \(y = 4\) intersects the parabola at the labeled points.
Given the graph in Figure \(\PageIndex{7}\), solve \(f(x)=1\).
\(x=0\) or \(x=2\)
Some functions have a given output value that corresponds to two or more input values. For example, in the stock chart shown in the Figure at the beginning of this chapter, the stock price was $1000 on five different dates, meaning that there were five different input values that all resulted in the same output value of $1000.
However, some functions have only one input value for each output value, as well as having only one output for each input. We call these functions one-to-one functions. As an example, consider a school that uses only letter grades and decimal equivalents, as listed in Table \(\PageIndex{13}\).
Table \(\PageIndex{13}\): Letter grades and decimal equivalents.
A 4.0
B 3.0
C 2.0
D 1.0
This grading system represents a one-to-one function, because each letter input yields one particular grade point average output and each grade point average corresponds to one input letter.
To visualize this concept, let's look again at the two simple functions sketched in Figures \(\PageIndex{1a}\) and \(\PageIndex{1b}\). The function in part (a) shows a relationship that is not a one-to-one function because inputs \(q\) and \(r\) both give output \(n\). The function in part (b) shows a relationship that is a one-to-one function because each input is associated with a single output.
One-to-One Functions
A one-to-one function is a function in which each output value corresponds to exactly one input value.
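For a function given as a finite table of input-output pairs, this condition can be checked mechanically: the function is one-to-one exactly when no output value repeats. A small sketch (the sample data below is illustrative only):

def is_one_to_one(table):
    # One-to-one means no two inputs share an output value.
    outputs = list(table.values())
    return len(outputs) == len(set(outputs))

print(is_one_to_one({"A": 4.0, "B": 3.0, "C": 2.0}))  # True
print(is_one_to_one({1: 2, 2: 4, 3: 4}))              # False: the output 4 repeats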
Example \(\PageIndex{11}\): Determining Whether a Relationship Is a One-to-One Function
Is the area of a circle a function of its radius? If yes, is the function one-to-one?
A circle of radius \(r\) has a unique area measure given by \(A={\pi}r^2\), so for any input, \(r\), there is only one output, \(A\). The area is a function of the radius \(r\).
If the function is one-to-one, the output value, the area, must correspond to a unique input value, the radius. Any area measure \(A\) is given by the formula \(A={\pi}r^2\). Because areas and radii are positive numbers, there is exactly one solution: \(r=\sqrt{\frac{A}{\pi}}\). So the area of a circle is a one-to-one function of the circle's radius.
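Making the algebra explicit, solving the area formula for the radius gives

\[A=\pi r^2 \quad\Rightarrow\quad r^2=\frac{A}{\pi} \quad\Rightarrow\quad r=\sqrt{\frac{A}{\pi}},\]

where only the positive root is kept because a radius is positive.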
Exercise \(\PageIndex{11A}\)
Is a balance a function of the bank account number?
Is a bank account number a function of the balance?
Is a balance a one-to-one function of the bank account number?
a. yes, because each bank account has a single balance at any given time;
b. no, because several bank account numbers may have the same balance;
c. no, because the same output may correspond to more than one input.
Exercise \(\PageIndex{11B}\)
Evaluate the following:
If each percent grade earned in a course translates to one letter grade, is the letter grade a function of the percent grade?
If so, is the function one-to-one?
a. Yes, letter grade is a function of percent grade;
b. No, it is not one-to-one. There are 100 different percent numbers we could get but only about five possible letter grades, so there cannot be only one percent number that corresponds to each letter grade.
As we have seen in some examples above, we can represent a function using a graph. Graphs display a great many input-output pairs in a small space. The visual information they provide often makes relationships easier to understand. By convention, graphs are typically constructed with the input values along the horizontal axis and the output values along the vertical axis.
The most common graphs name the input value \(x\) and the output \(y\), and we say \(y\) is a function of \(x\), or \(y=f(x)\) when the function is named \(f\). The graph of the function is the set of all points \((x,y)\) in the plane that satisfies the equation \(y=f(x)\). If the function is defined for only a few input values, then the graph of the function is only a few points, where the x-coordinate of each point is an input value and the y-coordinate of each point is the corresponding output value. For example, the black dots on the graph in Figure \(\PageIndex{10}\) tell us that \(f(0)=2\) and \(f(6)=1\). However, the set of all points \((x,y)\) satisfying \(y=f(x)\) is a curve. The curve shown includes \((0,2)\) and \((6,1)\) because the curve passes through those points.
Figure \(\PageIndex{10}\): Graph of a polynomial.
The vertical line test can be used to determine whether a graph represents a function. If we can draw any vertical line that intersects a graph more than once, then the graph does not define a function because a function has only one output value for each input value. See Figure \(\PageIndex{11}\).
Figure \(\PageIndex{11}\): Three graphs visually showing what is and is not a function.
Howto: Given a graph, use the vertical line test to determine if the graph represents a function
Inspect the graph to see if any vertical line drawn would intersect the curve more than once.
If there is any such line, determine that the graph does not represent a function.
Example \(\PageIndex{12}\): Applying the Vertical Line Test
Which of the graphs in Figure \(\PageIndex{12}\) represent(s) a function \(y=f(x)\)?
Figure \(\PageIndex{12}\): Graph of a polynomial (a), a downward-sloping line (b), and a circle (c).
If any vertical line intersects a graph more than once, the relation represented by the graph is not a function. Notice that any vertical line would pass through only one point of the two graphs shown in parts (a) and (b) of Figure \(\PageIndex{12}\). From this we can conclude that these two graphs represent functions. The third graph does not represent a function because, at most x-values, a vertical line would intersect the graph at more than one point, as shown in Figure \(\PageIndex{13}\).
Figure \(\PageIndex{13}\): Graph of a circle.
Does the graph in Figure \(\PageIndex{14}\) represent a function?
Figure \(\PageIndex{14}\): Graph of absolute value function.
Once we have determined that a graph defines a function, an easy way to determine if it is a one-to-one function is to use the horizontal line test. Draw horizontal lines through the graph. If any horizontal line intersects the graph more than once, then the graph does not represent a one-to-one function.
Howto: Given a graph of a function, use the horizontal line test to determine if the graph represents a one-to-one function
Inspect the graph to see if any horizontal line drawn would intersect the curve more than once.
If there is any such line, determine that the function is not one-to-one.
Example \(\PageIndex{13}\): Applying the Horizontal Line Test
Consider the functions shown in Figure \(\PageIndex{12a}\) and Figure \(\PageIndex{12b}\). Are either of the functions one-to-one?
The function in Figure \(\PageIndex{12a}\) is not one-to-one. The horizontal line shown in Figure \(\PageIndex{15}\) intersects the graph of the function at two points (and we can even find horizontal lines that intersect it at three points.)
Figure \(\PageIndex{15}\): Graph of a polynomial with a horizontal line crossing through 2 points
The function in Figure \(\PageIndex{12b}\) is one-to-one. Any horizontal line will intersect a diagonal line at most once.
Is the graph shown in Figure \(\PageIndex{13}\) one-to-one?
No, because it does not pass the horizontal line test.
In this text, we will be exploring functions—the shapes of their graphs, their unique characteristics, their algebraic formulas, and how to solve problems with them. When learning to read, we start with the alphabet. When learning to do arithmetic, we start with numbers. When working with functions, it is similarly helpful to have a base set of building-block elements. We call these our "toolkit functions," which form a set of basic named functions for which we know the graph, formula, and special properties. Some of these functions are programmed to individual buttons on many calculators. For these definitions we will use x as the input variable and \(y=f(x)\) as the output variable.
We will see these toolkit functions, combinations of toolkit functions, their graphs, and their transformations frequently throughout this book. It will be very helpful if we can recognize these toolkit functions and their features quickly by name, formula, graph, and basic table properties. The graphs and sample table values are included with each function shown in Table \(\PageIndex{14}\).
Table \(\PageIndex{14}\): Toolkit Functions
Constant \(f(x)=c\) where \(c\) is a constant
Identity \(f(x)=x\)
Absolute Value \(f(x)=|x|\)
Quadratic \(f(x)=x^2\)
Cubic \(f(x)=x^3\)
Reciprocal \(f(x)=\dfrac{1}{x}\)
Reciprocal squared \(f(x)=\dfrac{1}{x^2}\)
Square root \(f(x)=\sqrt{x}\)
Cube root \(f(x)=\sqrt[3]{x}\)
Key Equations
Constant function \(f(x)=c\), where \(c\) is a constant
Identity function \(f(x)=x\)
Absolute value function \(f(x)=|x|\)
Quadratic function \(f(x)=x^2\)
Cubic function \(f(x)=x^3\)
Reciprocal function \(f(x)=\dfrac{1}{x}\)
Reciprocal squared function \(f(x)=\frac{1}{x^2}\)
Square root function \(f(x)=\sqrt{x}\)
Cube root function \(f(x)=\sqrt[3]{x}\)
A relation is a set of ordered pairs. A function is a specific type of relation in which each domain value, or input, leads to exactly one range value, or output.
Function notation is a shorthand method for relating the input to the output in the form \(y=f(x)\).
In tabular form, a function can be represented by rows or columns that relate to input and output values.
To evaluate a function, we determine an output value for a corresponding input value. Algebraic forms of a function can be evaluated by replacing the input variable with a given value.
To solve for a specific function value, we determine the input values that yield the specific output value.
An algebraic form of a function can be written from an equation.
Input and output values of a function can be identified from a table.
Relating input values to output values on a graph is another way to evaluate a function.
A function is one-to-one if each output value corresponds to only one input value.
A graph represents a function if any vertical line drawn on the graph intersects the graph at no more than one point.
The graph of a one-to-one function passes the horizontal line test.
1 http://www.baseball-almanac.com/lege.../lisn100.shtml. Accessed 3/24/2014.
2 www.kgbanswers.com/how-long-i...y-span/4221590. Accessed 3/24/2014.
Glossary
dependent variable
an output variable
domain
the set of all possible input values for a relation
function
a relation in which each input value yields a unique output value
horizontal line test
a method of testing whether a function is one-to-one by determining whether any horizontal line intersects the graph more than once
independent variable
an input variable
input
each object or value in a domain that relates to another object or value by a relationship known as a function
one-to-one function
a function for which each value of the output is associated with a unique input value
output
each object or value in the range that is produced when an input value is entered into a function
range
the set of output values that result from the input values in a relation
relation
a set of ordered pairs
vertical line test
a method of testing whether a graph represents a function by determining whether a vertical line intersects the graph no more than once
Jay Abramson (Arizona State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at https://openstax.org/details/books/precalculus.
How does an operating system create entropy for random seeds?
On Linux, the files /dev/random and /dev/urandom are the blocking and non-blocking (respectively) sources of pseudo-random bytes.
They can be read as normal files:
$ hexdump /dev/random
0000000 28eb d9e7 44bb 1ac9 d06f b943 f904 8ffa
0000010 5652 1f08 ccb8 9ee2 d85c 7c6b ddb2 bcbe
0000020 f841 bd90 9e7c 5be2 eecc e395 5971 ab7f
0000030 864f d402 74dd 1aa8 925d 8a80 de75 a0e3
0000040 cb64 4422 02f7 0c50 6174 f725 0653 2444
Many other unix variants provide /dev/random and /dev/urandom as well, without the blocking/non-blocking distinction.
The Windows equivalent is the CryptGenRandom() function.
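For example, an ordinary user-space program can consume these sources without dealing with platform details. A minimal Python sketch (os.urandom is shown as the portable wrapper; on Windows it is backed by the system CSPRNG rather than a device file):

import os

# Read 16 bytes straight from the non-blocking device file (Linux and most Unixes).
with open("/dev/urandom", "rb") as f:
    print(f.read(16).hex())

# Portable equivalent: ask the operating system's CSPRNG directly.
print(os.urandom(16).hex())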
How does the operating system generate pseudo-randomness?
operating-systems cryptography randomness security entropy
Adam Matan
What research have you done? Have you looked on standard sites, like Wikipedia, or on Security.SE or Crypto.SE? Wikipedia has an article on this: en.wikipedia.org/wiki//dev/random. When Wikipedia has an article that answers your question, that's pretty much the definition of not having done enough research. (We expect you to do a significant amount of research before asking here, and to show us in the question research you've done.)
The title and the body of your question ask two different questions: how the OS creates entropy (this should really be obtains entropy), and how it generates pseudo-randomness from this entropy. I'll start by explaining the difference.
Where does randomness come from?
Random number generators (RNG) come in two types:
Pseudo-random number generators (PRNG), also called deterministic random bit generators (DRBG) or combinations thereof, are deterministic algorithms which maintain a fixed-size variable internal state and compute their output from that state.
Hardware random number generator (HRNG), also called "true" random number generators, are based on physical phenomena. "True" is a bit of a misnomer, because there are no sources of information that are known to be truly random, only sources of information that are not known to be predictable.
Some applications, such as simulations of physical phenomena, can be content with random numbers that pass statistical tests. Other applications, such as the generation of cryptographic keys, require a stronger property: unpredictability. Unpredictability is a security property, not (only) a statistical property: it means that an adversary cannot guess the output of the random number generator. (More precisely, you can measure the quality of the RNG by measuring the probability for an adversary to guess each bit of RNG output. If the probability is measurably different from 1/2, the RNG is bad.)
There are physical phenomena that produce random data with good statistical properties — for example, radioactive decay, or some astronomical observations of background noise, or stock market fluctuations. Such physical measurements need conditioning (whitening), to turn biased probability distributions into a uniform probability distribution. A physical measurement that is known to everyone isn't good for cryptography: stock market fluctuations might be good for geohashing, but you can't use them to generate secret keys.
Cryptography requires secrecy: an adversary must not be able to find out the data that went into conditioning. There are cryptographically secure pseudo-random number generators (CSPRNG): PRNG algorithms whose output is suitable for use in cryptographic applications, in addition to having good statistical properties. One of the properties that make a CSPRNG cryptographically secure is that its output does not allow an adversary to reconstruct the internal state (knowing all the bits but one produced by a CSPRNG does not help to find the missing bit). I won't go into how to make a CSPRNG because that's the easy bit — you can follow recipes given by professional cryptographers (use a standard algorithm, such as Hash_DRBG, HMAC_DRBG or CTR_DRBG from NIST SP 800-90A) or the ANSI X9.31 PRNG. The CSPRNG requires two properties of its state in order to be secure:
The state must be kept secret from the start and at all times (though exposure of the state will not reveal past outputs).
The state must be linear: the RNG must never be started twice from the same state.
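To make the idea of a deterministic algorithm expanding a small secret state concrete, here is a much-simplified sketch in the spirit of HMAC_DRBG (it omits reseed counters, personalization strings, and other requirements of SP 800-90A, so it is illustrative only and not a compliant implementation):

import hmac, hashlib

class TinyHmacDrbg:
    # Illustrative only: a stripped-down HMAC-DRBG-style generator.
    def __init__(self, seed):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(seed)

    def _update(self, data=b""):
        self.K = hmac.new(self.K, self.V + b"\x00" + data, hashlib.sha256).digest()
        self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
        if data:
            self.K = hmac.new(self.K, self.V + b"\x01" + data, hashlib.sha256).digest()
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()

    def generate(self, n):
        out = b""
        while len(out) < n:
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
            out += self.V
        self._update()  # advance the state after every request
        return out[:n]

drbg = TinyHmacDrbg(b"at least 32 bytes of real entropy would go here")
print(drbg.generate(16).hex())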
Architecture of a random number generator
In practice, almost all good random number generators combine a CSPRNG with one or more entropy sources. To put it succinctly, entropy is a measure of the unpredictability of a source of data. Basing a random number generator purely on a hardware RNG is difficult:
The raw physical data is likely to need conditioning anyway, to turn probabilistic data into a uniform distribution.
The output from the source of randomness must be kept secret.
Entropy sources are often slow compared with the demand.
Thus the RNG in an operating system almost always works like this:
Accumulate sufficient entropy to build an unpredictable internal state.
Run a CSPRNG, using the accumulated entropy as the seed, i.e. as the initial value of the internal state.
Optionally, periodically mix additional entropy into the internal state. (This is not strictly necessary, since entropy is not "consumed" at any measurable rate. It helps against certain threats that leak the RNG state without compromising the whole system.)
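A toy sketch of this accumulate-then-seed structure (illustrative only; a real kernel uses carefully designed mixing functions and entropy accounting rather than a bare hash, and the event sources shown are stand-ins):

import hashlib, os, time

pool = hashlib.sha256()

def mix_event(observation):
    # Step 1: fold each unpredictable observation into the pool.
    pool.update(observation)

# Stand-ins for the kinds of observations an OS might fold in.
mix_event(time.time_ns().to_bytes(8, "little"))   # an event timestamp
mix_event(os.urandom(32))                         # stand-in for a hardware source

# Step 2: once enough entropy has accumulated, derive a seed...
seed = pool.digest()
# Step 3: ...and use it as the initial state of a CSPRNG (e.g., the sketch above).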
A random number generation service is part of the job of an operating system, because entropy gathering requires access to hardware, and entropy sources constitute a shared resource: the operating system must assemble them and derive output from them that will suit applications. Pseudo-random conditioning of the entropy sources is required in the operating system; it might as well be cryptographically secure, because this isn't fundamentally harder (and it is required on operating systems where applications do not trust each other; on fully cooperative systems, each application would have to run its own CSPRNG internally if the operating system didn't provide one anyway).
Most systems with persistent storage will load an RNG seed from disk (I'll use "disk" as an abbreviation for any kind of persistent storage) when they boot, and overwrite the seed with some fresh pseudo-random data generated from that seed, or if available with random data generated from that seed plus another entropy source. This way, even if entropy is not available after a reboot, the entropy from a previous session is reused.
Some care must be taken about the saved state. Remember how I said the state must be linear? If you boot twice from the same disk state, you'll get the same RNG outputs. If this is a possibility in your environment, you need another source of entropy. Take care when restoring from backups, or when cloning a virtual machine. One technique for cloning is to mix the stored entropy with some environmental data that is predictable but unique (e.g. time and MAC address); beware that if the environmental data is predictable, anyone in possession of the stored VM state can reconstruct the seed of a forked VM instance.
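A persisted-seed scheme along these lines might be sketched as follows (the file path and sizes are invented for the illustration, and uuid.getnode plus the current time stand in for "predictable but unique" environmental data):

import hashlib, time, uuid

SEED_FILE = "/var/lib/example/random-seed"  # hypothetical location

def load_boot_seed():
    try:
        with open(SEED_FILE, "rb") as f:
            stored = f.read(64)
    except FileNotFoundError:
        stored = b""  # first boot: no saved entropy yet
    # Mix in unique-but-predictable environment data so cloned images diverge;
    # as noted above, this does not protect against someone who holds the stored seed.
    env = uuid.getnode().to_bytes(6, "big") + time.time_ns().to_bytes(8, "big")
    return hashlib.sha256(stored + env).digest()

def save_seed_for_next_boot(rng_output):
    # Overwrite the stored seed with fresh RNG output so the same state is never reused.
    with open(SEED_FILE, "wb") as f:
        f.write(rng_output)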
Entropy sources
Finding (and correctly using) entropy sources is the most challenging part of random number generation in an operating system. The available entropy sources will necessarily be dependent on the hardware and on which environment the hardware runs in.
If you're lucky, your hardware provides a peripheral which can be used as an entropy source: a hardware random number generator, either dedicated or side-purposed. For example:
thermal noise
avalanche noise from an avalanche diode
various types of (combinations of) oscillators, such as ring oscillators
radioactive decay
various quantum phenomena that I couldn't explain
acoustic noise
camera noise
NIST SP800-90B provides design guidelines for hardware RNG. Evaluating a hardware RNG is difficult. Hardware RNGs are typically delicate beasts, which need to be used with care: many types require some time after boot and some time between reads in order to stabilize, and they are often sensitive to environmental conditions such as the temperature, etc.
Intel x86-64 processors based on the Ivy Bridge architecture provide the RdRand instruction which provides the output from a CSPRNG seeded by thermal noise. Most smartphone processors include a hardware entropy source, though Android doesn't always use it.
Systems that lack a strong entropy source have to make do with combining weak entropy sources and hoping (ensuring would be too strong a word) that they will suffice. Random mouse movements are popular for client machines, and you might have seen the security show by certain cryptography programs that ask you to move the mouse (even though on any 21st century PC operating system the OS will have accumulated entropy without the application needing to bother).
If you want to look at an example, you can look at Linux, though beware that it isn't perfect. In particular, /dev/random blocks too often (because it blocks until enough entropy is available, with an overly conservative notion of entropy), whereas /dev/urandom is almost always good except on first boot but gives no indication when it doesn't have enough entropy. Linux has drivers for many HRNG devices, and it accumulates entropy from various devices (including input devices) and disk timings.
If you have (confidential) persistent storage, you can use it to save entropy from one boot to the next, as indicated above. The first boot is a delicate time: the system may be in a fairly predictable state at that point, especially on mass-produced devices that essentially operate out of the factory in the same way. Some embedded devices that have persistent storage are provisioned with an initial seed in the factory (produced by a RNG running on a computer in the factory). In virtualized server environments, initial entropy can be provisioned when instantiating a virtual machine from the host or from an entropy server.
Badly-seeded devices are a widespread problem in practice — a study of public RSA keys found that many servers and devices had keys that were generated with a poor RNG, most likely a good PRNG that was insufficiently seeded. As an OS designer, you cannot solve this problem on your own: it is the job of the entity in control of the deployment chain to ensure that the RNG will be properly seeded at first boot. Your task as an OS designer is to provide a proper RNG, including an interface to provide that first seed, and to ensure proper error signaling if the RNG is used before it is properly seeded.
Gilles 'SO- stop being evil'
Freaking stellar answer.
– Adam Maras
This == awesome.
– BryceAtNetwork23
"there are no sources of information that are known to be truly random" I disagree. The prevalent (Copenhagen) interpretation of quantum mechanics says that the result of measuring a system that is in a superposition of possible outcomes is truly random, in the sense that we can never predict the outcome but can just give a probability distribution (at best). Source: physics grad student here
– balu
In addition to Gilles answer, interrupts can also be used to establish entropy. In Linux for example, when adding an interrupt handler you can define whether the occurrence of this interrupt should be used as a contribution to the kernel's entropy pool.
Of course this should never be the case with interrupts that an attacker might be able to determine. For example, at first glance network traffic (i.e., the resulting interrupts) seems to be a good source of randomness. However, an attacker might be able to manipulate your traffic in a way that lets him predict the randomness you need for some operation.
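As a toy, user-space illustration of folding event timings into a pool (real kernels do this with interrupt timestamps and cycle counters in C, not Python; this sketch only shows the shape of the idea):

import hashlib, time

pool = hashlib.sha256()

def on_event():
    # Fold the precise arrival time of the event into the entropy pool.
    pool.update(time.perf_counter_ns().to_bytes(8, "little"))

for _ in range(1000):  # pretend 1000 interrupts arrived
    on_event()

print(pool.hexdigest()[:16])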
Philipp Murry
October 2012, 6(4): 563-596. doi: 10.3934/jmd.2012.6.563
A dynamical approach to Maass cusp forms
Anke D. Pohl 1,
Mathematisches Institut, Georg-August-Universität Göttingen, Bunsenstr. 3–5, 37073 Göttingen, Germany
Received September 2012 Published January 2013
For nonuniform cofinite Fuchsian groups $\Gamma$ that satisfy a certain additional geometric condition, we show that the Maass cusp forms for $\Gamma$ are isomorphic to $1$-eigenfunctions of a finite-term transfer operator. The isomorphism is constructive.
Keywords: geodesic flow, symbolic dynamics, Maass cusp forms, period functions, transfer operator.
Mathematics Subject Classification: Primary: 11F37, 37C30; Secondary: 37B10, 37D35, 37D40, 11F6.
Citation: Anke D. Pohl. A dynamical approach to Maass cusp forms. Journal of Modern Dynamics, 2012, 6 (4) : 563-596. doi: 10.3934/jmd.2012.6.563
Anke D. Pohl | CommonCrawl |
Why Can't Creationists Do Mathematics?
I suppose it's not so remarkable that creationists can't do mathematics. After all, almost by definition, they don't understand evolution, so that alone should suggest some sort of cognitive deficit. What surprises me is that even creationists with math or related degrees often have problems with basic mathematics.
I wrote before about Marvin Bittinger, a mathematician who made up an entirely bogus "time principle" to estimate probabilities of events. And about Kirk Durston, who speaks confidently about infinity, but gets nearly everything wrong.
And here's yet another example: creationist Jonathan Bartlett, who is director of something called the Blyth Institute (which, mysteriously, lists no actual people associated with it, and seems to consist entirely of Jonathan Bartlett himself), has recently published a post about mathematics, in which he makes a number of very dubious assertions. I'll just mention two.
First, Bartlett calls polynomials the "standard algebraic functions". This is definitely nonstandard terminology, and not anything a mathematician would say. For mathematicians, an "algebraic function" is one that satisfies the analogue of an algebraic equation. For example, consider the function f(x) defined by f^2 + f + x = 0. The function (-1 + sqrt(1-4x))/2 satisfies this equation, and hence it would be called algebraic.
Second, Bartlett claims that "every calculus student learns a method for writing sine and cosine" in terms of polynomials, even though he also states this is "impossible". How can one resolve this contradiction? Easy! He explains that "If, however, we allow ourselves an infinite number of polynomial terms, we can indeed write sine and cosine in terms of polynomial functions".
This reminds me of the old joke about Lincoln: "In discussing the question, he used to liken the case to that of the boy who, when asked how many legs his calf would have if he called its tail a leg, replied, "Five," to which the prompt response was made that calling the tail a leg would not make it a leg."
If one allows "an infinite number of polynomial terms", then the result is not a polynomial! How hard can this be to understand? Such a thing is called a "power series"; it is not the same as a polynomial at all. Mathematicians even use a different notation to distinguish between these. Polynomials over a field F in one variable are written using the symbol F[x]; power series are written as F[[x]].
Moral of the story: don't learn mathematics from creationists.
P.S. Another example of Bartlett getting basic things wrong is here.
shallit
that calling the tail a leg would not make it a leg
True, but to nitpick – in certain linguistic contexts it would. Words are defined by use, so if a language did not distinguish between legs and tails and had one word for both, and that word were "leg", then tails would be legs in the sense that all tails are legs, but not all legs are (functionally) tails.
For example, Slavic languages do not distinguish between fingers and toes; there is only the generic term "digit" for both.
leerudolph says
To be fairer to Bartlett than he deserves, in Abraham Robinson's Non-Standard Analysis one can have that (to quote his book with that title, p. 147) "the degree and the rank of a polynomial may now be infinite natural numbers" (the "rank" is how many coefficients are non-zero). And Polya, in Induction and Analogy in Mathematics, sections II.5–6, recalls how Euler in effect treated an expression which has "an infinity of terms, is of 'infinite degree'", as if it were a polynomial, to derive a conclusion that "was daring. 'The method was new and never used yet for such a purpose,' he wrote ten years later. He saw some objections himself and many objections were raised by his mathematical friends when they recovered from their first admiring surprise.'" Eventually, "He found a new proof. This proof, although hidden and ingenious, was based on more usual considerations and was accepted as completely rigorous." I believe that I have read (but can't find it by Googling; I have Polya on my bookshelf, but my present access to Robinson is limited, via Google Books, and anyway I don't think what I remember is actually there) that, with due care, there is an entirely rigorous proof (within NSA) of the given result of Euler that basically follows Euler's approach by using polynomials of infinite degree.
I doubt that Bartlett knows or cares about any of this.
Adrian Keister says
Can you please point out to me in Bartlett's article where exactly Bartlett claims that the result of writing sine or cosine in terms of polynomials is itself a polynomial? Writing $$\sin(x)=\sum_{k=0}^{\infty}\left[\frac{(-1)^k}{(2k+1)!}\,x^{2k+1}\right]$$ certainly is writing $\sin(x)$ in terms of polynomials, since each $x^{2k+1}$ qualifies on its own as a polynomial, but I don't see where Bartlett claims that the result of such a sum is itself a polynomial.
Can I also ask why Bartlett's creationism has anything at all to do with the quality of his mathematics? Because, on the face of it, your argument for why nobody should learn mathematics from creationists goes like this:
Premise: Bartlett is a terrible mathematician.
Premise: Bartlett is a creationist.
Intermediate Conclusion: Therefore all creationists are terrible mathematicians.
Premise: You should only learn mathematics from good mathematicians.
Conclusion: Therefore, you should not learn mathematics from creationists.
Is this a fair assessment of your argument? If so, I have a problem with it:
The Intermediate Conclusion commits the fallacy of hasty generalization and the fallacy of the illicit major or minor, depending on how you group the premises. Bartlett alone, or even Bartlett plus Durston plus Bittinger, is surely not a representative sample of all creationist mathematicians. No statistician would admit that a sample size of 3 is sufficient for anything. But even if Bartlett is a terrible mathematician, the intermediate conclusion that all creationists are terrible mathematicians is completely unwarranted. In the Intermediate Conclusion, you are talking about all creationists, but neither of the two first premises talks about all creationists (the term is not distributed in the premises). So that's the fallacy of the illicit major or minor.
As for the "standard algebraic functions", in the ILATE mnemonic device for integration by parts, the A refers to "algebraic", and is understood to mean, usually but certainly including, polynomials. And that's what would come to my mind if someone used the term "algebraic functions". Functions that express the roots of polynomials, of course, are included in the term as well. Perhaps Bartlett should have written, "It is impossible to write these functions in terms of standard algebraic functions (e.g., polynomial functions)." instead of "It is impossible to write these functions in terms of standard algebraic functions (i.e., polynomial functions)." On the other hand, mathematicians, including myself, sometimes have a tendency to dismiss what someone else says to a non-mathematical audience if it isn't precise. The problem with precise is that it's often pedagogically or rhetorically unsound. I'm not saying people should lie in order to persuade, just that not all details are critical for every situation. Even mathematicians know that, as proof details vary considerably depending on audience.
shallit says
Dear Adrian "It is my firm belief that evolution is not science" Keister:
You invent an imaginary argument that you ascribe to me. That's a fallacy called the "straw man". Look it up.
Nobody equates polynomials with "algebraic functions". One class is a strict subset of the other.
The last name is "Keister", not "Keisler", please.
The whole point of me summing up your argument is to make sure that you understand that I understand your argument, precisely to avoid strawman, of which I am well aware, having taught logic myself.
You have not done what I asked: if my summing up of your argument is not satisfactory to you, then please indicate how and where it doesn't describe your argument.
You have not responded at all to my comment about where Bartlett claims that the result of summing up an infinite collection of polynomials is itself a polynomial.
Or if I didn't make it clear, please let me make it clear now: could you please sum up your argument in a premise/conclusion fashion to your satisfaction?
You seem very confused. Not every blog post constitutes an argument. My post is commentary, not a formal deductive argument of any kind. You don't get to make up an argument and ascribe it to me; that's rude and unprofessional.
As for your other comment, I didn't reply because I am unable to teach reading comprehension in a comment. It is completely clear from the context that this is what he was implying.
Well obviously you don't think you have anything to learn from me, as evidenced by your rude, condescending remarks that are clearly calculated to bring this "conversation" to a close. I wouldn't be the least bit offended if someone tried to sum up an argument I had made, even if not done perfectly. I certainly did not try to "make up an argument" and ascribe it to you. I tried to sum up what I thought your argument was. I certainly don't regard such doings as rude and unprofessional; it happens all the time as part of scholarship. Your blog post is definitely an argument, because there is a statement at the end, "Moral of the story: don't learn mathematics from creationists." which is something you allege to follow from the statements before.
I firmly believe I can learn something from every person on this planet, including yourself; but if you're not interested, there's really nothing more to be said.
Indeed, you did precisely what I said you did. You made up an argument and ascribed it to me. That's where the rudeness comes from. You're just mad that I called you on it.
As for condescending, if you say stuff like "It is my firm belief that evolution is not science", don't expect most educated people to take you seriously.
If you think the phrase 'the moral of the story' signifies a logical argument has been made, you must be constantly disappointed when you read Christian literature.
You are contradicting yourself.
On the one hand, you say he should have said "It is impossible to write these functions in terms of standard algebraic functions (e.g., polynomial functions)." Presumably you intend this to be a true statement.
But you also say "Writing $$\sin(x)=\sum_{k=0}^{\infty}\left[\frac{(-1)^k}{(2k+1)!}\,x^{2k+1}\right]$$ certainly is writing $\sin(x)$ in terms of polynomials". Presumably you intend this to be true, also.
So you contradict yourself.
Everyone can see the linguistic con you're running here. The first "in terms of" means a finite expression. In the second you suddenly allow an infinite expression.
My comment was more intended to address the "standard algebraic functions (i.e., polynomial functions)" bit instead of the statement as a whole. As in, comparing the set of algebraic functions to the set of polynomial functions. I'm not sure I agree with the statement "It is impossible to write these functions in terms of standard algebraic functions (e.g., polynomial functions)." as it stands. The expression I wrote out, the standard Taylor series, is writing the sine function in terms of polynomials. I think most mathematicians would agree with that.
So I think you're right that Bartlett might be over-stating the impossibility of writing sine and cosine "in terms of" polynomials. But also keep in mind his audience, which is clearly not mathematicians. He's writing for the general public, and possibly students as well, where the subtleties involved might not be appropriate.
No, I don't think "most mathematicians" would agree that a power series expresses "sine and cosine in terms of polynomials".
I certainly wouldn't, any more than I would say that a series like π = 3 + 1/10 + 4/100 + 1/1000 + … writes "π in terms of rational numbers".
I'm still not seeing any necessary or sufficient connection at all between being a creationist and being good (or not) at mathematics, which is the subject of the post. What has the one to do with the other?
Could you just as easily find evolutionists that are bad at mathematics? As there are loads of people bad at mathematics, I would be shocked if there weren't evolutionists that are bad at mathematics. Does that mean I shouldn't learn mathematics from an evolutionist? I don't think so! One of the best math professors I had at Virginia Tech was an evolutionist, and I learned much from him.
Well, this one's easy to answer: people who are ignorant, misinformed, or intellectually dishonest, in one area (evolution) are likely to be ignorant, misinformed, or intellectually dishonest in another.
Your example of the "evolutionist" is not symmetric, since the "evolutionist" is not the one denying established science.
It's certainly possible that people who are ignorant, misinformed, etc., in one field are more likely to be so in another. But it's often just the reverse. Exceptional actors, e.g., can (and very often do) have horrible economics and politics. There are enough counterexamples to your claim to make it rather empty. Besides, you seem to equate being against established science with being ignorant, misinformed, and intellectually dishonest. By that reasoning, Isaac Newton would be ignorant, misinformed, and intellectually dishonest for disagreeing with the established Aristotelian physics which had held sway for hundreds of years before him.
You're painting with too broad a brush, and it just doesn't hold.
However, I do not sense that there are any words whatsoever I could say that will change your mind. I have no ethos with you, and me spending more time on it is a waste of both our time. It's your post, so you get the last say, but I'm not inclined to respond further.
Taking issue with the results in a paper or two is one thing. Claiming that an entire field with literally thousands of people working and publishing good papers in it, for over 100 years, "is not science" is just deranged.
I use the phrase "in terms of" in its usual sense. If you can write a true equation with $y$ on the LHS (typically), and an expression involving $x$ on the RHS, then you've written $y$ 'in terms of' $x.$ Certainly, in some contexts, additional clarification is necessary, like saying, "We write it formally as…" to indicate that you're not saying the equation is true, yet.
We might have to agree to disagree about whether most mathematicians would agree that a power series expresses "sine and cosine in terms of polynomials". I certainly would say that writing $\pi=3+1/10+4/100+\dots$ is writing $\pi$ in terms of rational numbers. Properties of the parts don't necessarily translate to the whole, and vice versa. If that's understood, there's nothing wrong with the phrase "writing $\pi$ in terms of rational numbers".
I think the main point of the article is flawed. Even if you find Bartlett's mathematics wrong, you have no basis whatever for making the immense non sequitur of saying "don't learn mathematics from creationists."
If you are asked, for example, to write ζ(3) "in terms of" known constants, and your answer is an infinite series of rational numbers, no one will take you seriously. I'm beginning to think you're just trolling.
bugzpodder says
shallit, I took some of your theory classes back in the day (ten years ago) and they were absolutely some of the best. Your topics were always explained in a manner that's crisp, clear, and highly enjoyable. Just want to say thank you! It's always satisfying to see concepts like the Thue-Morse sequence pop up from time to time; I can't believe I had never heard of it before you showed it to me!
You are too kind. Thank you.
WMDKitty -- Survivor says
Well, when you believe that 1+1+1 somehow equals 1…
Yeah, the doctrine of the Trinity is one of the more incoherent & unbelievable aspects of Christianity. Why do people nod and accept such drivel with eagerness?
Darwinist Jeffrey Shallit asks, why can't creationists do math? | Uncommon Descent says:
[…] Referencing Jonathan Bartlett's "Doing the impossible: A step-by-step guide," mathematician and Darwinian Jeffrey Shallit huffs, […]
Mathematics Gives Us Life Skills and Mental Tools | Mind Matters says:
[…] recent article on teaching calculus generated some pushback (crossposted here as well) from Jeffrey Shallit, a computer science professor who seems to consider […]
A Scatter Metric for the RBN
I recently posted a series of maps that showed the highly non-uniform geographic distribution of posting stations on the RBN (Reverse Beacon Network) as a function of time and frequency. While informative, these maps are very inconvenient, and do not lend themselves to any kind of quantitative analysis of the RBN's posting stations. What is needed is some kind of scatter metric, S, that in some way describes the non-uniform distribution of posting stations.
At least three basic approaches to creating such a metric suggest themselves (to me, anyway; there are probably others):
Metrics based directly on a distance metric;
Metrics based on grid occupancy;
Metrics based indirectly on a distance metric.
Metrics Based Directly on a Distance Metric
Several scatter metrics based directly on a distance metric suggest themselves, perhaps the most obvious of which would be some number that compares the actual geographic dispersion of stations reporting at a particular frequency and over some period of time (such as a year or a month) to an "ideal" dispersion of the same number of stations equally spread around the world. This would allow calculation of an efficiency metric that could then be used to compare the geographic efficiency of the RBN as a function of time and frequency.
Definition of a Distance Metric
But before we go too far down this path, we need to consider what distance metric to use. For two points on the surface of a sphere (we'll assume that the Earth is a sphere -- and hence its surface is a 2-sphere -- which keeps things simple at little cost in accuracy), there are two obvious reasonable ways of defining the distance between them: the ordinary 3-space Cartesian distance, and the length of the shortest path on the 2-sphere surface. We will denote these metrics by ℓC and ℓS respectively.
If we denote the radius of the Earth by RE, then it is easy to derive the following (equivalent) relationships between ℓC and ℓS in the domain 0 ≤ ℓC ≤ 2RE:
$$ \ell_C^2 = 2R_E^2\left(1 - \cos\left(\frac{\ell_S}{R_E}\right)\right) $$
$$ \ell_S = R_E \arccos\left(1 - \frac{\ell_C^2}{2R_E^2}\right) $$
We can plot this relationship with a trivial gnuplot program:
R = 6371
lc(ls) = sqrt(2) * R * sqrt(1 - cos(ls / R))
set xlabel "ℓ_S 2-sphere distance (km)"
set ylabel "ℓ_C 3-space distance (km)"
set title "ℓ_C as a Function of ℓ_S for Spherical Earth"
plot [ls=0:pi*R] lc(ls) notitle
which produces:
No surprise there, of course: ℓC is a strictly monotonically increasing non-linear function of ℓS. (Well, to be more precise, that is true throughout the domain except at the point ℓS = π × RE .)
Anyway, this tells us that if a scatter metric is to be based on distances between posting stations, it makes no fundamental difference whether we measure distances by ℓC or ℓS: the details of the calculations might (and in general would) change because of the non-linear relationship between the two ways of measuring distance, but there is no intrinsic reason to prefer one measurement over the other since there is a one-to-one mapping between the two measures.
Once a distance metric is defined, one can define some function based on that metric to express the amount of scatter of RBN posting stations across the globe. Ideally, as mentioned above, one could then compare that scatter to the scatter of the same number of stations uniformly spread over the surface of the Earth.
The most obvious scatter function defined along these lines is the mean separation between posting stations, using one of the two distance metrics described above. Since it makes no substantive difference which distance metric we choose, we will use ℓS, simply because that corresponds to the most common meaning of "distance" as used in amateur radio.
For N points (i.e., posting stations) P1, P2, ... PN we can define a plausible scatter function S by:
$$ S = {2 \over {N \times (N - 1) }} \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} {_i}{\Delta}_j$$
where iΔj is the value of the distance metric ℓS for the points Pi and Pj.
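For concreteness, here is a minimal Python sketch of this computation, assuming each posting station is given as a (latitude, longitude) pair in degrees and using the same spherical-Earth radius as in the gnuplot program above:

from math import radians, sin, cos, acos
from itertools import combinations

R = 6371  # spherical Earth radius in km, as above

def l_s(p1, p2):
    # great-circle distance between two (lat, lon) points given in degrees
    lat1, lon1 = map(radians, p1)
    lat2, lon2 = map(radians, p2)
    c = sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon1 - lon2)
    return R * acos(max(-1.0, min(1.0, c)))  # clamp to guard against rounding error

def scatter_metric(points):
    # mean pairwise separation: (2 / (N * (N - 1))) * sum of l_s over all pairs
    pairs = list(combinations(points, 2))
    return sum(l_s(p, q) for p, q in pairs) / len(pairs)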
We can then compare this to the ideal value obtained from points uniformly spread across the surface of the Earth.
Unfortunately, there is no known algebraic method for determining such an ideal "uniformly spread" dispersion, which is equivalent to finding a solution to what is known as the Tammes problem, which is a particular (unsolved) problem in the theory of spherical codes. There are various ways of computing solutions that are likely to be equal, or very close to, the ideal dispersion (for example: place N points on a sphere each the source of a repulsive force that decays with distance; add some friction, and then determine the location of all the points when they have all ceased to move). However, one could hardly regard this as a clean process, especially as it is not guaranteed to provide the true "most efficient" distribution.
Instead of a "uniformly spread" dispersion (which would maximise the minimum distance calculated across the set of points), though, we can compare the value of the metric S for the RBN to the value of the same metric for a random distribution of points. (By "a random distribution of points" here I mean that the probability of finding some number P points within any particular area A on the 2-sphere is independent of the position and shape of A.)
Symmetry arguments lead us to the (perhaps surprising) conclusion that the expectation value of the mean separation of such a random distribution of points must be independent of the number of points in the network, and will have a value equal to half the maximal value of the distance metric on the 2-sphere. (To put it in terms of the Earth: this means that the expectation value will be one quarter the length of the planet's circumference, or almost exactly 10,000 km.) Thus, we can immediately compare the scatter metric S of the RBN defined above to the "ideal" value of 10,000 -- or, since the ideal value is independent of the number of points, we can simply use the value of the scatter metric as-is, and mentally divide by 10,000 to obtain an "efficiency percentage".
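That claim is easy to check numerically; a quick sketch (reusing scatter_metric and R from the code above) samples points uniformly over the sphere and evaluates the same metric:

import random
from math import asin, degrees, pi

def random_point():
    # uniform on the sphere: sin(latitude) is uniform on [-1, 1], longitude is uniform
    return (degrees(asin(random.uniform(-1.0, 1.0))), random.uniform(-180.0, 180.0))

print(scatter_metric([random_point() for _ in range(500)]))  # close to 10,000 km
print(pi * R / 2)  # one quarter of the circumference: about 10,007 km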
Mensal values of the metric S show a gradual but sustained increase (the Pearson correlation coefficient is 0.78):
The red line is the best-fit linear regression to the data, which suggests that this scatter metric is increasing at the rate of about 145 per year.
Instead of plotting the metric as a function of time, it probably makes more sense to plot it as a function of the number of posting stations (excluding those posting stations for which location information is unavailable from the RBN):
The correlation coefficient of the values on this plot is 0.75 -- almost identical to that of the plot based on time (the slope is about 4.8 per poster).
The story of both these plots is that the scatter metric has really changed very little since the RBN's inception: it has increased by only about 30%, even though the number of posters has increased by several hundred percent. This suggests that while the organisers of the RBN have been successful in persuading many additional stations to join the network, there has been but little improvement in the geographical diversity of those stations (as indeed is apparent if one looks at the maps; this in turn suggests that this metric is a not-unreasonable quantitative reflection of the amount of geographic scatter in the underlying network).
This post is now long enough... I'll move on to metrics based on grid occupancy in another post.
New proof systems for sustainable blockchains: proofs of space and verifiable delay functions Abstract
Krzysztof Pietrzak
The distinctive feature of Bitcoin is that it achieves decentralisation in an open setting where everyone can join. This is achieved at a high price: honest parties must constantly dedicate more computational power towards securing Bitcoin's blockchain than is available to a potential adversary, which leads to a massive waste of energy; at its peak so far, the electricity used for Bitcoin mining equaled the electricity consumption of Austria. In this lecture I will discuss how disk-space, instead of computation, can be used as a resource to construct a more sustainable blockchain. We will see definitions and constructions of "proof of space" and "verifiable delay functions", and how they can be used to construct a blockchain with similar dynamics and security properties as the Bitcoin blockchain.
Streamlined blockchains: A simple and elegant approach (tutorial) Abstract
Elaine Shi
A blockchain protocol (also called state machine replication) allows a set of nodes to agree on an ever-growing, linearly ordered log of transactions. In this tutorial, we present a new paradigm called "streamlined blockchains". This paradigm enables a new family of protocols that are extremely simple and natural: every epoch, a proposer proposes a block extending from a notarized parent chain, and nodes vote if the proposal's parent chain is not too old. Whenever a block gains enough votes, it becomes notarized. Whenever a node observes a notarized chain with several blocks of consecutive epochs at the end, then the entire chain chopping off a few blocks at the end is final. By varying the parameters highlighted in blue, we illustrate two variants for the partially synchronous and synchronous settings respectively. We present very simple proofs of consistency and liveness. We hope that this tutorial provides a compelling argument why this new family of protocols should be used in lieu of classical candidates (e.g., PBFT, Paxos, and their variants), both in practical implementation and for pedagogical purposes.
Towards Attribute-Based Encryption for RAMs from LWE: Sub-linear Decryption, and More Abstract
Prabhanjan Ananth Xiong Fan Elaine Shi
Attribute based encryption (ABE) is an advanced encryption system with a built-in mechanism to generate keys associated with functions which in turn provide restricted access to encrypted data. Most of the known candidates of attribute based encryption model the functions as circuits. This results in significant efficiency bottlenecks, especially in the setting where the function associated with the ABE key is represented by a random access machine (RAM) and a database, with the runtime of the RAM program being sublinear in the database size. In this work we study the notion of attribute based encryption for random access machines (RAMs), introduced in the work of Goldwasser, Kalai, Popa, Vaikuntanathan and Zeldovich (Crypto 2013). We present a construction of attribute based encryption for RAMs satisfying sublinear decryption complexity assuming learning with errors; this is the first construction based on standard assumptions. Previously, Goldwasser et al. achieved this result based on non-falsifiable knowledge assumptions. We also consider a dual notion of ABE for RAMs, where the database is in the ciphertext and we show how to achieve this dual notion, albeit with large attribute keys, also based on learning with errors.
On the Non-existence of Short Vectors in Random Module Lattices Abstract
Ngoc Khanh Nguyen
Recently, Lyubashevsky & Seiler (Eurocrypt 2018) showed that small polynomials in the cyclotomic ring $$\mathbb {Z}_q[X]/(X^n+1)$$, where n is a power of two, are invertible under special congruence conditions on prime modulus q. This result has been used to prove certain security properties of lattice-based constructions against unbounded adversaries. Unfortunately, due to the special conditions, working over the corresponding cyclotomic ring does not allow for efficient use of the Number Theoretic Transform (NTT) algorithm for fast multiplication of polynomials and hence, the schemes become less practical.In this paper, we present how to overcome this limitation by analysing zeroes in the Chinese Remainder (or NTT) representation of small polynomials. As a result, we provide upper bounds on the probabilities related to the (non)-existence of a short vector in a random module lattice with no assumptions on the prime modulus. We apply our results, along with the generic framework by Kiltz et al. (Eurocrypt 2018), to a number of lattice-based Fiat-Shamir signatures so they can both enjoy tight security in the quantum random oracle model and support fast multiplication algorithms (at the cost of slightly larger public keys and signatures), such as the Bai-Galbraith signature scheme (CT-RSA 2014), $$\mathsf {Dilithium\text {-}QROM}$$ (Kiltz et al., Eurocrypt 2018) and $$\mathsf {qTESLA}$$ (Alkim et al., PQCrypto 2017). Our techniques can also be applied to prove that recent commitment schemes by Baum et al. (SCN 2018) are statistically binding with no additional assumptions on q.
Non-Committing Encryption with Quasi-Optimal Ciphertext-Rate Based on the DDH Problem Abstract
Yusuke Yoshida Fuyuki Kitagawa Keisuke Tanaka
Non-committing encryption (NCE) was introduced by Canetti et al. (STOC '96). Informally, an encryption scheme is non-committing if it can generate a dummy ciphertext that is indistinguishable from a real one. The dummy ciphertext can be opened to any message later by producing a secret key and an encryption random coin which "explain" the ciphertext as an encryption of the message. Canetti et al. showed that NCE is a central tool to achieve multi-party computation protocols secure in the adaptive setting. An important measure of the efficiency of NCE is the ciphertext rate, that is, the ciphertext length divided by the message length, and previous works studying NCE have focused on constructing NCE schemes with better ciphertext rates. We propose an NCE scheme satisfying the ciphertext rate based on the decisional Diffie-Hellman (DDH) problem, where is the security parameter. The proposed construction achieves the best ciphertext rate among existing constructions proposed in the plain model, that is, the model without using common reference strings. Prior to our work, an NCE scheme with the best ciphertext rate based on the DDH problem was the one proposed by Choi et al. (ASIACRYPT '09) that has ciphertext rate . Our construction of NCE is similar in spirit to that of the recent construction of the trapdoor function proposed by Garg and Hajiabadi (CRYPTO '18).
4-Round Luby-Rackoff Construction is a qPRP Abstract
Akinori Hosoyamada Tetsu Iwata
The Luby-Rackoff construction, or the Feistel construction, is one of the most important approaches to construct secure block ciphers from secure pseudorandom functions. The 3- and 4-round Luby-Rackoff constructions are proven to be secure against chosen-plaintext attacks (CPAs) and chosen-ciphertext attacks (CCAs), respectively, in the classical setting. However, Kuwakado and Morii showed that a quantum superposed chosen-plaintext attack (qCPA) can distinguish the 3-round Luby-Rackoff construction from a random permutation in polynomial time. In addition, Ito et al. recently showed a quantum superposed chosen-ciphertext attack (qCCA) that distinguishes the 4-round Luby-Rackoff construction. Since Kuwakado and Morii showed the result, a problem of much interest has been how many rounds are sufficient to achieve provable security against quantum query attacks. This paper answers this fundamental question by showing that 4 rounds suffice against qCPAs. Concretely, we prove that the 4-round Luby-Rackoff construction is secure up to $$O(2^{n/12})$$ quantum queries. We also give a query upper bound for the problem of distinguishing the 4-round Luby-Rackoff construction from a random permutation by showing a distinguishing qCPA with $$O(2^{n/6})$$ quantum queries. Our result is the first to demonstrate the security of a typical block-cipher construction against quantum query attacks, without any algebraic assumptions. To give security proofs, we use an alternative formalization of Zhandry's compressed oracle technique.
Forkcipher: A New Primitive for Authenticated Encryption of Very Short Messages Abstract
Elena Andreeva Virginie Lallemand Antoon Purnal Reza Reyhanitabar Arnab Roy Damian Vizár
Highly efficient encryption and authentication of short messages is an essential requirement for enabling security in constrained scenarios such as the CAN FD in automotive systems (max. message size 64 bytes), massive IoT, critical communication domains of 5G, and Narrowband IoT, to mention a few. In addition, one of the NIST lightweight cryptography project requirements is that AEAD schemes shall be "optimized to be efficient for short messages (e.g., as short as 8 bytes)". In this work we introduce and formalize a novel primitive in symmetric cryptography called forkcipher. A forkcipher is a keyed primitive expanding a fixed-length input to a fixed-length output. We define its security as indistinguishability under a chosen ciphertext attack (for n-bit inputs to 2n-bit outputs). We give a generic construction validation via the new iterate-fork-iterate design paradigm. We then propose $$ {\mathsf {ForkSkinny}} $$ as a concrete forkcipher instance with a public tweak and based on SKINNY: a tweakable lightweight cipher following the TWEAKEY framework. We conduct extensive cryptanalysis of $$ {\mathsf {ForkSkinny}} $$ against classical and structure-specific attacks. We demonstrate the applicability of forkciphers by designing three new provably-secure nonce-based AEAD modes which offer performance and security tradeoffs and are optimized for efficiency of very short messages. Considering a reference block size of 16 bytes, and ignoring possible hardware optimizations, our new AEAD schemes beat the best SKINNY-based AEAD modes. More generally, we show forkciphers are suited for lightweight applications dealing with predominantly short messages, while at the same time allowing handling arbitrary message sizes. Furthermore, our hardware implementation results show that when we exploit the inherent parallelism of $$ {\mathsf {ForkSkinny}} $$ we achieve the best performance when directly compared with the most efficient mode instantiated with SKINNY.
Structure-Preserving and Re-randomizable RCCA-Secure Public Key Encryption and Its Applications Abstract
Antonio Faonio Dario Fiore Javier Herranz Carla Ràfols
Re-randomizable RCCA-secure public key encryption (Rand-RCCA PKE) schemes reconcile the property of re-randomizability of the ciphertexts with the need of security against chosen-ciphertext attacks. In this paper we give a new construction of a Rand-RCCA PKE scheme that is perfectly re-randomizable. Our construction is structure-preserving, can be instantiated over Type-3 pairing groups, and achieves better computation and communication efficiency than the state of the art perfectly re-randomizable schemes (e.g., Prabhakaran and Rosulek, CRYPTO'07). Next, we revive the Rand-RCCA notion showing new applications where our Rand-RCCA PKE scheme plays a fundamental part: (1) We show how to turn our scheme into a publicly-verifiable Rand-RCCA scheme; (2) We construct a malleable NIZK with a (variant of) simulation soundness that allows for re-randomizability; (3) We propose a new UC-secure Verifiable Mix-Net protocol that is secure in the common reference string model. Thanks to the structure-preserving property, all these applications are efficient. Notably, our Mix-Net protocol is the most efficient universally verifiable Mix-Net (without random oracle) where the CRS is a uniformly random string of size independent of the number of senders. The property is of the essence when such protocols are used in large scale.
Indifferentiability of Truncated Random Permutations Abstract
Wonseok Choi Byeonghak Lee Jooyoung Lee
One of the natural ways of constructing a pseudorandom function from a pseudorandom permutation is to simply truncate the output of the permutation. When n is the permutation size and m is the number of truncated bits, the resulting construction is known to be indistinguishable from a random function up to $$2^{{n+m}\over 2}$$ queries, which is tight. In this paper, we study the indifferentiability of a truncated random permutation where a fixed prefix is prepended to the inputs. We prove that this construction is (regularly) indifferentiable from a public random function up to $$\min \{2^{{n+m}\over 3}, 2^{m}, 2^\ell \}$$ queries, while it is publicly indifferentiable up to $$\min \{ \max \{2^{{n+m}\over 3}, 2^{n \over 2}\}, 2^\ell \}$$ queries, where $$\ell $$ is the size of the fixed prefix. Furthermore, the regular indifferentiability bound is proved to be tight when $$m+\ell \ll n$$. Our results significantly improve upon the previous bound of $$\min \{ 2^{m \over 2}, 2^\ell \}$$ given by Dodis et al. (FSE 2009), allowing us to construct, for instance, an $$\frac{n}{2}$$-to-$$\frac{n}{2}$$ bit random function that makes a single call to an n-bit permutation, achieving $$\frac{n}{2}$$-bit security.
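To make the two constructions in this abstract concrete, here is a toy Python sketch of our own (the variable names and the tiny parameters are illustrative only, not the paper's): it truncates the output of an explicitly stored random permutation, with and without a fixed prefix prepended to the input.

import random

n, m, ell = 8, 3, 2  # permutation width, truncated bits, fixed-prefix bits (toy sizes)
perm = list(range(2 ** n))
random.shuffle(perm)  # stand-in for a random n-bit permutation

def truncated(x):
    # n-bit input -> (n - m)-bit output: drop the last m bits of the permutation output
    return perm[x] >> m

def truncated_with_prefix(x):
    # (n - ell)-bit input -> (n - m)-bit output: prepend a fixed ell-bit prefix, then truncate
    prefix = 0b10  # an arbitrary fixed prefix
    return perm[(prefix << (n - ell)) | x] >> m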
Anonymous AE Abstract
John Chan Phillip Rogaway
The customary formulation of authenticated encryption (AE) requires the decrypting party to supply the correct nonce with each ciphertext it decrypts. To enable this, the nonce is often sent in the clear alongside the ciphertext. But doing this can forfeit anonymity and degrade usability. Anonymity can also be lost by transmitting associated data (AD) or a session-ID (used to identify the operative key). To address these issues, we introduce anonymous AE, wherein ciphertexts must conceal their origin even when they are understood to encompass everything needed to decrypt (apart from the receiver's secret state). We formalize a type of anonymous AE we call anAE, anonymous nonce-based AE, which generalizes and strengthens conventional nonce-based AE, nAE. We provide an efficient construction for anAE, NonceWrap, from an nAE scheme and a blockcipher. We prove NonceWrap secure. While anAE does not address privacy loss through traffic-flow analysis, it does ensure that ciphertexts, now more expansively construed, do not by themselves compromise privacy.
iUC: Flexible Universal Composability Made Simple Abstract
Jan Camenisch Stephan Krenn Ralf Küsters Daniel Rausch
Proving the security of complex protocols is a crucial and very challenging task. A widely used approach for reasoning about such protocols in a modular way is universal composability. A perfect model for universal composability should provide a sound basis for formal proofs and be very flexible in order to allow for modeling a multitude of different protocols. It should also be easy to use, including useful design conventions for repetitive modeling aspects, such as corruption, parties, sessions, and subroutine relationships, such that protocol designers can focus on the core logic of their protocols.While many models for universal composability exist, including the UC, GNUC, and IITM models, none of them has achieved this ideal goal yet. As a result, protocols cannot be modeled faithfully and/or using these models is a burden rather than a help, often even leading to underspecified protocols and formally incorrect proofs.Given this dire state of affairs, the goal of this work is to provide a framework for universal composability which combines soundness, flexibility, and usability in an unmatched way. Developing such a security framework is a very difficult and delicate task, as the long history of frameworks for universal composability shows.We build our framework, called iUC, on top of the IITM model, which already provides soundness and flexibility while lacking sufficient usability. At the core of iUC is a single simple template for specifying essentially arbitrary protocols in a convenient, formally precise, and flexible way. We illustrate the main features of our framework with example functionalities and realizations.
Anomalies and Vector Space Search: Tools for S-Box Analysis Abstract
Xavier Bonnetain Léo Perrin Shizhu Tian
S-boxes are functions with an input so small that the simplest way to specify them is their lookup table (LUT). How can we quantify the distance between the behavior of a given S-box and that of an S-box picked uniformly at random?To answer this question, we introduce various "anomalies". These real numbers are such that a property with an anomaly equal to a should be found roughly once in a set of $$2^{a}$$ random S-boxes. First, we present statistical anomalies based on the distribution of the coefficients in the difference distribution table, linear approximation table, and for the first time, the boomerang connectivity table.We then count the number of S-boxes that have block-cipher like structures to estimate the anomaly associated to those. In order to recover these structures, we show that the most general tool for decomposing S-boxes is an algorithm efficiently listing all the vector spaces of a given dimension contained in a given set, and we present such an algorithm.Combining these approaches, we conclude that all permutations that are actually picked uniformly at random always have essentially the same cryptographic properties and the same lack of structure.
Sponges Resist Leakage: The Case of Authenticated Encryption Abstract
Jean Paul Degabriele Christian Janson Patrick Struck
In this work we advance the study of leakage-resilient Authenticated Encryption with Associated Data (AEAD) and lay the theoretical groundwork for building such schemes from sponges. Building on the work of Barwell et al. (ASIACRYPT 2017), we reduce the problem of constructing leakage-resilient AEAD schemes to that of building fixed-input-length function families that retain pseudorandomness and unpredictability in the presence of leakage. Notably, neither property is implied by the other in the leakage-resilient setting. We then show that such a function family can be combined with standard primitives, namely a pseudorandom generator and a collision-resistant hash, to yield a nonce-based AEAD scheme. In addition, our construction is quite efficient in that it requires only two calls to this leakage-resilient function per encryption or decryption call. This construction can be instantiated entirely from the T-sponge to yield a concrete AEAD scheme which we call $${ \textsc {Slae}}$$. We prove this sponge-based instantiation secure in the non-adaptive leakage setting. $${ \textsc {Slae}}$$ bears many similarities and is indeed inspired by $${ \textsc {Isap}}$$, which was proposed by Dobraunig et al. at FSE 2017. However, while retaining most of the practical advantages of $${ \textsc {Isap}}$$, $${ \textsc {Slae}}$$ additionally benefits from a formal security treatment.
Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes Abstract
Thomas Debris-Alazard Nicolas Sendrier Jean-Pierre Tillich
We present here a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) based on codes, the Wave-PSA family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized $$(U,U+V)$$-codes. Our proof follows the GPV strategy [28]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the left-over hash lemma. We instantiate the new Wave-PSA family with ternary generalized $$(U,U+V)$$-codes to design a "hash-and-sign" signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model.
Leakage Resilience of the Duplex Construction Abstract
Christoph Dobraunig Bart Mennink
Side-channel attacks, especially differential power analysis (DPA), pose a serious threat to cryptographic implementations deployed in a malicious environment. One way to counter side-channel attacks is to design cryptographic schemes to withstand them, an area that is covered amongst others by leakage resilient cryptography. So far, however, leakage resilient cryptography has predominantly focused on block cipher based designs, and insights in permutation based leakage resilient cryptography are scarce. In this work, we consider leakage resilience of the keyed duplex construction: we present a model for leakage resilient duplexing, derive a fine-grained bound on the security of the keyed duplex in said model, and map it to ideas of Taha and Schaumont (HOST 2014) and Dobraunig et al. (ToSC 2017) in order to use the duplex in a leakage resilient manner.
CSI-FiSh: Efficient Isogeny Based Signatures Through Class Group Computations Abstract
Ward Beullens Thorsten Kleinjung Frederik Vercauteren
In this paper we report on a new record class group computation of an imaginary quadratic field having 154-digit discriminant, surpassing the previous record of 130 digits. This class group is central to the CSIDH-512 isogeny based cryptosystem, and knowing the class group structure and relation lattice implies efficient uniform sampling and a canonical representation of its elements. Both operations were impossible before and allow us to instantiate an isogeny based signature scheme first sketched by Stolbunov. We further optimize the scheme using multiple public keys and Merkle trees, following an idea by De Feo and Galbraith. We also show that including quadratic twists allows to cut the public key size in half for free. Optimizing for signature size, our implementation takes 390 ms to sign/verify and results in signatures of 263 bytes, at the expense of a large public key. This is 300 times faster and over 3 times smaller than an optimized version of SeaSign for the same parameter set. Optimizing for public key and signature size combined, results in a total size of 1468 bytes, which is smaller than any other post-quantum signature scheme at the 128-bit security level.
Dual Isogenies and Their Application to Public-Key Compression for Isogeny-Based Cryptography Abstract
Michael Naehrig Joost Renes
The isogeny-based protocols SIDH and SIKE have received much attention for being post-quantum key agreement candidates that retain relatively small keys. A recent line of work has proposed and further improved compression of public keys, leading to the inclusion of public-key compression in the SIKE proposal for Round 2 of the NIST Post-Quantum Cryptography Standardization effort. We show how to employ the dual isogeny to significantly increase performance of compression techniques, reducing their overhead from 160–182% to 77–86% for Alice's key generation and from 98–104% to 59–61% for Bob's across different SIDH parameter sets. For SIKE, we reduce the overhead of (1) key generation from 140–153% to 61–74%, (2) key encapsulation from 67–90% to 38–57%, and (3) decapsulation from 59–65% to 34–39%. This is mostly achieved by speeding up the pairing computations, which has until now been the main bottleneck, but we also improve (deterministic) basis generation.
Verifiable Delay Functions from Supersingular Isogenies and Pairings Abstract
Luca De Feo Simon Masson Christophe Petit Antonio Sanso
We present two new Verifiable Delay Functions (VDF) based on assumptions from elliptic curve cryptography. We discuss both the advantages and drawbacks of our constructions, we study their security and we demonstrate their practicality with a proof-of-concept implementation.
New Code-Based Privacy-Preserving Cryptographic Constructions Abstract
Khoa Nguyen Hanh Tang Huaxiong Wang Neng Zeng
Code-based cryptography has a long history but did suffer from periods of slow development. The field has recently attracted a lot of attention as one of the major branches of post-quantum cryptography. However, its subfield of privacy-preserving cryptographic constructions is still rather underdeveloped, e.g., important building blocks such as zero-knowledge range proofs and set membership proofs, and even proofs of knowledge of a hash preimage, have not been known under code-based assumptions. Moreover, almost no substantial technical development has been introduced in the last several years.This work introduces several new code-based privacy-preserving cryptographic constructions that considerably advance the state-of-the-art in code-based cryptography. Specifically, we present 3 major contributions, each of which potentially yields various other applications. Our first contribution is a code-based statistically hiding and computationally binding commitment scheme with companion zero-knowledge (ZK) argument of knowledge of a valid opening that can be easily extended to prove that the committed bits satisfy other relations. Our second contribution is the first code-based zero-knowledge range argument for committed values, with communication cost logarithmic in the size of the range. A special feature of our range argument is that, while previous works on range proofs/arguments (in all branches of cryptography) only address ranges of non-negative integers, our protocol can handle signed fractional numbers, and hence, can potentially find a larger scope of applications. Our third contribution is the first code-based Merkle-tree accumulator supported by ZK argument of membership, which has been known to enable various interesting applications. In particular, it allows us to obtain the first code-based ring signatures and group signatures with logarithmic signature sizes.
A Critical Analysis of ISO 17825 ('Testing Methods for the Mitigation of Non-invasive Attack Classes Against Cryptographic Modules') Abstract
Carolyn Whitnall Elisabeth Oswald
The ISO standardisation of 'Testing methods for the mitigation of non-invasive attack classes against cryptographic modules' (ISO/IEC 17825:2016) specifies the use of the Test Vector Leakage Assessment (TVLA) framework as the sole measure to assess whether or not an implementation of (symmetric) cryptography is vulnerable to differential side-channel attacks. It is the only publicly available standard of this kind, and the first side-channel assessment regime to exclusively rely on a TVLA instantiation.TVLA essentially specifies statistical leakage detection tests with the aim of removing the burden of having to test against an ever increasing number of attack vectors. It offers the tantalising prospect of 'conformance testing': if a device passes TVLA, then, one is led to hope, the device would be secure against all (first-order) differential side-channel attacks.In this paper we provide a statistical assessment of the specific instantiation of TVLA in this standard. This task leads us to inquire whether (or not) it is possible to assess the side-channel security of a device via leakage detection (TVLA) only. We find a number of grave issues in the standard and its adaptation of the original TVLA guidelines. We propose some innovations on existing methodologies and finish by giving recommendations for best practice and the responsible reporting of outcomes.
Optimized Method for Computing Odd-Degree Isogenies on Edwards Curves Abstract
Suhri Kim Kisoon Yoon Young-Ho Park Seokhie Hong
In this paper, we present an efficient method to compute arbitrary odd-degree isogenies on Edwards curves. By using the w-coordinate, we optimized the isogeny formula on Edwards curves by Moody and Shumow. We demonstrate that Edwards curves have an additional benefit when recovering the coefficient of the image curve during isogeny computation. For $$\ell $$-degree isogeny where $$\ell =2s+1$$, our isogeny formula on Edwards curves outperforms Montgomery curves when $$s \ge 2$$. To better represent the performance improvements when w-coordinate is used, we implement CSIDH using our isogeny formula. Our implementation is about 20% faster than the previous implementation. The result of our work opens the door for the usage of Edwards curves in isogeny-based cryptography, especially for CSIDH which requires higher degree isogenies.
Strongly Secure Authenticated Key Exchange from Supersingular Isogenies Abstract
Xiu Xu Haiyang Xue Kunpeng Wang Man Ho Au Song Tian
This paper aims to address the open problem, namely, to find new techniques to design and prove security of supersingular isogeny-based authenticated key exchange (AKE) protocols against the widest possible adversarial attacks, raised by Galbraith in 2018. Concretely, we present two AKEs based on a double-key PKE in the supersingular isogeny setting secure in the sense of CK$$^+$$, one of the strongest security models for AKE. Our contributions are summarised as follows. Firstly, we propose a strong OW-CPA secure PKE, $$\mathsf {2PKE_{sidh}}$$, based on SI-DDH assumption. By applying modified Fujisaki-Okamoto transformation, we obtain a [OW-CCA, OW-CPA] secure KEM, $$\mathsf {2KEM_{sidh}}$$. Secondly, we propose a two-pass AKE, $$\mathsf {SIAKE}_2$$, based on SI-DDH assumption, using $$\mathsf {2KEM_{sidh}}$$ as a building block. Thirdly, we present a modified version of $$\mathsf {2KEM_{sidh}}$$ that is secure against leakage under the 1-Oracle SI-DH assumption. Using the modified $$\mathsf {2KEM_{sidh}}$$ as a building block, we then propose a three-pass AKE, $$\mathsf {SIAKE}_3$$, based on 1-Oracle SI-DH assumption. Finally, we prove that both $$\mathsf {SIAKE}_2$$ and $$\mathsf {SIAKE}_3$$ are CK$$^+$$ secure in the random oracle model and supports arbitrary registration. We also provide an implementation to illustrate the efficiency of our schemes. Our schemes compare favourably against existing isogeny-based AKEs. To the best of our knowledge, they are the first of its kind to offer security against arbitrary registration, wPFS, KCI, and MEX simultaneously. Regarding efficiency, our schemes outperform existing schemes in terms of bandwidth as well as CPU cycle count.
Location, Location, Location: Revisiting Modeling and Exploitation for Location-Based Side Channel Leakages Abstract
Christos Andrikos Lejla Batina Lukasz Chmielewski Liran Lerman Vasilios Mavroudis Kostas Papagiannopoulos Guilherme Perin Giorgos Rassias Alberto Sonnino
Near-field microprobes have the capability to isolate small regions of a chip surface and enable precise measurements with high spatial resolution. Being able to distinguish the activity of small regions has given rise to location-based side-channel attacks, which exploit the spatial dependencies of cryptographic algorithms in order to recover the secret key. Given the fairly uncharted nature of such leakages, this work revisits the location side-channel to broaden our modeling and exploitation capabilities. Our contribution is threefold. First, we provide a simple spatial model that partially captures the effect of location-based leakages. We use the newly established model to simulate the leakage of different scenarios/countermeasures and follow an information-theoretic approach to evaluate the security level achieved in every case. Second, we perform the first successful location-based attack on the SRAM of a modern ARM Cortex-M4 chip, using standard techniques such as difference of means and multivariate template attacks. Third, we put forward neural networks as classifiers that exploit the location side-channel and showcase their effectiveness on ARM Cortex-M4, especially in the context of single-shot attacks and small memory regions. Template attacks and neural network classifiers are able to reach high spatial accuracy, distinguishing between 2 SRAM regions of 128 bytes each with 100% success rate and distinguishing even between 256 SRAM byte-regions with 32% success rate. Such improved exploitation capabilities revitalize interest in location vulnerabilities on various implementations, ranging from RSA/ECC with large memory footprint, to lookup-table-based AES with smaller memory usage.
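As a rough illustration of the difference-of-means flavour of classifier mentioned in the abstract, the sketch below trains one mean trace per memory region and assigns a fresh trace to the nearest mean. The trace dimensions, leakage position, and two-region setup are illustrative assumptions, not the paper's measurement setup, and the paper's template attacks and neural networks are considerably richer.

```python
# Minimal nearest-mean ("difference of means"-style) classifier sketch for
# distinguishing memory regions from side-channel traces. All parameters are
# illustrative assumptions.
import numpy as np

def train_means(traces: np.ndarray, labels: np.ndarray) -> dict:
    """Average the traces of each region to obtain one mean trace per region."""
    return {r: traces[labels == r].mean(axis=0) for r in np.unique(labels)}

def classify(trace: np.ndarray, means: dict) -> int:
    """Assign a fresh trace to the region whose mean trace is closest (L2)."""
    return min(means, key=lambda r: np.linalg.norm(trace - means[r]))

rng = np.random.default_rng(1)
n_regions, n_traces, n_samples = 2, 500, 100
labels = rng.integers(0, n_regions, n_traces)
signal = np.zeros((n_regions, n_samples))
signal[1, 40:45] = 2.0                    # region 1 leaks at samples 40-44
traces = signal[labels] + rng.normal(0, 1, (n_traces, n_samples))

means = train_means(traces, labels)
test = signal[1] + rng.normal(0, 1, n_samples)
print(classify(test, means))              # prints 1 with high probability
```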
Hard Isogeny Problems over RSA Moduli and Groups with Infeasible Inversion Abstract
Salim Ali Altuğ Yilei Chen
We initiate the study of computational problems on elliptic curve isogeny graphs defined over RSA moduli. We conjecture that several variants of the neighbor-search problem over these graphs are hard, and provide a comprehensive list of cryptanalytic attempts on these problems. Moreover, based on the hardness of these problems, we provide a construction of groups with infeasible inversion, where the underlying groups are the ideal class groups of imaginary quadratic orders. Recall that in a group with infeasible inversion, computing the inverse of a group element is required to be hard, while performing the group operation is easy. Motivated by the potential cryptographic application of building a directed transitive signature scheme, the search for a group with infeasible inversion was initiated in the theses of Hohenberger and Molnar (2003). Later it was also shown to provide a broadcast encryption scheme by Irrer et al. (2004). However, to date the only case of a group with infeasible inversion is implied by the much stronger primitive of self-bilinear map constructed by Yamakawa et al. (2014) based on the hardness of factoring and indistinguishability obfuscation (iO). Our construction gives a candidate without using iO.
Streamlined Blockchains: A Simple and Elegant Approach (A Tutorial and Survey) Abstract
A blockchain protocol (also called state machine replication) allows a set of nodes to agree on an ever-growing, linearly ordered log of transactions. The classical consensus literature suggests two approaches for constructing a blockchain protocol: (1) through composition of single-shot consensus instances often called Byzantine Agreement; and (2) through direct construction of a blockchain where there is no clear-cut boundary between single-shot consensus instances. While conceptually simple, the former approach precludes cross-instance optimizations in a practical implementation. This perhaps explains why the latter approach has gained more traction in practice: specifically, well-known protocols such as Paxos and PBFT all follow the direct-construction approach. In this tutorial, we present a new paradigm called "streamlined blockchains" for directly constructing blockchain protocols. This paradigm enables a new family of protocols that are extremely simple and natural: every epoch, a proposer proposes a block extending from a notarized parent chain, and nodes vote if the proposal's parent chain is not . Whenever a block gains votes, it becomes notarized. Whenever a node observes a notarized chain with blocks of consecutive epochs at the end, then the entire chain chopping off blocks at the end is final. By varying the parameters highlighted in , we illustrate two variants for the partially synchronous and synchronous settings respectively. We present very simple proofs of consistency and liveness. We hope that this tutorial provides a compelling argument why this new family of protocols should be used in lieu of classical candidates (e.g., PBFT, Paxos, and their variants), both in practical implementation and for pedagogical purposes.
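The finality rule described above can be sketched in a few lines. Since the exact parameters are not spelled out in the abstract, the sketch below assumes the usual streamlined-blockchain instantiation: a window of three notarized blocks from consecutive epochs at the end of a chain finalizes everything except the last block. These parameter choices are assumptions for illustration only.

```python
# Minimal sketch of a streamlined-blockchain finality rule: a notarized chain
# ending in `window` blocks from consecutive epochs is final up to (but
# excluding) its tip. window=3 and the "chop off the last block" convention
# are assumed, not taken from the abstract.
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    epoch: int        # epoch in which the block was proposed
    payload: str      # transactions carried by the block

def finalized_prefix(notarized_chain: List[Block], window: int = 3) -> List[Block]:
    """Return the finalized prefix of a notarized chain."""
    if len(notarized_chain) < window:
        return []
    tail = notarized_chain[-window:]
    consecutive = all(b.epoch + 1 == nxt.epoch for b, nxt in zip(tail, tail[1:]))
    # Finalize everything except the last block of the consecutive run.
    return notarized_chain[:-1] if consecutive else []

chain = [Block(1, "tx-a"), Block(2, "tx-b"), Block(5, "tx-c"),
         Block(6, "tx-d"), Block(7, "tx-e")]
print([b.epoch for b in finalized_prefix(chain)])   # [1, 2, 5, 6]
```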
Collision Resistant Hashing from Sub-exponential Learning Parity with Noise Abstract
Yu Yu Jiang Zhang Jian Weng Chun Guo Xiangxue Li
The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications such as authentication protocols, pseudorandom generators/functions and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Inspired by the recent work of Applebaum et al. (ITCS 2017), we introduce a general construction of CRH from LPN for various parameter choices. We show that, just to mention a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN): (1) constant-noise LPN is $$2^{n^{0.5+\varepsilon }}$$-hard for any constant $$\varepsilon >0$$; (2) constant-noise LPN is $$2^{\varOmega (n/\log n)}$$-hard given $$q=\mathsf {poly}(n)$$ samples; or (3) low-noise LPN (of noise rate $$1/\sqrt{n}$$) is $$2^{\varOmega (\sqrt{n}/\log n)}$$-hard given $$q=\mathsf {poly}(n)$$ samples, there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN $$\rightarrow $$ bSVP $$\rightarrow $$ CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE $$\rightarrow $$ SIS $$\rightarrow $$ CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) that enables CRH in a similar manner to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem. Furthermore, under additional (arguably minimal) idealized assumptions such as small-domain random functions or random permutations (that trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender combining the best of the two worlds, namely, maximized (depth one) parallelizability and polynomial shrinkage. In particular, assuming $$2^{n^{0.5+\varepsilon }}$$-hard constant-noise LPN or $$2^{n^{0.25+\varepsilon }}$$-hard low-noise LPN, we obtain a collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and shrinks polynomially.
Approximate Trapdoors for Lattices and Smaller Hash-and-Sign Signatures Abstract
Yilei Chen Nicholas Genise Pratyay Mukherjee
We study a relaxed notion of lattice trapdoor called approximate trapdoor, which is defined to be able to invert Ajtai's one-way function approximately instead of exactly. The primary motivation of our study is to improve the efficiency of the cryptosystems built from lattice trapdoors, including the hash-and-sign signatures. Our main contribution is to construct an approximate trapdoor by modifying the gadget trapdoor proposed by Micciancio and Peikert [Eurocrypt 2012]. In particular, we show how to use the approximate gadget trapdoor to sample short preimages from a distribution that is simulatable without knowing the trapdoor. The analysis of the distribution uses a theorem (implicitly used in past works) regarding linear transformations of discrete Gaussians on lattices. Our approximate gadget trapdoor can be used together with the existing optimization techniques to improve the concrete performance of the hash-and-sign signature in the random oracle model under (Ring-)LWE and (Ring-)SIS assumptions. Our implementation shows that the sizes of the public-key & signature can be reduced by half from those in schemes built from exact trapdoors.
Dual-Mode NIZKs from Obfuscation Abstract
Dennis Hofheinz Bogdan Ursu
Two standard security properties of a non-interactive zero-knowledge (NIZK) scheme are soundness and zero-knowledge. But while standard NIZK systems can only provide one of those properties against unbounded adversaries, dual-mode NIZK systems allow one to choose dynamically and adaptively which of these properties holds unconditionally. The only known dual-mode NIZK schemes are Groth-Sahai proofs (which have proved extremely useful in a variety of applications), and the FHE-based NIZK constructions of Canetti et al. and Peikert et al., which are concurrent with and independent of this work. However, all these constructions rely on specific algebraic settings. Here, we provide a generic construction of dual-mode NIZK systems for all of NP. The public parameters of our scheme can be set up in one of two indistinguishable ways. One way provides unconditional soundness, while the other provides unconditional zero-knowledge. Our scheme relies on subexponentially secure indistinguishability obfuscation and subexponentially secure one-way functions, but otherwise only on comparatively mild and generic computational assumptions. These generic assumptions can be instantiated under any one of the DDH, $$k$$-LIN, DCR, or QR assumptions. As an application, we reduce the required assumptions necessary for several recent obfuscation-based constructions of multilinear maps. Combined with previous work, our scheme can be used to construct multilinear maps from obfuscation and a group in which the strong Diffie-Hellman assumption holds. We also believe that our work adds to the understanding of the construction of NIZK systems, as it provides a conceptually new way to achieve dual-mode properties.
Simple Refreshing in the Noisy Leakage Model Abstract
Stefan Dziembowski Sebastian Faust Karol Żebrowski
Masking schemes are a prominent countermeasure against power analysis and work by concealing the values that are produced during the computation through randomness. The randomness is typically injected into the masked algorithm using a so-called refreshing scheme, which is placed after each masked operation, and hence is one of the main bottlenecks for designing efficient masking schemes. The main contribution of our work is to investigate the security of a very simple and efficient refreshing scheme and prove its security in the noisy leakage model (EUROCRYPT'13). Compared to earlier constructions our refreshing is significantly more efficient and uses only n random values and $${<}2n$$ operations, where n is the security parameter. In addition we show how our refreshing can be used in more complex masked computation in the presence of noisy leakage. Our results are established using a new methodology for analyzing masking schemes in the noisy leakage model, which may be of independent interest.
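To make the cost profile mentioned above concrete, the sketch below shows a "circular" refreshing gadget in which each share absorbs one fresh random value and passes it to its neighbour, so the masked secret is unchanged while every share is re-randomized with roughly n random values and about 2n XORs. Whether this is exactly the scheme analysed in the paper is an assumption; the sketch only illustrates the general shape of such a refreshing.

```python
# Minimal sketch of a circular refreshing gadget for an n-share Boolean (XOR)
# masking of a byte. Whether this matches the paper's exact scheme is an
# assumption; it only illustrates the idea of cheap share re-randomization.
import secrets

def refresh_xor_sharing(shares: list[int]) -> list[int]:
    """Re-randomize an n-share XOR sharing without changing the secret."""
    n = len(shares)
    r = [secrets.randbelow(256) for _ in range(n)]
    # Each r[i] is XORed into exactly two shares, so it cancels on recombination.
    return [shares[i] ^ r[i] ^ r[(i + 1) % n] for i in range(n)]

def recombine(shares: list[int]) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

shares = [0x3a, 0x91, 0x5c]          # XOR sharing of 0x3a ^ 0x91 ^ 0x5c
fresh = refresh_xor_sharing(shares)
assert recombine(fresh) == recombine(shares)   # same secret, new randomness
print(hex(recombine(fresh)))
```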
On Kilian's Randomization of Multilinear Map Encodings Abstract
Jean-Sébastien Coron Hilder V. L. Pereira
Indistinguishability obfuscation constructions based on matrix branching programs generally proceed in two steps: first apply Kilian's randomization of the matrix product computation, and then encode the matrices using a multilinear map scheme. In this paper we observe that by applying Kilian's randomization after encoding, the complexity of the best attacks is significantly increased for CLT13 multilinear maps. This implies that much smaller parameters can be used, which improves the efficiency of the constructions by several orders of magnitude. As an application, we describe the first concrete implementation of multiparty non-interactive Diffie-Hellman key exchange secure against existing attacks. Key exchange was originally the most straightforward application of multilinear maps; however, it was quickly broken for the three known families of multilinear maps (GGH13, CLT13 and GGH15). Here we describe the first implementation of key exchange that is resistant against known attacks, based on CLT13 multilinear maps. For $$N=4$$ users and a medium level of security, our implementation requires 18 GB of public parameters, and a few minutes for the derivation of a shared key.
Decisional Second-Preimage Resistance: When Does SPR Imply PRE? Abstract
Daniel J. Bernstein Andreas Hülsing
There is a well-known gap between second-preimage resistance and preimage resistance for length-preserving hash functions. This paper introduces a simple concept that fills this gap. One consequence of this concept is that tight reductions can remove interactivity for multi-target length-preserving preimage problems, such as the problems that appear in analyzing hash-based signature systems. Previous reduction techniques applied to only a negligible fraction of all length-preserving hash functions, presumably excluding all off-the-shelf hash functions.
Output Compression, MPC, and iO for Turing Machines Abstract
Saikrishna Badrinarayanan Rex Fernando Venkata Koppula Amit Sahai Brent Waters
In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding must be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set $$\{$$ LWE, DDH, N $$^{th}$$ Residuosity $$\}$$ . We then show interesting implications of the above result to basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO): 1. Compact MPC for Turing Machines in the Random Oracle Model. In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubácek and Wichs [HW15] showed, via an incompressibility argument, that even for the restricted setting of circuits, it is impossible to construct a malicious-secure two-party computation protocol in the plain model where the communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model to a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model. 2. Succinct iO for Turing Machines in the Shared Randomness Model. In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set $$\{$$ LWE, DDH, N $$^{th}$$ Residuosity $$\}$$ .
The Exchange Attack: How to Distinguish Six Rounds of AES with $$2^{88.2}$$ Chosen Plaintexts Abstract
Navid Ghaedi Bardeh Sondre Rønjom
In this paper we present exchange-equivalence attacks, a new cryptanalytic attack technique suitable for SPN-like block cipher designs. Our new technique results in the first secret-key chosen plaintext distinguisher for 6-round AES. The complexity of the distinguisher is about $$2^{88.2}$$ in terms of data, memory and computational complexity. The distinguishing attack for AES reduced to six rounds is a straightforward extension of an exchange attack for 5-round AES that requires $$2^{30}$$ in terms of chosen plaintexts and computation. This is also a new record for AES reduced to five rounds. The main result of this paper is that AES up to at least six rounds is biased when restricted to exchange-invariant sets of plaintexts.
Cryptanalysis of CLT13 Multilinear Maps with Independent Slots Abstract
Jean-Sébastien Coron Luca Notarnicola
Many constructions based on multilinear maps require independent slots in the plaintext, so that multiple computations can be performed in parallel over the slots. Such constructions are usually based on CLT13 multilinear maps, since CLT13 inherently provides a composite encoding space, with a plaintext ring $$\bigoplus _{i=1}^n \mathbb {Z}/g_i\mathbb {Z}$$ for small primes $$g_i$$ 's. However, a vulnerability was identified at Crypto 2014 by Gentry, Lewko and Waters, with a lattice-based attack in dimension 2, and the authors have suggested a simple countermeasure. In this paper, we identify an attack based on higher-dimensional lattice reduction that breaks the authors' countermeasure for a wide range of parameters. Combined with the Cheon et al. attack from Eurocrypt 2015, this leads to the recovery of all the secret parameters of CLT13, assuming that low-level encodings of almost zero plaintexts are available. We show how to apply our attack against various constructions based on composite-order CLT13. For the [FRS17] construction, our attack makes it possible to recover the secret CLT13 plaintext ring for a certain range of parameters; however, breaking the indistinguishability of the branching program remains an open problem.
Algebraic Cryptanalysis of STARK-Friendly Designs: Application to MARVELlous and MiMC Abstract
Martin R. Albrecht Carlos Cid Lorenzo Grassi Dmitry Khovratovich Reinhard Lüftenegger Christian Rechberger Markus Schofnegger
The block cipher Jarvis and the hash function Friday, both members of the MARVELlous family of cryptographic primitives, are among the first proposed solutions to the problem of designing symmetric-key algorithms suitable for transparent, post-quantum secure zero-knowledge proof systems such as ZK-STARKs. In this paper we describe an algebraic cryptanalysis of Jarvis and Friday and show that the proposed number of rounds is not sufficient to provide adequate security. In Jarvis, the round function is obtained by combining a finite field inversion, a full-degree affine permutation polynomial and a key addition. Yet we show that even though the high degree of the affine polynomial may prevent some algebraic attacks (as claimed by the designers), the particular algebraic properties of the round function make both Jarvis and Friday vulnerable to Gröbner basis attacks. We also consider MiMC, a block cipher similar in structure to Jarvis. However, this cipher proves to be resistant against our proposed attack strategy. Still, our successful cryptanalysis of Jarvis and Friday does illustrate that block cipher designs for "algebraic platforms" such as STARKs, FHE or MPC may be particularly vulnerable to algebraic attacks.
Collusion Resistant Watermarking Schemes for Cryptographic Functionalities Abstract
Rupeng Yang Man Ho Au Junzuo Lai Qiuliang Xu Zuoxia Yu
A cryptographic watermarking scheme embeds a message into a program while preserving its functionality. Recently, a number of watermarking schemes have been proposed, which are proven secure in the sense that given one marked program, any attempt to remove the embedded message will substantially change its functionality. In this paper, we formally initiate the study of collusion attacks for watermarking schemes, where the attacker's goal is to remove the embedded messages given multiple copies of the same program, each with a different embedded message. This is motivated by practical scenarios, where a program may be marked multiple times with different messages. The results of this work are twofold. First, we examine existing cryptographic watermarking schemes and observe that all of them are vulnerable to collusion attacks. Second, we construct collusion resistant watermarking schemes for various cryptographic functionalities (e.g., pseudorandom function evaluation, decryption, etc.). To achieve our second result, we present a new primitive called puncturable functional encryption scheme, which may be of independent interest.
Algebraic XOR-RKA-Secure Pseudorandom Functions from Post-Zeroizing Multilinear Maps Abstract
Michel Abdalla Fabrice Benhamouda Alain Passelègue
Due to the vast number of successful related-key attacks against existing block-ciphers, related-key security has become a common design goal for such primitives. In these attacks, the adversary is not only capable of seeing the output of a function on inputs of its choice, but also on related keys. At Crypto 2010, Bellare and Cash proposed the first construction of a pseudorandom function that could provably withstand such attacks based on standard assumptions. Their construction, as well as several others that appeared more recently, have in common the fact that they only consider linear or polynomial functions of the secret key over complex groups. In reality, however, most related-key attacks have a simpler form, such as the XOR of the key with a known value. To address this problem, we propose the first construction of RKA-secure pseudorandom function for XOR relations. Our construction relies on multilinear maps and, hence, can only be seen as a feasibility result. Nevertheless, we remark that it can be instantiated under two of the existing multilinear-map candidates since it does not reveal any encodings of zero. To achieve this goal, we rely on several techniques that were used in the context of program obfuscation, but we also introduce new ones to address challenges that are specific to the related-key-security setting.
MILP-aided Method of Searching Division Property Using Three Subsets and Applications Abstract
Senpeng Wang Bin Hu Jie Guan Kai Zhang Tairong Shi
Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015, and then conventional bit-based division property (CBDP) and bit-based division property using three subsets (BDPT) were proposed by Todo and Morii at FSE 2016. Initially, the two kinds of bit-based division properties could not be applied to ciphers with large block sizes because of the huge time and memory complexity. At ASIACRYPT 2016, Xiang et al. extended the Mixed Integer Linear Programming (MILP) method to search for integral distinguishers based on CBDP. BDPT can find more accurate integral distinguishers than CBDP, but it could not be modeled efficiently. This paper focuses on the feasibility of searching for integral distinguishers based on BDPT. We propose pruning techniques and fast propagation of BDPT for the first time. Based on these, an MILP-aided method for the propagation of BDPT is proposed. Then, we apply this method to some block ciphers. For SIMON64, PRESENT, and RECTANGLE, we find more balanced bits than the previous longest distinguishers. For LBlock, we find a better 16-round integral distinguisher with fewer active bits. For other block ciphers, our results are in accordance with the previous longest distinguishers. Cube attack is an important cryptanalytic technique against symmetric cryptosystems, especially for stream ciphers, and the most important step in a cube attack is superpoly recovery. Inspired by the CBDP-based cube attack proposed by Todo at CRYPTO 2017, we propose a method which uses BDPT to recover the superpoly in cube attacks. We apply this new method to round-reduced Trivium. To be specific, the time complexity of recovering the superpoly of 832-round Trivium at CRYPTO 2017 is reduced from $$2^{77}$$ to practical, and the time complexity of recovering the superpoly of 839-round Trivium at CRYPTO 2018 is reduced from $$2^{79}$$ to practical. Then, we propose a theoretical attack which can recover the superpoly of Trivium up to 841 rounds.
Valiant's Universal Circuits Revisited: An Overall Improvement and a Lower Bound Abstract
Shuoyao Zhao Yu Yu Jiang Zhang Hanlin Liu
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size n). At STOC 1976 Valiant presented a graph-theoretic approach to the construction of UCs, where a UC is represented by an edge universal graph (EUG) and is recursively constructed using a dedicated graph object (referred to as a supernode). As a main end result, Valiant constructed a 4-way supernode of size 19 and an EUG of size $$4.75n\log n$$ (omitting smaller terms), which has remained the most size-efficient to this day (after more than four decades). Motivated by the emerging applications of UCs in various privacy-preserving computation scenarios, we revisit Valiant's universal circuits and propose a 4-way supernode of size 18 and an EUG of size $$4.5n\log n$$. As confirmed by our implementations, we reduce the size of universal circuits (and the number of AND gates) by more than 5% in general, and thus improve upon the efficiency of UC-based cryptographic applications accordingly. Our approach to the design of optimal supernodes is computer-aided (rather than by hand as in previous works), which might be of independent interest. As a complement, we give lower bounds on the size of EUGs and UCs in Valiant's framework, which significantly improve upon the generic lower bound on UC size and therefore reduce the gap between theory and practice of universal circuits.
Numerical Method for Comparison on Homomorphically Encrypted Numbers Abstract
Jung Hee Cheon Dongwoo Kim Duhyeong Kim Hun Hee Lee Keewoo Lee
We propose a new method to compare numbers which are encrypted by Homomorphic Encryption (HE). Previously, comparison and min/max functions were evaluated using Boolean functions where input numbers are encrypted bit-wise. However, the bit-wise encryption methods require relatively expensive computations for basic arithmetic operations such as addition and multiplication. In this paper, we introduce iterative algorithms that approximately compute the min/max and comparison operations of several numbers which are encrypted word-wise. From the concrete error analyses, we show that our min/max and comparison algorithms have $$\varTheta (\alpha )$$ and $$\varTheta (\alpha \log \alpha )$$ computational complexity to obtain approximate values within an error rate $$2^{-\alpha }$$, while the previous minimax polynomial approximation method requires the exponential complexity $$\varTheta (2^{\alpha /2})$$ and $$\varTheta (\sqrt{\alpha }\cdot 2^{\alpha /2})$$, respectively. Our algorithms achieve (quasi-)optimality in terms of asymptotic computational complexity among polynomial approximations for min/max and comparison operations. The comparison algorithm is extended to several applications such as computing the top-k elements and counting numbers over the threshold in encrypted state. Our method enables word-wise HEs to enjoy comparable performance in practice with bit-wise HEs for comparison operations while showing much better performance on polynomial operations. Computing an approximate maximum value of any two $$\ell $$-bit integers encrypted by HEAAN, up to error $$2^{\ell -10}$$, takes only 1.14 ms in amortized running time, which is comparable to the result based on bit-wise HEs.
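The flavour of iterative, word-wise-friendly computation the abstract describes can be illustrated on plaintexts: max(a, b) = (a + b)/2 + |a - b|/2, with |x| = sqrt(x^2) computed by an iteration that uses only additions and multiplications by constants (the operations schemes such as HEAAN support natively). The sketch below assumes inputs in [0, 1]; it is not the paper's exact algorithm or error analysis, only an illustration of the idea.

```python
# Minimal plaintext sketch of an addition/multiplication-only max computation,
# of the kind word-wise HE schemes can evaluate. Inputs assumed in [0, 1];
# this is an illustration, not the paper's exact algorithm.
def sqrt_iter(x: float, iters: int = 20) -> float:
    """Iteration using only +, * (and constant scalings); tends to sqrt(x) on [0, 1]."""
    a, b = x, x - 1.0
    for _ in range(iters):
        a = a * (1.0 - b / 2.0)
        b = b * b * (b - 3.0) / 4.0
    return a

def approx_max(x: float, y: float, iters: int = 20) -> float:
    return (x + y) / 2.0 + sqrt_iter((x - y) * (x - y), iters) / 2.0

print(approx_max(0.37, 0.82))   # ~0.82
```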
The Broadcast Message Complexity of Secure Multiparty Computation Abstract
Sanjam Garg Aarushi Goel Abhishek Jain
We study the broadcast message complexity of secure multiparty computation (MPC), namely, the total number of messages that are required for securely computing any functionality in the broadcast model of communication. MPC protocols are traditionally designed in the simultaneous broadcast model, where each round consists of every party broadcasting a message to the other parties. We show that this method of communication is sub-optimal; specifically, by eliminating simultaneity, it is, in fact, possible to reduce the broadcast message complexity of MPC. More specifically, we establish tight lower and upper bounds on the broadcast message complexity of n-party MPC for every $$t<n$$ corruption threshold, both in the plain model as well as common setup models. For example, our results show that the optimal broadcast message complexity of semi-honest MPC can be much lower than 2n, but necessarily requires at least three rounds of communication. We also extend our results to the malicious setting in setup models.
Cryptanalysis of GSM Encryption in 2G/3G Networks Without Rainbow Tables Abstract
Bin Zhang
The GSM standard developed by ETSI for 2G networks adopts the A5/1 stream cipher to protect over-the-air privacy in cell phones and has become the de facto global standard in mobile communications, despite the emergence of subsequent 3G/4G standards. Many cryptanalytic results are available so far, and the most notable ones share the need for heavy pre-computation with large rainbow tables or a distributed cracking network. In this paper, we present a fast near collision attack on GSM encryption in 2G/3G networks, which is completely new and more threatening compared to the previous best results. We adapt the fast near collision attack proposed at Eurocrypt 2018 to the concrete irregular clocking manner of A5/1 to obtain a state recovery attack with a low complexity. It is shown that if the first 64 bits of one keystream frame are available, the secret key of A5/1 can be reliably found in $$2^{31.79}$$ cipher ticks, given around 1 MB of memory and after a pre-computation of $$2^{20.26}$$ cipher ticks. Our implementation confirms the validity of the suggested attack. Since A5/3 and GPRS share the same key with A5/1, this can eventually be converted into attacks against any GSM network.
Multi-Key Homomorphic Encryption from TFHE Abstract
Hao Chen Ilaria Chillotti Yongsoo Song
In this paper, we propose a Multi-Key Homomorphic Encryption (MKHE) scheme by generalizing the low-latency homomorphic encryption by Chillotti et al. (ASIACRYPT 2016). Our scheme can evaluate a binary gate on ciphertexts encrypted under different keys followed by a bootstrapping. The biggest challenge to meeting the goal is to design a multiplication between a bootstrapping key of a single party and a multi-key RLWE ciphertext. We propose two different algorithms for this hybrid product. Our first method improves the ciphertext extension by Mukherjee and Wichs (EUROCRYPT 2016) to provide better performance. The other one is a whole new approach which has advantages in storage, complexity, and noise growth. Compared to previous work, our construction is more efficient in terms of both asymptotic and concrete complexity. The length of ciphertexts and the computational costs of a binary gate grow linearly and quadratically on the number of parties, respectively. We provide experimental results demonstrating the running time of a homomorphic NAND gate with bootstrapping. To the best of our knowledge, this is the first attempt in the literature to implement an MKHE scheme.
Beyond Honest Majority: The Round Complexity of Fair and Robust Multi-party Computation Abstract
Arpita Patra Divya Ravi
Two of the most sought-after properties of Multi-party Computation (MPC) protocols are fairness and guaranteed output delivery (GOD), the latter also referred to as robustness. Achieving both, however, brings in the necessary requirement of malicious-minority. In a generalised adversarial setting where the adversary is allowed to corrupt both actively and passively, the necessary bound for an n-party fair or robust protocol turns out to be $$t_a + t_p < n$$, where $$t_a,t_p$$ denote the threshold for active and passive corruption with the latter subsuming the former. Subsuming the malicious-minority as a boundary special case, this setting, denoted as dynamic corruption, opens up a range of possible corruption scenarios for the adversary. While dynamic corruption includes the entire range of thresholds for $$(t_a,t_p)$$ starting from $$(\lceil \frac{n}{2} \rceil - 1, \lfloor n/2 \rfloor )$$ to $$(0,n-1)$$, the boundary corruption restricts the adversary only to the boundary cases of $$(\lceil \frac{n}{2} \rceil - 1, \lfloor n/2 \rfloor )$$ and $$(0,n-1)$$. Notably, both corruption settings empower an adversary to control a majority of the parties, yet ensuring the count on active corruption never goes beyond $$\lceil \frac{n}{2} \rceil - 1$$. We target the round complexity of fair and robust MPC tolerating dynamic and boundary adversaries. As it turns out, $$\lceil n/2 \rceil + 1$$ rounds are necessary and sufficient for fair as well as robust MPC tolerating dynamic corruption. The non-constant barrier raised by dynamic corruption can be sailed through for a boundary adversary. Round complexities of 3 and 4 are necessary and sufficient for fair and GOD protocols respectively, with the latter having an exception of allowing 3-round protocols in the presence of a single active corruption. While all our lower bounds assume pair-wise private and broadcast channels and are resilient to the presence of both public (CRS) and private (PKI) setup, our upper bounds are broadcast-only and assume only public setup. The traditional and popular setting of malicious-minority, being restricted compared to both dynamic and boundary setting, requires 3 and 2 rounds in the presence of public and private setup respectively for both fair as well as GOD protocols.
Tightly Secure Inner Product Functional Encryption: Multi-input and Function-Hiding Constructions Abstract
Junichi Tomida
Tightly secure cryptographic schemes have been extensively studied in the fields of chosen-ciphertext secure public-key encryption, identity-based encryption, signatures and more. We extend tightly secure cryptography to inner product functional encryption (IPFE) and present the first tightly secure schemes related to IPFE. We first construct a new IPFE scheme that is tightly secure in the multi-user and multi-challenge setting. In other words, the security of our scheme does not degrade even if an adversary obtains many ciphertexts generated by many users. Our scheme is constructible on a pairing-free group and secure under the matrix decisional Diffie-Hellman (MDDH) assumption, which is the generalization of the decisional Diffie-Hellman (DDH) assumption. Applying the known conversions by Lin (CRYPTO 2017) and Abdalla et al. (CRYPTO 2018) to our scheme, we can obtain the first tightly secure function-hiding IPFE scheme and multi-input IPFE (MIPFE) scheme respectively. Our second main contribution is the proposal of a new generic conversion from function-hiding IPFE to function-hiding MIPFE, which was left as an open problem by Abdalla et al. (CRYPTO 2018). We obtain the first tightly secure function-hiding MIPFE scheme by applying our conversion to the tightly secure function-hiding IPFE scheme described above. Finally, the security reductions of all our schemes are fully tight, which means that the security of our schemes is reduced to the MDDH assumption with a constant security loss.
Homomorphic Encryption for Finite Automata Abstract
Nicholas Genise Craig Gentry Shai Halevi Baiyu Li Daniele Micciancio
We describe a somewhat homomorphic GSW-like encryption scheme, natively encrypting matrices rather than just single elements. This scheme offers much better performance than existing homomorphic encryption schemes for evaluating encrypted (nondeterministic) finite automata (NFAs). Unlike GSW, we do not know how to reduce the security of this scheme to LWE; instead, we reduce it to a stronger assumption that can be thought of as an inhomogeneous variant of the NTRU assumption. This assumption (that we term iNTRU) may be useful and interesting in its own right, and we examine a few of its properties. We also examine methods to encode regular expressions as NFAs, and in particular explore a new optimization problem, motivated by our application to encrypted NFA evaluation. In this problem, we seek to minimize the number of states in an NFA for a given expression, subject to a constraint on the ambiguity of the NFA.
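The reason a matrix-native scheme is a good fit here is that running an NFA on a word is just an iterated product of per-symbol Boolean transition matrices applied to the initial state vector. The plaintext sketch below makes that connection concrete; the example automaton (over {0,1}, accepting words that end in "1") is made up for illustration, and in the encrypted setting the matrices and vectors would be ciphertexts.

```python
# Minimal sketch: NFA evaluation as Boolean matrix-vector products, which is
# why a scheme that natively encrypts matrices suits encrypted NFA evaluation.
# The example NFA is illustrative only.
import numpy as np

# States: 0 (start), 1 (accepting, "last symbol was 1").
# M[c][i][j] = 1 iff reading symbol c moves state j to state i.
M = {
    "0": np.array([[1, 1], [0, 0]]),
    "1": np.array([[0, 0], [1, 1]]),
}
start = np.array([1, 0])      # characteristic vector of the initial state
accepting = np.array([0, 1])  # characteristic vector of accepting states

def nfa_accepts(word: str) -> bool:
    state = start
    for c in word:
        state = (M[c] @ state > 0).astype(int)   # one Boolean matrix-vector step
    return bool(accepting @ state > 0)

print(nfa_accepts("0101"))   # True  (ends in 1)
print(nfa_accepts("0110"))   # False (ends in 0)
```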
Card-Based Cryptography Meets Formal Verification Abstract
Alexander Koch Michael Schrempp Michael Kirsten
Card-based cryptography provides simple and practicable protocols for performing secure multi-party computation (MPC) with just a deck of cards. For the sake of simplicity, this is often done using cards with only two symbols, e.g., and . Within this paper, we target the setting where all cards carry distinct symbols, catering for use-cases with commonly available standard decks and a weaker indistinguishability assumption. As of yet, the literature provides for only three protocols and no proofs for non-trivial lower bounds on the number of cards. As such complex proofs (handling very large combinatorial state spaces) tend to be involved and error-prone, we propose using formal verification for finding protocols and proving lower bounds. In this paper, we employ the technique of software bounded model checking (SBMC), which reduces the problem to a bounded state space, which is automatically searched exhaustively using a SAT solver as a backend. Our contribution is twofold: (a) We identify two protocols for converting between different bit encodings with overlapping bases, and then show them to be card-minimal. This completes the picture of tight lower bounds on the number of cards with respect to runtime behavior and shuffle properties of conversion protocols. For computing , we show that there is no protocol with finite runtime using four cards with distinguishable symbols and fixed output encoding, and give a four-card protocol with an expected finite runtime using only random cuts. (b) We provide a general translation of proofs for lower bounds to a bounded model checking framework for automatically finding card- and length-minimal protocols and to give additional confidence in lower bounds. We apply this to validate our method and, as an example, confirm our new protocol to have a shortest run for protocols using this number of cards.
Public-Key Function-Private Hidden Vector Encryption (and More) Abstract
James Bartusek Brent Carmer Abhishek Jain Zhengzhong Jin Tancrède Lepoint Fermi Ma Tal Malkin Alex J. Malozemoff Mariana Raykova
We construct public-key function-private predicate encryption for the "small superset functionality," recently introduced by Beullens and Wee (PKC 2019). This functionality captures several important classes of predicates: Point functions. For point function predicates, our construction is equivalent to public-key function-private anonymous identity-based encryption. Conjunctions. If the predicate computes a conjunction, our construction is a public-key function-private hidden vector encryption scheme. This addresses an open problem posed by Boneh, Raghunathan, and Segev (ASIACRYPT 2013). d-CNFs and read-once conjunctions of d-disjunctions for constant-size d. Our construction extends the group-based obfuscation schemes of Bishop et al. (CRYPTO 2018), Beullens and Wee (PKC 2019), and Bartusek et al. (EUROCRYPT 2019) to the setting of public-key function-private predicate encryption. We achieve an average-case notion of function privacy, which guarantees that a decryption key $$\mathsf {sk} _f$$ reveals nothing about f as long as f is drawn from a distribution with sufficient entropy. We formalize this security notion as a generalization of the (enhanced) real-or-random function privacy definition of Boneh, Raghunathan, and Segev (CRYPTO 2013). Our construction relies on bilinear groups, and we prove security in the generic bilinear group model.
Efficient Explicit Constructions of Multipartite Secret Sharing Schemes Abstract
Qi Chen Chunming Tang Zhiqiang Lin
Multipartite secret sharing schemes are those having a multipartite access structure, in which the set of participants is divided into several parts and all participants in the same part play an equivalent role. Secret sharing schemes for multipartite access structures have received considerable attention due to the fact that multipartite secret sharing can be seen as a natural and useful generalization of threshold secret sharing. This work deals with efficient and explicit constructions of ideal multipartite secret sharing schemes, while most of the known constructions are either inefficient or randomized. Most ideal multipartite secret sharing schemes in the literature can be classified as either hierarchical or compartmented. The main results are the constructions for ideal hierarchical access structures, a family that contains every ideal hierarchical access structure as a particular case, such as the disjunctive hierarchical threshold access structure and the conjunctive hierarchical threshold access structure, and the constructions for compartmented access structures with upper bounds and compartmented access structures with lower bounds, two families of compartmented access structures. On the basis of the relationship between multipartite secret sharing schemes, polymatroids, and matroids, the problem of how to construct a scheme realizing a multipartite access structure can be transformed to the problem of how to find a representation of a matroid from a presentation of its associated polymatroid. In this paper, we give efficient algorithms to find representations of the matroids associated to the three families of multipartite access structures. More precisely, based on known results about integer polymatroids, for each of the three families of access structures, we give an efficient method to find a representation of the integer polymatroid over some finite field, and then over some finite extension of that field, we give an efficient method to find a presentation of the matroid associated to the integer polymatroid. Finally, we construct ideal linear schemes realizing the three families of multipartite access structures by efficient methods.
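For readers less familiar with the base case the abstract generalizes, the sketch below shows plain Shamir (t, n) threshold secret sharing over a prime field; the prime modulus and parameters are illustrative choices, and multipartite schemes layer richer access structures on top of this kind of polynomial sharing.

```python
# Minimal sketch of Shamir's (t, n) threshold secret sharing, the basic
# primitive that multipartite secret sharing generalizes. Parameters are
# illustrative only.
import secrets

P = 2**61 - 1   # a prime large enough for the example

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # True with any 3 of the 5 shares
```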
Multi-Client Functional Encryption for Linear Functions in the Standard Model from LWE Abstract
Benoît Libert Radu Ţiţiu
Multi-client functional encryption (MCFE) allows $$\ell $$ clients to encrypt ciphertexts $$(\mathbf {C}_{t,1},\mathbf {C}_{t,2},\ldots ,\mathbf {C}_{t,\ell })$$ under some label. Each client can encrypt his own data $$X_i$$ for a label t using a private encryption key $$\mathsf {ek}_i$$ issued by a trusted authority in such a way that, as long as all $$\mathbf {C}_{t,i}$$ share the same label t, an evaluator endowed with a functional key $$\mathsf {dk}_f$$ can evaluate $$f(X_1,X_2,\ldots ,X_\ell )$$ without learning anything else on the underlying plaintexts $$X_i$$. Functional decryption keys can be derived by the central authority using the master secret key. Under the Decision Diffie-Hellman assumption, Chotard et al. (Asiacrypt 2018) recently described an adaptively secure MCFE scheme for the evaluation of linear functions over the integers. They also gave a decentralized variant (DMCFE) of their scheme which does not rely on a centralized authority, but rather allows encryptors to issue functional secret keys in a distributed manner. While efficient, their constructions both rely on random oracles in their security analysis. In this paper, we build a standard-model MCFE scheme for the same functionality and prove it fully secure under adaptive corruptions. Our proof relies on the Learning-With-Errors ($$\mathsf {LWE}$$) assumption and does not require the random oracle model. We also provide a decentralized variant of our scheme, which we prove secure in the static corruption setting (but for adaptively chosen messages) under the $$\mathsf {LWE}$$ assumption.
Quantum Algorithms for the Approximate k-List Problem and Their Application to Lattice Sieving Abstract
Elena Kirshanova Erik Mårtensson Eamonn W. Postlethwaite Subhayan Roy Moulik
The Shortest Vector Problem (SVP) is one of the mathematical foundations of lattice based cryptography. Lattice sieve algorithms are amongst the foremost methods of solving SVP. The asymptotically fastest known classical and quantum sieves solve SVP in a d-dimensional lattice in $$2^{\mathsf {c}d + o(d)}$$ time steps with $$2^{\mathsf {c}' d + o(d)}$$ memory for constants $$c, c'$$ . In this work, we give various quantum sieving algorithms that trade computational steps for memory. We first give a quantum analogue of the classical k-Sieve algorithm [Herold–Kirshanova–Laarhoven, PKC'18] in the Quantum Random Access Memory (QRAM) model, achieving an algorithm that heuristically solves SVP in $$2^{0.2989d + o(d)}$$ time steps using $$2^{0.1395d + o(d)}$$ memory. This should be compared to the state-of-the-art algorithm [Laarhoven, Ph.D Thesis, 2015] which, in the same model, solves SVP in $$2^{0.2653d + o(d)}$$ time steps and memory. In the QRAM model these algorithms can be implemented using $$\mathrm {poly}(d)$$ width quantum circuits. Secondly, we frame the k-Sieve as the problem of k-clique listing in a graph and apply quantum k-clique finding techniques to the k-Sieve. Finally, we explore the large quantum memory regime by adapting parallel quantum search [Beals et al., Proc. Roy. Soc. A'13] to the 2-Sieve, and give an analysis in the quantum circuit model. We show how to solve SVP in $$2^{0.1037d + o(d)}$$ time steps using $$2^{0.2075d + o(d)}$$ quantum memory.
Perfectly Secure Oblivious RAM with Sublinear Bandwidth Overhead Abstract
Michael Raskin Mark Simkin
Oblivious RAM (ORAM) has established itself as a fundamental cryptographic building block. Understanding which bandwidth overheads are possible under which assumptions has been the topic of a vast amount of previous works. In this work, we focus on perfectly secure ORAM and we present the first construction with sublinear bandwidth overhead in the worst-case. All prior constructions with perfect security require linear communication overhead in the worst-case and only achieve sublinear bandwidth overheads in the amortized sense. We present a fundamentally new approach for constructing ORAM and our results significantly advance our understanding of what is possible with perfect security. Our main construction, Lookahead ORAM, is perfectly secure, has a worst-case bandwidth overhead of , and a total storage cost of on the server-side, where N is the maximum number of stored data elements. In terms of concrete server-side storage costs, our construction has the smallest storage overhead among all perfectly and statistically secure ORAMs and is only a factor 3 worse than the most storage efficient computationally secure ORAM. Assuming a client-side position map, our construction is the first, among all ORAMs with worst-case sublinear overhead, that allows for a online bandwidth overhead without server-side computation. Along the way, we construct a conceptually extremely simple statistically secure ORAM with a worst-case bandwidth overhead of , which may be of independent interest.
Middle-Product Learning with Rounding Problem and Its Applications Abstract
Shi Bai Katharina Boudgoust Dipayan Das Adeline Roux-Langlois Weiqiang Wen Zhenfei Zhang
At CRYPTO 2017, Roşca et al. introduce a new variant of the Learning With Errors (LWE) problem, called the Middle-Product LWE ( $${\mathrm {MP}\text {-}\mathrm{LWE}}$$ ). The hardness of this new assumption is based on the hardness of the Polynomial LWE (P-LWE) problem parameterized by a set of polynomials, making it more secure against the possible weakness of a single defining polynomial. As a cryptographic application, they also provide an encryption scheme based on the $${\mathrm {MP}\text {-}\mathrm{LWE}}$$ problem. In this paper, we propose a deterministic variant of their encryption scheme, which does not need Gaussian sampling and is thus simpler than the original one. Still, it has the same quasi-optimal asymptotic key and ciphertext sizes. The main ingredient for this purpose is the Learning With Rounding (LWR) problem, which has already been used to derandomize LWE-type encryption. The hardness of our scheme is based on a new assumption called Middle-Product Computational Learning With Rounding, an adaptation of the computational LWR problem over rings, introduced by Chen et al. at ASIACRYPT 2018. We prove that this new assumption is as hard as the decisional version of MP-LWE and thus benefits from worst-case to average-case hardness guarantees.
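The derandomization idea behind LWR (in its plain, non-middle-product form) is to replace the random error of an LWE sample by deterministic rounding of the inner product from modulus q down to modulus p. The sketch below generates such a sample; the toy parameters are for illustration only and are not the paper's middle-product construction.

```python
# Minimal sketch of a plain Learning With Rounding (LWR) sample: deterministic
# rounding replaces the LWE error term. Toy parameters, illustration only.
import secrets

q, p, n = 2**16, 2**8, 64

def lwr_sample(s: list[int]) -> tuple[list[int], int]:
    a = [secrets.randbelow(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = (inner * p + q // 2) // q % p     # round (p/q) * <a, s> to the nearest integer mod p
    return a, b

s = [secrets.randbelow(q) for _ in range(n)]
a, b = lwr_sample(s)
print(b in range(p))   # True: the "noise" is purely deterministic rounding
```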
From Single-Input to Multi-client Inner-Product Functional Encryption Abstract
Michel Abdalla Fabrice Benhamouda Romain Gay
We present a new generic construction of multi-client functional encryption (MCFE) for inner products from single-input functional inner-product encryption and standard pseudorandom functions. In spite of its simplicity, the new construction supports labels, achieves security in the standard model under adaptive corruptions, and can be instantiated from the plain DDH, LWE, and Paillier assumptions. Prior to our work, the only known constructions required discrete-log-based assumptions and the random-oracle model. Since our new scheme is not compatible with the compiler from Abdalla et al. (PKC 2019) that decentralizes the generation of the functional decryption keys, we also show how to modify the latter transformation to obtain a decentralized version of our scheme with similar features.
Quantum Attacks Without Superposition Queries: The Offline Simon's Algorithm Abstract
Xavier Bonnetain Akinori Hosoyamada María Naya-Plasencia Yu Sasaki André Schrottenloher
In symmetric cryptanalysis, the model of superposition queries has led to surprising results, with many constructions being broken in polynomial time thanks to Simon's period-finding algorithm. But the practical implications of these attacks remain blurry. In contrast, the results obtained so far for a quantum adversary making classical queries only are less impressive. In this paper, we introduce a new quantum algorithm which uses Simon's subroutines in a novel way. We manage to leverage the algebraic structure of cryptosystems in the context of a quantum attacker limited to classical queries and offline quantum computations. We obtain improved quantum-time/classical-data tradeoffs with respect to the current literature, while using only as many hardware resources (quantum and classical) as a standard exhaustive search with Grover's algorithm. In particular, we are able to break the Even-Mansour construction in quantum time $$\tilde{O}(2^{n/3})$$, with $$O(2^{n/3})$$ classical queries and $$O(n^2)$$ qubits only. In addition, we improve some previous superposition attacks by reducing the data complexity from exponential to polynomial, with the same time complexity. Our approach can be seen in two complementary ways: reusing superposition queries during the iteration of a search using Grover's algorithm, or alternatively, removing the memory requirement in some quantum attacks based on a collision search, thanks to their algebraic structure. We provide a list of cryptographic applications, including the Even-Mansour construction, the FX construction, some Sponge authenticated modes of encryption, and many more.
How to Correct Errors in Multi-server PIR Abstract
Kaoru Kurosawa
Suppose that there exist a user and $$\ell $$ servers $$S_1,\ldots ,S_{\ell }$$. Each server $$S_j$$ holds a copy of a database $$\mathbf {x}=(x_1, \ldots , x_n) \in \{0,1\}^n$$, and the user holds a secret index $$i_0 \in \{1, \ldots , n\}$$. A b-error-correcting $$\ell $$-server PIR (Private Information Retrieval) scheme allows a user to retrieve $$x_{i_0}$$ correctly even if and b or fewer servers return false answers, while each server learns no information on $$i_0$$ in the information-theoretic sense. Although there exists such a scheme with the total communication cost $$ O(n^{1/(2k-1)} \times k\ell \log {\ell } ) $$ where $$k=\ell -2b$$, the decoding algorithm is very inefficient. In this paper, we show an efficient decoding algorithm for this b-error-correcting $$\ell $$-server PIR scheme. It runs in time $$O(\ell ^3)$$.
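To illustrate the multi-server PIR functionality itself (without error correction, and with honest servers), the classic XOR-based 2-server scheme is sketched below: each query is individually uniform, so neither server learns the index, yet XORing the two answers recovers the selected bit. This is background illustration only, not the b-error-correcting scheme or its decoder from the paper.

```python
# Minimal sketch of the classic XOR-based 2-server PIR (no error correction).
import secrets

def query(n: int, i0: int) -> tuple[list[int], list[int]]:
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[i0] ^= 1                 # the two queries differ only at position i0
    return q1, q2

def answer(db: list[int], q: list[int]) -> int:
    return sum(d & b for d, b in zip(db, q)) % 2   # inner product mod 2

db = [1, 0, 1, 1, 0, 0, 1, 0]
q1, q2 = query(len(db), i0=3)
print(answer(db, q1) ^ answer(db, q2) == db[3])    # True
```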
UC-Secure Multiparty Computation from One-Way Functions Using Stateless Tokens Abstract
Saikrishna Badrinarayanan Abhishek Jain Rafail Ostrovsky Ivan Visconti
We revisit the problem of universally composable (UC) secure multiparty computation in the stateless hardware token model. We construct a three round multi-party computation protocol for general functions based on one-way functions where each party sends two tokens to every other party. Relaxing to the two-party case, we also construct a two round protocol based on one-way functions where each party sends a single token to the other party, and at the end of the protocol, both parties learn the output. One of the key components in the above constructions is a new two-round oblivious transfer protocol based on one-way functions using only one token, which can be reused an unbounded polynomial number of times. All prior constructions required either stronger complexity assumptions, or larger number of rounds, or a larger number of tokens.
Quantum Random Oracle Model with Auxiliary Input Abstract
Minki Hhan Keita Xagawa Takashi Yamakawa
The random oracle model (ROM) is an idealized model where hash functions are modeled as random functions that are only accessible as oracles. Although the ROM has been used for proving many cryptographic schemes, it has (at least) two problems. First, the ROM does not capture quantum adversaries. Second, it does not capture non-uniform adversaries that perform preprocessing. To deal with these problems, Boneh et al. (Asiacrypt'11) proposed using the quantum ROM (QROM) to argue post-quantum security, and Unruh (CRYPTO'07) proposed the ROM with auxiliary input (ROM-AI) to argue security against preprocessing attacks. However, to the best of our knowledge, no work has dealt with the above two problems simultaneously. In this paper, we consider a model that we call the QROM with (classical) auxiliary input (QROM-AI) that deals with the above two problems simultaneously and study the security of cryptographic primitives in the model. That is, we give security bounds for one-way functions, pseudorandom generators, (post-quantum) pseudorandom functions, and (post-quantum) message authentication codes in the QROM-AI. We also study security bounds in the presence of quantum auxiliary inputs. Specifically, we show a security bound for the one-wayness of random permutations (instead of random functions) in the presence of quantum auxiliary inputs. This resolves an open problem posed by Nayebi et al. (QIC'15). In the context of complexity theory, this implies $$ \mathsf {NP}\cap \mathsf {coNP} \not \subseteq \mathsf {BQP/qpoly}$$ relative to a random permutation oracle, which also answers an open problem posed by Aaronson (ToC'05).
Rate-1 Trapdoor Functions from the Diffie-Hellman Problem Abstract
Nico Döttling Sanjam Garg Mohammad Hajiabadi Kevin Liu Giulio Malavolta
Trapdoor functions (TDFs) are one of the fundamental building blocks in cryptography. Studying the underlying assumptions and the efficiency of the resulting instantiations is therefore of both theoretical and practical interest. In this work we improve the input-to-image rate of TDFs based on the Diffie-Hellman problem. Specifically, we present: (a) A rate-1 TDF from the computational Diffie-Hellman (CDH) assumption, improving the result of Garg, Gay, and Hajiabadi [EUROCRYPT 2019], which achieved linear-size outputs but with large constants. Our techniques combine non-binary alphabets and high-rate error-correcting codes over large fields. (b) A rate-1 deterministic public-key encryption satisfying block-source security from the decisional Diffie-Hellman (DDH) assumption. While this question was recently settled by Döttling et al. [CRYPTO 2019], our scheme is conceptually simpler and concretely more efficient. We demonstrate this fact by implementing our construction.
An LLL Algorithm for Module Lattices Abstract
Changmin Lee Alice Pellet-Mary Damien Stehlé Alexandre Wallet
The LLL algorithm takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice but consisting of rather short vectors. We provide a generalization to R-modules contained in $$K^n$$ for arbitrary number fields K and dimension n, with R denoting the ring of integers of K. Concretely, we introduce an algorithm that efficiently finds short vectors in rank-n modules when given access to an oracle that finds short vectors in rank-2 modules, and an algorithm that efficiently finds short vectors in rank-2 modules given access to a Closest Vector Problem oracle for a lattice that depends only on K. The second algorithm relies on quantum computations and its analysis is heuristic.
Efficient UC Commitment Extension with Homomorphism for Free (and Applications) Abstract
Ignacio Cascudo Ivan Damgård Bernardo David Nico Döttling Rafael Dowsley Irene Giacomelli
Homomorphic universally composable (UC) commitments allow for the sender to reveal the result of additions and multiplications of values contained in commitments without revealing the values themselves while assuring the receiver of the correctness of such computation on committed values. In this work, we construct essentially optimal additively homomorphic UC commitments from any (not necessarily UC or homomorphic) extractable commitment, while the previous best constructions require oblivious transfer. We obtain amortized linear computational complexity in the length of the input messages and rate 1. Next, we show how to extend our scheme to also obtain multiplicative homomorphism at the cost of asymptotic optimality but retaining low concrete complexity for practical parameters. Moreover, our techniques yield public coin protocols, which are compatible with the Fiat-Shamir heuristic. These results come at the cost of realizing a restricted version of the homomorphic commitment functionality where the sender is allowed to perform any number of commitments and operations on committed messages but is only allowed to perform a single batch opening of a number of commitments. Although this functionality seems restrictive, we show that it can be used as a building block for more efficient instantiations of recent protocols for secure multiparty computation and zero knowledge non-interactive arguments of knowledge.
The Local Forking Lemma and Its Application to Deterministic Encryption Abstract
Mihir Bellare Wei Dai Lucy Li
We bypass impossibility results for the deterministic encryption of public-key-dependent messages, showing that, in this setting, the classical Encrypt-with-Hash scheme provides message-recovery security, across a broad range of message distributions. The proof relies on a new variant of the forking lemma in which the random oracle is reprogrammed on just a single fork point rather than on all points past the fork.
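For orientation, the Encrypt-with-Hash idea referenced above can be sketched in a few lines: the randomness of an ordinary randomized public-key scheme is derived by hashing the public key together with the message, which makes encryption deterministic. The sketch below is illustrative only; the pke_encrypt interface and the choice of SHA-256 as the random oracle are assumptions, not details from the paper.

import hashlib

def ewh_encrypt(pke_encrypt, pk: bytes, message: bytes) -> bytes:
    # Encrypt-with-Hash: derive the encryption coins deterministically by
    # hashing the public key together with the message (random-oracle style).
    coins = hashlib.sha256(b"EwH" + pk + message).digest()
    # pke_encrypt(pk, message, coins) is an assumed interface for a randomized
    # public-key scheme that accepts explicit random coins.
    return pke_encrypt(pk, message, coins)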
QFactory: Classically-Instructed Remote Secret Qubits Preparation Abstract
Alexandru Cojocaru Léo Colisson Elham Kashefi Petros Wallden
The functionality of classically-instructed remotely prepared random secret qubits was introduced in (Cojocaru et al. 2018) as a way to enable classical parties to participate in secure quantum computation and communications protocols. The idea is that a classical party (client) instructs a quantum party (server) to generate a qubit on the server's side that is random, unknown to the server but known to the client. Such a task is only possible under computational assumptions. In this contribution we define a simpler (basic) primitive consisting of only BB84 states, and give a protocol that realizes this primitive and that is secure against the strongest possible adversary (an arbitrarily deviating malicious server). The specific functions used were constructed based on known trapdoor one-way functions, so the security of our basic primitive reduces to the hardness of the Learning With Errors problem. We then give a number of extensions, building on this basic module: extension to a larger set of states (that includes non-Clifford states); proper consideration of the abort case; and verifiability at the module level. The latter is based on "blind self-testing", a notion we introduce, prove in a limited setting, and conjecture to hold in the most general case.
Structure-Preserving Signatures on Equivalence Classes from Standard Assumptions Abstract
Mojtaba Khalili Daniel Slamanig Mohammad Dakhilalian
Structure-preserving signatures on equivalence classes (SPS-EQ) introduced at ASIACRYPT 2014 are a variant of SPS where a message is considered as a projective equivalence class, and a new representative of the same class can be obtained by multiplying a vector by a scalar. Given a message and corresponding signature, anyone can produce an updated and randomized signature on an arbitrary representative from the same equivalence class. SPS-EQ have proven to be a very versatile building block for many cryptographic applications. In this paper, we present the first EUF-CMA secure SPS-EQ scheme under standard assumptions. So far only constructions in the generic group model are known. One recent candidate under standard assumptions is the weakly secure equivalence-class signature scheme by Fuchsbauer and Gay (PKC'18), a variant of SPS-EQ satisfying only a weaker unforgeability and adaption notion. Fuchsbauer and Gay show that this weaker unforgeability notion is sufficient for many known applications of SPS-EQ. Unfortunately, the weaker adaption notion is only suitable for a semi-honest (passive) model and, as we show in this paper, makes their scheme unusable in the current models for almost all of their advertised applications of SPS-EQ from the literature. We then present a new EUF-CMA secure SPS-EQ scheme with a tight security reduction under the SXDH assumption providing the notion of perfect adaption (under malicious keys). To achieve the strongest notion of perfect adaption under malicious keys, we require a common reference string (CRS), which seems inherent for constructions under standard assumptions. However, for most known applications of SPS-EQ we do not require a trusted CRS (as the CRS can be generated by the signer during key generation). Technically, our construction is inspired by a recent work of Gay et al. (EUROCRYPT'18), who construct a tightly secure message authentication code and translate it to an SPS scheme adapting techniques due to Bellare and Goldwasser (CRYPTO'89).
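As a toy illustration of the equivalence-class view of messages (real SPS-EQ schemes work with vectors of group elements in bilinear groups, which are omitted here), two non-zero vectors over a prime field represent the same projective class exactly when one is a scalar multiple of the other; the prime below is an arbitrary illustrative choice.

P = 2**61 - 1  # illustrative prime modulus; real schemes use bilinear group orders

def randomize_representative(m, s):
    # A new representative of the same class: multiply every coordinate by the scalar s.
    return [(s * x) % P for x in m]

def same_class(m1, m2):
    # m1 and m2 (equal-length, not all-zero) lie in the same projective class
    # iff m2 = s * m1 for a single non-zero scalar s.
    s = None
    for a, b in zip(m1, m2):
        if a == 0 and b == 0:
            continue
        if a == 0 or b == 0:
            return False
        ratio = (b * pow(a, -1, P)) % P
        if s is None:
            s = ratio
        elif ratio != s:
            return False
    return s is not None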
Scalable Private Set Union from Symmetric-Key Techniques Abstract
Vladimir Kolesnikov Mike Rosulek Ni Trieu Xiao Wang
We present a new efficient protocol for computing private set union (PSU). Here two semi-honest parties, each holding a dataset of known size (or of a known upper bound), wish to compute the union of their sets without revealing anything else to either party. Our protocol is in the OT hybrid model. Beyond OT extension, it is fully based on symmetric-key primitives. We motivate the PSU primitive by its direct application to network security and other areas. At the technical core of our PSU construction is the reverse private membership test (RPMT) protocol. In RPMT, the sender with input $$x^*$$ interacts with a receiver holding a set X. As a result, the receiver learns (only) the bit indicating whether $$x^* \in X$$, while the sender learns nothing about the set X. (Previous similar protocols provide output to the opposite party, hence the term "reverse" private membership.) We believe our RPMT abstraction and constructions may be a building block in other applications as well. We demonstrate the practicality of our proposed protocol with an implementation. For input sets of size $$2^{20}$$ and using a single thread, our protocol requires 238 s to securely compute the set union, regardless of the bit length of the items. Our protocol is amenable to parallelization. Increasing the number of threads from 1 to 32, our protocol requires only 13.1 s, a factor of $$18.25{\times }$$ improvement. To the best of our knowledge, ours is the first protocol that reports on large-size experiments, makes code available, and avoids extensive use of computationally expensive public-key operations. (No PSU code is publicly available for prior work, and the only prior symmetric-key-based work reports on small experiments and focuses on the simpler 3-party, 1-corruption setting.) Our work improves the reported PSU state of the art by a factor of up to $$7,600{\times }$$ for large instances.
Fine-Grained Cryptography Revisited Abstract
Shohei Egashira Yuyu Wang Keisuke Tanaka
Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with fewer resources than the adversaries. In this paper, we revisit the results by Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three fundamental fine-grained cryptographic primitives: one-way permutations, hash proof systems (which in turn imply a public-key encryption scheme against chosen-ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in $$\mathsf {NC^1}$$ and secure against (non-uniform) $$\mathsf {NC^1}$$ circuits under the widely believed worst-case assumption $$\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}$$.
Quisquis: A New Design for Anonymous Cryptocurrencies Abstract
Prastudy Fauzi Sarah Meiklejohn Rebekah Mercer Claudio Orlandi
Despite their usage of pseudonyms rather than persistent identifiers, most existing cryptocurrencies do not provide users with any meaningful levels of privacy. This has prompted the creation of privacy-enhanced cryptocurrencies such as Monero and Zcash, which are specifically designed to counteract the tracking analysis possible in currencies like Bitcoin. These cryptocurrencies, however, also suffer from some drawbacks: in both Monero and Zcash, the set of potential unspent coins is always growing, which means users cannot store a concise representation of the blockchain. Additionally, Zcash requires a common reference string, and the fact that addresses are reused multiple times in Monero has led to attacks on its anonymity. In this paper we propose a new design for anonymous cryptocurrencies, Quisquis, that achieves provably secure notions of anonymity. Quisquis stores a relatively small amount of data, does not require trusted setup, and in Quisquis each address appears on the blockchain at most twice: once when it is generated as output of a transaction, and once when it is spent as input to a transaction. Our result is achieved by combining a DDH-based tool (that we call updatable keys) with efficient zero-knowledge arguments.
Shorter QA-NIZK and SPS with Tighter Security Abstract
Masayuki Abe Charanjit S. Jutla Miyako Ohkubo Jiaxin Pan Arnab Roy Yuyu Wang
Quasi-adaptive non-interactive zero-knowledge proof (QA-NIZK) systems and structure-preserving signature (SPS) schemes are two powerful tools for constructing practical pairing-based cryptographic schemes. Their efficiency directly affects the efficiency of the derived advanced protocols. We construct more efficient QA-NIZK and SPS schemes with tight security reductions. Our QA-NIZK scheme is the first one that achieves both tight simulation soundness and constant proof size (in terms of number of group elements) at the same time, while the recent scheme from Abe et al. (ASIACRYPT 2018) achieved tight security with proof size linearly depending on the size of the language and the witness. Assuming the hardness of the Symmetric eXternal Diffie-Hellman (SXDH) problem, our scheme contains only 14 elements in the proof and remains independent of the size of the language and the witness. Moreover, our scheme has tighter simulation soundness than the previous schemes. Technically, we refine and extend a partitioning technique from a recent SPS scheme (Gay et al., EUROCRYPT 2018). Furthermore, we improve the efficiency of the tightly secure SPS schemes by using a relaxation of NIZK proof system for OR languages, called designated-prover NIZK system. Under the SXDH assumption, our SPS scheme contains 11 group elements in the signature, which is the shortest among the tight schemes and is the same as an early non-tight scheme (Abe et al., ASIACRYPT 2012). Compared to the shortest known non-tight scheme (Jutla and Roy, PKC 2017), our scheme achieves tight security at the cost of 5 additional elements. All the schemes in this paper are proven secure based on the Matrix Diffie-Hellman assumptions (Escala et al., CRYPTO 2013). These are a class of assumptions which include the well-known SXDH and DLIN assumptions and provide clean algebraic insights into our constructions. To the best of our knowledge, our schemes achieve the best efficiency among schemes with the same functionality and security properties. This naturally leads to improvement of the efficiency of cryptosystems based on simulation-sound QA-NIZK and SPS.
Divisible E-Cash from Constrained Pseudo-Random Functions Abstract
Florian Bourse David Pointcheval Olivier Sanders
Electronic cash (e-cash) is the digital analogue of regular cash which aims at preserving users' privacy. Following Chaum's seminal work, several new features were proposed for e-cash to address the practical issues of the original primitive. Among them, divisibility has proved very useful to enable efficient storage and spending. Unfortunately, it is also very difficult to achieve and, to date, quite a few constructions exist, all of them relying on complex mechanisms that can only be instantiated in one specific setting. In addition, security models are incomplete and proofs sometimes hand-wavy. In this work, we first provide a complete security model for divisible e-cash, and we study the links with constrained pseudo-random functions (PRFs), a primitive recently formalized by Boneh and Waters. We exhibit two frameworks of divisible e-cash systems from constrained PRFs achieving some specific properties: either key homomorphism or delegability. We then formally prove these frameworks, and address two main issues in previous constructions: two essential security notions were either not considered at all or not fully proven. Indeed, we introduce the notion of clearing, which should guarantee that only the recipient of a transaction is able to deposit it, and we show that exculpability, which should prevent an honest user from being falsely accused, was wrong in most proofs of the previous constructions. Some can easily be repaired, but this is not the case for the most complex settings such as constructions in the standard model. Consequently, we provide the first construction secure in the standard model, as a direct instantiation of our framework.
Efficient Noninteractive Certification of RSA Moduli and Beyond Abstract
Sharon Goldberg Leonid Reyzin Omar Sagga Foteini Baldimtsi
In many applications, it is important to verify that an RSA public key (N, e) specifies a permutation over the entire space $$\mathbb {Z}_N$$ , in order to prevent attacks due to adversarially-generated public keys. We design and implement a simple and efficient noninteractive zero-knowledge protocol (in the random oracle model) for this task. Applications concerned about adversarial key generation can just append our proof to the RSA public key without any other modifications to existing code or cryptographic libraries. Users need only perform a one-time verification of the proof to ensure that raising to the power e is a permutation of the integers modulo N. For typical parameter settings, the proof consists of nine integers modulo N; generating the proof and verifying it both require about nine modular exponentiations. We extend our results beyond RSA keys and also provide efficient noninteractive zero-knowledge proofs for other properties of N, which can be used to certify that N is suitable for the Paillier cryptosystem, is a product of two primes, or is a Blum integer. As compared to the recent work of Auerbach and Poettering (PKC 2018), who provide two-message protocols for similar languages, our protocols are more efficient and do not require interaction, which enables a broader class of applications.
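The rough verification pattern in this line of work is that the proof elements are e-th roots of points derived from a random oracle, so verification amounts to a handful of modular exponentiations. The sketch below is a schematic of that pattern only, not the authors' exact protocol; the hash-to-Z_N mapping, the domain-separation label, and the omission of auxiliary sanity checks on N and e are all simplifying assumptions.

import hashlib

def hash_to_zn(label: bytes, n: int, i: int) -> int:
    # Illustrative hash-to-Z_N mapping (an assumption, not the paper's map).
    digest = hashlib.sha256(label + n.to_bytes((n.bit_length() + 7) // 8, "big") + bytes([i])).digest()
    return int.from_bytes(digest, "big") % n

def verify_permutation_proof(n: int, e: int, proof: list) -> bool:
    # Each proof element is claimed to be an e-th root (mod n) of the i-th
    # hash-derived point, so each check costs one modular exponentiation.
    for i, sigma in enumerate(proof):
        if pow(sigma, e, n) != hash_to_zn(b"perm-cert", n, i):
            return False
    return True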
Shorter Pairing-Based Arguments Under Standard Assumptions Abstract
Alonso González Carla Ràfols
This paper constructs efficient non-interactive arguments for correct evaluation of arithmetic and boolean circuits with proof size O(d) group elements, where d is the multiplicative depth of the circuit, under falsifiable assumptions. This is achieved by combining techniques from SNARKs and QA-NIZK arguments of membership in linear spaces. The first construction is very efficient (the proof size is $$\approx 4d$$ group elements and the verification cost is $$\approx 4d$$ pairings and $$O(n+n'+d)$$ exponentiations, where n is the size of the input and $$n'$$ of the output) but one type of attack can only be ruled out assuming the knowledge soundness of QA-NIZK arguments of membership in linear spaces. We give an alternative construction which replaces this assumption with a decisional assumption in bilinear groups at the cost of approximately doubling the proof size. The construction for boolean circuits can be made zero-knowledge with Groth-Sahai proofs, resulting in a NIZK argument for circuit satisfiability based on falsifiable assumptions in bilinear groups of proof size $$O(n+d)$$. Our main technical tool is what we call an "argument of knowledge transfer". Given a commitment $$C_1$$ and an opening x, such an argument allows one to prove that some other commitment $$C_2$$ opens to f(x), for some function f, even if $$C_2$$ is not extractable. We construct very short, constant-size, pairing-based arguments of knowledge transfer with constant-time verification for any linear function and also for Hadamard products. These allow one to transfer knowledge of the input to lower levels of the circuit.
A Novel CCA Attack Using Decryption Errors Against LAC Abstract
Qian Guo Thomas Johansson Jing Yang
Cryptosystems based on Learning with Errors or related problems are central topics in recent cryptographic research. One main witness to this is the NIST Post-Quantum Cryptography Standardization effort. Many submitted proposals rely on problems related to Learning with Errors. Such schemes often include the possibility of decryption errors with some very small probability. Some of them have a somewhat larger error probability in each coordinate, but use an error correcting code to get rid of errors. In this paper we propose and discuss an attack for secret key recovery based on generating decryption errors, for schemes using error correcting codes. In particular, we show an attack on the scheme LAC, a proposal to the NIST Post-Quantum Cryptography Standardization that has advanced to round 2. In a standard setting with CCA security, the attack first consists of a precomputation of special messages and their corresponding error vectors. This set of messages is submitted for decryption and a few decryption errors are observed. In a statistical analysis step, these vectors causing the decryption errors are processed and the result reveals the secret key. The attack only works for a fraction of the secret keys. To be specific, regarding LAC256, the version for achieving the 256-bit classical security level, we recover one key among approximately $$2^{64}$$ public keys with complexity $$2^{79}$$, if the precomputation cost of $$2^{162}$$ is excluded. We also show the possibility to attack a more probable key (say with probability $$2^{-16}$$). This attack is verified via extensive simulation. We further apply this attack to LAC256-v2, a new version of LAC256 in round 2 of the NIST PQ-project, and obtain a multi-target attack with slightly increased precomputation complexity (from $$2^{162}$$ to $$2^{171}$$). One can also explain this attack in the single-key setting as an attack with precomputation complexity of $$2^{171}$$ and success probability of $$2^{-64}$$.
Order-LWE and the Hardness of Ring-LWE with Entropic Secrets Abstract
Madalina Bolboceanu Zvika Brakerski Renen Perlman Devika Sharma
We propose a generalization of the celebrated Ring Learning with Errors (RLWE) problem (Lyubashevsky, Peikert and Regev, Eurocrypt 2010, Eurocrypt 2013), wherein the ambient ring is not the ring of integers of a number field, but rather an order (a full rank subring). We show that our Order-LWE problem enjoys worst-case hardness with respect to short-vector problems in invertible-ideal lattices of the order. The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018). This suggests that Order-LWE may be used to analyze and possibly design useful relaxations of RLWE. We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the "RLWE secret" (which often corresponds to the secret key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter, we show an interesting threshold phenomenon where the amount of RLWE noise determines whether the problem is tractable. Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions where we conjecture that hardness should hold, and provide evidence via reduction to a concrete lattice problem.
Simple and Efficient KDM-CCA Secure Public Key Encryption Abstract
Fuyuki Kitagawa Takahiro Matsuda Keisuke Tanaka
We propose two efficient public key encryption (PKE) schemes satisfying key dependent message security against chosen ciphertext attacks (KDM-CCA security). The first one is KDM-CCA secure with respect to affine functions. The other one is KDM-CCA secure with respect to polynomial functions. Both of our schemes are based on the KDM-CPA secure PKE schemes proposed by Malkin, Teranishi, and Yung (EUROCRYPT 2011). Although our schemes satisfy KDM-CCA security, their efficiency overheads compared to Malkin et al.'s schemes are very small. Thus, the efficiency of our schemes is drastically improved compared to the existing KDM-CCA secure schemes. We achieve our results by extending the construction technique by Kitagawa and Tanaka (ASIACRYPT 2018). Our schemes are obtained via semi-generic constructions using an IND-CCA secure PKE scheme as a building block. We prove the KDM-CCA security of our schemes based on the decisional composite residuosity (DCR) assumption and the IND-CCA security of the building block PKE scheme. Moreover, our security proofs are tight if the IND-CCA security of the building block PKE scheme is tightly reduced to its underlying computational assumption. By instantiating our schemes using existing tightly IND-CCA secure PKE schemes, we obtain the first tightly KDM-CCA secure PKE schemes whose ciphertexts consist only of a constant number of group elements.
COM-1 and Hongcheon: New monazite reference materials for the microspot analysis of oxygen isotopic composition
Jeongmin Kim1,
Changkun Park2,
Keewook Yi1,
Shinae Lee1,
Sook Ju Kim1,
Min-Ji Jung1,3 &
Albert Chang-sik Cheong ORCID: orcid.org/0000-0002-6004-71721,3
Monazite, a moderately common light rare earth element (LREE) and thorium phosphate mineral, has chemical, age, and isotopic characteristics that are useful in the investigation of the origin and evolution of crustal melts and fluid-rock interactions. Multiple stages of growth and partial recrystallization commonly observed in monazite inevitably require microspot chemical and isotopic analyses, for which well-characterized reference materials are essential to correct instrumental biases. In this study, we introduce new monazite reference materials COM-1 and Hongcheon for use in the microspot analysis of oxygen isotopic composition.
COM-1 and Hongcheon were derived from a late Mesoproterozoic (~ 1080 Ma) pegmatite dyke in Colorado, USA, and a Late Triassic (~ 230 Ma) carbonatite-hosted REE ore in central Korea, respectively. The COM-1 monazite has much higher levels of Th (8.77 ± 0.56 wt.%), Si (0.82 ± 0.07 wt.%) and lower REE contents (total REE = 49.5 ± 1.2 wt.%) than does the Hongcheon monazite (Th, 0.23 ± 0.11 wt.%; Si, < 0.1 wt.%; total REE, 59.9 ± 0.7 wt.%). Their oxygen isotopic compositions (δ18OVSMOW) were determined by gas-source mass spectrometry with laser fluorination (COM-1, 6.67 ± 0.08‰; Hongcheon-1, 6.60 ± 0.02‰; Hongcheon-2, 6.08 ± 0.07‰). Oxygen isotope measurements performed by a Cameca IMS1300-HR3 ion probe showed a strong linear dependence (R2 = 0.99) of the instrumental mass fractionation on the total REE contents.
We characterized chemical and oxygen isotopic compositions of COM-1 and Hongcheon monazites. Their internal homogeneity in oxygen isotopic composition and chemical difference provide an efficient tool for calibrating instrumental mass fractionation occurring during secondary ion mass spectrometry analyses.
Geological and environmental samples are commonly composed of chemically or isotopically heterogeneous domains that may have their own genetic significance. The high sensitivity and spatial resolution of modern microbeam techniques, typically on micrometer or submicrometer scale, allow researchers to analyze individual micro-domains with no significant loss of analytical precision. The integration of multifaceted data from single microspots has opened new avenues in geochemical research. For example, the cool early Earth hypothesis was suggested by multiple lines of evidence from a single Hadean zircon (Valley, 2005). Microanalytical data combined with textural and petrographic observations revealed that mineral phases in igneous rocks are commonly not in isotopic equilibrium with their groundmass, reflecting progressive changes in magma composition (Davidson et al., 2007).
Chemical and isotopic data obtained from microbeam analyses should be corrected and calibrated due to instrumental biases; this goal is typically achieved using matrix-matched reference materials (RMs) to check data accuracy and, more importantly, to calculate inter-elemental isotopic ratios or instrumental mass fractionation (IMF) factors. In the latter case, the reference value should be measured and evaluated with great care because it directly affects the results for unknown samples. This is particularly true in the microspot measurement of the isotopic composition of oxygen, Earth's most abundant element.
Oxygen has three stable isotopes: 16O (~ 99.76%), 17O (~ 0.04%), and 18O (~ 0.2%) (Meija et al., 2016). By convention, oxygen isotopic ratios are expressed using delta notation, relative to standard mean ocean water (SMOW; 18O/16O = 0.0020052; Baertschi, 1976), as follows:
$$\delta^{18}\mathrm{O}_{\mathrm{SMOW}} = \left[ \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{SMOW}}} \right] - 1$$
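To make the convention concrete, the following minimal example (values expressed in per mil, i.e., multiplied by 1000) applies the definition with the SMOW ratio quoted above; the example measured ratio is arbitrary.

R_SMOW = 0.0020052  # 18O/16O of standard mean ocean water (Baertschi, 1976)

def delta18O_permil(r_sample: float) -> float:
    # Delta notation: (R_sample / R_SMOW - 1), reported in per mil.
    return (r_sample / R_SMOW - 1.0) * 1000.0

# A measured 18O/16O ratio of 0.0020186 corresponds to a delta18O of about +6.7 per mil.
print(delta18O_permil(0.0020186))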
For most terrestrial fractionation processes, δ17O correlates closely with δ18O (δ17O = ~ 0.52 × δ18O; Hoefs, 2018). Oxygen isotopic fractionation is associated chiefly with low-temperature surface processes because the log-transformed fractionation factor and temperature have an inverse quadratic relationship (Hoefs, 2018). 18O and 17O are concentrated on the surface because 16O, the lighter isotope, is released preferentially during weathering. Meteoric water is isotopically light as a result of Rayleigh distillation upon vapor transport and precipitation. Magma inherits the oxygen isotopic signatures of its sources or assimilants because isotopic fractionations among melts and minerals are relatively small at magmatic temperatures, typically less than 2‰. As documented in the extensive literature (e.g., Faure and Mensing, 2005; Sharp, 2017; Hoefs, 2018), oxygen isotopes have become an essential tool for a wide range of geochemical and cosmochemical applications. The most precise method for oxygen isotope measurement in minerals is gas-source mass spectrometry with laser fluorination, a technique developed in the 1990s (Sharp, 1990). This bulk analysis provides a basis for the calibration of microspot data currently obtained by secondary ion mass spectrometry (SIMS).
Monazite, a light rare earth element (LREE) and thorium phosphate mineral commonly occurring in clastic sedimentary rocks, low- to medium-pressure metamorphic rocks, peraluminous granites, and hydrothermal ore deposits, is more likely to be affected by fluids and melts than silicate minerals and thus frequently found as a complexly zoned mineral (Foster et al., 2002; Catlos, 2013). Microspot oxygen isotope data obtained from individual zones, particularly when combined with geochronological and petrographic information, provide an excellent opportunity for the investigation of interactions between the hydrosphere and lithosphere. SIMS has been used for the analysis of oxygen isotopes in monazite and other phosphate minerals since the mid-2000s (Ayers et al., 2006; Breecker and Sharp, 2007); however, no consensus on IMF-related calibration has been reached (Rubatto et al., 2014; Didier et al., 2017) and the RMs required for monazite oxygen isotope analysis with SIMS remain insufficient. The matrix effect resulting from the common substitutions of huttonite (ThSiO4) and cheralite [CaTh(PO4)2] into the monazite structure is non-negligible. Here, we present high-resolution SIMS and laser fluorination oxygen isotope data for new monazite RMs COM-1 and Hongcheon, along with chemical data obtained by an electron probe microanalyzer (EPMA) and laser ablation inductively coupled plasma mass spectrometry (ICP-MS).
The COM-1 monazite is a translucent pale-brown crystal obtained from a pegmatite dyke in Colorado, USA (Fig. 1), purchased by the Korea Basic Science Institute (KBSI) from eBay in 2014. No further information about the nature of its host rock is available. Kim et al. (2015) reported a weighted mean 206Pb/238U age of 1078.9 ± 5.0 Ma for this monazite, obtained using a sensitive high-resolution ion microprobe (SHRIMP IIe/MC) installed at the KBSI. This age is consistent with the SHRIMP 208Pb/232Th and laser ablation multi-collector ICP-MS 206Pb/238U ages also measured at the KBSI (1087.2 ± 8.4 and 1076.6 ± 9.4 Ma, respectively).
Sample photographs and representative backscattered electron images of the monazite reference materials. Scale bars in backscattered electron images are 100 μm
Hongcheon monazites were collected by Kim et al. (2016) from a Late Triassic carbonatite-hosted REE ore in central Korea (Fig. 1). The host rocks of Hongcheon-1 (sample 304-5A) and Hongcheon-2 (sample K12-A) were taken from an outcrop in the southern ore body (37°51′41.8″ N, 128°00′56.2″ E) and a 76-m-deep core drilled into the central ore body, respectively. The former was a massive, medium- to coarse-grained pinkish carbonatite with interstitial patches of quartz aggregates and few magnetite grains. The latter was a massive, fine-grained pale-gray carbonatite with variable amounts of disseminated magnetite. Monazite grains separated from these samples yielded a SHRIMP 208Pb/232Th age of 232.9 ± 1.6 Ma (Kim et al., 2016). Their REE contents are relatively high (total REE oxide > 66 wt.%) and vary narrowly, irrespective of the textural occurrence. Thorium contents in Hongcheon monazites are relatively low (average = ~ 0.25 wt.%), and Th/U ratios are unusually high (average = ~ 2200). These chemical properties were confirmed in this study.
USGS-44069 and TM monazites were also analyzed in this study to ensure data accuracy. USGS-44069 originates from an amphibolite to granulite facies metasedimentary unit (the Wissahickon Formation) of the Wilmington Complex, Delaware, USA; it has a well-constrained thermal ionization mass spectrometric U/Pb age of 424.9 ± 0.4 Ma (Aleinikoff et al., 2006), which was interpreted to represent a metamorphic overprint. The grains are < 200 μm in diameter and honey-yellow in color. This monazite contains negligible huttonite substitution and variable amounts of cheralite, with a relatively low Th content (~ 2 wt.%) (Rubatto et al., 2014). The TM monazite is from the Thompson Nickel Mine, central Manitoba, Canada, and has been used as a U/Pb standard, with an age of 1766 Ma (Williams et al., 1996). The TM crystals are relatively large (several hundred microns in diameter) and yellow to orange in color, with a particularly high Th content of > 10 wt.% (Rubatto et al., 2014).
Monazite grains were embedded in epoxy and polished to expose a pristine surface. Backscattered electron (BSE) images of the grains were examined using a scanning electron microscope (JSM-6610LV; JEOL) at the KBSI. Chemical compositions of the monazites were determined at the Center for Research Facilities, Gyeongsang National University using an EPMA (JXA-8530F PLUS; JEOL) equipped with five wavelength-dispersive X-ray spectrometers. The acceleration voltage was set to 15 kV, and the beam current was set to 20 nA. The counting times for peaks were 10–30 s.
Monazites were also analyzed using a 343-nm femtosecond laser ablation microprobe (J200 LA; Applied Spectra Inc.) coupled with an iCapQ (Thermo Fisher Scientific) quadrupole ICP-MS at the Core Research Facilities of Pusan National University. The instrumental parameters for laser ablation and ICP-MS, basically the same as those in Cheong et al. (2019), were optimized to provide the highest sensitivity whilst maintaining the ratio of ThO+/Th+ below 0.005. External standardization was performed relative to NIST SRM 610–612 glasses (Jochum et al., 2011), and the internal standard was the Ce content measured by EPMA.
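Quantification against the NIST glasses with Ce as the internal standard follows the usual internal-standard normalization for laser ablation ICP-MS data; the sketch below shows the generic form of that calculation. The variable names and the absence of drift or background corrections are simplifying assumptions, not the exact reduction scheme used here.

def conc_internal_standard(cps_i_sample, cps_is_sample,
                           cps_i_std, cps_is_std,
                           conc_i_std, conc_is_std,
                           conc_is_sample):
    # Relative sensitivity of element i versus the internal standard (IS),
    # determined on the reference glass.
    rel_sens = (cps_i_std / cps_is_std) / (conc_i_std / conc_is_std)
    # Scale the sample intensity ratio by the independently known IS
    # concentration in the sample (here, Ce measured by EPMA).
    return (cps_i_sample / cps_is_sample) / rel_sens * conc_is_sample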
The bulk measurement of oxygen isotopes in the monazite grains was conducted using a dual-inlet gas-source mass spectrometer (MAT 253 Plus; Thermo Fisher Scientific) at the Korea Polar Research Institute (KOPRI). Several milligrams of monazite grains, ~ 100 µm in diameter, were selected carefully under microscopy to avoid any inclusion or coexisting mineral. At least three aliquots of KOPRI in-house standard obsidian (δ18OVSMOW = 8.40‰) were loaded into the reaction chamber with the monazite grains and analyzed before and after the monazite analyses to check the accuracy and external reproducibility of the oxygen isotope data. The reaction chamber containing obsidian and monazite grains was heated to 150 °C overnight in a vacuum to remove moisture adsorbed onto the sample surfaces. To remove any possible remaining moisture, a small amount of BrF5 gas was introduced into the reaction chamber for 1 h, and the chamber was evacuated before the initiation of analysis. After pre-fluorination, ~ 110 mbar BrF5 was introduced into the chamber and the sample was heated by gradually increasing the power of the CO2 laser to 15 W (60% of the maximum power). All gaseous species released from the samples were expanded into the first cryogenic trap and condensable gases at liquid nitrogen temperature (− 196 °C) were removed. Non-condensable F2 gas was converted into bromine (Br2) through the KBr getter, and the Br2 was trapped in the second cryogenic trap. Finally, the purified O2 gas was collected in a cryogenic trap containing a pellet of 13X molecular sieve (MS13X) for 15 min and released to the mass spectrometer at room temperature. This analytical procedure is described in greater detail elsewhere (Kim et al., 2019). A laboratory working standard O2 gas (δ18OVSMOW = − 9.62‰) was calibrated by measuring oxygen isotopes of Vienna Standard Mean Ocean Water (VSMOW, δ18OVSMOW = 0‰ per definition) and Standard Light Antarctica Precipitation (SLAP) with the same purification line (Kim et al., 2020). Although two-point normalization (VSMOW–SLAP) is recommended to correct inter-laboratory biases, we report δ18O values relative to VSMOW to enable comparison of the values with data reported in the literature as δ18OVSMOW. The precision of δ18O measurement at KOPRI is typically < 0.15‰ [1 standard deviation (SD); Kim et al., 2020]. Some COM-1 monazite grains were sputtered and ejected from the sample holder during laser heating, resulting in a low O2 gas yield and low δ18O values (Additional file 1: Table S1). Previous studies have also addressed this issue related to laser fluorination of phosphates and particularly monazite (Breecker and Sharp, 2007; Rubatto et al., 2014). More experiments are needed to find optimum conditions of BrF5 pressure and laser power to prevent violent reactions and to obtain high O2 yields.
Monazite oxygen isotopes were also measured using a Cameca IMS1300-HR3 large-geometry SIMS at the KBSI. The epoxy mount was Au-coated at a thickness of 20 nm for SIMS analysis. The analytical conditions are summarized in Additional file 2: Table S2. A focused Cs+ primary ion beam (Gaussian mode) was accelerated at 10 kV, with an ion current of ~ 3.0 nA and a spot diameter of ~ 15 μm. Secondary ions were accelerated by − 10 kV on the sample surface and transferred to the field aperture. To maximize secondary ion transmission, the transfer lens optics were set to a magnification of ~ 200. The secondary ion beam was automatically centered on the field aperture and entrance slit/contrast aperture prior to each analysis. Charge buildup on the sample surface was compensated using a normal incidence electron gun. The contrast aperture, entrance slit, field aperture, and energy slit were set to 400 µm diameter, ~ 70.3 µm width, 3000 × 3000 µm^2, and 50 eV width at the low-energy peak, respectively. 16O− and 18O− ions were detected simultaneously using two Faraday cups with 10^10 and 10^11 Ω pre-amplifiers, respectively. A 500-μm exit slit with a mass-resolving power of ~ 2000, defined as M/ΔM at 50% peak height, was used for both detectors. Under these conditions, 16O– and 18O– count rates were typically ~ 3 × 10^9 cps and ~ 6 × 10^6 cps, respectively. The internal precision of 18O/16O measurement was ~ 0.2‰ (20 cycles, 2 standard errors).
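As a quick, purely illustrative consistency check on the quoted numbers: the raw 18O−/16O− ratio implied by the ~6 × 10^6 cps (18O−) and ~3 × 10^9 cps (16O−) count rates is ~0.002, close to the natural abundance ratio, and the two-standard-error internal precision over 20 cycles can be estimated as below (the cycle ratios are placeholders).

import statistics

print(6e6 / 3e9)  # ~0.002, consistent with the natural 18O/16O abundance ratio

def internal_precision_permil(cycle_ratios):
    # Two standard errors of the mean across measurement cycles, in per mil.
    mean = statistics.mean(cycle_ratios)
    sem = statistics.stdev(cycle_ratios) / len(cycle_ratios) ** 0.5
    return 2.0 * sem / mean * 1000.0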
BSE texture and chemical composition
The BSE images (Fig. 1) showed no zoning in most COM-1 crystals, although some grains had linear patches of alteration along a network of microfractures. These grains contained inclusions of thorite, calcite, and biotite, mainly along the alteration zone. Hongcheon-1 crystals exhibited euhedral to subhedral external shapes. Some grains consisted of mosaics of patches, with weak contrast in BSE brightness. Inclusions of apatite, strontianite, dolomite, and Fe oxides were found in Hongcheon-1 monazites. Hongcheon-2 crystals also exhibited euhedral to subhedral external shapes. These grains showed BSE zoning with combinations of banded and patch patterns, which may have correlated with Th content (Möller et al., 2003; Rubatto et al., 2014). Inclusions were rare in Hongcheon-2 grains. As reported by Rubatto et al. (2014), USGS-44069 and TM monazites exhibited polygonal and oscillatory BSE zoning. Microfracture networks and alteration zones were commonly observed in TM crystals.
The chemical compositions of monazites analyzed by EPMA and laser ablation ICP-MS are listed in Additional file 3: Table S3 and Additional file 4: Table S4, respectively, and summarized in Table 1. Based on the ICP-MS data, COM-1 monazites had an average total REE content of 49.5 ± 1.2 wt.% (1 SD unless otherwise noted). The grains showed slightly positive Ce and distinctly negative Eu anomalies (Ce/Ce* = 1.13 ± 0.05; Eu/Eu* = 0.027 ± 0.002) in the chondrite-normalized (McDonough and Sun, 1995) diagram, with relatively low La/Yb ratios [(La/Yb)normalized = 84 ± 11; Fig. 2]. Thorium (8.77 ± 0.56 wt.%) and Si (0.82 ± 0.07 wt.%) contents (EPMA data) were relatively high. EPMA data-based mole fractions of huttonite and cheralite were determined to be 0.070 ± 0.006 and 0.100 ± 0.011, respectively.
Table 1 Summary of monazite reference material compositions
Chondrite-normalized rare earth element patterns of the monazite reference materials. The chondrite value was obtained from McDonough and Sun (1995)
As Kim et al. (2016) reported, Hongcheon monazites had small variations in total REE contents (59.9 ± 0.7 wt.%, ICP–MS data) and showed heavy REE-depleted chondrite-normalized patterns [average (La/Yb)normalized > 50,000], with no Ce and small Eu anomalies (Ce/Ce* = 1.02 ± 0.02; Eu/Eu* = 0.72 ± 0.07; Fig. 2). Relative to COM-1, they were strongly enriched in Sr (0.23 ± 0.12 wt.%) but depleted in Th (0.23 ± 0.11 wt.%, EPMA data) and U (< 5 ppm). The EPMA data-based mole fraction of cheralite was 0.036 ± 0.012. Huttonite substitution was negligible. The USGS-44069 and TM monazites showed the same REE trends reported by Rubatto et al. (2014) (Fig. 2).
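The ratios quoted above follow the standard chondrite-normalization convention, with Ce* and Eu* taken as geometric means of the neighboring elements (the usual definition, assumed here); the chondrite abundances below are approximate CI values after McDonough and Sun (1995).

from math import sqrt

CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Pr": 0.0928, "Sm": 0.148,
                 "Eu": 0.0563, "Gd": 0.199, "Yb": 0.161}

def normalized(sample_ppm, element):
    return sample_ppm[element] / CHONDRITE_PPM[element]

def ce_anomaly(sample_ppm):
    # Ce/Ce* = Ce_N / sqrt(La_N * Pr_N)
    return normalized(sample_ppm, "Ce") / sqrt(normalized(sample_ppm, "La") * normalized(sample_ppm, "Pr"))

def eu_anomaly(sample_ppm):
    # Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N)
    return normalized(sample_ppm, "Eu") / sqrt(normalized(sample_ppm, "Sm") * normalized(sample_ppm, "Gd"))

def la_yb_normalized(sample_ppm):
    return normalized(sample_ppm, "La") / normalized(sample_ppm, "Yb")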
Oxygen isotopic composition
The bulk oxygen isotopic compositions of monazite RMs determined by laser fluorination are listed in Table 2. As noted in earlier studies (Breecker and Sharp, 2007; Rubatto et al., 2014; Didier et al., 2017), oxygen isotopic analysis of monazite by laser fluorination is challenging due to unstable yields of O2 gas. Oxygen isotope data obtained by laser fluorination with F2 (Rubatto et al., 2014; Didier et al., 2017) and BrF5 gas (this study) clearly indicate that low yields of O2 gas from samples result in relatively lower δ18O values. Our data listed in Additional file 1: Table S1 show a distinct positive correlation between oxygen yields (42–96%) and measured δ18O values (5.9–6.8 ‰) for the COM-1 monazite (δ18O = 0.0151 × yield (%) + 5.2806, R2 = 0.80).
Table 2 Oxygen isotopic compositions of the COM-1 and Hongcheon monazites, measured by gas-source mass spectrometry with laser fluorination
O2 gas yields lower than 90% cause mass-dependent fractionation in oxygen isotopes (up to 0.8‰ in δ18O) (Additional file 1: Table S1). Fluorination of monazite may produce POF3 and unidentified P-O-Fx gases in addition to O2 gas (Breecker and Sharp, 2007; Jones et al., 1999). Kinetic fractionation of oxygen isotopes may occur in this process, which is followed by purification of O2 gas and detection of isotopically fractionated O2 gas by the mass spectrometer. Low δ18O values with low O2 gas yields likely indicate that light oxygen isotope (16O) is preferentially fractionated into O2 gas rather than into POF3 and other P-O-Fx gaseous species. Additionally, isotopic fractionation of O2 gas may be related to the pressures in the bellows of the mass spectrometer (Yan et al., 2022). However, it should be noted that the change of δ18O induced by this pressure-dependent fractionation is expected to be negligible (< 0.1‰) during routine oxygen isotope analyses of 6 to 8 samples. Regardless of the fractionation mechanism, it is clear that low O2 gas yields of monazite during bulk-sample laser fluorination produce inaccurate data. Therefore only δ18O values obtained with high O2 yields (> 90%) were considered in this study.
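In practice, the yield screening described above reduces to a simple filter, optionally combined with re-deriving the yield–δ18O regression; a minimal sketch follows (the input arrays are placeholders, not the measured values of Table S1).

import numpy as np

def screen_by_yield(yields_percent, d18O_values, threshold=90.0):
    # Keep only analyses whose O2 yield exceeds the threshold (90% here),
    # since low yields are associated with fractionated, too-low d18O values.
    yields = np.asarray(yields_percent, dtype=float)
    d18O = np.asarray(d18O_values, dtype=float)
    return d18O[yields > threshold]

def fit_yield_regression(yields_percent, d18O_values):
    # Linear fit d18O = a * yield(%) + b, analogous to the relation reported
    # for COM-1 (a ~ 0.0151, b ~ 5.28, R^2 ~ 0.80).
    slope, intercept = np.polyfit(yields_percent, d18O_values, 1)
    return slope, intercept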
Ayers et al. (2006) revealed no matrix effects for compositionally different monazites, but Breecker and Sharp (2007) reported that the IMF of oxygen isotopic measurements obtained using the same SIMS model (Cameca IMS1270) correlated with the monazite Th content. Rubatto et al. (2014) suggested that the IMF of SHRIMP oxygen measurements results from huttonite and cheralite substitutions in monazite. More recently, Didier et al. (2017) concluded that the IMF of Cameca IMS1280 measurements is a function of monazite [YREEPO4], and less importantly, Th content. They showed that the IMF factor, defined as δ18Olaser fluorination–δ18OSIMS, correlated positively with the YREE (and thus negatively with the Th) content, as also suggested by Breecker and Sharp (2007). Wu et al. (2020) proposed a power-law relationship between the IMF factor of Cameca IMS1280 oxygen isotope data and monazite Si content.
Our SIMS oxygen isotope data are presented with EPMA data in Additional file 3: Table S3. The three RMs introduced in this study had homogeneous oxygen isotopic compositions (SD < 0.3‰). The δ18O difference between Hongcheon-1 and Hongcheon-2 was consistent within errors in the laser fluorination (0.52‰) and average SIMS results (0.54‰). Given their oxygen isotopic homogeneity and substantial differences in chemical composition, the COM-1 and Hongcheon monazites are useful materials with which to calibrate IMF-related SIMS effects. We calculated the IMF factor using laser fluorination results obtained in this study and the value reported for the USGS-44069 monazite (7.67‰; Rubatto et al., 2014). The IMF factor (δ18Olaser fluorination–average δ18OSIMS) correlated inversely with the REE content (R2 = 0.99), contrary to the conclusion reached by Didier et al. (2017), and weakly and nonlinearly with the Th and Ca contents (Fig. 3). Since the SIMS IMF is strongly affected by electron gun tuning and the configuration of secondary ion optics, the fractionation factor and its correlation trend with chemical compositions may differ among instruments. Indeed, Didier et al. (2017) and Wu et al. (2020) reported quite different IMF factors (> 16‰) from the same monazite standard materials, even though they utilized the same SIMS model (Cameca IMS1280) with similar analytical conditions. Our data defined a planar regression in a three-dimensional IMF versus Ca + Si diagram (not shown), as Rubatto et al. (2014) suggested, but with a weak correlation coefficient (R2 = 0.7). The TM monazite yielded an average corrected δ18OVSMOW value of 9.57 ± 0.47‰, obtained using the equation derived from our data (δ18Ocorrected = − 0.3017 × [total REE, wt.%] + 18.085; Fig. 3). This value awaits further verification.
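The calibration workflow described here (compute an IMF factor for each reference material as its laser-fluorination value minus its session-average raw SIMS value, regress that factor against total REE content, and apply the fitted correction to unknowns) can be sketched as follows. The arrays are placeholders to be filled with session-specific data; this is a generic sketch, not the exact reduction used for the values reported above.

import numpy as np

def fit_imf_vs_ree(ree_wt_percent, d18O_laser, d18O_sims_mean):
    # IMF factor per reference material: bulk laser-fluorination value minus
    # the session-average raw SIMS value, regressed against total REE content.
    imf = np.asarray(d18O_laser, dtype=float) - np.asarray(d18O_sims_mean, dtype=float)
    slope, intercept = np.polyfit(ree_wt_percent, imf, 1)
    return slope, intercept

def correct_unknown(d18O_sims_raw, ree_wt_percent, slope, intercept):
    # Apply the REE-dependent IMF correction to a raw SIMS measurement.
    return d18O_sims_raw + (slope * ree_wt_percent + intercept)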
Secondary ion mass spectrometry δ18O values plotted against inductively coupled plasma mass spectrometry-based total rare earth element and electron probe microanalyzer-based ThO2 and CaO contents
In this study, we characterized the chemical and oxygen isotopic compositions of the new monazite RMs COM-1 (δ18OVSMOW = 6.67 ± 0.08‰), Hongcheon-1 (= 6.60 ± 0.02‰), and Hongcheon-2 (= 6.08 ± 0.07‰). We conclude that the IMF of SIMS oxygen isotope measurements on monazite differs from instrument to instrument, and that IMF correction with well-established oxygen isotope standards is essential in every analytical session.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
BSE:
Backscattered electron
EPMA:
Electron probe microanalyzer
ICP-MS:
Inductively coupled plasma mass spectrometry
IMF:
Instrumental mass fractionation
KBSI:
Korea Basic Science Institute
KOPRI:
Korea Polar Research Institute
NIST:
National Institute of Standards and Technology of the USA
REE:
Rare earth element
RM:
Reference material
SD:
Standard deviation
SHRIMP:
Sensitive high-resolution ion microprobe
SIMS:
Secondary ion mass spectrometry
SLAP:
Standard light Antarctica precipitation
SMOW:
Standard mean ocean water
SRM:
Standard reference material
USGS:
United States Geological Survey
VSMOW:
Vienna Standard mean ocean water
Aleinikoff JN, Schenck WS, Plank MO, Srogi L, Fanning CM, Kamo SL, Bosbyshell H. Deciphering igneous and metamorphic events in high-grade rocks of the Wilmington Complex, Delaware: Morphology, cathodoluminescence and backscattered electron zoning, and SHRIMP U-Pb geochronology of zircon and monazite. Geol Soc Am Bull. 2006. https://doi.org/10.1130/B25659.1.
Ayers JC, Loflin M, Miller CF, Barton MD, Coath CD. In situ oxygen isotope analysis of monazite as a monitor of fluid infiltration during contact metamorphism: Birch Creek Pluton aureole, White Mountains, eastern California. Geology. 2006. https://doi.org/10.1130/G22185.1.
Baertschi P. Absolute 18O Content of standard mean ocean water. Earth Planet Sci Lett. 1976. https://doi.org/10.1016/0012-821X(76)90115-1.
Breecker DO, Sharp ZD. A monazite oxygen isotope thermometer. Am Mineral. 2007. https://doi.org/10.2138/am.2007.2367.
Catlos EJ. Generalizations about monazite: Implications for geochronologic studies. Am Miner. 2013. https://doi.org/10.2138/am.2013.4336.
Cheong ACS, Jeong YJ, Lee S, Yi K, Jo HJ, Lee HS, Park C, Kim NK, Li XH, Kamo SL. LKZ-1: a new zircon working standard for the in situ determination of U-Pb age, O-Hf isotopes, and trace element composition. Minerals. 2019. https://doi.org/10.3390/min9050325.
Davidson JP, Morgan DJ, Charlier BLA, Harlou R, Hora JM. Microsampling and isotopic analysis of igneous rocks: Implications for the study of magmatic systems. Annu Rev Earth Planet Sci. 2007. https://doi.org/10.1146/annurev.earth.35.031306.140211.
Didier A, Putlitz B, Baumgartner LP, Bouvier AS, Vennemann TW. Evaluation of potential monazite reference materials for oxygen isotope analyses by SIMS and laser assisted fluorination. Chem Geol. 2017. https://doi.org/10.1016/j.chemgeo.2016.12.031.
Faure G, Mensing TM. Isotopes: principles and applications. 3rd ed. New York: John Wiley & Sons; 2005.
Foster G, Gibson HD, Parrish R, Horstwood M, Fraser J, Tindle A. Textural, chemical and isotopic insights into the nature and behaviour of metamorphic monazite. Chem Geol. 2002. https://doi.org/10.1016/S0009-2541(02)00156-0.
Hoefs J. Stable isotope geochemistry. 8th ed. Cham: Springer; 2018. https://doi.org/10.1007/978-3-319-78527-1.
Jochum KP, Weis U, Stoll B, Kuzmin D, Yang Q, Raczek I, Jacob DE, Stracke A, Birbaum K, Frick DA, Günther D, Enzweiler J. Determination of reference values for NIST SRM 610–617 glasses following ISO guidelines. Geostand Geoanal Res. 2011. https://doi.org/10.1111/j.1751-908X.2011.00120.x.
Jones AM, Iacumin P, Young ED. High-resolution δ18O analysis of tooth enamel phosphate by isotope ratio monitoring gas chromatography mass spectrometry and ultraviolet laser fluorination. Chem Geol. 1999. https://doi.org/10.1016/S0009-2541(98)00162-4.
Kim N, Cheong ACS, Yi K, Jeong YJ, Koh SM. Post-collisional carbonatite-hosted rare earth element mineralization in the Hongcheon area, central Gyeonggi massif, Korea: Ion microprobe monazite U-Th-Pb geochronology and Nd-Sr isotope geochemistry. Ore Geol Rev. 2016. https://doi.org/10.1016/j.oregeorev.2016.05.016.
Kim NK, Kusakabe M, Park C, Lee JI, Nagao K, Enokido Y, Yamashita S, Park SY. An automated laser fluorination technique for high-precision analysis of three oxygen isotopes in silicates. Rapid Commun Mass Spectrom. 2019. https://doi.org/10.1002/rcm.8389.
Kim NK, Park C, Kusakabe M. Two-point normalization for reducing inter-laboratory discrepancies in δ17O, δ18O, and Δ′17O of reference silicates. J Anal Sci Technol. 2020. https://doi.org/10.1186/s40543-020-00248-0.
Kim SJ, Lee TH, Yi K, Jeong YJ, Cheong CS. Characterization of new zircon and monazite working standards LKZ-1, BRZ-1, COM-1 and BRM-1. In: Paper presented at the 1st Japan-Korea SHRIMP meeting, Hiroshima University, Higashi-Hiroshima, pp. 14–16 September 2015.
McDonough WF, Sun SS. The composition of the Earth. Chem Geol. 1995. https://doi.org/10.1016/0009-2541(94)00140-4.
Meija J, Coplen TB, Berglund M, Brand WA, Bièvre PD, Gröning M, Holden NE, Irrgeher J, Loss RD, Walczyk T, Prohaska T. Isotopic compositions of the elements 2013 (IUPAC technical report). Pure Appl Chem. 2016. https://doi.org/10.1515/pac-2015-0503.
Möller A, Hensen BJ, Armstrong RA, Mezger K, Ballèvre M. U-Pb zircon and monazite age constraints on granulite-facies metamorphism and deformation in the Strangways Metamorphic Complex (central Australia). Contrib Mineral Petrol. 2003. https://doi.org/10.1007/s00410-003-0460-3.
Rubatto D, Putlitz B, Gauthiez-Putallaz L, Crépisson C, Buick IS, Zheng YF. Measurement of in-situ oxygen isotope ratios in monazite by SHRIMP ion microprobe: Standards, protocols and implications. Chem Geol. 2014. https://doi.org/10.1016/j.chemgeo.2014.04.029.
Sharp ZD. A laser-based microanalytical method for the in situ determination of oxygen isotope ratios of silicates and oxides. Geochim Cosmochim Acta. 1990. https://doi.org/10.1016/0016-7037(90)90160-M.
Sharp Z. Principles of stable isotope geochemistry. 2nd ed. New Mexico: The University of New Mexico; 2017. https://doi.org/10.25844/h9q1-0p82.
Valley JW. A cool early earth. Sci Am. 2005. https://doi.org/10.1038/scientificamerican1005-58.
Williams IS, Buick IS, Cartwright I. An extended episode of early Mesoproterozoic metamorphic fluid flow in the Reynolds range, central Australia. J Metamorph Geol. 1996. https://doi.org/10.1111/j.1525-1314.1996.00029.x.
Wu LG, Li QL, Liu Y, Tang GQ, Lu K, Ling XX, Li XH. Rapid and accurate SIMS microanalysis of monazite oxygen isotopes. J Anal Atom Spectrom. 2020. https://doi.org/10.1039/D0JA00069H.
Yan H, Peng Y, Bao H. Isotope fractionation during capillary leaking in an isotope ratio mass spectrometer. Rapid Commun Mass Spectrom. 2022. https://doi.org/10.1002/rcm.9290.
We thank Jong Ok Jeong, Ho-Sun Lee, Nak Kyu Kim, Hwayoung Kim, Pilmo Kang, and Yuyoung Lee for their assistance in chemical and oxygen isotopic measurements. Constructive comments from the journal reviewers improved the manuscript significantly.
This work was jointly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2C1003363), KBSI grants (C280100, C200500 and C230120), and a KOPRI grant funded by the Ministry of Oceans and Fisheries (PE22050).
Korea Basic Science Institute, Cheongju, 28119, Republic of Korea
Jeongmin Kim, Keewook Yi, Shinae Lee, Sook Ju Kim, Min-Ji Jung & Albert Chang-sik Cheong
Korea Polar Research Institute, Incheon, 21990, Republic of Korea
Changkun Park
Graduate School of Analytical Science and Technology, Chungnam National University, Daejeon, 34134, Republic of Korea
Min-Ji Jung & Albert Chang-sik Cheong
Jeongmin Kim
Keewook Yi
Shinae Lee
Sook Ju Kim
Min-Ji Jung
Albert Chang-sik Cheong
ACSC designed the research. ACSC, JK, and CP wrote the manuscript. JK, CP, KY, SL, SJK, and MJJ carried out the experiment and contributed to the interpretation of the results. All authors read and approved the final manuscript.
Correspondence to Albert Chang-sik Cheong.
Additional file 1. Table S1:
Oxygen isotopic compositions of the COM-1 monazite, obtained with variable oxygen yield.
Additional file 2. Table S2:
Analytical conditions for secondary ion mass spectrometry measurement of oxygen isotopes in monazite.
Additional file 3. Table S3:
Uncorrected secondary ion mass spectrometry δ18O values (‰) and electron probe microanalyzer data (wt.%) for the monazite reference materials.
Additional file 4. Table S4:
Laser ablation inductively coupled plasma mass spectrometry data for the monazite reference materials (ppm).
Kim, J., Park, C., Yi, K. et al. COM-1 and Hongcheon: New monazite reference materials for the microspot analysis of oxygen isotopic composition. J Anal Sci Technol 13, 34 (2022). https://doi.org/10.1186/s40543-022-00342-5
Received: 28 July 2022
Monazite
COM-1
Hongcheon
Oxygen isotopes | CommonCrawl |
Bidirectional prefrontal-hippocampal dynamics organize information transfer during sleep in humans
Randolph F. Helfrich1,
Janna D. Lendner ORCID: orcid.org/0000-0002-1967-61101,2,
Bryce A. Mander3,
Heriberto Guillen4,
Michelle Paff5,
Lilit Mnatsakanyan4,
Sumeet Vadera5,
Matthew P. Walker1,6,
Jack J. Lin4,7 na1 &
Robert T. Knight1,6 na1
Non-REM sleep
Slow-wave sleep
How are memories transferred from short-term to long-term storage? Systems-level memory consolidation is thought to be dependent on the coordinated interplay of cortical slow waves, thalamo-cortical sleep spindles and hippocampal ripple oscillations. However, it is currently unclear how the selective interaction of these cardinal sleep oscillations is organized to support information reactivation and transfer. Here, using human intracranial recordings, we demonstrate that the prefrontal cortex plays a key role in organizing the ripple-mediated information transfer during non-rapid eye movement (NREM) sleep. We reveal a temporally precise form of coupling between prefrontal slow-wave and spindle oscillations, which actively dictates the hippocampal-neocortical dialogue and information transfer. Our results suggest a model of the human sleeping brain in which rapid bidirectional interactions, triggered by the prefrontal cortex, mediate hippocampal activation to optimally time subsequent information transfer to the neocortex during NREM sleep.
Systems-level memory consolidation theory posits that initial memory encoding is supported by the hippocampus, but that, over time, memory representations become increasingly dependent upon the neocortex1. A prevailing view is that such information transfer occurs, in part, during NREM sleep1. Specifically, the hippocampus generates sharp-wave ripples (~80–120 Hz in humans), which initially facilitate the reactivation of recently learned information2,3. Ripples do not occur in isolation, but appear tightly coupled to both slow oscillations (SO; <1.25 Hz) and sleep spindles (12–16 Hz)4,5,6,7, which are thought to mediate synaptic plasticity1,8,9. In particular, cortical SOs govern the precise temporal coordination of sleep spindles and predict overnight hippocampus-dependent memory formation4,10.
Currently, the majority of evidence supporting this theory stems from invasive recordings in rodents, given the difficulty of imaging the human hippocampus at the high temporal resolution needed to detect ripples. Furthermore, it is unclear how findings obtained in rodents relate to humans, given the differences in anatomical organization, in particular in the prefrontal cortex (PFC)11,12.
One influential model suggests that hippocampal ripple activity is associated with reactivation of newly acquired information13, which strengthens the mnemonic representation, before it is transferred to the neocortex14. Ripples are thought to trigger subsequent cortical SOs and spindles, which promote neuroplasticity to facilitate long-term storage15,16. Contrary to this model, several recent reports indicated that cortical activity might actually precede hippocampal ripples17,18,19, calling into question which structure is the main driver of these interactions in support of memory consolidation.
While several lines of research converged on the notion that the coupling of SOs, spindles and ripples is important for memory consolidation4,5,10,16, it is unclear if coupled SO-spindles simply index preceding hippocampal processing, or instead, if they play a functional role in organizing hippocampal activity1,9. The latter finding would indicate cortical control of hippocampal reactivations, a control that may advantageously ensure that the neocortex receives information at an optimal time point, when information transfer20,21 and plasticity dynamics22,23 are maximal.
By directly recording from the hippocampus and PFC, we demonstrate that prefrontal SO-spindle coupling initiates a bidirectional processing cascade. The quality of the SO-spindle coupling predicts hippocampal dynamics and shapes the hippocampal-neocortical information transfer in the human brain.
We combined intracranial recordings from the human medial temporal lobe (MTL) and prefrontal cortex (PFC) in epilepsy patients with scalp EEG recordings to test if coupled SO-spindle activity mediates hippocampal-neocortical information transfer during a full night of sleep (Fig. 1a, see also Supplementary Table 1). In 18 subjects, we simultaneously obtained frontal scalp EEG and MTL intracranial EEG, while a subset of 15 patients also had intracranial coverage of three PFC subregions (dlPFC, mPFC, and OFC). The majority of MTL electrode contacts were placed in the hippocampus (CA1, CA3/DG, and Subiculum), but we also assessed adjacent contacts in entorhinal cortex (ERC), perirhinal cortex (PRC), and parahippocampal gyrus (PHG) given that ripples have been observed in both the ERC24 as well as the PHG6. We collectively refer to this group as 'MTL' electrodes throughout the manuscript, unless we found effects that were specific to the hippocampus proper. Furthermore, we utilized bipolar referencing to minimize effects of volume spread on our connectivity analyses; hence, most contacts contained hippocampal activity after re-referencing (~70%; Supplementary Table 2).
Sleep architecture and coupled NREM oscillations. a Top: hypnogram of a single subject (MT = movement time). Bottom: corresponding multi-taper spectrogram with superimposed number of detected SO and sleep spindle events (white solid lines; 5 min averages). b Top: normalized circular histogram of detected spindle events relative to the SO phase. Note the peak around 0° (circular mean = 15.27°, i.e. after the SO peak). Bottom: peak-locked sleep spindle average across all detected events in NREM sleep (black). Low-pass filtered events (red) highlight that the sleep spindles preferentially peaked during the SO 'up-state'. c Peak-locked spindle grand average (black; N = 18; scalp EEG) superimposed with the SO low-pass filtered signal (red). Inset: spindles were significantly nested in the SO peak (V-test: p = 0.0050; red dots depict individual subjects). d Directional SO-spindle coupling as measured by the phase-slope index (PSI; mean ± SEM). SO phases significantly predicted spindle amplitudes (one-tailed t-test: p = 0.0436). Red dots depict individual subjects
Analysis strategy
Interictal epileptic discharges were excluded from all analyses (Supplementary Fig. 1)7,25. We focused on the most anterior available scalp electrode (typically Fz; subsequently referred to as 'EEG'). This approach was in line with previous reports, which also utilized scalp electrodes as a surrogate of prefrontal activity6,7. We then performed similar analyses using only the intracranial PFC contacts (subsequently referred to as 'PFC') to investigate e.g. the spatial extent of network interactions and information transfer or directional cross-frequency coupling (CFC), which cannot be obtained from a single scalp electrode. Where applicable, we show both the EEG-MTL as well as PFC-MTL interactions to indicate that frontal EEG sensors indeed reflect a valid surrogate of the underlying population activity as measured by intracranial EEG.
To effectively combine scalp and intracranial EEG, we adopted the following analysis strategy: First, we replicated and extended our recent observation that prefrontal SOs shape spindle activity (Fig. 1b–d)10. Note that all SOs and spindle events were detected at the prefrontal EEG electrode. Second, we established the relationship between the activity at scalp EEG level and all available MTL electrodes (Fig. 2). We utilized two complementary approaches: State-dependent (based on continuous 30 s segments as defined by the hypnogram) as well as event-related (based on automatic event detection) analyses. Third, we dissected the precise relationship between cortical spindle and MTL high-frequency band (HFB; 70–150 Hz) activity (Fig. 3), which captures both non-oscillatory broadband as well as oscillatory ripple events. Subsequently, we investigated whether the coupled HFB signatures reflected ripple oscillations (Fig. 4) and how those were precisely related to distinct cortical events, such as SOs and spindles. Having established that these three cardinal NREM oscillations dynamically interact during sleep, we then investigated the overall network connectivity as a function of brain state (Fig. 5) as well as in an event-related approach (Fig. 6) to further investigate the precise role of SO-spindle coupling on inter-areal communication. Finally, we quantified interregional information transfer relative to the ripple events (Fig. 7) that have been implicated in information reactivation, replay, transfer, and consolidation2,3,15.
Intracranial electrodes and hierarchical nesting of sleep oscillations. a Schematic depiction in MNI space of the intracranial electrode placement in the medial temporal lobe (MTL; in black) and three PFC regions-of-interest (ROI) across the subset of subjects (N = 15) with simultaneous coverage of all three prefrontal ROIs (dlPFC: n = 167; OFC: n = 62; mPFC: n = 116). b Schematic depiction of MTL contacts (n = 184 electrodes in 18 subjects) on a hippocampal mesh in MNI space. Note that electrode size has been rescaled for display purposes and does not reflect the original electrode size. Precise electrode localizations were determined in native space prior to warping to MNI space. c Single subject time-frequency spectrogram from one MTL channel relative to a rescaled trough-locked cortical average SO (black). Note the circumscribed increase in high-frequency-band activity (~80–100 Hz) just prior to the SO peak. d Single subject time-frequency spectrogram from one MTL channel relative to a rescaled peak-locked cortical spindle average (black). High-frequency band activity exhibited several bursts during spindle events. e Single subject, single electrode example of simultaneous nesting of HFB power in the cortical SO peak (x-axis) and the cortical spindle trough (y-axis). f Cross-frequency coupling analysis indicated increased SO-Spindle, SO-HFB, and Spindle-HFB coupling during NREM sleep (gray dots depict individual subjects; mean ± SEM). g Full comodulogram (wake vs. NREM) depicting the effect size of the difference (Cohen's d)
Precise cortical SO-spindle coupling phase predicts MTL HFB activity. a Single subject average of all coupled spindles that were nested in the SO peak (in red) and trough (in blue). b All uncoupled spindles, i.e. that did not coincide with a separate SO detection. c Cortical spindle to MTL HFB phase amplitude coupling for coupled and uncoupled spindles as well as event-free random NREM intervals (normalized by the individual mean to reduce inter-subject variability). d Single subject HFB activity in the MTL as a function of the precise cortical SO-Spindle phase at the time of the spindle peak. Bars depict the average across electrodes; gray lines depict single electrodes. The red arrow indicates the SO-spindle coupling phase where the highest HFB amplitude was observed (~0°). e Different single subject example. Same conventions as in panel d. Note that HFB peaked after the SO peak (>0°). f Coupling strength: in every subject, we assessed the coupling strength by comparing the observed circular-linear correlation between SO-spindle coupling phase and HFB amplitude to a surrogate distribution. The correlation coefficient was then z-scored relative to a phase-shuffled distribution of correlation coefficients and considered significant at z = 1.96 in individual subjects (corresponding to a two-tailed p < 0.05; red solid line). The gray bar depicts the mean across subjects; black dots depict individual subjects. We observed significant HFB modulations in 18/18 subjects (mean z = 3.40, pgroup = 0.0007). g Preferred coupling phase: circular histogram of the best phase (panels d and e) where HFB peaked in individual subjects. At the group level, HFB amplitude was highest during the spindle when the spindle was nested in the SO peak (mvl = 0.44). h Preferred coupling phase across time (circular mean ± circular SEM). We repeated the analysis depicted in panel f for all time points (±1.25 s around the spindle peak). Shaded bars depict cluster-corrected significant Rayleigh tests (cluster p < 0.001) for circular non-uniformity (test for phase consistency at the group level), indicating that a preferred phase across subjects emerged around the spindle peak (cluster 1: −0.3 to 0.04 s; cluster 2: 0.12 to 0.60 s); hence, the effect was transient and not sustained. i Rayleigh Z and significant clusters as described in panel h. j Directional CFC analysis (gray dots depict individual subjects; mean ± SEM) indicating that the PFC spindle phase was predictive of hippocampal HFB, but not vice versa (rank sum test: p = 0.0005). All error bars indicate the mean ± SEM
Physiological and artifactual ripple activity in the MTL. a Ripple-locked grand average (blue; N = 18) and the band-pass filtered signal (black; mean ± SEM). b Artifactual ripple events grand average (red) that are characterized by sharp transients that are artificially rendered sinusoidal by band-pass filtering (black). c 1/f-corrected power spectrum (mean ± SEM; irregular resampling) in the MTL during a ripple event exhibits two distinct oscillatory peaks: in the delta and spindle frequency range. Gray shaded areas indicate significant oscillatory activity (cluster test: p = 0.001). d Histogram of ripple occurrence relative to the cortical SO as detected at scalp level. Note ripple occurrence is enhanced during the cortical upstate (i.e., the SO peak, first cluster: p = 0.0010, d = 2.82; second cluster: p = 0.0040, d = 1.42), but significantly more ripples occur prior to the downstate (in gray; 400 ms window around the SO peaks, paired t-test: t17 = 5.01, p < 0.0001, d = 1.88). e This relationship was not present for artifactual ripple events, which occurred at random (smallest cluster p = 0.450; no peak differences: t17 = −0.63, p = 0.5381, d = 0.27). f Physiologic ripples were preferentially nested in the trough of the cortical spindle oscillation (V-test against ± pi: v = 5.13, p = 0.0434, mvl = 0.29). g This tight temporal relationship was again not present for artifactual ripple events (v = 1.46, p = 0.3135, mvl = 0.08). h Single subject ripple-locked EEG spectrogram indicates the presence of cortical spindle oscillations during a hippocampal ripple. Note that activity in the spindle range can be observed before and after the ripple (every ~3 s; asterisks indicate elevated spindle power). i Left: Spectral analysis of the spindle amplitude (green), its fractal (1/f estimate; black dashed line) as obtained from irregular resampling as well as the wake condition as a control (absence of spindle activity; cluster test: p = 0.0490, d = 0.49). Right: after subtraction of the fractal component from the NREM spectrum, we observed a distinct oscillatory peak indicating a rhythmic comodulation at ~0.37 ± 0.03 Hz (median ± SEM; sign test against zero: p < 0.0001; d = 0.70). All error bars indicate the mean ± SEM
PFC-MTL connectivity and directionality. a Phase locking analyses revealed increased spindle phase coupling during NREM sleep (cluster tests: p = 0.0120, d = 1.37), but no elevated SO synchrony (p = 0.3087, d = 0.69). During wakefulness, we observed enhanced alpha-band coupling (p = 0.005, d = 0.83). b Power correlations (mean ± SEM) between scalp EEG and hippocampus revealed differences in amplitude couplings during NREM sleep (SO cluster: p = 0.0080, d = 1.07; spindle cluster: p = 0.0070, d = 1.20). c State-dependent directionality analysis indicates significant EEG to MTL influences (phase-slope index; mean ± SEM) in the spindle-band during NREM sleep (cluster test: p = 0.0410, d = 0.84). Inset: given the inter-subject variability in the precise spindle peak frequency, we also compared the peak-aligned (14.9 Hz ± 2.7 Hz; mean ± SD) PSI estimates to a surrogate distribution (gray), which confirmed the original observation (t-test: t17 = 2.16, p = 0.0452, d = 0.69). d Directional spindle coupling for every PFC ROI (lower panel; RM-ANOVA: F1.57,21.92 = 9.33, p = 0.002, η2 = 0.25) indicating a stronger modulatory influence from both the dlPFC and the mPFC but not the OFC. All error bars indicate the mean ± SEM
Precise SO-spindle coupling phase modulates MTL-PFC connectivity. a Single subject example of EEG-MTL connectivity calculated separately for coupled spindles (black), uncoupled/isolated spindles (purple), and random event-free intervals (gray). Event numbers were equated by a bootstrapping procedure. Note the distinct peak in the spindle band. b We re-aligned power spectra to the individual spindle peak (14.5 ± 2.6 Hz; mean ± SD) and mean-normalized the aligned spectra to minimize inter-subject variability. c We then extracted the normalized EEG-MTL iPLV peak values for all events. A repeated measures ANOVA revealed a significant main effect (F1.49,25.27 = 7.93, p = 0.0043, η2 = 0.21; p-values from post-hoc pair-wise comparisons are depicted). We found significantly more inter-areal connectivity for coupled than for uncoupled spindles or random event-free intervals. d Directionality analysis during spindle events revealed a significant influence from prefrontal EEG spindles to the MTL (dark gray) as compared to a surrogate distribution (light gray). e Two single subject examples of event-locked spindle connectivity as a function of the precise coupling phase (mean ± SEM). Upper: the spindle band exhibits a strong peak in the spectrum and also peaks in the standard deviation spectrum (inset in red), thus, indicating that the spindle band exhibits the largest variance. f Spindle-band connectivity binned as a function of the precise coupling phase revealed strong non-uniform distributions. We highlighted the worst (lowest synchrony; red) and best (highest synchrony; blue) phase. g Anti-phasic relationship between best and worst connectivity phase (EEG-MTL: V-test against ±π: v = 7.33, p = 0.0073, mvl = 0.49). h Realigned (to the best phase) grand average spindle (12–16 Hz) iPLV histogram reveals a non-uniform relationship (mean ± SEM) between SO-Spindle coupling and EEG-MTL phase locking. Note we excluded the central aligned phase bin from subsequent testing (RM-ANOVA: F5.23,88.83 = 3.13, p = 0.011, η2 = 0.15). All error bars indicate the mean ± SEM
Ripple-mediated information transfer. a Illustration of the information analysis strategy. A single EEG trace featuring SOs and spindles (in blue) and the corresponding MTL trace (in red) are depicted relative to a ripple event (see close-up; in black). Undirectional information was calculated by a moving time-window approach (center left), while directional information was calculated by keeping one window centered on the MTL ripple time point (center right; here fixed MTL window to moving EEG window; EEG to MTL information was calculated accordingly). b Left: time-resolved Mutual Information (MI) relative to the hippocampal ripple event (mean ± SEM). We detected enhanced MI between the hippocampus and prefrontal cortex ~1 s after the ripple event (1.3–2.0 s; cluster test: p = 0.0040, d = 1.06), but also observed two non-significant clusters (positive cluster from 0.25 to 0.35 s, p = 0.0849; negative cluster from −0.15 to −0.10 s, p = 0.1349). Right: spectrally resolved undirected mutual information. Significant (two-tailed p < 0.05) deviations from the average MI. Prior to ripple onset, mutual information was enhanced in the spindle and SO/delta range, which after 1 s dropped. c Left: time-resolved MI relative to the ripple event. We disentangled the information flow from prefrontal EEG to the MTL (purple) and vice versa (cyan). A significant cluster was observed after the ripple (cluster test: negative cluster from 0.15 to 0.50 s; p = 0.0130, d = 1.31), indicating increased information flow from the prefrontal EEG to the MTL. We found a second significant cluster where MI increased after ~1 s, which mainly reflected MTL to prefrontal EEG information flow (positive cluster from 1.1 to 1.85 s; p = 0.001, d = 1.08). Center/right: frequency- and directionality-resolved information flow. Outlined areas (black) reflect significant clusters (p < 0.05), where information flow was enhanced relative to the mean. Center: frequency-specific information flow from the MTL to neocortex was primarily increased in the SO (<2 Hz) and spindle-bands (~16 Hz). Right: information flow from the prefrontal EEG to MTL was not frequency-specific. d Upper panel: phase transfer entropy between the prefrontal EEG and the MTL (>1) is stronger than in the opposite direction (<1), in particular in the spindle-band (from −1–1.4 s; cluster test: p < 0.0010, d = 2.02). Lower left: excerpt during the ripple (t = 0) of the normalized PTE spectrum highlights the peak at ~16 Hz. Lower right: single subject observations in the spindle-band during the ripple. e Normalized spindle occurrence relative to the information peak. Spindle occurrence was significantly enhanced prior to the information peak (−2 to −0.5 s; peak time −0.91 s; cluster test: p = 0.001, d = 3.37). f Left: spindle-locked time-frequency representation of a single subject over 30 s highlights the slow (~0.4 Hz) pattern in spindle power. Superimposed is the rescaled, time-resolved undirected MI between the MTL and scalp EEG (black). Right: average spindle power (red; demeaned) and undirected MI (black; z-scored) indicating an anti-phasic relationship (gray arrows indicate MI peaks that coincide with spindle troughs). Note that the y-axis is truncated at around 0 to highlight the side lobes. Inset: group-level results revealing a statistically significant anti-phasic relationship between spindle power and MI between the MTL and frontal EEG sensors. g MI was enhanced only after a physiologic ripple as compared to an artifactual ripple event. 
Given that artifactual ripples were noisy and hence, variance was increased, we averaged in the time domain (cluster in panel a) prior to statistical testing. h MI was enhanced only after a physiologic ripple as compared to an artifactual ripple event (paired t-test: t17 = 2.82, p = 0.0117, d = 0.70). Note that artifactual ripples were also not significantly different from the baseline (paired t-test: t17 = −0.04, p = 0.9699, d = 0.01), thus, indicating that no information was transferred. The asterisk indicates that physiologic ripples were different from baseline as tested in panel a. i Topographical depiction of significant MI increases. All electrodes that showed a significant MI increase (z > 1.96, uncorrected p < 0.05) are color-coded according to the z-score, while electrodes without significant modulation are depicted in white. Note, the increase in MI was widespread and not restricted to a circumscribed cortical region. j MI increases per prefrontal sub-region (RM-ANOVA: F1.64, 23.01 = 9.13, p = 0.0020, η2 = 0.05). The strongest increase was observed in the dlPFC. All error bars indicate the mean ± SEM
Cortical SO-spindle coupling shapes MTL HFB activity
We first detected SO and spindle oscillations in scalp EEG (Fig. 1) using established algorithms7,10. We utilized multi-taper spectral analyses10,26 to visualize spectral dynamics throughout the night (Fig. 1a). Event detection closely tracked spectral sleep signatures over a whole night of sleep. For every participant, we then determined the precise SO phase during the spindle peaks. We found significant (p < 0.05; Rayleigh test) non-uniform distributions in 14/18 subjects (Fig. 1b; p = 0.0154; one-tailed Binomial test; mean Rayleigh Z = 25.56 ± 6.32; resultant vector length: 0.17 ± 0.02, mean ± SEM). Spindles were preferentially nested in the SO peak (Fig. 1c; V-test against 0°: v = 7.73, p = 0.0050, mvl = 0.43; coupling phase: 1.94° ± 14.42°, circular mean ± SEM). We further quantified the directional influence using the phase-slope index27,28 (PSI) of SOs on spindles as reported previously (Fig. 1d)10. We again found that SO predicted spindle activity as indicated by a positive PSI (0.0128 ± 0.0071; mean ± SEM; one-tailed t-test against 0: t17 = 1.81, p = 0.0436, d = 0.61), which correlated negatively with age (Supplementary Fig. 2)10.
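For readers who wish to reproduce the basic logic of this event-locked coupling analysis, a minimal Python sketch is given below: band-pass filter the scalp EEG in the SO range, read out the Hilbert phase at each detected spindle peak, and test the resulting phase distribution with a Rayleigh test. The band edges, filter order, and the assumption that spindle peak samples are already available are illustrative; the analyses reported here used the FieldTrip and CircStat implementations cited in the Methods, not this code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase Butterworth band-pass filter (second-order sections for stability)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def so_phase_at_spindle_peaks(eeg, spindle_peak_samples, fs, so_band=(0.16, 1.25)):
    """Instantaneous SO phase (Hilbert transform) read out at each spindle peak sample."""
    so_phase = np.angle(hilbert(bandpass(eeg, so_band[0], so_band[1], fs)))
    return so_phase[np.asarray(spindle_peak_samples)]

def rayleigh_test(phases):
    """Rayleigh test for non-uniformity of circular data (returns Z and an approximate p)."""
    n = len(phases)
    r = np.abs(np.mean(np.exp(1j * phases)))            # mean resultant vector length
    z = n * r ** 2
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))   # first-order approximation
    return z, p
```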
Next, we assessed how MTL activity (for electrode placement see Fig. 2a, b and Supplementary Table 2) was modulated as a function of SO and spindle activity. We found significant power modulations in the high-frequency band (HFB; 70–150 Hz), which captures both ripple oscillations as well as multi-unit spiking activity29,30, in the MTL relative to cortical SOs (Fig. 2c) and spindles (Fig. 2d). In particular, we found evidence in line with previous reports6,7 that maximal HFB activity was nested in the cortical SO peak and cortical spindle trough (Fig. 2e, see also below). Next, we calculated state-dependent cross-frequency coupling estimates using the modulation index7,31 between the three cardinal NREM frequency bands (Fig. 2f). We found significantly stronger SO-spindle (paired t-tests: t17 = −4.55, p = 0.0003), SO-HFB (t17 = −4.81, p = 0.0002) and Spindle-HFB coupling (t17 = −2.32, p = 0.0328) during NREM sleep. This finding was further corroborated by a full comodulogram, thus, replicating and confirming previous reports using different spectral estimates (Fig. 2g)7.
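The modulation index underlying these cross-frequency coupling estimates can be sketched as a Tort-style, Kullback-Leibler-based measure of how unevenly the amplitude of the faster rhythm is distributed over the phase of the slower one. The bin count and normalization below are illustrative and may differ from the specific implementation cited above.

```python
import numpy as np

def modulation_index(phase, amplitude, n_bins=18):
    """Tort-style modulation index: KL divergence of the phase-binned mean amplitude
    distribution from a uniform distribution, normalized to [0, 1].
    Assumes every phase bin contains at least one sample."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amplitude[(phase >= lo) & (phase < hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    p = mean_amp / mean_amp.sum()
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
```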
To determine how these three signatures dynamically interact within the human brain, we examined how hippocampal HFB activity is modulated by the precise cortical SO-spindle coupling phase. We first identified coupled (Fig. 3a) and uncoupled (Fig. 3b) spindles. Spindles were classified as coupled when a separate SO was detected in the same interval (±2.5 s; reflecting ± 2 SO cycles). We also extracted the precise SO coupling phase for every spindle (see Fig. 3a for spindles at 0° and ±180°). Next, we compared the inter-areal (EEG-MTL) spindle-HFB coupling between coupled, uncoupled and random, event-free NREM intervals (Fig. 3c). We found that inter-areal spindle-HFB coupling differed significantly (repeated measures ANOVA: F1.30,22.15 = 6.38, p = 0.013, η2 = 0.27). Post-hoc t-test indicated that coupled spindles exhibited more inter-areal CFC than uncoupled spindles (t17 = 3.82, p = 0.0013, d = 1.04) or random intervals (t17 = 2.91, p = 0.0097, d = 1.30). Uncoupled spindles and event-free random intervals did not differ significantly (t17 = 1.37, p = 0.1885, d = 0.59). Note, this result was independent of the chosen time window for co-occurrence detection (9 linearly spaced bins between 0.5 and 2.5 s; 2 × 2 RM-ANOVA with factors spindle coupling and time bin: Coupled spindles exhibited more inter-areal CFC: F1,17 = 15.42, p = 0.001, η2 = 0.39. No significant impact of factor time bin: F2.40,40.75 = 2.88, p = 0.059, η2 = 0.01; no significant interaction: F2.90,49.31 = 1.03, p = 0.384, η2 = 0.01). Furthermore, coupling estimates can be confounded by differences in oscillatory power as well as event numbers. Here we observed no differences in power between coupled and uncoupled spindles (t-test: t17 = 1.08, p = 0.2969) and stratification of trial numbers (50 repetitions) confirmed that inter-areal CFC of coupled spindles was significantly enhanced as compared to uncoupled spindles (t-test: t17 = 4.09, p = 0.0008).
Next, we sought to determine whether not only the co-occurrence of SOs and spindles modulates HFB power, but also whether their precise temporal relationship fine-tunes HFB activity in the MTL. Therefore, we calculated the average HFB power in the MTL as a function of the precise SO-spindle coupling phase in 24 bins. Figure 3d, e highlights two single subject examples. Coupling strength was quantified in every subject as the circular-linear correlation coefficient between the coupling phase and the HFB amplitude, which was z-scored relative to a phase-shuffled surrogate distribution (1000 repetitions). We found significant coupling in 18/18 subjects (Fig. 3f; z > 1.96 corresponds to a two-tailed p < 0.05; mean z = 3.40, which corresponds to a group p = 0.0007; see also Supplementary Fig. 3). Comparable results were obtained when we utilized a RM-ANOVA with factor bins in all subjects (significant at p < 0.05 in 15/18 subjects; Binomial test: p = 0.0075; η2 = 0.38 ± 0.06, mean ± SEM). Note that the precise phase bin where HFB amplitude peaked varied across subjects (red arrows in Fig. 3d, e). However, at the group level we found that HFB amplitude was significantly more strongly coupled when the spindle peaked during the SO up-state (Fig. 3g; −1.00° ± 14.28°, circular mean ± SEM; Rayleigh test: p = 0.0281, Rayleigh Z = 3.49, mvl = 0.44).
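The coupling-strength statistic described here, a circular-linear correlation between coupling phase and HFB amplitude that is z-scored against a phase-shuffled surrogate distribution, could be approximated as follows. The permutation count and random seed are arbitrary illustrative choices, and this sketch is not the CircStat routine used for the reported results.

```python
import numpy as np

def circ_lin_corr(phase, x):
    """Circular-linear correlation coefficient (Mardia)."""
    rcx = np.corrcoef(np.cos(phase), x)[0, 1]
    rsx = np.corrcoef(np.sin(phase), x)[0, 1]
    rcs = np.corrcoef(np.sin(phase), np.cos(phase))[0, 1]
    return np.sqrt((rcx ** 2 + rsx ** 2 - 2 * rcx * rsx * rcs) / (1 - rcs ** 2))

def coupling_z(phase, amplitude, n_perm=1000, seed=0):
    """Observed coupling z-scored against a phase-shuffled surrogate distribution."""
    rng = np.random.default_rng(seed)
    observed = circ_lin_corr(phase, amplitude)
    null = np.array([circ_lin_corr(rng.permutation(phase), amplitude)
                     for _ in range(n_perm)])
    return (observed - null.mean()) / null.std()
```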
Next, we investigated whether this triple coupling emerged in a temporally precise manner. Therefore, we repeated the analysis for all time points around the spindle peak (±1.25 s) and extracted the preferred phase for every subject at every time point (Fig. 3h). Using cluster-corrected Rayleigh tests, we found that a preferred coupling phase at the group level emerged around the spindle peak (1st cluster: −0.30 to 0.04 s, p < 0.001, mvl = 0.46; 2nd cluster: 0.12–0.60 s, p < 0.001, mvl = 0.51), indicating a temporally selective triple cortical-MTL coupling during the SO-spindle complex. Finally, we tested the directionality of the spindle-HFB coupling. Given that HFB cannot be reliably extracted from scalp EEG, we restricted this analysis to the 15 subjects with simultaneous intracranial PFC and MTL coverage. We calculated the inter-areal CFC between spindle phase and HFB amplitude for all PFC-to-MTL and MTL-to-PFC pairs (Fig. 3j) and observed significantly stronger directional coupling from PFC to MTL than vice versa (Wilcoxon rank sum test, given the unequal variance: z = 3.48, p = 0.0005, d = 1.50).
Taken together, our analyses demonstrated that MTL HFB activity became selectively coupled to spindles, when spindles were precisely nested in the SO peak. We conclude that the quality of cortical SO-spindle coupling predicts and coordinates MTL HFB dynamics.
MTL HFB activity reflects coupled ripple oscillations
We have focused on HFB activity, which reflects both aperiodic broadband activity as well as oscillatory ripple events. Both signatures exhibit similar spectral characteristics. Therefore, we identified distinct ripple events during SO-spindle coupling in the time domain (Fig. 4a and Supplementary Fig. 4a–c)7,24. We then excluded sharp artifactual and epileptic transients, which can be mistaken for ripples in the frequency domain (Fig. 4b). For every subject, we determined the channel with the highest number of physiologic ripples (Supplementary Fig. 4d). Subsequently, we centered our ripple-locked analyses on this channel to minimize the impact of artifacts and the bipolar referencing (phase reversals as seen in Supplementary Fig. 4c) in line with previous reports7.
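A minimal sketch of such time-domain ripple detection is given below, assuming a single bipolar MTL trace. The band limits, envelope smoothing, amplitude threshold, and duration bounds are illustrative rather than the criteria used in this study, and the separate rejection of artifactual transients described above is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def detect_ripples(lfp, fs, band=(80, 120), thresh_sd=2.5,
                   min_dur=0.025, max_dur=0.20, smooth_s=0.02):
    """Return (start, stop) sample pairs of candidate ripple events."""
    sos = butter(3, list(band), btype="band", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, lfp)))          # ripple-band envelope
    k = max(int(smooth_s * fs), 1)
    env = np.convolve(env, np.ones(k) / k, mode="same")   # ~20 ms smoothing
    above = env > env.mean() + thresh_sd * env.std()
    # contiguous supra-threshold segments (padding handles edge cases)
    padded = np.r_[False, above, False].astype(int)
    starts = np.where(np.diff(padded) == 1)[0]
    stops = np.where(np.diff(padded) == -1)[0]
    return [(s, e) for s, e in zip(starts, stops)
            if min_dur <= (e - s) / fs <= max_dur]
```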
In order to investigate if there were any other prominent rhythms during a ripple event in the MTL, we transformed ripple epochs (±2.5 s) into the frequency domain. After discounting the 1/f contribution by irregular resampling10,32, we found two distinct oscillatory peaks in the delta (~3 Hz, reflecting the hippocampal sharp wave; cluster-based permutation test: p = 0.001, d = 1.69) and spindle range (~14.5 Hz, p = 0.001, d = 0.91; Fig. 4c).
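Irregular resampling itself is somewhat involved; as a much simpler stand-in that conveys the same idea, separating a smooth aperiodic component from narrowband oscillatory peaks, one can fit a line to the log-log power spectrum and inspect the residual. The sketch below is such a simplification and not the IRASA procedure used for the results above; the fit range and Welch segment length are illustrative.

```python
import numpy as np
from scipy.signal import welch

def oscillatory_residual(x, fs, fit_range=(0.5, 45.0)):
    """Crude aperiodic (1/f) fit in log-log space; returns frequencies and the residual
    power, in which oscillatory peaks (e.g., delta, spindle) stand out."""
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    keep = (f >= fit_range[0]) & (f <= fit_range[1])
    slope, intercept = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    fractal = 10 ** (intercept + slope * np.log10(f[keep]))
    return f[keep], pxx[keep] - fractal
```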
Next, we assessed the relationship of ripples to cortical SOs and spindles. We calculated the ripple occurrence relative to cortical SOs. We found that hippocampal ripple occurrence is enhanced during the SO peak (Fig. 4d; cluster-based permutation test: 1st cluster: p = 0.0010, d = 2.82; 2nd cluster: p = 0.0040, d = 1.42), particularly evident prior to the down-state (i.e. SO trough; 400 ms window around the SO peaks, paired t-test: t17 = 5.01, p < 0.0001, d = 1.88). No such relationship was observed for artifactual ripple events (Fig. 4e; smallest cluster p = 0.450; no peak differences: t17 = −0.63, p = 0.5381, d = 0.27). This is in line with the pattern that was observed for the HFB (Fig. 2c–e). Likewise, we found that ripples were preferentially nested in the trough of the cortical spindle oscillation (Fig. 4f; V-test against ±pi: v = 5.13, p = 0.0434, mvl = 0.29; see also to Fig. 2d, e). This tight temporal relationship was again not present for artifactual ripple-like events (Fig. 4g; v = 1.46, p = 0.3135, mvl = 0.08).
To investigate cortical dynamics relative to the ripple on longer time-scales, we calculated ripple-locked EEG time-frequency representations (±15 s) and observed a striking pattern in the spindle band, where spindles rhythmically re-occurred every 3–6 s (Fig. 4h). In order to quantify this effect, we calculated the spectral content of the state-dependent spindle amplitude by means of an FFT and used wakefulness as a control condition (absence of spindle activity). We found significantly enhanced power in the <0.5 Hz range (cluster-based permutation test: p = 0.0490, d = 0.49). We also estimated the 1/f contribution by means of irregular resampling (Fig. 4i, right panel), which further corroborated our finding that spindle amplitudes exhibit a slow, rhythmic comodulation over time (peak frequency: 0.37 ± 0.03; median ± SEM). Taken together, we identified oscillatory ripple events in the MTL, which exhibit a tight temporal relationship to cortical SOs and spindles on multiple timescales.
Brain state dependent connectivity and directionality
We established that the three cardinal spectral signatures of NREM sleep form an oscillatory hierarchy. This concept further implies that band-limited neural oscillations establish and support communication channels across distant cortical regions33,34. To address this, we determined the frequency bands in which the cortex and the MTL were coupled during the NREM sleep. We first focused on brain state dependent connectivity, which was calculated in 30 s segments as defined by the hypnogram. Undirected connectivity was calculated using metrics that suppressed effects of volume spread in the cortical tissue. We assessed two different coupling modes: Phase synchrony as measured by the imaginary phase locking value (iPLV)35 and amplitude-envelope correlations as measured by orthogonalized power correlations36.
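The imaginary PLV discards the real (zero-lag) part of the mean phase-difference vector and thereby suppresses spurious coupling due to volume spread. A minimal implementation, assuming two already band-pass filtered signals of equal length, is sketched below; it is an illustration rather than the FieldTrip routine used for the reported connectivity estimates.

```python
import numpy as np
from scipy.signal import hilbert

def imaginary_plv(x_band, y_band):
    """Imaginary phase-locking value between two narrowband signals
    (insensitive to zero-lag, volume-conduction-like coupling)."""
    dphi = np.angle(hilbert(x_band)) - np.angle(hilbert(y_band))
    return np.abs(np.imag(np.mean(np.exp(1j * dphi))))
```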
We found that phase synchrony was only elevated in the spindle band (Fig. 5a; cluster-based permutation test: p = 0.0120, d = 1.37) but interestingly not in the SO range (p = 0.3087, d = 0.69). In addition, we found a negative cluster that indicated enhanced theta/alpha coupling during wakefulness as compared to NREM sleep (p = 0.005, d = 0.83). To further elucidate whether there is a long-range interaction in the SO range, which might not be phase-specific, we also computed power correlations between the EEG and MTL. This analysis revealed significant differences in both the SO (Fig. 5b; cluster-based permutation test: p = 0.0080, d = 1.07) as well as the spindle range (p = 0.0070, d = 1.20) as compared to wakefulness. However, from the observed pattern it is unclear whether this reflects a selective increase during NREM sleep or if the effect is driven by a relative increase of theta/alpha coupling during wakefulness. Notably, connectivity profiles as obtained from scalp EEG mimicked the pattern when intracranial PFC electrodes were used for connectivity analyses (Supplementary Fig. 5a, b). These state-dependent phase synchrony profiles exhibited a pronounced 1/f drop-off and the numerical differences were small.
An important contributing factor to this effect was that only a few spindle events occurred in a given 30 s epoch, hence, event-related synchrony effects might be underestimated when data is averaged in long epochs. Therefore, we investigated event-related connectivity profiles (see Fig. 6). An important observation that emerged from this analysis is the absence of pronounced SO phase synchrony between the EEG and MTL, indicating that SOs—in contrast to spindles—might play a negligible role for direct inter-areal information transfer. This was further corroborated by a directional connectivity analysis using the PSI (Fig. 5c). We found a spindle-band specific directional influence from the EEG to the MTL as compared to wakefulness (cluster test: p = 0.0410, d = 0.84) as well as when compared to a surrogate distribution (inset Fig. 5c). A PFC sub-region analysis (N = 15 subjects; RM-ANOVA: F1.57,21.92 = 9.33, p = 0.002, η2 = 0.25) indicated that the main drivers were the dlPFC (t14 = 2.36, p = 0.0166, d = 0.86; post-hoc one-tail t-tests against 0) and mPFC (t14 = 2.04, p = 0.0306, d = 0.74), but not the OFC (t14 = −2.56, p = 0.9887). No directional coupling effects were observed in the SO range (Fig. 5c). In summary, state-dependent directed and undirected connectivity profiles indicate that spindles play a pivotal role mediating directional influences from prefrontal to MTL regions.
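For reference, the phase-slope index used for these directionality analyses can be sketched from Welch cross- and auto-spectra; under the usual sign convention, a positive value indicates that the first signal leads the second within the chosen band. The segment length and band edges below are illustrative, and the published PSI implementation referenced in the Methods differs in detail (e.g., normalization).

```python
import numpy as np
from scipy.signal import csd, welch

def phase_slope_index(x, y, fs, fmin=12.0, fmax=16.0, nperseg=None):
    """Phase-slope index of x relative to y over [fmin, fmax] (un-normalized)."""
    nperseg = nperseg or int(2 * fs)
    f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, pyy = welch(y, fs=fs, nperseg=nperseg)
    coh = pxy / np.sqrt(pxx * pyy)                    # complex coherency
    idx = np.where((f >= fmin) & (f <= fmax))[0]
    return np.imag(np.sum(np.conj(coh[idx[:-1]]) * coh[idx[1:]]))
```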
Cortical SO-spindle coupling predicts PFC-MTL connectivity
These observations raise the question of what role SOs play in inter-areal communication. We tested the hypothesis that the precise cortical SO-spindle coupling phase predicts the magnitude of the inter-areal phase synchrony in support of information transfer. First, we calculated phase synchrony estimates for coupled and uncoupled spindles as well as random event-free epochs (Fig. 6a, b; analogous to the analyses described in Fig. 3c). On the group level, we observed significant differences in synchrony (Fig. 6b, c; RM-ANOVA: F1.49,25.27 = 7.94, p = 0.0043, η2 = 0.21). Post-hoc t-tests revealed that coupled spindles were more synchronous than uncoupled spindles (t17 = 3.73, p = 0.0017, d = 1.00) and event-free intervals (t17 = 2.92, p = 0.0095, d = 1.05). In contrast, uncoupled spindles and random intervals did not differ (t17 = 0.63, p = 0.5369, d = 0.18). We also tested the directionality of the spindle coupling using the PSI (Fig. 6d) as compared to a surrogate distribution given the absence of spindle activity during wakefulness. We found significantly enhanced PSI values, indicating a directed influence from the prefrontal EEG to MTL electrodes (cluster test: p = 0.0130, d = 0.77), thus, replicating the effect as observed for state-dependent analyses (Fig. 5c).
Having established that coupled spindles exhibit more inter-areal synchrony, we then assessed the impact of the precise coupling phase (analogous to the analyses in Fig. 3d–g). We observed clear peaks in the connectivity spectra that were centered on spindles (Fig. 6e; ±2.5 s). Notably, peaks were also evident in spectra depicting the standard deviation across 16 phase bins (insets Fig. 6e). We assessed EEG-MTL spindle synchrony as a function of the SO-spindle coupling phase in every subject (Fig. 6f). Non-uniformity across the 16 bins was assessed using RM-ANOVAs. We found significantly non-uniform distributions (p < 0.05, mean η2 = 0.32 ± 0.023; mean ± SEM) in 18/18 subjects (two-tailed Binomial test: p < 0.0001). The best coupling phase was variable across subjects, but peaked on average after the SO up-state (108.85° ± 16.34°; mean ± SEM; mvl = 0.26), while the worst bin was closer to the down-state (−174.58° ± 16.89°; mvl = 0.22). Within individuals, best and worst phases were preferentially separated by ~180° (Fig. 6g; V-test: v = 7.33, p = 0.0073, mvl = 0.49). Given that we observed this variability in the best connectivity phase, we re-aligned the binned spindle synchrony to the bin centered at ~12° and mean-normalized individual distributions to reduce inter-subject variability. This bin was excluded from subsequent testing. We assessed the non-uniformity across the remaining 15 bins using a RM-ANOVA. We found significant synchrony differences as a function of the precise SO-spindle coupling phase (Fig. 6h; F5.23,88.83 = 3.13, p = 0.0109, η2 = 0.15).
Collectively, these findings establish that the precise SO-spindle coupling phase predicts the connectivity between prefrontal and MTL sites and is in line with the notion that spindles selectively establish a communication pathway between the MTL and neocortex.
Ripple-mediated information transfer
In a final step, we investigated the interplay between spindles and ripple-mediated information transfer. Ripples during hippocampal replay have been proposed to reflect information package transfer2,3. To quantify information transfer, we calculated time-resolved mutual information (MI)37 between the EEG and MTL relative to MTL ripple events (for a schematic of the analysis strategy see Fig. 7a). In particular, we utilized undirectional (time window by time window analyses) as well as directional (ripple-centered fixed time window to all other time windows) MI analyses. We first calculated the undirectional MI between the EEG and MTL (Fig. 7b left panel and Supplementary Fig. 6a). We found that MI reliably increased ~1 s after a ripple (cluster test: p = 0.0040, d = 1.06), which is in line with the idea that information is being reactivated during a ripple for further processing. Albeit non-significant (non-significant positive cluster from 0.25 to 0.35 s, p = 0.0849; non-significant negative cluster from −0.15 to −0.10 s, p = 0.1349), we also observed an additional decrement just prior to the ripple, which could potentially reflect a cortical trigger event initiating the ripple (e.g. a spindle; Supplementary Fig. 6b).
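A simplified, binned version of such a sliding-window MI analysis is sketched below; the bin count, window length, and step size are illustrative assumptions, and the estimator used for the analyses reported here may differ.

```python
import numpy as np

def binned_mi(a, b, n_bins=8):
    """Mutual information (bits) between two continuous signals, equal-count binning."""
    qa = np.quantile(a, np.linspace(0, 1, n_bins + 1)[1:-1])
    qb = np.quantile(b, np.linspace(0, 1, n_bins + 1)[1:-1])
    joint = np.histogram2d(np.digitize(a, qa), np.digitize(b, qb), bins=n_bins)[0]
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

def time_resolved_mi(x, y, fs, win_s=0.2, step_s=0.05):
    """Sliding-window MI between two ripple-locked traces of equal length."""
    w, s = int(win_s * fs), int(step_s * fs)
    return np.array([binned_mi(x[i:i + w], y[i:i + w])
                     for i in range(0, len(x) - w + 1, s)])
```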
To investigate which frequency bands carried information, we next spectrally decomposed the signal. We z-normalized every frequency band separately to account for the 1/f signal drop-off and performed no baseline correction to assess information prior to the ripple. Spectrally resolved undirected MI estimates (Fig. 7b, right panel) revealed significant deviations from the mean in the spindle band (cluster test: p < 0.0010, d = 1.63) and SO range (p = 0.0060, d = 1.17), already evident prior to the ripple event. Note, these clusters also involved the delta (~3 Hz; possibly reflecting delta waves or sharp-wave activity7) and beta-activity (~32 Hz; possibly reflecting a spindle harmonic7). Spectral techniques are known to give rise to temporal smearing due to the inherent time-frequency trade-off34. However, spectral decomposition offers the advantage that the signal is not mainly dominated by strong low-frequency components, which are particularly pronounced when analyzing data in the time domain due to the 1/f signal drop-off.
In order to test if this increase reflected ripple-specific information, we calculated time-resolved MI between a fixed time-window, which was centered on the MTL ripple (±0.2 s) and all other time points (Fig. 7c, left panel). To resolve directionality, we performed this fixed window analysis twice, either centered on the MTL or the EEG. This analysis revealed that the increase that occurred ~1 s after the ripple mainly reflected information transfer from the MTL to the prefrontal EEG (Fig. 7c; cluster test: p = 0.001, d = 1.08).
Critically, we also found a prominent earlier cluster indicating increased shared information between the EEG (at the time of the ripple event) and the MTL (following a ripple; cluster test: p = 0.0130, d = 1.31), supporting rapid bidirectional interactions between the prefrontal EEG and MTL after a ripple. To further characterize this effect, we calculated time-, frequency- and directionality-resolved MI (Fig. 7c, center and right panel). We observed that shared information between the MTL ripple and the prefrontal EEG was already enhanced prior to the ripple (Fig. 7c, center panel), in particular in the low-frequency (−0.9 to −0.25 s; <2 Hz; cluster test: p = 0.0110, d = 0.87) and spindle-band (−0.85 to 0.85 s; ~16 Hz; p < 0.0010, d = 0.73). Another cluster emerged later (1.1 to 1.9 s, ~11 Hz; p = 0.0050, d = 0.59). This observation further supports the idea that cortical SOs and spindles are predictive of MTL ripple activity (c.f. Figs. 3 and 4). The control condition (Fig. 7c right panel; information between a fixed EEG window and all other MTL time points) did not reveal a frequency-specific effect, but was broadly distributed and covered all frequency bins (three clusters; all p < 0.0240; mean d = 1.62). Note that the moving window analysis is comparable to a non-linear lagged correlation analysis, but does not actually quantify frequency-specific information flow. Hence, we calculated transfer entropy as a directional information-theoretic metric of information flow (Fig. 7d).
We found that phase transfer entropy was stronger from the prefrontal EEG to the MTL than vice versa in the spindle-band (~16 Hz; from −1 to 1.4 s around the ripple; cluster test: p < 0.0010, d = 2.02). We also observed prominent lower frequency components (<3 Hz; Supplementary Fig. 6c), which resemble the low-frequency effects observed in Fig. 7b, c, but these did not exhibit a preferred directionality, mimicking the observed pattern in Fig. 5a, c.
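For illustration, phase transfer entropy can be sketched from binned instantaneous phases: it quantifies how much the past phase of one signal reduces uncertainty about the future phase of the other, beyond that signal's own past. The prediction delay and bin count below are arbitrary illustrative choices and do not reproduce the specific PTE implementation used here.

```python
import numpy as np

def phase_transfer_entropy(phase_x, phase_y, delay, n_bins=8):
    """Transfer entropy (bits) from x to y using binned instantaneous phases."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    dig = lambda p: np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    x, y, y_future = dig(phase_x[:-delay]), dig(phase_y[:-delay]), dig(phase_y[delay:])
    joint = np.zeros((n_bins, n_bins, n_bins))        # axes: (y_future, y, x)
    np.add.at(joint, (y_future, y, x), 1.0)
    joint /= joint.sum()
    p_y_x = joint.sum(axis=0)                         # p(y, x)
    p_yf_y = joint.sum(axis=2)                        # p(y_future, y)
    p_y = joint.sum(axis=(0, 2))                      # p(y)
    nz = joint > 0
    num = joint * p_y[None, :, None]
    den = p_yf_y[:, :, None] * p_y_x[None, :, :]
    return float(np.sum(joint[nz] * np.log2(num[nz] / den[nz])))
```

A directionality index analogous to the normalized PTE in Fig. 7d could then be formed, for example, as the ratio of the two directed estimates (PFC-to-MTL over MTL-to-PFC).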
Having established these interactions, we also determined the delay between cortical spindle activity and information transfer (Fig. 7e). In line with the spindle-ripple interactions (Fig. 4) and ripple-information dependency (Fig. 7b, c), this analysis suggested that spindles preferentially occurred 0.5–2 s prior to the information peak as extracted from single-event traces (cluster test: p = 0.001, d = 3.37). Hence, ripple-mediated information could be detected at prefrontal sites after spindle offset, i.e., in between two spindles when the neocortex is relatively desynchronized, a neurophysiological state that maximizes information-processing capacities1,9,21,37. To further highlight this relationship, we calculated spindle-locked time-frequency representations (Fig. 7f; analogous to Fig. 4h), which reveal the anti-phasic relationship between spindle power and MI. To quantify this effect, we calculated the average phase difference at ~0.4 Hz between spindle power and MI. At the group level (inset in Fig. 7f), we found strongly non-uniform distributions with a preferred phase difference of 159.6° ± 11.7° (circular mean ± SEM; Rayleigh test: Z = 7.02, p = 0.0005, mvl = 0.62). These findings were further corroborated by the observation that no reliable information transfer occurred after an artifactual ripple (Fig. 7g, h; see also Supplementary Fig. 6d).
Finally, we assessed the spatial extent of this information transfer in the subset of subjects with intracranial PFC electrodes (N = 15; Fig. 7i). We found widespread ripple-mediated information increases, which were most pronounced in the dlPFC (Fig. 7j), but also significant in all other ROIs (paired t-tests against 0; dlPFC: t14 = 5.35, p = 0.0001, d = 1.95; OFC: t14 = 4.12, p = 0.0010, d = 1.51; mPFC: t14 = 4.15, p = 0.0010, d = 1.52). Taken together, these findings are consistent with the idea that ripples mediate information transfer from the MTL to the neocortex2,15,38. The data described in Fig. 7i, j indicate widespread increments in shared information following an MTL ripple, not constrained to a single PFC sub-region. Importantly, we observed a clear temporal relationship between information peaks and spindle events (Fig. 7e, f), which might reflect distinct episodes of information reactivation (triggered by spindle synchrony) and information transfer in line with previous behavioral evidence20.
Our results demonstrate that precisely coupled cortical SOs and spindles modulate hippocampal dynamics in support of hippocampal-neocortical information transfer in the human brain. In particular, the quality of this coupling predicts inter-areal connectivity, triggers hippocampal ripple activity and shapes subsequent information transfer. Our findings describe two distinct cortical states during NREM sleep, which alternate every 3–6 s. First, we observed states of high synchrony where coupled SO-spindle complexes engage the MTL and trigger ripples (Figs. 3–6). Second, we also found states of low synchrony, where information flow from the MTL to the neocortex was maximized, but no prominent oscillatory activity was present (Fig. 7). This two-step organization might reflect an endogenous timing mechanism, which ensures that information arrives in the neocortex when processing capacities are optimized20,22. Taken together, our results suggest that synchronized sleep oscillations provide temporal reference frames for feed-forward and feedback communication and structure hippocampal-neocortical dynamics in space and time4.
Hippocampal-prefrontal pathways and their role in memory formation have been studied extensively over the last few decades4,7,15,16,39,40. However, the majority of evidence stems from recordings in rodents, which exhibit the same prominent network oscillations, such as SOs, spindles and ripples, despite a dramatically different anatomical structure, in particular in the PFC11,12. A major contributing factor is that imaging the human MTL with high spatiotemporal resolution is challenging when using non-invasive methods41. Here we took advantage of recordings from patients who underwent invasive monitoring for seizure localization and were implanted with electrodes in different key nodes of the MTL-PFC network. While previous human intracranial studies demonstrated the presence of hippocampal ripples24 and the nesting of sleep oscillations6,7, these studies did not address the directionality of these interactions nor did they quantify information transfer or assess the contributions of distinct PFC sub-regions.
While memory research has focused on the hippocampus-dependent encoding of novel information, how this information is transferred to long-term storage is unknown. The discovery of neuronal replay provided an elegant mechanistic explanation of how mnemonic representations are strengthened, but this concept does not directly explain how information is subsequently transferred in large-scale networks14. Furthermore, it remains unclear how the different cortical nodes ensure that the receiving area is in a favorable state to process the reactivated information40.
Systems memory consolidation suggested a two-step process: Mnemonic reactivation is associated with hippocampal ripple activity that is tightly coupled to subsequent cortical SOs and spindles, which then mediate neuroplasticity to facilitate long-term neocortical storage1,2,3,16. This model received substantial empirical support over the last two decades2,3,13,15,42,43, but has been challenged recently44,45,46. In line with several of our observations (e.g. early post-ripple PFC-to-MTL information flow; Fig. 7c), several recent studies reported early cortical contributions preceding hippocampal involvement, thus, reflecting a promising avenue for future studies investigating interactions between the hippocampus and neocortex in support of memory formation18,47,48,49.
In particular, it had been observed that SOs, spindles, and ripples are precisely coupled to one another and form an oscillatory hierarchy, where spindles are nested in SOs and ripples are nested in spindles5,7. Crucially, several groups reported that this triple coupling predicts behavior4,16. For example, ripple-triggered electrical stimulation increased subsequent prefrontal SO-spindle coupling and recall performance16. Hence, some theories assumed that the hippocampus is the driver of this interaction and subsequent SOs and spindles mainly index hippocampal processing that facilitated neocortical processing15,16. We observed several data points that contradict the classic systems memory consolidation theory: First, not all signatures of information transfer are frequency-specific (e.g. Fig. 7c). Second, we found evidence for early cortical engagement around the ripple events, which is not in line with classic systems consolidation theory. Given the recent observation of early cortical memory representations in humans48,49, we speculate that this could reflect a uniquely human network feature, requiring further explication using combined electrophysiology and behavioral testing tracking mnemonic representations13.
Notably, several other recent reports also implied an inverse directionality compared to classic theory1,38,40: SOs predict spindles, which in turn predict ripples. Several reports suggested that cortical activity might actually precede hippocampal reactivation17,18,19, none of which were carried out in humans. The present findings provide empirical evidence for this account in the human brain. Here we demonstrate that SOs shape spindles, which in turn trigger nested ripples. Crucially, the quality of cortical SO-spindle coupling directly predicts ripple-band activity as well as inter-areal synchrony. These findings provide clear evidence that the hippocampal-neocortical dialogue relies on rapid bidirectional communication, where the neocortex can mediate hippocampal information reactivation and subsequent transfer through neocortical-hippocampal-neocortical loops.
It had long been recognized that spindle activity predicts overnight memory retention, but the underlying mechanisms remained unclear8. In particular, spindle magnitudes or densities have commonly been used to assess their role for memory consolidation50. More recently, several reports indicated that not spindle activity per se, but spindle timing relative to the SO, determines the success of information reactivation and consolidation7,10,51,52,53,54. These findings were nicely paralleled by a two-photon calcium imaging study in rodents, which revealed that excitatory neural activity is amplified when spindles coincide with the SO peak, but not when they miss a narrow 'window-of-opportunity'22. This pattern mirrored our observation that connectivity and inter-areal coupling is increased for coupled as compared to uncoupled spindles. This observation reveals that coupled spindles do not reflect one entity, but connectivity and coupling depend on the precise SO coupling phase10,23.
Furthermore, it had recently been observed that spindles exhibit a second-order temporal structure and rhythmically reoccur every 3–6 s20. We observed a highly comparable pattern for human hippocampal ripples. This has previously been observed in rodents, albeit on a slower time-scale (~12 s)5. Several lines of research converged on the notion that spindles are associated with information reactivation, but that subsequent processing actually occurs ~1–2 s after a spindle20,55. In this state, the cortex is maximally desynchronized during NREM sleep and entropy and processing capacities are maximized21. These findings are consistent with recent evidence suggesting that cued memory reactivation during the spindle refractory period, i.e., at the peak information transfer, is detrimental for memory consolidation20. Our results provide empirical evidence and offer a mechanistic explanation for this behavioral observation. The present findings reveal that cortical spindles shape hippocampal ripples and subsequent information transfer from the MTL to the PFC, which peaks 1–2 s after the spindle, i.e., during spindle refractoriness9,20,55. We speculate that additional sensory input during the inter-spindle interval might interfere with MTL-neocortical information transfer and thereby explain the detrimental effects on memory by cue presentation during the spindle refractoriness20.
Seminal work by Steriade et al.56,57 indicated that spindles are mainly generated in the thalamus and in a cortico-thalamic loop. In the present study, we only recorded from locations that were actively explored for clinical purposes of seizure onset localization and did not involve the thalamus41. However, a recent rare human intracranial study that simultaneously recorded from the thalamus and the PFC, but not the MTL, reported that thalamic spindles are triggered by a neocortical down-state and are then back-projected to the neocortex where the spindle coincides with the next SO up-state58. It is plausible that neocortical SOs trigger thalamic spindles, and jointly provide a messenger mechanism to trigger hippocampal reactivation and transfer. Taken together, several lines of inquiry now indicate that frequency-specific bidirectional communication in large-scale networks during NREM sleep coordinates inter-areal information flow in support of long-term memory retention.
In summary, we established that bidirectional hippocampal-neocortical interactions support hippocampal information reactivation and transfer during NREM sleep. This dynamic process ensures that the neocortex receives mnemonic data at a temporally optimal physiological moment for information transfer and neuroplasticity1,2,8,10. Given clinical time constraints in this study, we did not obtain concomitant behavior. However, the current observed physiologic patterns are in accord with recent findings on how coupled SO-spindles support hippocampus-dependent memory consolidation10. These findings are of immediate clinical relevance, given that temporal dispersion of cortical sleep networks has been suggested to constitute a novel pathway of age- as well as disease-related cognitive decline10,52,59,60,61.
We obtained intracranial recordings from 18 pharmacoresistant epilepsy patients (35.61 ± 12.31 years; mean ± SD; 10 female) who underwent pre-surgical monitoring with implanted depth electrodes (Ad-Tech), which were placed stereotactically to localize the seizure onset zone. All patients were recruited from the University of California Irvine Medical Center, USA. Electrode placement was exclusively dictated by clinical considerations and all patients provided written informed consent to participate in the study. Patient selection was based solely on MRI-confirmed electrode placement in the hippocampus. The study was not pre-registered. All procedures were approved by the Institutional Review Board at the University of California, Irvine (protocol number: 2014–1522) as well as by the Committee for Protection of Human Subjects at the University of California, Berkeley (Protocol number: 2010-02-783) and conducted in accordance with the 6th Declaration of Helsinki.
Experimental design, data acquisition, and procedure
We recorded a full night of sleep for every participant. Recordings typically started around 8.00–10.00 pm and lasted for ~10–12 h (Supplementary Table 1). Only nights that were seizure-free were included in the analysis. Polysomnography was collected continuously.
Sleep monitoring and data acquisition
We recorded from all available intracranial electrodes. In order to facilitate sleep staging based on established criteria, we also recorded scalp EEG, which typically included recordings from electrodes Fz, Cz, C3, C4, and Oz according to the international 10–20 system. Electrooculogram (EOG) was recorded from four electrodes, which were placed around the right and left outer canthi. All electrophysiological data was acquired using a 256-channel Nihon Kohden recording system (model JE120A), analog filtered at 0.01 Hz and digitally sampled at 5000 Hz.
CT and MRI data acquisition
We obtained anonymized postoperative CT scans and pre-surgical MRI scans, which were routinely acquired during clinical care. MRI scans were typically 1 mm isotropic.
Data analysis was carried out in MATLAB 2015a (MathWorks Inc.), using custom code as well as functions from the EEGLAB toolbox62, the FieldTrip toolbox63, the CircStat toolbox64, Freesurfer and SPM12, as well as related toolboxes as described below65. In general, most analyses were carried out using custom code. In case we utilized specific equations, we provide these below. In addition, we used the following standard functions from toolboxes: filtering (EEGLAB: eegfilt.m; or FieldTrip: ft_preprocessing.m). Preprocessing, spectral analyses and connectivity analyses were mainly carried out using FieldTrip (ft_preprocessing.m, ft_freqanalysis.m and ft_connectivityanalysis.m). Circular statistics were carried out in CircStat (Rayleigh test: circ_rtest.m, V-test: circ_vtest, mean vector length: circ_r, circular mean: circ_mean.m, circular SD: circ_std.m). Anatomical reconstructions using Freesurfer and SPM12 are described in detail in tutorial format elsewhere66. Code for using the phase-slope index27 (PSI) and irregular resampling32 (IRASA) is provided in the original publications.
Two independent neurologists visually determined all electrode positions based on individual scans in native space. For further visualization, we reconstructed the electrode positions as outlined recently66. In brief, the pre-implant MRI and the post-implant CT were transformed into Talairach space. Then we segmented the MRI using Freesurfer 5.3.067 and co-registered the T1 to the CT. 3D electrode coordinates were determined using the Fieldtrip toolbox63,66 on the CT scan. Then we warped the aligned electrodes onto a template brain in MNI space to facilitate visualization on the group level. Note that the MNI reconstruction was only done for visualization purposes, but electrode localization was determined in native space. We assigned MTL electrodes to putative MTL subregions (CA1, CA3/DG, Subiculum, entorhinal cortex, perirhinal cortex, and parahippocampal gyrus) after visual inspection68,69. Therefore, bipolar pairs as reported mostly captured activity from more than one subfield.
All available artifact-free scalp electrodes were low-pass filtered at 50 Hz, demeaned and de-trended, downsampled to 400 Hz and referenced against the average of all clean scalp electrodes. EOGs were typically bipolar referenced to obtain one signal per eye. A surrogate electromyogram (EMG) signal was derived from electrodes in immediate proximity to neck or skeletal muscles, by high-pass filtering either the ECG or EEG channels above 40 Hz. Sleep staging was carried out according to Rechtschaffen and Kales guidelines by trained personnel in 30 s segments70 as reported previously10,60.
EEG data
Preprocessing: Scalp EEG was demeaned, de-trended and locally referenced against the mean of all available artifact-free scalp electrodes. We applied a 50-Hz low-pass filter and down-sampled the data to 500 Hz. Scalp EEG analyses were typically centered on electrode Fz unless stated otherwise. In a subset of subjects (N = 5), Fz was not available and Cz was utilized instead.
In every subject (N = 18), we selected all available electrodes within and close to the hippocampus and entorhinal cortex, which were then demeaned, de-trended, notch-filtered at 60 Hz and its harmonics, bipolar-referenced to their immediate lateral neighboring electrodes and finally down-sampled to 500 Hz (ft_preprocessing.m). We retained all MTL channels that exhibited interictal epileptic spiking activity, but discarded noisy channels. In total, 15 out of 18 subjects also had simultaneous coverage of all three prefrontal sub-regions (dorsolateral (dlPFC), orbitofrontal (OFC), and medial (mPFC) prefrontal cortex). Again, all available contacts in these regions were included and the same preprocessing steps were applied. Then all resulting traces were manually inspected, and noisy, epileptic and artifact-contaminated PFC channels were excluded.
For all inter-areal coupling and connectivity analyses, we calculated all pair-wise channel combinations before averaging across channels in a given ROI.
Interictal epileptic discharge (IED) detection: Prior to all analyses, we first detected IEDs using automated algorithms on all channels located in the MTL. We utilized two different algorithms, which led to comparable results, both in terms of the number of detected events as well as resulting waveform shapes (Supplementary Fig. 1). Hence, we utilized the first algorithm for all subsequent analyses. All cut-offs were chosen in accordance with recently published findings7,25 and were confirmed by two neurologists who visually verified the detected events.
IED detection algorithm 125: The continuous signal was filtered forwards and backwards between 25 and 80 Hz (eegfilt.m) and the analytical amplitude was extracted from the Hilbert transform (hilbert.m) and then z-scored. Events were detected when this signal was 3 SD above the mean for >20 ms and <100 ms. The IED events were then time-locked to the peak and the epoch (±2.5 s) was considered as an artifact.
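A minimal MATLAB sketch of this detection logic is given below; variable names such as raw (a single continuous channel) and fs (sampling rate in Hz) are placeholders and not taken from the original code.

```matlab
% Band-pass 25-80 Hz (zero-phase FIR), Hilbert amplitude, z-score, threshold at 3 SD for 20-100 ms.
bp      = eegfilt(raw, fs, 25, 80);              % EEGLAB two-way least-squares FIR filter
amp     = abs(hilbert(bp'))';                    % analytic amplitude
ampz    = (amp - mean(amp)) ./ std(amp);         % z-score
above   = ampz > 3;                              % supra-threshold samples
d       = diff([0 above 0]);
onsets  = find(d == 1);  offsets = find(d == -1) - 1;
durSamp = offsets - onsets + 1;
valid   = durSamp > 0.02*fs & durSamp < 0.1*fs;  % 20-100 ms duration criterion
% time-lock each event to its amplitude peak; mark +/-2.5 s around it as an artifact
iedPeak = arrayfun(@(a,b) a - 1 + find(ampz(a:b) == max(ampz(a:b)), 1), ...
                   onsets(valid), offsets(valid));
```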
IED detection algorithm 27: Here we considered three signals: the raw trace, an amplitude difference trace (amplitude difference between two time points), as well as a 200 Hz high-pass filtered version of the raw signal. We turned every trace into a z-score based on the individual stage-specific mean and SD. A time point was marked as artifactual when it exceeded a z-score of six in any of these traces, or if the raw trace plus either the difference or the filtered trace simultaneously exceeded a threshold of 4. Consecutive segments were grouped together and time-locked to the IED peak.
State-dependent spectral analysis: To obtain a continuous time-frequency representation of a whole night of sleep (Fig. 1a) we utilized multitaper spectral analyses26,71 (ft_freqanalysis.m), based on discrete prolate spheroidal (Slepian) sequences. The raw data was epoched into 30-s-long segments, with 85% overlap (ft_redefinetrial.m). Spectral estimates were obtained between 0.5 and 30 Hz in 0.5 Hz steps. We utilized 29 tapers, providing a frequency smoothing of ±0.5 Hz. Note that different settings were used for all subsequent analyses to optimize the time-frequency trade-off.
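A minimal FieldTrip configuration sketch for this step follows; the cfg values mirror the text, while the variable data30s (the 30-s epoched data structure) is an assumed name.

```matlab
% Multitaper (dpss) spectral estimate per 30 s segment, 0.5-30 Hz, +/-0.5 Hz smoothing (29 tapers).
cfg           = [];
cfg.method    = 'mtmfft';
cfg.taper     = 'dpss';
cfg.foi       = 0.5:0.5:30;     % frequencies of interest
cfg.tapsmofrq = 0.5;            % spectral smoothing (+/- Hz)
freq          = ft_freqanalysis(cfg, data30s);
```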
Event-locked intracranial spectral analyses (Fig. 2c, d): We utilized multitaper spectral analyses based on discrete prolate spheroidal sequences in 21 logarithmically spaced bins between 32 and 181 Hz71. We adjusted the temporal and spectral smoothing to approximately match a 250 ms time window and ¼ octave frequency smoothing and baseline-corrected the values relative to the pre-event (−2 to −1.5 s) baseline.
Ripple-locked intracranial spectral analysis: After identification of ripple events in the time domain (see below), we transformed these events into the time-frequency domain using the multitaper method based on discrete prolate spheroidal sequences in 89 logarithmically spaced bins between 4 and 181 Hz71. We adjusted the temporal and spectral smoothing to approximately match a 250-ms time window and ¼ octave frequency smoothing and baseline-corrected the values relative to the pre-event (−2 to −1.5 s) baseline (Supplementary Fig. 4b). We utilized less frequency smoothing and a more fine-grained spectral resolution to facilitate detection of narrow-band high-frequency oscillations.
Ripple-locked scalp EEG spectral analysis: To observe ripple-related changes on a longer timescale (Fig. 4h), we extracted epochs (±30 s) around individual ripple events. For an optimal time-frequency trade-off, we utilized a single Hanning taper (window length 500 ms) and spectral estimates between 1–30 Hz were obtained in steps of 50 ms. We baseline-corrected these spectral estimates per frequency band by a z-score relative to a bootstrapped baseline distribution (−2 to −1s before ripple peak)10. The same approach was used for spindle-locked analyses (Fig. 7f). We removed both IEDs (Supplementary Fig. 1) and artifactual ripples (Fig. 4b) from all ripple-locked analyses, unless we specifically used artifactual ripples in the comparison (e.g. Fig. 7g, h).
High-frequency band (HFB) activity: We extracted the HFB activity by band-pass filtering the raw continuous time courses in eight non-overlapping 10-Hz-wide bins ranging from 70 to 150 Hz (eegfilt.m) and applying a Hilbert transform to extract the instantaneous amplitude. Then every trace was separately normalized by a z-score after discounting the filter edge artifacts and all eight traces were averaged into one resulting HFB trace per channel (Fig. 3d, e). HFB power was then grouped according to the underlying SO and spindle phase (Fig. 2e; normalized to a mean of one).
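The HFB extraction can be sketched in MATLAB as follows (assumed names: x is one raw channel, fs the sampling rate; discounting of filter edge artifacts is omitted for brevity).

```matlab
% Eight non-overlapping 10 Hz bins from 70-150 Hz; Hilbert amplitude, z-score per bin, then average.
lo  = 70:10:140;
hfb = zeros(numel(lo), numel(x));
for b = 1:numel(lo)
    xf        = eegfilt(x, fs, lo(b), lo(b)+10);   % band-pass one 10 Hz bin
    a         = abs(hilbert(xf'))';                % instantaneous amplitude
    hfb(b, :) = (a - mean(a)) ./ std(a);           % normalize by z-score
end
hfbTrace = mean(hfb, 1);                           % one HFB trace per channel
```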
Amplitude modulations of ripple power (Fig. 4c): We used irregular resampling32 (IRASA) to identify oscillatory components and separate them from broadband 1/f contributions. Ripple epochs were centered on the ripples (±2.5 s). IRASA resamples the neuronal signals by pairwise non-integer values (1.1–1.9 in steps of 0.05, as well as corresponding factors 0.9 to 0.1). This procedure slightly shifts the peak frequency of oscillatory signals by compressing or stretching the underlying signal. Note that the 1/f component remains stable. This procedure is then repeated in overlapping windows (window size: 1 s, sliding steps: 0.25 s; frequencies up to 20 Hz were estimated). Note resampling was always done in a pairwise fashion. We calculated the auto-power spectrum by means of a FFT after applying a Hanning window for all segments. Then all auto-spectra were median-averaged to obtain the 1/f component. Finally, the resampled 1/f PSD is subtracted from the original PSD to obtain the oscillatory residuals.
Amplitude modulations of spindle power: In order to test whether the amplitude of the spindle exhibits rhythmic modulations (Fig. 4h, i), we first band-pass filtered the continuous scalp EEG signal between 12 and 16 Hz and extracted the analytic amplitude from the Hilbert transform. Then we epoched this data into 30-s-long non-overlapping segments and disentangled oscillatory and fractal 1/f components using irregular resampling (IRASA). Window length was 5 s, 1 s sliding steps and modulating frequencies were estimated up to 6 Hz.
Slow oscillations (SO): Event detection was performed for every channel separately based on previously established algorithms7,10. In brief, we first filtered the continuous signal between 0.16 and 1.25 Hz (eegfilt.m) and detected all the zero crossings. Then events were selected based on time (0.8–2 s duration) and amplitude (75th percentile) criteria. Finally, we extracted 5-s-long segments (±2.5 s centered on the trough) from the raw signal and discarded all events that occurred during an IED.
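A minimal sketch of this SO detection, under the assumption that x is a continuous channel and fs the sampling rate (exact percentile handling may differ from the original implementation):

```matlab
so   = eegfilt(x, fs, 0.16, 1.25);                  % SO band-pass
down = find(diff(sign(so)) < 0);                    % positive-to-negative zero crossings
nC   = numel(down) - 1;
trough = zeros(1, nC); dur = zeros(1, nC);
for k = 1:nC                                        % one candidate cycle per crossing pair
    seg       = so(down(k):down(k+1));
    trough(k) = min(seg);
    dur(k)    = (down(k+1) - down(k)) / fs;
end
timeOK = dur >= 0.8 & dur <= 2;                     % duration criterion (0.8-2 s)
ampOK  = abs(trough) >= prctile(abs(trough), 75);   % 75th percentile amplitude criterion
soIdx  = find(timeOK & ampOK);                      % accepted SOs; epoch +/-2.5 s around trough
```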
Sleep spindles: We filtered the signal between 12 and 16 Hz (eegfilt.m) and extracted the analytical amplitude after applying a Hilbert transform (Fig. 1b). We smoothed the amplitude with a 200-ms moving average. Then the amplitude was thresholded at the 75th percentile (amplitude criterion) and only events that exceeded the threshold for 0.5–3 s (time criterion) were accepted. Events were defined as sleep spindle peak-locked 5-s-long epochs (±2.5 s centered on the spindle peak) and events that occurred during an IED were discarded.
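The corresponding spindle detection can be sketched as follows (assumed names: x and fs as above):

```matlab
sp    = eegfilt(x, fs, 12, 16);                            % sigma band-pass
amp   = abs(hilbert(sp'))';                                % analytic amplitude
k     = round(0.2*fs);
amp   = conv(amp, ones(1,k)/k, 'same');                    % 200 ms moving average
above = amp > prctile(amp, 75);                            % 75th percentile amplitude criterion
d = diff([0 above 0]); on = find(d == 1); off = find(d == -1) - 1;
dur   = (off - on + 1) / fs;
keep  = dur >= 0.5 & dur <= 3;                             % 0.5-3 s time criterion
% epoch +/-2.5 s around each spindle amplitude peak; discard events overlapping IEDs
```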
SO-Spindle co-occurrence (coupled vs. uncoupled events): For all spindles, we quantified if a separate SO was detected independently in the same time interval (spindle peak ± 2.5 s) and spindles were classified into coupled or uncoupled events based on the simultaneous detection.
Event-free, random intervals: For control analyses, we also extracted random, non-overlapping 5-s-long intervals during NREM sleep, which exhibited neither SO or spindle events nor any artifactual IED activity. Onset samples were solely determined by the minimally required distance to all other events (2.5 s away from the previous SO trough or spindle peak).
Ripples: We utilized automatic IED detection algorithms to detect prominent discharges; however, this approach does not capture below-threshold epileptiform or artifactual activity in the HFB. Therefore, we imposed additional measures to isolate true ripples from artifactual ripples, which can easily be mistaken when data is analyzed in the frequency domain or after applying a band-pass filter (Fig. 4b). We first narrowed down our search window for ripples to epochs during cortical spindle events (±1 s) during NREM sleep when no simultaneous IED had been detected. We time-locked the HFB traces relative to those spindle events and determined the strongest HFB peak during any given, artifact-free spindle event. Then we extracted the number of peaks in the raw signal in a 40-ms window around this peak. If three or more peaks were detected, the event was classified as a true ripple, while events that only exhibited one or two peaks were classified as artifactual. Ripples were peak-locked (Supplementary Fig. 4c). Note that for group averages (Fig. 4a, b), we followed the convention that ripples were nested in the upwards swing of the sharp wave and hence, we adjusted ripple polarity, which was less informative given the bipolar referencing scheme (Supplementary Fig. 4c). Resulting ripples were then visually inspected and we carefully assessed their spatiotemporal profile relative to other oscillatory events, such as slow oscillations and spindles (Fig. 4d–g).
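The peak-counting step that separates physiological from artifactual ripples can be sketched as follows (hfbSeg and rawSeg denote the HFB and raw traces of one artifact-free spindle epoch and are assumed names):

```matlab
[~, pk]  = max(hfbSeg);                                    % strongest HFB peak within the spindle epoch
win      = round(0.02*fs);                                 % +/-20 ms, i.e., a 40 ms window
seg      = rawSeg(max(1, pk-win) : min(numel(rawSeg), pk+win));
nPeaks   = numel(findpeaks(seg));                          % local maxima in the raw signal
isRipple = nPeaks >= 3;                                    % >=3 peaks -> ripple; 1-2 peaks -> artifactual
```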
State-based connectivity analyses: We calculated metrics for amplitude- as well as phase-based connectivity. Connectivity between the EEG and the MTL (Fig. 5a, b) was calculated for 25 logarithmically spaced bins between 0.5 and 32 Hz (±center frequency/4; eegfilt.m) in 30 s non-overlapping bins after band-pass filtering and applying a Hilbert transformation. Filtering was performed on continuous data to minimize filter edge artifacts. To assess envelope-based couplings, we calculated power correlations36. Even though the data was locally re-referenced (MTL: bipolar; EEG: common scalp EEG), we considered artifactual zero-lag interactions and removed them by orthogonalizing signals in the time domain prior to correlation analysis. The signals X* and Y* were derived by orthogonalizing the complex-valued signals Y to X and vice versa (Eq. (1)).
$$Y^{\ast} = \mathrm{imag}\left( Y\left( t,f \right) \times \frac{\mathrm{conj}\left( X\left( t,f \right) \right)}{\left| X\left( t,f \right) \right|} \right)$$
Then orthogonalized correlations were calculated after extracting the squared analytic amplitude of the complex-valued signals and applying a log10 transform and averaged across both possible orthogonalized correlations (Eq. (2)).
$$\mathrm{rho}_{\mathrm{ortho}} = \frac{\mathrm{corr}\left( X^{\ast}, Y \right) + \mathrm{corr}\left( Y^{\ast}, X \right)}{2}$$
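A minimal MATLAB sketch of Eqs. (1)–(2), assuming X and Y are complex-valued (Hilbert-transformed) band-limited signals stored as column vectors:

```matlab
Yorth = imag(Y .* conj(X) ./ abs(X));                      % Y orthogonalized to X, Eq. (1)
Xorth = imag(X .* conj(Y) ./ abs(Y));                      % X orthogonalized to Y
pX  = log10(abs(X).^2);      pY  = log10(abs(Y).^2);       % log-transformed power
pXo = log10(abs(Xorth).^2);  pYo = log10(abs(Yorth).^2);
rhoOrtho = (corr(pXo, pY) + corr(pYo, pX)) / 2;            % Eq. (2)
```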
Likewise, we removed zero-phase lag contributions from the phase-locking value (PLV) by only considering the magnitude of the imaginary part (iPLV; Eq. (3))35.
$$\mathrm{iPLV} = \left| \mathrm{imag}\left( \frac{1}{N}\sum_{n=1}^{N} e^{\,i\left( \Phi_{\mathrm{Signal}\,1}(n) - \Phi_{\mathrm{Signal}\,2}(n) \right)} \right) \right|$$
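In MATLAB this reduces to a one-liner (phi1 and phi2 are assumed names for the two instantaneous phase time series):

```matlab
iplv = abs(imag(mean(exp(1i*(phi1 - phi2)))));   % Eq. (3): magnitude of the imaginary part of the PLV
```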
Connectivity between intracranial PFC and MTL electrodes was performed accordingly, however, here we obtained connectivity estimates for 34 logarithmically spaced bins between 0.5 and 152 Hz (Supplementary Fig. 5).
Event-based connectivity analysis: In order to resolve how the precise SO-spindle coupling phase indexes the PFC-MTL connectivity, we calculated the iPLV between the scalp EEG and all available MTL electrodes (Fig. 6). Data were epoched (±2.5 s) relative to all IED-free spindles in NREM sleep and spindles were grouped into 16 linearly spaced non-overlapping bins between –pi and pi. Given that different bins exhibited different event numbers, we repeated this procedure 50 times. On every iteration, we chose n trials, where n corresponds to 0.8x the smallest number in any given bin to obtain a bootstrapped distribution even from the smallest bin. Every segment was then transformed into the frequency domain by means of a fast Fourier transform after applying a Hanning window and the cross-spectrum (CSD) was extracted for all frequencies between 1 and 32 Hz (1 Hz steps; ft_freqanalysis.m) and the iPLV was derived from the CSD. The resulting iPLV spectrum was mean-normalized. To facilitate between subject comparisons and to account for the fact of varying preferred coupling directions, we aligned the average iPLV in the spindle range (12–16 Hz) to phase bin 9 (0–22.5°). This phase bin was excluded from subsequent testing.
State-based directionality analysis: Within-frequency directionality was tested by means of the phase-slope index (PSI)27, which was calculated for non-overlapping 30 s segments based on the Fourier coefficients (Fig. 5c). Given the different number of wake and NREM epochs, we subsampled 0.8x the smallest number of events (50 repetitions) and averaged the resulting directionality spectra. While power differences potentially affect PSI estimates, they do not, however, affect directionality estimates; hence, values above zero indicate PFC-to-MTL directional influences, while values below zero signal the opposite directionality. We also compared it to a surrogate distribution as described below, which allowed us to only study NREM-specific effects and account for power differences between wake and NREM states.
Event-based directionality analysis: We repeated the analysis, but this time centered on the spindle events (Fig. 6d). Given that no spindles were present in the wake state, we compared the estimates relative to a surrogate distribution, where the PSI for event N (EEG channel) was calculated relative to event N + 1 (MTL channel)7. The last event was calculated relative to the first event.
EEG SO-Spindle cross-frequency coupling (CFC): We first filtered the spindle peak-locked data (Fig. 1b, c) into the SO component (0.1–1.25 Hz), applied a Hilbert transform to obtain the instantaneous phase angle, and then extracted the phase angle that coincided with the spindle peak. The mean circular direction (circ_mean.m) and resultant vector length (circ_r.m) across all artifact-free NREM events were determined using the CircStat toolbox10,64.
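A minimal sketch of this coupling analysis follows; for simplicity it filters the continuous trace rather than the epoched data, and spindlePeaks (the sample indices of the detected spindle peaks), x, and fs are assumed names.

```matlab
soFilt  = eegfilt(x, fs, 0.1, 1.25);            % SO component of the continuous trace
phi     = angle(hilbert(soFilt'))';             % instantaneous SO phase
soPhase = phi(spindlePeaks)';                   % SO phase at each spindle peak
prefDir = circ_mean(soPhase);                   % preferred coupling direction (CircStat)
mvl     = circ_r(soPhase);                      % resultant vector length
```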
EEG-MTL SO-Spindle to HFB coupling: To analyze the triple coupling between these oscillations, we expressed the normalized HFB amplitude as a function of the precise SO-spindle coupling phase (Fig. 3d). First, we determined the SO coupling phase for every spindle event. We binned the SO-spindle coupling into 24 non-overlapping, 15° wide bins. Then, we time-locked the HFB activity relative to the detected cortical spindle events and extracted the average power per bin during the spindle peak. The resulting trace was smoothed with a 5-point boxcar window. To further quantify this relationship, we calculated a circular-linear correlation (circ_corrcl.m) between the phase vector (24 bins) and the resulting HFB estimates and z-scored the correlation value relative to a surrogate distribution (1000 repetitions) where the precise relationship between coupling phase and amplitude was abolished. The preferred coupling phase was determined as the phase bin that exhibited the highest HFB power.
We obtained a time course of this coupling interaction by repeating this process at every time point around the spindle event (Fig. 1h, i). We determined the optimal coupling phase for every subject and every time point and tested the phase consistency across subjects using a cluster-corrected Rayleigh test (circ_rtest.m; see below).
MTL-EEG Ripple to SO and Spindles: While we detected ripple events during spindle events (±1 s), this approach did not make any predictions about the exact temporal relationship between the phase of the spindle event and the detected ripple peak. Likewise, it did not introduce a bias towards a specific SO phase, given that more than a whole SO cycle was captured within this time window. Therefore, we performed event-locked cross-frequency analyses after band-pass filtering the continuous scalp EEG data into either the SO band (0.1–1.25 Hz) or the spindle band (12–16 Hz) and applying a Hilbert transform to extract the analytic phase. Then we extracted all phase angles that coincided with a ripple peak and quantified the degree of non-uniformity using the CircStat toolbox. We furthermore counted the number of detected ripple events relative to the SO (−2 to 2 s around the SO trough; 100 bins). To minimize between-subject variability, the resulting distributions were mean-normalized and smoothed with a 10-point boxcar window. Then we tested when the ripple count was significantly above the mean across subjects and tested whether more ripples occurred prior to or after the down-state (400 ms windows centered on the cortical up-states before and after the down-state; Fig. 4d, e).
State-based cross-frequency coupling (Fig. 2f, g): In addition to the event-locked analyses, we also screened a wider range of possible phase-amplitude pairs. Therefore, we filtered the continuous data into distinct frequency bands and then calculated the Modulation Index31 for non-overlapping 30 s segments. We extracted the phase from the EEG signal after band-pass filtering the data into 25 logarithmically spaced bins between 0.5 and 32 Hz (±¼ center frequency; eegfilt.m). Amplitude series were extracted for all available MTL channels in 35 logarithmically spaced bins between 8 and 152 Hz. The band-pass was adjusted to include the side peaks of the low-frequency component72. Hence, the window at a given frequency was always defined as twice the low center frequency +2 Hz. The modulation index was then calculated after binning the low-frequency phase into 18 non-overlapping 20° wide bins31. For every bin, the mean amplitude was calculated and normalized before computing the normalized Kullback-Leibler divergence, which quantifies the deviation from a uniform distribution.
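The Modulation Index computation for one phase-amplitude pair and one 30 s segment can be sketched as follows (phase and amp are assumed names for the instantaneous phase and amplitude series):

```matlab
nBins   = 18;                                             % 20 deg wide, non-overlapping phase bins
edges   = linspace(-pi, pi, nBins+1);
meanAmp = zeros(1, nBins);
for b = 1:nBins
    meanAmp(b) = mean(amp(phase >= edges(b) & phase < edges(b+1)));
end
P  = meanAmp ./ sum(meanAmp);                             % normalized amplitude distribution
MI = (log(nBins) + sum(P .* log(P))) / log(nBins);        % normalized Kullback-Leibler divergence
```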
Within region Cross-frequency directionality (CFD): We assessed whether low frequencies components drive sleep spindle activity during SWS (Fig. 1d) or vice versa by calculating the cross-frequency phase-slope index28,73 between the normalized signal and the signal filtered in the sleep spindle range (12–16 Hz). We considered all SO frequencies <1.25 Hz after applying a Hanning window and extracting the complex Fourier coefficients. Significant values above zero indicate that SO drive sleep spindle activity, while negative values indicate that sleep spindles drive SO. Values around zero indicate no directional coupling.
Across regions: We further utilized directional CFC analyses34,74. Here directionality is commonly assumed if PAC (Phase1, Amplitude2) is larger than PAC (Phase2, Amplitude1) where signals 1 and 2 correspond to two spatially distinct regions. Here we tested if spindle-HFB coupling is stronger from PFC to MTL or vice versa (Fig. 2j).
Mutual information and transfer entropy: We tested if information transfer is enhanced between the MTL and PFC after a ripple by calculating time-resolved mutual information (MI; Fig. 7)37. We performed the analysis between the MTL channel that exhibited the highest percentage of physiologic ripples and one scalp electrode (typically Fz unless unavailable) or all available PFC electrodes. Data were epoched relative to the ripple and MI was calculated between −1 and +2.25 s in steps of 50 ms. Data was binned into 8 bins (uniform bin count; Fig. 7) within a 400-ms window centered on the current bin. Mutual information (Eq. (4)) between the two signals X and Y was defined as
$$\mathrm{MI}\left( X;Y \right) = \sum_{x \in X} \sum_{y \in Y} p\left( x,y \right) \times \mathrm{log}_2\left( \frac{p\left( x,y \right)}{p\left( x \right) \times p\left( y \right)} \right)$$
where p(x, y) depicts the joint probability function and p(x) and p(y) indicate the class probabilities. Probabilities were normalized by their sum. MI traces in the time domain were mean-normalized relative to the individual baseline (−1 to 0 s). In order to test directional influences and given the uncertainty about the precise time lag between MTL and PFC, we calculated the MI during the ripple peak in either the MTL or the PFC and all previous and subsequent time bins in the other region (Fig. 7b, c). Likewise, we compared MI evolution after a physiologic and artifactual ripple relative to their individual baseline values (Fig. 7g, h). To assess the relationship of spindle peaks and MI peaks, we counted the number of spindle events in 120 evenly spaced bins (±3 s), which were mean normalized and smoothed with a 3-point boxcar function (Fig. 7e).
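A minimal histogram-based estimate of Eq. (4) for one 400-ms window might look as follows (x and y are assumed names for the two signals within that window, stored as column vectors; uniform bin counts are obtained via ranks):

```matlab
nb   = 8;                                                   % uniform bin count, as in the text
xb   = discretize(tiedrank(x), linspace(1, numel(x)+1, nb+1));
yb   = discretize(tiedrank(y), linspace(1, numel(y)+1, nb+1));
pxy  = accumarray([xb yb], 1, [nb nb]);
pxy  = pxy / sum(pxy(:));                                   % joint probabilities
pxpy = sum(pxy, 2) * sum(pxy, 1);                           % product of the marginals
nz   = pxy > 0;
MI   = sum(pxy(nz) .* log2(pxy(nz) ./ pxpy(nz)));           % Eq. (4)
```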
Spectrally resolved MI (Fig. 7b, c) was calculated after band-pass filtering the data into 25 logarithmically spaced bins between 1 and 64 Hz (±¼ center frequency) and extracting the instantaneous amplitude using a Hilbert transform. Spectrally resolved MI was normalized by means of a z-score per frequency band to discount the 1/f drop-off.
Phase transfer entropy (PTE; Eq. (5)) was calculated as an information-theoretical directional interaction metric75,76 according to the following formula.
$$\mathrm{PTE}\left( X,Y \right) = \sum p\left( Y_\delta, Y, X \right)\,\mathrm{log}_2\frac{p\left( Y_\delta \mid Y, X \right)}{p\left( Y_\delta \mid Y \right)}$$
Binning was again performed in eight bins after band-pass filtering, applying a Hilbert transform and extracting the instantaneous phase. The analysis was focused on 1-s segments centered on every time point (±0.5 s; from −1 to +2.25 s in steps of 50 ms) and calculated between the prefrontal EEG and the MTL. The prediction delay δ was adjusted per frequency depending on how often the phase changes sign across time76. To normalize the spectrum, we divided the PFC-to-MTL estimates by the MTL-to-PFC estimates; hence, values above 1 reflect information flow from the PFC to the MTL.
Relationship of MI and Spindle activity (Fig. 7e, f): For every physiologic ripple event, we extracted the subsequent time course of directional MTL-to-EEG information flow (Fig. 7c). Then we determined the largest MI peak for this given trial after the ripple event. Next we detected where the closest spindle peaked relative to this MI peak. Then we binned the obtained time stamps into 100 evenly spaced bins (range: ±2 s around MI peak). To reduce inter-subject variability, we normalized the resulting histogram by its mean prior to averaging. We tested where the observed distribution was significantly larger than one by means of a cluster-based permutation test. MI was also calculated on longer timescales (Fig. 7f) using the same settings, smoothed with a 20 point moving average and then z-scored for display purposes. The average phase difference between MI and spindle power (inset Fig. 7f) was calculated after band-pass filtering between 0.3 and 0.5 Hz and applying a Hilbert transform to extract the analytic phase.
We used fixed effects analyses, with the only exception of Supplementary Fig. 3, where we investigated the spatial extent of HFB modulations; given that different subjects contributed different numbers of electrodes, we utilized a random effects analysis there. Unless stated otherwise, we used cluster-based permutation tests77 to correct for multiple comparisons as implemented in FieldTrip (Monte Carlo method; 1000 iterations; maxsum criterion). Clusters were formed in the time, frequency or time-frequency domain (e.g., Figs. 5a–c, 6d, and 7a, b, d) by thresholding two-tailed dependent t-tests at p < 0.05 unless stated otherwise. A permutation distribution was then created by randomly shuffling labels. The permutation p-value was obtained by comparing the cluster statistic to the random permutation distribution. The clusters were considered significant at p < 0.05.
Circular statistics were calculated using the CircStat toolbox. Circular non-uniformity was assessed with Rayleigh tests at p < 0.05 or the V-test if a mean direction was hypothesized based on previous evidence. Cluster testing for circular data (Fig. 3h) was based on Rayleigh tests (cluster threshold p < 0.05; maxsize criterion) and considered significant at p < 0.05. Effect sizes were quantified by means of Cohen's d, the correlation coefficient rho or η2 in case of repeated measures ANOVAs. For circular statistics, we report the mean resultant vector length (mvl) as the effect size metric. To obtain effect sizes for cluster tests, we calculated the effect size separately for all data points and averaged across all data points in the cluster. Repeated-measures ANOVAs were Greenhouse-Geisser corrected. In case of unequal variances (Fig. 3j), we utilized the non-parametric Wilcoxon rank sum test. If test statistics were calculated on the individual subject level, we inferred significance at the group level based on a binomial distribution.
The data sets generated and analyzed here are available from Dr. Jack Lin ([email protected]) on reasonable request.
All the computer code used to implement the experiments and to collect and analyze data is available from the corresponding author on reasonable request.
Diekelmann, S. & Born, J. The memory function of sleep. Nat. Rev. Neurosci. 11, 114–126 (2010).
Buzsáki, G. Hippocampal sharp wave-ripple: a cognitive biomarker for episodic memory and planning. Hippocampus 25, 1073–1188 (2015).
Joo, H. R. & Frank, L. M. The hippocampal sharp wave-ripple in memory retrieval for immediate use and consolidation. Nat. Rev. Neurosci. https://doi.org/10.1038/s41583-018-0077-1 (2018).
Latchoumane, C.-F. V., Ngo, H.-V. V., Born, J. & Shin, H.-S. Thalamic spindles promote memory formation during sleep through triple phase-locking of cortical, thalamic, and hippocampal rhythms. Neuron https://doi.org/10.1016/j.neuron.2017.06.025 (2017).
Sirota, A., Csicsvari, J., Buhl, D. & Buzsáki, G. Communication between neocortex and hippocampus during sleep in rodents. Proc. Natl Acad. Sci. USA 100, 2065–2069 (2003).
Clemens, Z. et al. Temporal coupling of parahippocampal ripples, sleep spindles and slow oscillations in humans. Brain J. Neurol. 130, 2868–2878 (2007).
Staresina, B. P. et al. Hierarchical nesting of slow oscillations, spindles and ripples in the human hippocampus during sleep. Nat. Neurosci. 18, 1679–1686 (2015).
De Gennaro, L. & Ferrara, M. Sleep spindles: an overview. Sleep Med. Rev. 7, 423–440 (2003).
Antony, J. W., Schönauer, M., Staresina, B. P. & Cairney, S. A. Sleep spindles and memory reprocessing. Trends Neurosci. https://doi.org/10.1016/j.tins.2018.09.012 (2018).
Helfrich, R. F., Mander, B. A., Jagust, W. J., Knight, R. T. & Walker, M. P. Old brains come uncoupled in sleep: slow wave-spindle synchrony, brain atrophy, and forgetting. Neuron 97, 221–230.e4 (2018).
Carlén, M. What constitutes the prefrontal cortex? Science 358, 478–482 (2017).
Laubach, M., Amarante, L. M., Swanson, K. & White, S. R. What, if anything, is rodent prefrontal cortex? eNeuro 5, pii: ENEURO.0315-18.2018 (2018).
Zhang, H., Fell, J. & Axmacher, N. Electrophysiological mechanisms of human memory consolidation. Nat. Commun. 9, 4103 (2018).
Foster, D. J. Replay comes of age. Annu. Rev. Neurosci. 40, 581–602 (2017).
Todorova, R. & Zugaro, M. Hippocampal ripples as a mode of communication with cortical and subcortical areas. Hippocampus https://doi.org/10.1002/hipo.22997 (2018).
Maingret, N., Girardeau, G., Todorova, R., Goutierre, M. & Zugaro, M. Hippocampo-cortical coupling mediates memory consolidation during sleep. Nat. Neurosci. 19, 959–964 (2016).
Rothschild, G., Eban, E. & Frank, L. M. A cortical-hippocampal-cortical loop of information processing during memory consolidation. Nat. Neurosci. 20, 251–259 (2017).
Tang, W., Shin, J. D., Frank, L. M. & Jadhav, S. P. Hippocampal-prefrontal reactivation during learning is stronger in awake compared with sleep states. J. Neurosci. 37, 11789–11805 (2017).
Wang, D. V. & Ikemoto, S. Coordinated interaction between hippocampal sharp-wave ripples and anterior cingulate unit activity. J. Neurosci. 36, 10663–10672 (2016).
Antony, J. W. et al. Sleep Spindle refractoriness segregates periods of memory reactivation. Curr. Biol. 28, 1736–1743.e4 (2018).
Hanslmayr, S., Staresina, B. P. & Bowman, H. Oscillations and episodic memory: addressing the synchronization/desynchronization conundrum. Trends Neurosci. 39, 16–25 (2016).
Niethard, N., Ngo, H.-V. V., Ehrlich, I. & Born, J. Cortical circuit activity underlying sleep slow oscillations and spindles. Proc. Natl Acad. Sci. USA 115, E9220–E9229 (2018).
Bergmann, T. O. & Born, J. Phase-amplitude coupling: a general mechanism for memory processing and synaptic plasticity? Neuron 97, 10–13 (2018).
Bragin, A., Engel, J., Wilson, C. L., Fried, I. & Buzsáki, G. High-frequency oscillations in human brain. Hippocampus 9, 137–142 (1999).
Gelinas, J. N., Khodagholy, D., Thesen, T., Devinsky, O. & Buzsáki, G. Interictal epileptiform discharges induce hippocampal-cortical coupling in temporal lobe epilepsy. Nat. Med. 22, 641–648 (2016).
Prerau, M. J., Brown, R. E., Bianchi, M. T., Ellenbogen, J. M. & Purdon, P. L. Sleep neurophysiological dynamics through the lens of multitaper spectral analysis. Physiology 32, 60–92 (2017).
Nolte, G. et al. Robustly estimating the flow direction of information in complex physical systems. Phys. Rev. Lett. 100, 234101 (2008).
Jiang, H., Bahramisharif, A., van Gerven, M. A. J. & Jensen, O. Measuring directionality between neuronal oscillations of different frequencies. NeuroImage 118, 359–367 (2015).
Watson, B. O., Ding, M. & Buzsáki, G. Temporal coupling of field potentials and action potentials in the neocortex. Eur. J. Neurosci. https://doi.org/10.1111/ejn.13807 (2017)
Rich, E. L. & Wallis, J. D. Spatiotemporal dynamics of information encoding revealed in orbitofrontal high-gamma. Nat. Commun. 8, 1139 (2017).
Tort, A. B. L. et al. Dynamic cross-frequency couplings of local field potential oscillations in rat striatum and hippocampus during performance of a T-maze task. Proc. Natl Acad. Sci. USA 105, 20517–20522 (2008).
Wen, H. & Liu, Z. Separating fractal and oscillatory components in the power spectrum of neurophysiological signal. Brain Topogr. 29, 13–26 (2016).
Fries, P. Rhythms for cognition: communication through coherence. Neuron 88, 220–235 (2015).
Helfrich, R. F. & Knight, R. T. Oscillatory dynamics of prefrontal cognitive control. Trends Cogn. Sci. 20, 916–930 (2016).
Nolte, G. et al. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clin. Neurophysiol. 115, 2292–2307 (2004).
Hipp, J. F., Hawellek, D. J., Corbetta, M., Siegel, M. & Engel, A. K. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat. Neurosci. 15, 884–890 (2012).
Quian Quiroga, R. & Panzeri, S. Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10, 173–185 (2009).
Rasch, B. & Born, J. About sleep's role in memory. Physiol. Rev. 93, 681–766 (2013).
Buzsáki, G. The hippocampo-neocortical dialogue. Cereb. Cortex 6, 81–92 (1996).
Sirota, A. & Buzsáki, G. Interaction between neocortical and hippocampal networks via slow oscillations. Thalamus Relat. Syst. 3, 245–259 (2005).
Parvizi, J. & Kastner, S. Promises and limitations of human intracranial electroencephalography. Nat. Neurosci. 21, 474–483 (2018).
van de Ven, G. M., Trouche, S., McNamara, C. G., Allen, K. & Dupret, D. Hippocampal offline reactivation consolidates recently formed cell assembly patterns during sharp wave-ripples. Neuron 92, 968–974 (2016).
Axmacher, N., Elger, C. E. & Fell, J. Ripples in the medial temporal lobe are relevant for human memory consolidation. Brain J. Neurol. 131, 1806–1817 (2008).
Yonelinas, A. P., Ranganath, C., Ekstrom, A. D. & Wiltgen, B. J. A contextual binding theory of episodic memory: systems consolidation reconsidered. Nat. Rev. Neurosci. 20, 364–375 (2019).
Antony, J. W. & Schapiro, A. C. Active and effective replay: systems consolidation reconsidered again. Nat. Rev. Neurosci. https://doi.org/10.1038/s41583-019-0191-8 (2019).
Yonelinas, A. P., Ranganath, C., Ekstrom, A. D. & Wiltgen, B. J. Reply to 'Active and effective replay: systems consolidation reconsidered again'. Nat. Rev. Neurosci. https://doi.org/10.1038/s41583-019-0192-7 (2019).
Khodagholy, D., Gelinas, J. N. & Buzsáki, G. Learning-enhanced coupling between ripple oscillations in association cortices and hippocampus. Science 358, 369–372 (2017).
Vaz, A. P., Inati, S. K., Brunel, N. & Zaghloul, K. A. Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory. Science 363, 975–978 (2019).
Brodt, S. et al. Fast track to the neocortex: a memory engram in the posterior parietal cortex. Science 362, 1045–1048 (2018).
Gais, S., Mölle, M., Helms, K. & Born, J. Learning-dependent increases in sleep spindle density. J. Neurosci. 22, 6830–6834 (2002).
Mölle, M., Marshall, L., Gais, S. & Born, J. Grouping of spindle activity during slow oscillations in human non-rapid eye movement sleep. J. Neurosci. 22, 10941–10947 (2002).
Muehlroth, B. E. et al. Precise slow oscillation-spindle coupling promotes memory consolidation in younger and older adults. Sci. Rep. 9, 1940 (2019).
Niknazar, M., Krishnan, G. P., Bazhenov, M. & Mednick, S. C. Coupling of thalamocortical sleep oscillations are important for memory consolidation in humans. PloS ONE 10, e0144720 (2015).
Ladenbauer, J. et al. Promoting sleep oscillations and their functional coupling by transcranial stimulation enhances memory consolidation in mild cognitive impairment. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.0260-17.2017 (2017).
Cairney, S. A., Guttesen, A. Á. V., El Marj, N. & Staresina, B. P. Memory consolidation is linked to spindle-mediated information processing during sleep. Curr. Biol. 28, 948–954.e4 (2018).
Steriade, M., McCormick, D. A. & Sejnowski, T. J. Thalamocortical oscillations in the sleeping and aroused brain. Science 262, 679–685 (1993).
Steriade, M. & Amzica, F. Coalescence of sleep rhythms and their chronology in corticothalamic networks. Sleep. Res. Online 1, 1–10 (1998).
Mak-McCully, R. A. et al. Coordination of cortical and thalamic activity during non-REM sleep in humans. Nat. Commun. 8, 15499 (2017).
Mander, B. A., Winer, J. R. & Walker, M. P. Sleep and human aging. Neuron 94, 19–36 (2017).
Mander, B. A. et al. β-amyloid disrupts human NREM slow waves and related hippocampus-dependent memory consolidation. Nat. Neurosci. 18, 1051–1057 (2015).
Winer, J. R. et al. Sleep as a potential biomarker of tau and β-amyloid burden in the human brain. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.0503-19.2019 (2019).
Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21 (2004).
Oostenveld, R., Fries, P., Maris, E. & Schoffelen, J.-M. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 156869 (2011).
Berens, P. CircStat: a MATLAB toolbox for circular statistics. J. Stat. Softw. 31, 21 (2009).
Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J. & Nichols, T. E. Statistical Parametric Mapping: The Analysis Of Functional Brain Images. (Academic press, UK, 2011).
Stolk, A. et al. Integrated analysis of anatomical and electrophysiological human intracranial data. Nat. Protoc. https://doi.org/10.1038/s41596-018-0009-6 (2018).
Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage 9, 179–194 (1999).
Zheng, J. et al. Amygdala-hippocampal dynamics during salient information processing. Nat. Commun. 8, 14413 (2017).
Leal, S. L., Tighe, S. K. & Yassa, M. A. Asymmetric effects of emotion on mnemonic interference. Neurobiol. Learn. Mem. 111, 41–48 (2014).
Rechtschaffen, A. & Kales, A. A Manual Of Standardized Terminology, Techniques, And Scoring Systems For Sleep Stages Of Human Subjects. (Public Health Service, US Government Printing Office, 1968).
Mitra, P. P. & Pesaran, B. Analysis of dynamic brain imaging data. Biophys. J. 76, 691–708 (1999).
Aru, J. et al. Untangling cross-frequency coupling in neuroscience. Curr. Opin. Neurobiol. 31, 51–61 (2015).
Helfrich, R. F., Huang, M., Wilson, G. & Knight, R. T. Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception. Proc. Natl Acad. Sci. USA 114, 9457–9462 (2017).
Voytek, B. et al. Oscillatory dynamics coordinating human frontal networks in support of goal maintenance. Nat. Neurosci. 18, 1318–1324 (2015).
Lobier, M., Siebenhühner, F., Palva, S. & Palva, J. M. Phase transfer entropy: a novel phase-based measure for directed connectivity in networks coupled by oscillatory interactions. NeuroImage 85, 853–872 (2014).
Hillebrand, A. et al. Direction of information flow in large-scale resting-state networks is frequency-dependent. Proc. Natl Acad. Sci. USA 113, 3867–3872 (2016).
Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190 (2007).
We would like to thank the EEG technicians at UC Irvine Medical Center, USA, for their assistance with data collection. This work was supported by the Alexander von Humboldt Foundation (Feodor Lynen Program; R.F.H.), the German Research Foundation (LE 3863/2-1; J.D.L.), the NINDS Brain Initiative (1U19NS107609-01; J.J.L. and R.T.K.), R37NS21135 (R.T.K.), R01AG031164 (M.P.W.), R01AG054019 (M.P.W.), RF1AG054019 (M.P.W.), and R01MH093537 (M.P.W.), all from the National Institutes of Health.
These authors contributed equally: Jack J. Lin, Robert T. Knight.
Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA, 94720, USA
Randolph F. Helfrich, Janna D. Lendner, Matthew P. Walker & Robert T. Knight
Dept. of Anesthesiology, University Medical Center Hamburg Eppendorf, Martinistrasse 52, 20246, Hamburg, Germany
Janna D. Lendner
Dept. of Psychiatry and Human Behavior, UC Irvine, 101 The City Dr, Orange, CA, 92868, USA
Bryce A. Mander
Dept. of Neurology, UC Irvine, 101 The City Dr, Orange, CA, 92868, USA
Heriberto Guillen, Lilit Mnatsakanyan & Jack J. Lin
Dept. of Neurosurgery, UC Irvine, 101 The City Dr, Orange, CA, 92868, USA
Michelle Paff & Sumeet Vadera
Dept. of Psychology, UC Berkeley, 2121 Berkeley Way, Berkeley, CA, 94720, USA
Matthew P. Walker & Robert T. Knight
Dept. of Biomedical Engineering, Henry Samueli School of Engineering, 402 E Peltason Dr, Irvine, CA, 92617, USA
Jack J. Lin
Conceptualization by R.F.H. and R.T.K.; methodology by R.F.H.; software by R.F.H.; validation by R.F.H. and J.D.L.; formal analysis by R.F.H.; investigation by R.F.H., J.D.L., B.A.M., H.G., M.P., L.M., S.V. and J.J.L.; resources by R.F.H. and J.D.L.; data curation by R.F.H., J.D.L. and J.J.L.; writing—original draft by R.F.H.; writing—review & editing by M.P.W. and R.T.K; visualization by R.F.H.; supervision by R.T.K.; project administration by J.J.L. and R.T.K.; funding acquisition by R.T.K.
Correspondence to Randolph F. Helfrich.
Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Helfrich, R.F., Lendner, J.D., Mander, B.A. et al. Bidirectional prefrontal-hippocampal dynamics organize information transfer during sleep in humans. Nat Commun 10, 3572 (2019). https://doi.org/10.1038/s41467-019-11444-x
Why can't every quantum state be expressed as a density matrix/operator?
It was my previous impression that all quantum states in a Hilbert space can be represented using density matrices† and that's already the most general formulation of a quantum state. Then I came across yuggib's comment here:
Everything would be so easy if there was the one-to-one correspondence you are describing. Sadly, there are many very strong suggestions that this should not be the case. The existence of uncountably many inequivalent irreducible representations of the canonical commutation relations for quantum fields is one of such suggestions. Another is the fact that not every quantum state can be represented, in a given (irreducible) representation, as a ray in Hilbert space (or as a density matrix, actually).
It seems even density matrices don't provide a good enough definition for the "state" of a quantum system, although I don't quite understand why. According to Schuller, in the general formulation of quantum mechanics, the state of a quantum system is defined as a positive trace-class linear map $\rho: \mathcal{H} \to \mathcal{H}$ for which $\mathrm{Tr}(\rho)=1$. How exactly does this definition encapsulate what density matrices cannot? Or are these two actually equivalent and I'm missing some point here?
I'm further confused because Wikipedia clearly states: "Describing a quantum state by its density matrix is a fully general alternative formalism to describing a quantum state by its ket (state vector) or by its statistical ensemble of kets." and that directly contradicts yuggib's comment.
†: Or rather, density operators, if dealing with infinite dimensional Hilbert spaces.
quantum-mechanics quantum-field-theory hilbert-space density-operator quantum-states
In introductory material, yeah, all states are just density matrices where $\rho^2 = \rho$. But in quantum field theory, where the Hilbert space is very very infinitely dimensional, things become a bit murkier. For instance, "Haag's theorem" shows that an interacting theory of QFT doesn't exist with all the properties we assume it has. (Having said that, this theorem is probably a bit unfair.) Anyway, I think density matrices are a perfectly fine way to talk about states, but probably there are subtle issues in QFT that most people don't care about because they're not too important.
Everything is murkier in infinite dimensional spaces. Of course, you can formulate all the theories we know and love in finite dimensional spaces, rendering all these concerns about rigor immaterial, but for some reason this possibility is not regarded as interesting.
– knzhou
Interacting QFTs have resisted rigorous mathematical definitions. See, for instance, the "Yang–Mills existence and mass gap" Clay Institute prize. Yang–Mills theories are some of the most important in physics, and yet we don't even know if they "exist" rigorously. (However, maybe this isn't a problem if you assume a more fundamental theory of quantum gravity kicks in at short distances and changes everything.)
It is strongly false that everything in QM can be formulated in a finite dim Hilbert space, considering the infinite dimensional case as a "straightforward" limit. E.g., there are no X and P satisfying CCRs in a finite dim space, so no limit can be considered. In finite dim, independent systems are always described in terms of a tensor product. In infinite dim, this fact is untenable in relevant situations, in QFT in particular.
– Valter Moretti
@Blue For bounded operators (and trace-class includes boundedness) positivity implies self-adjointness, so it is not necessary to mention it. Positive and positive semidefinite are the same notion in this context...
The statement by yuggib is correct. To put it in perspective, I'll start with a completely general formulation, and then I'll show how vector-states and density operators fit into that picture. I won't try to be mathematically rigorous here, but I'll try to give an overview with enough keywords and references to enable further study.
State = normalized positive linear functional
Every quantum state, pure or mixed, can be represented by a normalized positive linear functional on the operator algebra. Such a functional takes any operator $X$ as input and returns a single complex number $\rho(X)$ as output, with nice properties like \begin{gather*} \rho(X+Y)=\rho(X)+\rho(Y) \hskip2cm \rho(cX)=c\rho(X) \\ \rho(X^*X)\geq 0 \hskip2cm \rho(1)=1 \end{gather*} for all operators $X,Y$ and complex numbers $c$. I'm using an asterisk both for complex conjugation and for the operator adjoint, and I'm writing $1$ both for the identity operator and for the unit number. I'm also considering only bounded operators to keep the statements simple. This is always sufficient in principle, even though we normally use some unbounded operators in practice because it's convenient.
"Normalized positive linear functional" is a long name for a very simple thing. It also has a shorter name: mathematicians often just call it a state (see Wikipedia), and I'll use that name here. In [1], it is called an algebraic state to distinguish it from other usages of the word "state."
A state is called mixed if it can be written as $$ \rho(X) = \lambda_1\rho_1(X)+\lambda_2\rho_2(X) $$ for all $X\in{\cal A}$, where $\rho_n$ are two distinct states and where the coefficients $\lambda_n$ are both positive real numbers (not zero). A state that cannot be written this way is called pure.
This is all completely general. It works just fine in everything from a single-qubit system to quantum field theory. In contrast, using a density operator to represent a state is mathematically less general. The following paragraphs address how vector-states and density matrices fit into the more general picture described above.
Vector states and density matrices / density operators
The GNS theorem says that a state can always be implemented as $$ \rho(\cdots) =\frac{\langle\psi|\cdots|\psi\rangle}{ \langle\psi|\psi\rangle} $$ where $|\psi\rangle$ is a single vector in some Hilbert-space representation of the operator algebra. Even mixed states can always be implemented this way. The catch is that the required Hilbert-space representation is not necessarily irreducible, and we may need to switch to different Hilbert-space representations to implement different states this way. The Hilbert-space representation of the operator algebra is irreducible if and only if the state is pure [2][3].
A state $\rho$ is called a normal state if an operator $\hat\rho$ (a density matrix or density operator) exists such that [4] $$ \rho(\cdots) = \text{Trace}(\cdots \hat \rho). $$ The fact that this kind of state has a special name suggests that it is a special kind of state — that not every state can be expressed this way. This is confirmed in [5], where counterexamples are described by Valter Moretti. The Math SE question [6] also asks for a counterexample, and it has an answer.
This is all consistent with yuggib's statement
not every quantum state can be represented, in a given (irreducible) representation, as a ray in Hilbert space (or as a density matrix, actually).
The statement needs to be parsed carefully, though: the words given and irreducible are important. The Wikipedia page that said "Describing a quantum state by its density matrix is a fully general alternative..." might be referring to a less-general context, like finite-dimensional Hilbert spaces, or might be implicitly using a less-general definition of "state." That doesn't mean the Wikipedia page is wrong; it just means that — as always — we need to beware of equivocation.
[1] Valter Moretti (2013), Spectral Theory and Quantum Mechanics (A 2018 edition is also available; I cited the 2013 version because it's the one I had on hand when writing this answer)
[2] Proposition 1.8 in https://arxiv.org/abs/math-ph/0602036
[3] Theorem 14.12 in [1]
[4] https://ncatlab.org/nlab/show/state+on+a+star-algebra
[5] Is there a physical significance to non-normal states of the algebra of observables? (on Physics SE)
[6] "Non normal state" (https://math.stackexchange.com/q/2962163)
Addendum: This answer has been downvoted a couple of times. I don't know why (no comments were left), but I'm adding the following clarification in case it addresses the concern:
If the question had been "Are normal states sufficient for all practical purposes?" then the answer would surely be yes. But that wasn't the question. The quesetion asked for the reason behind a specific mathematically-minded statement about states on operator algebras, and that's what this answer tries to address.
– Chiral Anomaly
@ValterMoretti I didn't see your comments below the question until after I posted my answer (I'm a slow writer), but my answer cites two sources written by you. Please correct me if I've misrepresented anything in my answer.
– Chiral Anomaly
Thank you very much for quoting my book! Actually there is a 2018 edition including some new material also concerning the discussed issue... I have only a minor remark about your post: the second condition you wrote defining $\rho$ should be replaced by $\rho(cA) = c\rho(A)$. Actually, the condition you wrote is a consequence of linearity and positivity of $\rho$ (positivity is your penultimate requirement).
This is such a clean answer, +1.
– gented
I think the statement is simply mistaken. The first part of the bolded quote, "[N]ot every quantum state can be represented, in a given (irreducible) representation," is accurate, because only pure states exist as rays in the Hilbert space. In fact, the whole point of the density matrix formulation is that it allows for more general states, which are not pure states. For a pure state, the density matrix is effectively a projection operator onto that state (thus satisfying $\rho^{2}=\rho$), but the density matrix can also be a probabilistically weighted sum of such projection operators. (What exactly these mixed states mean brings us to the puzzle of the correct interpretation of quantum mechanics; however, from a practical standpoint, they do exist, at least in some sense.)
I suspect that the author of that quote simply overgeneralized. For non-pure states, there is not a representation in terms of a density matrix $\rho$ that satisfies $\rho^{2}=\rho$. Many pedagogical treatments of the density matrix start by only considering the density matrices for pure states, for which $\rho^{2}=\rho$ is a consistency condition; in fact, some treatments never even deal with the more general case. However, I personally think such an approach is foolish, since the most important motivation for the density matrix formulation of quantum mechanics is precisely its capacity to handle mixed states.
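For concreteness, a minimal single-qubit illustration of that distinction (not part of the original answer) is
$$\rho_{\text{pure}} = |0\rangle\langle 0| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \rho_{\text{pure}}^{2}=\rho_{\text{pure}},$$
$$\rho_{\text{mixed}} = \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}, \qquad \rho_{\text{mixed}}^{2}=\tfrac{1}{2}\rho_{\text{mixed}} \ne \rho_{\text{mixed}},$$
so the consistency condition $\rho^{2}=\rho$ singles out the pure states, while mixed states are exactly the probabilistically weighted sums described above.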
– Buzz
Tag: Riemann
NeB : 7 years and now an iPad App
Exactly 7 years ago I wrote my first post. This blog wasn't called NeB yet and it used pMachine, a then free blogging tool (later transformed into expression engine), rather than WordPress.
Over the years NeB survived three hardware-upgrades of 'the Matrix' (the webserver hosting it), more themes than I care to remember, and a couple of dramatic closure announcements…
But then we're still here, soldiering on, still uncertain whether there's a point to it, but grateful for tiny tokens of appreciation.
Such as this morning's story: Chandan deemed it necessary to correct two spelling mistakes in a 2 year old Fun-math post on Weil and the Riemann hypothesis (also reposted on Neb here). Often there's a story behind such sudden comments, and a quick check of MathOverflow revealed this answer and the comments following it.
I thank Ed Dean for linking to the Fun-post, Chandan for correcting the misspellings and Georges for the kind words. I agree with Georges that a cut&copy of a blogpost-quoted text does not require a link to that post (though it is always much appreciated). It is rewarding to see such old posts getting a second chance…
Above is the Google Analytics graph of the visitors coming here via a mobile device (at most 5 on a good day…). Anticipating many more iPads around after tonight's presents-session, I've made NeB more accessible for iPods, iPhones, iPads and other mobile devices.
The first time you get here via your Mac-device of choice you'll be given the option of saving NeB as an App. It has its own icon (lowest row middle, also the favicon of NeB) and flashy start-up screen.
Of course, the whole point of making NeB more readable for mobile users is that you get an overview of the latest posts together with links to categories and tags and the number of comments. Sliding through, you can read the post, optimized for the device.
I do hope you will use the two buttons at the end of each post, the first to share or save it and the second to leave a comment.
I wish you all a lot of mathematical (and other) fun in 2011 :: lieven.
Published October 26, 2010 by lievenlb
This is a belated response to a Math-Overflow exchange between Thomas Riepe and Chandan Singh Dalawat asking for a possible connection between Connes' noncommutative geometry approach to the Riemann hypothesis and the Langlands program.
Here's the punchline : a large chunk of the Connes-Marcolli book Noncommutative Geometry, Quantum Fields and Motives can be read as an exploration of the noncommutative boundary to the Langlands program (at least for $GL_1 $ and $GL_2 $ over the rationals $\mathbb{Q} $).
Recall that Langlands for $GL_1 $ over the rationals is the correspondence, given by the Artin reciprocity law, between on the one hand the abelianized absolute Galois group
$Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} = Gal(\mathbb{Q}(\mu_{\infty})/\mathbb{Q}) \simeq \hat{\mathbb{Z}}^* $
and on the other hand the connected components of the idele classes
$\mathbb{A}^{\ast}_{\mathbb{Q}}/\mathbb{Q}^{\ast} = \mathbb{R}^{\ast}_{+} \times \hat{\mathbb{Z}}^{\ast} $
The locally compact Abelian group of idele classes can be viewed as the nice locus of the horrible quotient space of adele classes $\mathbb{A}_{\mathbb{Q}}/\mathbb{Q}^{\ast} $. There is a well-defined map
$\mathbb{A}_{\mathbb{Q}}'/\mathbb{Q}^{\ast} \rightarrow \mathbb{R}_{+} \qquad (x_{\infty},x_2,x_3,\ldots) \mapsto | x_{\infty} | \prod | x_p |_p $
from the subset $\mathbb{A}_{\mathbb{Q}}' $ consisting of adeles of which almost all terms belong to $\mathbb{Z}_p^{\ast} $. The inverse image of this map over $\mathbb{R}_+^{\ast} $ consists precisely of the idele classes $\mathbb{A}^{\ast}_{\mathbb{Q}}/\mathbb{Q}^{\ast} $. In this way one can view the adele classes as a closure, or 'compactification', of the idele classes.
This is somewhat reminiscent of extending the nice action of the modular group on the upper-half plane to its badly behaved action on the boundary as in the Manin-Marcolli cave post.
The topological properties of the fiber over zero, and indeed of the total space of adele classes, are horrible in the sense that the discrete group $\mathbb{Q}^* $ acts ergodically on it, due to the irrationality of $log(p_1)/log(p_2) $ for primes $p_i $. All this is explained well (in the semi-local case, that is using $\mathbb{A}_Q' $ above) in the Connes-Marcolli book (section 2.7).
In much the same spirit as non-free actions of reductive groups on algebraic varieties are best handled using stacks, such ergodic actions are best handled by the tools of noncommutative geometry. That is, one tries to get at the geometry of $\mathbb{A}_{\mathbb{Q}}/\mathbb{Q}^{\ast} $ by studying an associated non-commutative algebra, the skew-ring extension of the group-ring of the adeles by the action of $\mathbb{Q}^* $ on it. This algebra is known to be Morita equivalent to the Bost-Connes algebra which is the algebra featuring in Connes' approach to the Riemann hypothesis.
It shouldn't thus come as a major surprise that one is able to recover the other side of the Langlands correspondence, that is the Galois group $Gal(\mathbb{Q}(\mu_{\infty})/\mathbb{Q}) $, from the Bost-Connes algebra as the symmetries of certain states.
In a similar vein one can read the Connes-Marcolli $GL_2 $-system (section 3.7 of their book) as an exploration of the noncommutative closure of the Langlands-space $GL_2(\mathbb{A}_{\mathbb{Q}})/GL_2(\mathbb{Q}) $.
At the moment I'm running a master-seminar noncommutative geometry trying to explain this connection in detail. But, we're still in the early phases, struggling with the topology of ideles and adeles, reciprocity laws, L-functions and the lot. Still, if someone is interested I might attempt to post some lecture notes here.
big Witt vectors for everyone (1/2)
Next time you visit your math-library, please have a look whether these books are still on the shelves : Michiel Hazewinkel's Formal groups and applications, William Fulton's and Serge Lang's Riemann-Roch algebra and Donald Knutson's lambda-rings and the representation theory of the symmetric group.
I wouldn't be surprised if one or more of these books are borrowed out, probably all of them to the same person. I'm afraid I'm that person in Antwerp…
Lately, there's been a renewed interest in $\lambda $-rings and the endo-functor W assigning to a commutative algebra its ring of big Witt vectors, following Borger's new proposal for a geometry over the absolute point.
However, as Hendrik Lenstra writes in his 2002 course-notes on the subject Construction of the ring of Witt vectors : "The literature on the functor W is in a somewhat unsatisfactory state: nobody seems to have any interest in Witt vectors beyond applying them for a purpose, and they are often treated in appendices to papers devoted to something else; also, the construction usually depends on a set of implicit or unintelligible formulae. Apparently, anybody who wishes to understand Witt vectors needs to construct them personally. That is what is now happening to myself."
Before doing a series on Borger's paper, we'd better run through Lenstra's elegant construction in a couple of posts. Let A be a commutative ring and consider the multiplicative group of all 'one-power series' over it $\Lambda(A)=1+t A[[t]] $. Our aim is to define a commutative ring structure on $\Lambda(A) $ taking as its ADDITION the MULTIPLICATION of power series.
That is, if $u(t),v(t) \in \Lambda(A) $, then we define our addition $u(t) \oplus v(t) = u(t) \times v(t) $. This may be slightly confusing as the ZERO-element in $\Lambda(A),\oplus $ will then turn out to be the constant power series 1…
We are now going to define a multiplication $\otimes $ on $\Lambda(A) $ which is distributive with respect to $\oplus $ and turns $\Lambda(A) $ into a commutative ring with ONE-element the series $~(1-t)^{-1}=1+t+t^2+t^3+\ldots $.
We will do this inductively, so consider $\Lambda_n(A) $ the (classes of) one-power series truncated at term n, that is, the kernel of the natural augmentation map between the multiplicative group-units $~A[t]/(t^{n+1})^* \rightarrow A^* $.
Again, taking multiplication in $A[t]/(t^{n+1}) $ as a new addition rule $\oplus $, we see that $~(\Lambda_n(A),\oplus) $ is an Abelian group, whence a $\mathbb{Z} $-module.
For all elements $a \in A $ we have a scaling operator $\phi_a $ (sending $t \rightarrow at $) which is an A-ring endomorphism of $A[t]/(t^{n+1}) $, in particular multiplicative wrt. $\times $. But then, $\phi_a $ is an additive endomorphism of $~(\Lambda_n(A),\oplus) $, so is an element of the endomorphism-RING $End_{\mathbb{Z}}(\Lambda_n(A)) $. Because composition (being the multiplication in this endomorphism ring) of scaling operators is clearly commutative ($\phi_a \circ \phi_b = \phi_{ab} $) we can define a commutative RING $E $ being the subring of $End_{\mathbb{Z}}(\Lambda_n(A)) $ generated by the operators $\phi_a $.
The action turns $~(\Lambda_n(A),\oplus) $ into an E-module and we define an E-module morphism $E \rightarrow \Lambda_n(A) $ by $\phi_a \mapsto \phi_a((1-t)^{-1}) = (1-at)^{-a} $.
All of this looks pretty harmless, but the upshot is that we have now equipped the image of this E-module morphism, say $L_n(A) $ (which is the additive subgroup of $~(\Lambda_n(A),\oplus) $ generated by the elements $~(1-at)^{-1} $) with a commutative multiplication $\otimes $ induced by the rule $~(1-at)^{-1} \otimes (1-bt)^{-1} = (1-abt)^{-1} $.
Explicitly, $L_n(A) $ is the set of truncated one-polynomials $u(t) $ with coefficients in $A $ such that one can find elements $a_1,\ldots,a_k \in A $ such that $u(t) \equiv (1-a_1t)^{-1} \times \ldots \times (1-a_kt)^{-1}~mod~t^{n+1} $. We multiply $u(t) $ with another such truncated one-polynomial $v(t) $ (taking elements $b_1,b_2,\ldots,b_l \in A $) via
$u(t) \otimes v(t) = ((1-a_1t)^{-1} \oplus \ldots \oplus (1-a_kt)^{-1}) \otimes ((1-b_1t)^{-1} \oplus \ldots \oplus (1-b_lt)^{-1}) $
and using distributivity and the multiplication rule this gives the element $\prod_{i,j} (1-a_ib_jt)^{-1}~mod~t^{n+1} \in L_n(A) $.
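For readers who prefer to see these rules in action, here is a small computational sketch (plain Python, everything over $\mathbb{Z} $; the helper names are mine, not Lenstra's). Elements of the form $\prod_i (1-a_it)^{-1} $ are stored via their lists of 'roots' $a_i $ and expanded mod $t^{n+1} $; then $\oplus $ is just multiplication of truncated series, while $\otimes $ follows the pairing rule above.

```python
from itertools import product

n = 5  # work in Lambda_n, i.e. modulo t^(n+1)

def geometric(a, n):
    """Coefficients of (1 - a t)^(-1) = 1 + a t + a^2 t^2 + ... mod t^(n+1)."""
    return [a**i for i in range(n + 1)]

def mul_trunc(u, v, n):
    """Ordinary product of two truncated power series, again mod t^(n+1)."""
    w = [0] * (n + 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j <= n:
                w[i + j] += ui * vj
    return w

def from_roots(roots, n):
    """Expand prod_i (1 - a_i t)^(-1) mod t^(n+1)."""
    w = [1] + [0] * n
    for a in roots:
        w = mul_trunc(w, geometric(a, n), n)
    return w

def oplus(u, v, n):
    """Witt-style addition: multiplication of the one-power series."""
    return mul_trunc(u, v, n)

def otimes_roots(roots_u, roots_v, n):
    """Witt-style multiplication via (1-at)^(-1) ox (1-bt)^(-1) = (1-abt)^(-1)."""
    return from_roots([a * b for a, b in product(roots_u, roots_v)], n)

u = from_roots([2, 3], n)            # (1-2t)^(-1)(1-3t)^(-1)
v = from_roots([5], n)               # (1-5t)^(-1)
print(oplus(u, v, n))                # their sum in (Lambda_n(Z), oplus)
print(otimes_roots([2, 3], [5], n))  # their product = (1-10t)^(-1)(1-15t)^(-1)
```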
Being a ring-quotient of $E $ we have that $~(L_n(A),\oplus,\otimes) $ is a commutative ring, and, from the construction it is clear that $L_n $ behaves functorially.
For rings $A $ such that $L_n(A)=\Lambda_n(A) $ we are done, but in general $L_n(A) $ may be strictly smaller. The idea is to use functoriality and do the relevant calculations in a larger ring $A \subset B $ where we can multiply the two truncated one-polynomials and observe that the resulting truncated polynomial still has all its coefficients in $A $.
Here's how we would do this over $\mathbb{Z} $ : take two irreducible one-polynomials u(t) and v(t) of degrees r resp. s smaller or equal to n. Then over the complex numbers we have
$u(t)=(1-\alpha_1t) \ldots (1-\alpha_rt) $ and $v(t)=(1-\beta_1t) \ldots (1-\beta_st) $. Then, over the field $K=\mathbb{Q}(\alpha_1,\ldots,\alpha_r,\beta_1,\ldots,\beta_s) $ we have that $u(t),v(t) \in L_n(K) $ and hence we can compute their product $u(t) \otimes v(t) $ as before to be $\prod_{i,j}(1-\alpha_i\beta_jt)^{-1}~mod~t^{n+1} $. But then, all coefficients of this truncated K-polynomial are invariant under all permutations of the roots $\alpha_i $ and the roots $\beta_j $ and so are invariant under all elements of the Galois group. But then, these coefficients are algebraic numbers in $\mathbb{Q} $ whence integers. That is, $u(t) \otimes v(t) \in \Lambda_n(\mathbb{Z}) $. It should already be clear from this that the rings $\Lambda_n(\mathbb{Z}) $ contain a lot of arithmetic information!
For a general commutative ring $A $ we will copy this argument by considering a free overring $A^{(\infty)} $ (with 1 as one of the base elements) by formally adjoining roots. At level 1, consider $M_0 $ to be the set of all non-constant one-polynomials over $A $ and consider the ring
$A^{(1)} = \bigotimes_{f \in M_0} A[X]/(f) = A[X_f, f \in M_0]/(f(X_f) , f \in M_0) $
The idea being that every one-polynomial $f \in M_0 $ now has one root, namely $\alpha_f = \overline{X_f} $ in $A^{(1)} $. Further, $A^{(1)} $ is a free A-module with basis elements all $\alpha_f^i $ with $0 \leq i < deg(f) $.
Good! We now have at least one root, but we can continue this process. At level 2, $M_1 $ will be the set of all non-constant one-polynomials over $A^{(1)} $ and we use them to construct the free overring $A^{(2)} $ (which now has the property that every $f \in M_0 $ has at least two roots in $A^{(2)} $). And, again, we repeat this process and obtain in succession the rings $A^{(3)},A^{(4)},\ldots $. Finally, we define $A^{(\infty)} = \underset{\rightarrow}{lim}~A^{(i)} $ having the property that every one-polynomial over A splits entirely in linear factors over $A^{(\infty)} $.
But then, for all $u(t),v(t) \in \Lambda_n(A) $ we can compute $u(t) \otimes v(t) \in \Lambda_n(A^{(\infty)}) $. Remains to show that the resulting truncated one-polynomial has all its entries in A. The ring $A^{(\infty)} \otimes_A A^{(\infty)} $ contains two copies of $A^{(\infty)} $ namely $A^{(\infty)} \otimes 1 $ and $1 \otimes A^{(\infty)} $ and the intersection of these two rings is exactly $A $ (here we use the freeness property and the additional fact that 1 is one of the base elements). But then, by functoriality of $L_n $, the element
$u(t) \otimes v(t) \in L_n(A^{(\infty)} \otimes_A A^{(\infty)}) $ lies in the intersection $\Lambda_n(A^{(\infty)} \otimes 1) \cap \Lambda_n(1 \otimes A^{(\infty)})=\Lambda_n(A) $. Done!
Hence, we have endo-functors $\Lambda_n $ in the category of all commutative rings, for every number n. Reviewing the construction of $L_n $ one observes that there are natural transformations $L_{n+1} \rightarrow L_n $ and therefore also natural transformations $\Lambda_{n+1} \rightarrow \Lambda_n $. Taking the inverse limits $\Lambda(A) = \underset{\leftarrow}{lim} \Lambda_n(A) $ we therefore have the 'one-power series' endo-functor
$\Lambda~:~\mathbf{comm} \rightarrow \mathbf{comm} $
which is 'almost' the functor W of big Witt vectors. Next time we'll take you through the identification using 'ghost variables' and how the functor $\Lambda $ can be used to define the category of $\lambda $-rings.
Published May 11, 2009 by lievenlb
We have seen that Conway's big picture helps us to determine all arithmetic subgroups of $PSL_2(\mathbb{R}) $ commensurable with the modular group $PSL_2(\mathbb{Z}) $, including all groups of monstrous moonshine.
As there are exactly 171 such moonshine groups, they are determined by a finite subgraph of Conway's picture and we call the minimal such subgraph the moonshine picture. Clearly, we would like to determine its structure.
On the left a depiction of a very small part of it. It is the minimal subgraph of Conway's picture needed to describe the 9 moonshine groups appearing in Duncan's realization of McKay's E(8)-observation. Here, only three primes are relevant : 2 (blue lines), 3 (reds) and 5 (green). All lattices are number-like (recall that $M \frac{g}{h} $ stands for the lattice $\langle M e_1 + \frac{g}{h} e_2, e_2 \rangle $).
We observe that a large part of this mini-moonshine picture consists of the three p-tree subgraphs (the blue, red and green tree starting at the 1-lattice $1 = \langle e_1,e_2 \rangle $). Whereas Conway's big picture is the product over all p-trees with p running over all prime numbers, we observe that the mini-moonshine picture is a very small subgraph of the product of these three subtrees. In fact, there is just one 2-cell (the square 1,2,6,3).
Hence, it seems like a good idea to start our investigation of the full moonshine picture with the determination of the p-subtrees contained in it, and subsequently, worry about higher dimensional cells constructed from them. Surely it will be no major surprise that the prime numbers p that appear in the moonshine picture are exactly the prime divisors of the order of the monster group, that is p=2,3,5,7,11,13,17,19,23,29,31,41,47,59 or 71. Before we can try to determine these 15 p-trees, we need to know more about the 171 moonshine groups.
Recall that the proper way to view the modular subgroup $\Gamma_0(N) $ is as the subgroup fixing the two lattices $L_1 $ and $L_N $, whence we will write $\Gamma_0(N)=\Gamma_0(N|1) $, and, by extension we will denote with $\Gamma_0(X|Y) $ the subgroup fixing the two lattices $L_X $ and $L_Y $.
As $\Gamma_0(N) $ fixes $L_1 $ and $L_N $ it also fixes all lattices in the (N|1)-thread, that is all lattices occurring in a shortest path from $L_1 $ to $L_N $ (on the left a picture of the (200|1)-thread).
If $N=p_1^{a_1} p_2^{a_2} \ldots p_k^{a_k} $, then the (N|1)-thread has $2^k $ involutions as symmetries, called the Atkin-Lehner involutions. For every exact divisor $e || N $ (that is, $e|N $ and $gcd(e,\frac{N}{e})=1 $) we have an involution $W_e $ which acts by sending each point in the thread-cell corresponding to the prime divisors of $e $ to its antipodal cell-point and acts as the identity on the other prime-axes. For example, in the (200|1)-thread on the left, $W_8 $ is the left-right reflexion, $W_{25} $ the top-bottom reflexion and $W_{200} $ the antipodal reflexion. The set of all exact divisors of N becomes the group $~(\mathbb{Z}/2\mathbb{Z})^k $ under the operation $e \ast f = \frac{e \times f}{gcd(e,f)^2} $.
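As a small illustration of this little group of Atkin-Lehner involutions, here is a throwaway Python sketch (the numbers are just the $N=200 $ example above; the helper names are mine): it lists the exact divisors of N and checks the group law.

```python
from math import gcd

def exact_divisors(N):
    """Divisors e || N, i.e. e | N and gcd(e, N/e) = 1."""
    return [e for e in range(1, N + 1) if N % e == 0 and gcd(e, N // e) == 1]

def star(e, f):
    """The group law e * f = e*f / gcd(e,f)^2 on exact divisors."""
    return (e * f) // gcd(e, f) ** 2

N = 200  # = 2^3 * 5^2, so there are 2^2 = 4 exact divisors
divs = exact_divisors(N)
print(divs)                                 # [1, 8, 25, 200]
print(star(8, 25))                          # 200: W_8 followed by W_25 is the antipodal W_200
print(star(8, 200))                         # 25
print(all(star(e, e) == 1 for e in divs))   # every involution squares to the identity
```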
Most of the moonshine groups are of the form $\Gamma_0(n|h)+e,f,g,… $ for some $N=h.n $ such that $h | 24 $ and $h^2 | N $. The group $\Gamma_0(n|h) $ is then conjugate to the modular subgroup $\Gamma_0(\frac{n}{h}) $ by the element $\begin{bmatrix} h & 0 \\ 0 & 1 \end{bmatrix} $. With $\Gamma_0(n|h)+e,f,g,… $ we mean that the group $\Gamma_0(n|h) $ is extended with the involutions $W_e,W_f,W_g,… $. If we simply add all Atkin-Lehner involutions we write $\Gamma_0(n|h)+ $ for the resulting group.
Finally, whenever $h \not= 1 $ there is a subgroup $\Gamma_0(n||h)+e,f,g,… $ which is the kernel of a character $\lambda $ being trivial on $\Gamma_0(N) $ and on all involutions $W_e $ for which every prime dividing $e $ also divides $\frac{n}{h} $, evaluating to $e^{\frac{2\pi i}{h}} $ on all cosets containing $\begin{bmatrix} 1 & \frac{1}{h} \\ 0 & 1 \end{bmatrix} $ and to $e^{\pm \frac{2 \pi i }{h}} $ for cosets containing $\begin{bmatrix} 1 & 0 \\ n & 1 \end{bmatrix} $ (with a + sign if $\begin{bmatrix} 0 & -1 \\ N & 0 \end{bmatrix} $ is present and a – sign otherwise). Btw. it is not evident at all that this is a character, but hard work shows it is!
Clearly there are heavy restrictions on the numbers that actually occur in moonshine. In the paper On the discrete groups of moonshine, John Conway, John McKay and Abdellah Sebbar characterized the 171 arithmetic subgroups of $PSL_2(\mathbb{R}) $ occuring in monstrous moonshine as those of the form $G = \Gamma_0(n || h)+e,f,g,… $ which are
(a) of genus zero, meaning that the quotient of the upper-half plane by the action of $G \subset PSL_2(\mathbb{R}) $ by Moebius-transformations gives a Riemann surface of genus zero,
(b) the quotient group $G/\Gamma_0(nh) $ is a group of exponent 2 (generated by some Atkin-Lehner involutions), and
(c) every cusp can be mapped to $\infty $ by an element of $PSL_2(\mathbb{R}) $ which conjugates the group to one containing $\Gamma_0(nh) $.
Now, if $\Gamma_0(n || h)+e,f,g,… $ is of genus zero, so is the larger group $\Gamma_0(n | h)+e,f,g,… $, which in turn, is conjugated to the group $\Gamma_0(\frac{n}{h})+e,f,g,… $. Therefore, we need a list of all groups of the form $\Gamma_0(\frac{n}{h})+e,f,g,… $ which are of genus zero. There are exactly 123 of them, listed on the right.
How does this help to determine the structure of the p-subtree of the moonshine picture for the fifteen monster-primes p? Look for the largest p-power $p^k $ such that $p^k+e,f,g… $ appears in the list. That is for p=2,3,5,7,11,13,17,19,23,29,31,41,47,59,71 these powers are resp. 5,3,2,2,1,1,1,1,1,1,1,1,1,1,1. Next, look for the largest p-power $p^l $ dividing 24 (that is, 3 for p=2, 1 for p=3 and 0 for all other primes). Then, these relevant moonshine groups contain the modular subgroup $\Gamma_0(p^{k+2l}) $ and are contained in its normalizer in $PSL_2(\mathbb{R}) $ which by the Atkin-Lehner theorem is precisely the group $\Gamma_0(p^{k+l}|p^l)+ $.
Right, now the lattices fixed by $\Gamma_0(p^{k+2l}) $ (and permuted by its normalizer), that is the lattices in our p-subtree, are those that form the $~(p^{k+2l}|1) $-snake in Conway-speak. That is, the lattices whose hyper-distance to the $~(p^{k+l}|p^l) $-thread divides 24. So for all primes larger than 2 or 3, the p-tree is just the $~(p^k|1) $-thread.
For p=3 the 3-tree is the (243|1)-snake having the (81|3)-thread as its spine. It contains the following lattices, all of which are number-like.
Depicting the 2-tree, which is the (2048|1)-snake may take a bit longer… Perhaps someone should spend some time figuring out which cells of the product of these fifteen trees make up the moonshine picture!
best of 2008 (2) : big theorems
Published January 3, 2009 by lievenlb
Charles Siegel of Rigorous Trivialities ran a great series on big theorems.
The series started january 10th 2008 with a post on Bezout's theorem, followed by posts on Chow's lemma, Serre duality, Riemann-Roch, Bertini, Nakayama's lemma, Groebner bases, Hurwitz to end just before christmas with a post on Kontsevich's formula.
Also at other blogs, 2008 was the year of series of long posts containing substantial pure mathematics.
Out of many, just two examples : Chris Schommer-Pries ran a three part series on TQFTs via planar algebras starting here, at the secret blogging seminar.
And, Peter Woit of Not Even Wrong has an ongoing series of posts called Notes on BRST, starting here. At the moment he is at episode nine.
It suffices to have a quick look at the length of any of these posts, to see that a great deal of work was put into these series (and numerous similar ones, elsewhere). Is this amount of time well spent? Or, should we focus on shorter, easier digestible math-posts?
What got me thinking was this merciless comment Charles got after a great series of posts leading up to Kontsevich's formula :
"Perhaps you should make a New Years commitment to not be so obscurantist, like John Armstrong, and instead promote the public understanding of math!"
Well, if this doesn't put you off blogging for a while, what will?
So, are we really writing the wrong sort of posts? Do math-blog readers only want short, flashy, easy reading posts these days? Or, is anyone out there taking notice of the hard work it takes to write such a technical post, let alone a series of them?
At first I was rather pessimistic about the probable answer to all these questions, but, fortunately we have Google Analytics to quantify things a bit.
Clearly I can only rely on the statistics for my own site, so I'll treat the case of a recent post here : Mumford's treasure map which tried to explain the notion of a generic point and how one might depict an affine scheme.
Here's some of the Google Analytics data :
The yellow function gives the number of pageviews for that post, the value ranges between 0 and 600 (the number to the right of the picture). In total this post was viewed 2470 times, up till now.
The blue function tells the average time a visitor spends reading that post, the numbers range between 0 and 8 minutes (the times to the left of the picture). On average the time-on-page was 2.24 minutes, so in all people spent well over 92 hours reading this one post! This seems like a good return for the time it took me to write it…
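(For the record, the arithmetic behind that estimate is simply the product of the two numbers above: $2470 \times 2.24 \approx 5533 $ minutes, which is a little over 92 hours.)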
Some other things can be learned from this data. Whereas the number of page-views has two peaks early on (one the day it was posted, the second one when Peter Woit linked to it) and is now steadily decreasing, the time-on-page for the later visitors is substantially longer than for the early readers.
Some of this may be explained (see comment below) by returning visits. Here is a more detailed picture (orange = new visits, green=returning visits, blue='total' whatever this means).
All in all good news : there is indeed a market for longer technical math-posts and people (eventually) take time to read the post in detail. | CommonCrawl |
Blockchain-related papers — June 2020
By joangarciaitormo | Latest News | 0 comment | 3 June, 2020 |
June 2020 list
If you feel a paper should belong to another category, or that we missed a relevant paper, just let us know. Participation is most welcome!
Attacks and defenses
Blockchain-general
Blockchain-noncrypto uses
Proof of Work (PoW) alternatives
Flood & Loot: A Systemic Attack On The Lightning Network
Authors: Jona Harris, Aviv Zohar
arXiv:2006.08513
Abstract: The Lightning Network promises to alleviate Bitcoin's known scalability problems. The operation of such second layer approaches relies on the ability of participants to turn to the blockchain to claim funds at any time, which is assumed to happen rarely. One of the risks that was identified early on is that of a wide systemic attack on the protocol, in which an attacker triggers the closure of many Lightning channels at once. The resulting high volume of transactions in the blockchain will not allow for the proper settlement of all debts, and attackers may get away with stealing some funds. This paper explores the details of such an attack and evaluates its cost and overall impact on Bitcoin and the Lightning Network. Specifically, we show that an attacker is able to simultaneously cause victim nodes to overload the Bitcoin blockchain with requests and to steal funds that were locked in channels. We go on to examine the interaction of Lightning nodes with the fee estimation mechanism and show that the attacker can continuously lower the fee of transactions that will later be used by the victim in its attempts to recover funds – eventually reaching a state in which only low fractions of the block are available for lightning transactions. Our attack is made easier even further as the Lightning protocol allows the attacker to increase the fee offered by his own transactions. We continue to empirically show that the vast majority of nodes agree to channel opening requests from unknown sources and are therefore susceptible to this attack. We highlight differences between various implementations of the Lightning Network protocol and review the susceptibility of each one to the attack. Finally, we propose mitigation strategies to lower the systemic attack risk of the network.
Time-Dilation Attacks on the Lightning Network
Authors: Antoine Riard, Gleb Naumenko
Abstract: Lightning Network (LN) is a widely-used network of payment channels enabling faster and cheaper Bitcoin transactions. In this paper, we outline three ways an attacker can steal funds from honest LN users. The attacks require dilating the time for victims to become aware of new blocks by eclipsing (isolating) victims from the network and delaying block delivery. While our focus is on the LN, time-dilation attacks may be relevant to any second-layer protocol that relies on a timely reaction. According to our measurements, it is currently possible to steal the total channel capacity by keeping a node eclipsed for as little as 2 hours. Since trust-minimized Bitcoin light clients currently connect to a very limited number of random nodes, running just 500 Sybil nodes allows an attacker to Eclipse 47% of newly deployed light clients (and hence prime them for an attack). As for the victims running a full node, since they are often used by large hubs or service providers, an attacker may justify the higher Eclipse attack cost by stealing all their available liquidity. In addition, time-dilation attacks neither require access to hashrate nor purchasing from a victim. Thus, this class of attacks is a more practical way of stealing funds via Eclipse attacks than previously anticipated double-spending. We argue that simple detection techniques based on the slow block arrival alone are not effective, and implementing more sophisticated detection is not trivial. We suggest that a combination of anti-Eclipse/anti-Sybil measures are crucial for mitigating time-dilation attacks.
Tracing Cryptocurrency Scams: Clustering Replicated Advance-Fee and Phishing Websites
Authors: R. Phillips, H. Wilder
Abstract: Over the past few years, there has been a growth in activity, public knowledge, and awareness of cryptocurrencies and related blockchain technology. As the industry has grown, there has also been an increase in scams looking to steal unsuspecting individuals' cryptocurrency. Many of the scams operate on visually similar but seemingly unconnected websites, advertised by malicious social media accounts, which either attempt an advance-fee scam or operate as phishing websites. This paper analyses public online and blockchain-based data to provide a deeper understanding of these cryptocurrency scams. The clustering technique DBSCAN is applied to the content of scam websites to discover a typology of advance-fee and phishing scams. It is found that the same entities are running multiple instances of similar scams, revealed by their online infrastructure and blockchain activity. The entities also manufacture public blockchain activity to create the appearance that their scams are genuine. Through source and destination of funds analysis, it is observed that victims usually send funds from fiat-accepting exchanges. The entities running these scams cash-out or launder their proceeds using a variety of avenues including exchanges, gambling sites, and mixers.
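The abstract gives no implementation details, but the pipeline it names (clustering the text content of websites with DBSCAN) could look roughly like the following sketch using scikit-learn; the TF-IDF features and the eps/min_samples values are our assumptions, not the paper's.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

# Placeholder corpus: the scraped text content of suspected scam websites.
pages = [
    "send 0.1 BTC and receive 1 BTC back limited giveaway",
    "send 0.1 BTC to this address and get 1 BTC back giveaway",
    "login to your wallet to verify your account",  # phishing-style page
]

# Represent each page as a TF-IDF vector over its words.
X = TfidfVectorizer().fit_transform(pages)

# Cluster with DBSCAN using cosine distance; label -1 marks noise (unclustered pages).
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(X)
print(labels)  # pages sharing a label would be treated as replicas of one scam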
Perigee: Efficient Peer-to-Peer Network Design for Blockchains
Authors: Yifan Mao, Soubhik Deb, Bojja Shaileshh Venkatakrishnan, Sreeram Kannan, Kannan Srinivasan
Abstract: A key performance metric in blockchains is the latency between when a transaction is broadcast and when it is confirmed (the so-called confirmation latency). While improvements in consensus techniques can lead to lower confirmation latency, a fundamental lower bound on confirmation latency is the propagation latency of messages through the underlying peer-to-peer (p2p) network (in Bitcoin, the propagation latency is several tens of seconds). The de facto p2p protocol used by Bitcoin and other blockchains is based on random connectivity: each node connects to a random subset of nodes. The induced p2p network topology can be highly suboptimal since it neglects geographical distance, differences in bandwidth, hash-power and computational abilities across peers. We present Perigee, a decentralized algorithm that automatically learns an efficient p2p topology tuned to the aforementioned network heterogeneities, purely based on peers' interactions with their neighbors. Motivated by the literature on the multi-armed bandit problem, Perigee optimally balances the tradeoff between retaining connections to known well-connected neighbors, and exploring new connections to previously-unseen neighbors. Experimental evaluations show that Perigee reduces the latency to broadcast by $33\%$. Lastly Perigee is simple, computationally lightweight, adversary-resistant, and compatible with the selfish interests of peers, making it an attractive p2p protocol for blockchains.
Equilibrium of Blockchain Miners with Dynamic Asset Allocation
Authors: Go Yamamoto, Aron Laszka, Fuhito Kojima
Abstract: We model and analyze blockchain miners who seek to maximize the compound return of their mining businesses. The analysis of the optimal strategies finds a new equilibrium point among the miners and the mining pools, which predicts the market share of each miner or mining pool. The cost of mining determines the share of each miner or mining pool at equilibrium. We conclude that neither miners nor mining pools who seek to maximize their compound return will have a financial incentive to occupy more than 50% of the hash rate if the cost of mining is at the same level for all. However, if there is an outstandingly cost-efficient miner, then the market share of this miner may exceed 50% in the equilibrium, which can threaten the viability of the entire ecosystem.
Leveraging Bitcoin Testnet for Bidirectional Botnet Command and Control Systems
Authors: Federico Franzoni, Ivan Abellan, Vanesa Daza
Abstract: Over the past twenty years, the number of devices connected to the Internet grew exponentially. Botnets benefited from this rise to increase their size and the magnitude of their attacks. However, they still have a weak point in their Command & Control (C&C) system, which is often based on centralized services or require a complex infrastructure to keep operating without being taken down by authorities. The recent spread of blockchain technologies may give botnets a powerful tool to make them very hard to disrupt. Recent research showed how it is possible to embed C&C messages in Bitcoin transactions, making them nearly impossible to block. Nevertheless, transactions have a cost and allow very limited amounts of data to be transmitted. Because of that, only messages from the botmaster to the bots are sent via Bitcoin, while bots are assumed to communicate through external channels. Furthermore, for the same reason, Bitcoin-based messages are sent in clear. In this paper we show how, using Bitcoin Testnet, it is possible to overcome these limitations and implement a cost-free, bidirectional, and encrypted C&C channel between the botmaster and the bots. We propose a communication protocol and analyze its viability in real life. Our results show that this approach would enable a botmaster to build a robust and hard-to-disrupt C&C system at virtually no cost, thus representing a realistic threat for which countermeasures should be devised.
Similarities and Learnings from Ancient Literature on Blockchain Consensus and Integrity
Authors: Ashish Kundu, Arun Ayachitula, Nagamani Sistla
Abstract: In this paper, we have studied the text of an ancient literature and how its integrity has been preserved for several centuries. Specifically, The Vedas is an ancient literature whose text has remained preserved without any corruption for thousands of years. As we studied the system that protects the integrity of the text, pronunciation and semantics of The Vedas, we discovered a number of similarities it has with the current concept of blockchain technology. It is surprising that the notion of de-centralized trust and mathematical encodings have existed for thousands of years in order to protect this work of literature. We have presented our findings and analysis of the similarities. There are also certain technical mechanisms that the Vedic integrity system uses, which can be used to enhance the current digital blockchain platforms in terms of its security and robustness.
Blockchain-Based Differential Privacy Cost Management System
Authors: Mei Leong Han, Yang Zhao, Jun Zhao
Abstract: Privacy preservation is a big concern for various sectors. To protect individual user data, one emerging technology is differential privacy. However, it still has limitations for datasets with frequent queries, such as the fast accumulation of privacy cost. To tackle this limitation, this paper explores the integration of a secured decentralised ledger, blockchain. Blockchain will be able to keep track of all noisy responses generated with differential privacy algorithm and allow for certain queries to reuse old responses. In this paper, a demo of a proposed blockchain-based privacy management system is designed as an interactive decentralised web application (DApp). The demo created illustrates that leveraging on blockchain will allow the total privacy cost accumulated to decrease significantly.
A Survey on Blockchain Interoperability: Past, Present, and Future Trends
Authors: Rafael Belchior, André Vasconcelos, Sérgio Guerreiro, Miguel Correia
Abstract: Blockchain interoperability is emerging as one of the crucial features of blockchain technology, but the knowledge necessary for achieving it is fragmented. This fact makes it challenging for academics and the industry to seamlessly achieve interoperability among blockchains. Given the novelty and potential of this new domain, we conduct a literature review on blockchain interoperability, by collecting 262 papers, and 70 grey literature documents, constituting a corpus of 332 documents. From those 332 documents, we systematically analyzed and discussed 80 documents, including both peer-reviewed papers and grey literature. Our review classifies studies in three categories: Cryptocurrency-directed interoperability approaches, Blockchain Engines, and Blockchain Connectors. Each category is further divided into sub-categories based on defined criteria. We discuss not only studies within each category and subcategory but also across categories, providing a holistic overview of blockchain interoperability, paving the way for systematic research in this domain. Our findings show that blockchain interoperability has a much broader spectrum than cryptocurrencies. The present survey leverages an interesting approach: we systematically contacted the authors of grey literature papers and industry solutions to obtain an updated view of their work. Finally, this paper discusses supporting technologies, standards, use cases, open challenges, and provides several future research directions.
Wallet Attestations for Virtual Asset Service Providers and Crypto-Assets Insurance
Authors: Thomas Hardjono, Alexander Lipton, Alex Pentland
Abstract: The emerging virtual asset service providers (VASP) industry currently faces a number of challenges related to the Travel Rule, notably pertaining to customer personal information, account number and cryptographic key information. VASPs will be handling virtual assets of different forms, where each may be bound to different private-public key pairs on the blockchain. As such, VASPs also face the additional problem of the management of its own keys and the management of customer keys that may reside in a customer wallet. The use of attestation technologies as applied to wallet systems may provide VASPs with suitable evidence relevant to the Travel Rule regarding cryptographic key information and their operational state. Additionally, wallet attestations may provide crypto-asset insurers with strong evidence regarding the key management aspects of a wallet device, thereby providing the insurance industry with measurable levels of assurance that can become the basis for insurers to perform risk assessment on crypto-assets bound to keys in wallets, both enterprise-grade wallets and consumer-grade wallets.
The Ritva Blockchain: Enabling Confidential Transactions at Scale
Authors: Henri Aare, Peter Vitols
Abstract: The distributed ledger technology has been widely hailed as the break-through technology. It has realised a great number of application scenarios, and improved workflow of many domains. Nonetheless, there remain a few major concerns in adopting and deploying the distributed ledger technology at scale. In this white paper, we tackle two of them, namely the throughput scalability and confidentiality protection for transactions. We learn from the existing body of research, and build a scale-out blockchain platform that champions privacy called RVChain. RVChain takes advantage of trusted execution environment to offer confidentiality protection for transactions, and scales the throughput of the network in proportion with the number of network participants by supporting parallel shadow chains.
A framework of blockchain-based secure and privacy-preserving E-government system
Authors: Noe Elisa, Longzhi Yang, Fei Chao, Yi Cao
Abstract: Electronic government (e-government) uses information and communication technologies to deliver public services to individuals and organisations effectively, efficiently and transparently. E-government is one of the most complex systems which needs to be distributed, secured and privacy-preserved, and the failure of these can be very costly both economically and socially. Most of the existing e-government systems such as websites and electronic identity management systems (eIDs) are centralized at duplicated servers and databases. A centralized management and validation system may suffer from a single point of failure and make the system a target to cyber attacks such as malware, denial of service attacks (DoS), and distributed denial of service attacks (DDoS). The blockchain technology enables the implementation of highly secure and privacy-preserving decentralized systems where transactions are not under the control of any third party organizations. Using the blockchain technology, exiting data and new data are stored in a sealed compartment of blocks (i.e., ledger) distributed across the network in a verifiable and immutable way. Information security and privacy are enhanced by the blockchain technology in which data are encrypted and distributed across the entire network. This paper proposes a framework of a decentralized e-government peer-to-peer (p2p) system using the blockchain technology, which can ensure both information security and privacy while simultaneously increasing the trust of the public sectors. In addition, a prototype of the proposed system is presented, with the support of a theoretical and qualitative analysis of the security and privacy implications of such system.
Consortium Blockchain for Security and Privacy-Preserving in E-government Systems
Authors: Noe Elisa, Longzhi Yang, Honglei Li, Fei Chao, Nitin Naik
Abstract: Since its inception as a solution for secure cryptocurrencies sharing in 2008, the blockchain technology has now become one of the core technologies for secure data sharing and storage over trustless and decentralised peer-to-peer systems. E-government is amongst the systems that stores sensitive information about citizens, businesses and other affiliates, and therefore becomes the target of cyber attackers. The existing e-government systems are centralised and thus subject to single point of failure. This paper proposes a secure and decentralised e-government system based on the consortium blockchain technology, which is a semi-public and decentralised blockchain system consisting of a group of pre-selected entities or organisations in charge of consensus and decisions making for the benefit of the whole network of peers. In addition, a number of e-government nodes are pre-selected to perform the tasks of user and transaction validation before being added to the blockchain network. Accordingly, e-government users of the consortium blockchain network are given the rights to create, submit, access, and review transactions. Performance evaluation on single transaction time and transactions processed per second demonstrate the practicability of the proposed consortium blockchain-based e-government system for secure information sharing amongst all stakeholders.
Blockchain for Academic Credentials
Authors: Chaitanya Bapat
Abstract: Academic credentials are documents that attest to successful completion of any test, exam or act as a validation of an individual's skill. Currently, the domain of academic credential management suffers from large time consumption, high cost, dependence on third-party and a lack of transparency. A blockchain based solution tries to resolve these pain-points by allowing any recruiter or company to verify the user credentials without dependence on any centralized third party. Our decentralized application is based off of BlockCerts, an MIT project that acts as an open standard for blockchain credentials. The project talks about the implementation details of the decentralized application built for BlockCerts Wallet. It is an attempt to leverage the power of the blockchain technology as a global notary for the verification of digital records.
Access Control Management for Computer-Aided Diagnosis Systems using Blockchain
Authors: Mayra Samaniego, Hosseinzadeh Sara Kassani, Cristian Espana, Ralph Deters
Abstract: Computer-Aided Diagnosis (CAD) systems have emerged to support clinicians in interpreting medical images. CAD systems are traditionally combined with artificial intelligence (AI), computer vision, and data augmentation to evaluate suspicious structures in medical images. This evaluation generates vast amounts of data. Traditional CAD systems belong to a single institution and handle data access management centrally. However, the advent of CAD systems for research among multiple institutions demands distributed access management. This research proposes a blockchain-based solution to enable distributed data access management in CAD systems. This solution has been developed as a distributed application (DApp) using Ethereum in a consortium network.
Simulation-Based Digital Twin Development for Blockchain Enabled End-to-End Industrial Hemp Supply Chain Risk Management
Authors: Keqi Wang, Wei Xie, Wencen Wu, Bo Wang, Jinxiang Pei, Mike Baker, Qi Zhou
Abstract: With the passage of the 2018 U.S. Farm Bill, Industrial Hemp production is moved from limited pilot programs to a regulated agriculture production system. However, Industrial Hemp Supply Chain (IHSC) faces critical challenges, including: high complexity and variability, very limited production knowledge, lack of data and information tracking. In this paper, we propose blockchain-enabled IHSC and develop a preliminary simulation-based digital twin for this distributed cyber-physical system (CPS) to support the process learning and risk management. Basically, we develop a two-layer blockchain with proof of authority smart contract, which can track the data and key information, improve the supply chain transparency, and leverage local authorities and state regulators to ensure the quality control verification. Then, we introduce a stochastic simulation-based digital twin for IHSC risk management, which can characterize the process spatial-temporal causal interdependencies and dynamic evolution to guide risk control and decision making. Our empirical study demonstrates the promising performance of proposed platform.
Distributed Attribute-Based Access Control System Using a Permissioned Blockchain
Authors: Sara Rouhani, Rafael Belchior, S. Rui Cruz, Ralph Deters
Abstract: Auditing provides an essential security control in computer systems, by keeping track of all access attempts, including both legitimate and illegal access attempts. This phase can be useful to the context of audits, where eventual misbehaving parties can be held accountable. Blockchain technology can provide trusted auditability required for access control systems. In this paper, we propose a distributed Attribute-Based Access Control (ABAC) system based on blockchain to provide trusted auditing of access attempts. Besides auditability, our system presents a level of transparency that both access requestors and resource owners can benefit from. We present a system architecture with an implementation based on Hyperledger Fabric, achieving high efficiency and low computational overhead. The proposed solution is validated through a use case of independent digital libraries. Detailed performance analysis of our implementation is presented, taking into account different consensus mechanisms and databases. The experimental evaluation shows that our presented system can process 5,000 access control requests with the send rate of 200 per second and a latency of 0.3 seconds.
Is Blockchain Suitable for Data Freshness? — Age-of-Information Perspective
Authors: Sungho Lee, Minsu Kim, Jemin Lee, Ruei-Hau Hsu, S. Q. Tony Quek
Abstract: Recent advances in blockchain have led to a significant interest in developing blockchain-based applications. While data can be retained in blockchains, the stored values can be deleted or updated. From a user viewpoint that searches for the data, it is unclear whether the discovered data from the blockchain storage is relevant for real-time decision-making process for blockchain-based application. The data freshness issue serves as a critical factor especially in dynamic networks handling real-time information. In general, transactions to renew the data require additional processing time inside the blockchain network, which is called ledger-commitment latency. Due to this problem, some users may receive outdated data. As a result, it is important to investigate if blockchain is suitable for providing real-time data services. In this article, we first describe blockchain-enabled (BCE) networks with Hyperledger Fabric (HLF). Then, we define age of information (AoI) of BCE networks and investigate the influential factors in this AoI. Analysis and experiments are conducted to support our proposed framework. Lastly, we conclude by discussing some future challenges.
Lightweight Blockchain Framework for Location-aware Peer-to-Peer Energy Trading
Authors: Mohsen Khorasany, Ali Dorri, Reza Razzaghi, Raja Jurdak
Abstract: Peer-to-Peer (P2P) energy trading can facilitate integration of a large number of small-scale producers and consumers into energy markets. Decentralized management of these new market participants is challenging in terms of market settlement, participant reputation and consideration of grid constraints. This paper proposes a blockchain-enabled framework for P2P energy trading among producer and consumer agents in a smart grid. A fully decentralized market settlement mechanism is designed, which does not rely on a centralized entity to settle the market and encourages producers and consumers to negotiate on energy trading with their nearby agents truthfully. To this end, the electrical distance of agents is considered in the pricing mechanism to encourage agents to trade with their neighboring agents. In addition, a reputation factor is considered for each agent, reflecting its past performance in delivering the committed energy. Before starting the negotiation, agents select their trading partners based on their preferences over the reputation and proximity of the trading partners. An Anonymous Proof of Location (A-PoL) algorithm is proposed that allows agents to prove their location without revealing their real identity. The practicality of the proposed framework is illustrated through several case studies, and its security and privacy are analyzed in detail.
Stablecoins 2.0: Economic Foundations and Risk-based Models
Authors: Ariah Klages-Mundt, Dominik Harz, Lewis Gudgeon, Jun-You Liu, Andreea Minca
Abstract: Stablecoins are one of the most widely capitalized type of cryptocurrency. However, their risks vary significantly according to their design and are often poorly understood. In this paper, we seek to provide a sound foundation for stablecoin theory, with a risk-based functional characterization of the economic structure of stablecoins. First, we match existing economic models to the disparate set of custodial systems. Next, we characterize the unique risks that emerge in non-custodial stablecoins and develop a model framework that unifies existing models from economics and computer science. We further discuss how this modeling framework is applicable to a wide array of cryptoeconomic systems, including cross-chain protocols, collateralized lending, and decentralized exchanges. These unique risks yield unanswered research questions that will form the crux of research in decentralized finance going forward.
Re-evaluating cryptocurrencies' contribution to portfolio diversification — A portfolio analysis with special focus on German investors
Authors: Tim Schmitz, Ingo Hoffmann
Abstract: In this paper, we investigate whether mixing cryptocurrencies to a German investor portfolio improves portfolio diversification. We analyse this research question by applying a (mean variance) portfolio analysis using a toolbox consisting of (i) the comparison of descriptive statistics, (ii) graphical methods and (iii) econometric spanning tests. In contrast to most of the former studies we use a (broad) customized, Equally-Weighted Cryptocurrency Index (EWCI) to capture the average development of a whole ex ante defined cryptocurrency universe and to mitigate possible survivorship biases in the data. According to Glas/Poddig (2018), this bias could have led to misleading results in some already existing studies. We find that cryptocurrencies can improve portfolio diversification in a few of the analyzed windows from our dataset (consisting of weekly observations from 2014-01-01 to 2019-05-31). However, we cannot confirm this pattern as the normal case. By including cryptocurrencies in their portfolios, investors predominantly cannot reach a significantly higher efficient frontier. These results also hold, if the non-normality of cryptocurrency returns is considered. Moreover, we control for changes of the results, if transaction costs/illiquidities on the cryptocurrency market are additionally considered.
Egalitarian and Just Digital Currency Networks
Authors: Gal Shahaf, Ehud Shapiro, Nimrod Talmon
Abstract: Cryptocurrencies are a digital medium of exchange with decentralized control that renders the community operating the cryptocurrency its sovereign. Leading cryptocurrencies use proof-of-work or proof-of-stake to reach consensus, thus are inherently plutocratic. This plutocracy is reflected not only in control over execution, but also in the distribution of new wealth, giving rise to "rich get richer" phenomena. Here, we explore the possibility of an alternative digital currency that is egalitarian in control and just in the distribution of created wealth. Such currencies can form and grow in a grassroots and sybil-resilient way. A single currency community can achieve distributive justice by egalitarian coin minting, where each member mints one coin at every time step. Egalitarian minting results, in the limit, in the dilution of any inherited assets and in each member having an equal share of the minted currency, adjusted by the relative productivity of the members. Our main theorem shows that a currency network, where agents can be members of more than one currency community, can achieve distributive justice globally across the network by joint egalitarian minting, where each agent mints one coin in only one community at each timestep. Equality and distributive justice can be achieved among people that own the computational agents of a currency community provided that the agents are genuine (unique and singular). We show that currency networks are sybil-resilient, in the sense that sybils (fake or duplicate agents) affect only the communities that harbour them, and do not hamper the ability of genuine (sybil-free) communities in a network to achieve distributive justice.
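A toy simulation of the egalitarian minting rule described in the abstract (a sketch only; the initial holdings are arbitrary) shows how inherited inequality gets diluted over time:

```python
# Three members with very unequal inherited holdings.
balances = [1000.0, 10.0, 0.0]

# Egalitarian minting: every member mints exactly one coin per time step.
for _ in range(100_000):
    balances = [b + 1 for b in balances]

total = sum(balances)
print([round(b / total, 4) for b in balances])  # shares approach 1/3 each
```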
Blockchain, Fog and IoT Integrated Framework: Review, Architecture and Evaluation
Authors: Tanweer Alam, Mohamed Benaida
Abstract: In the next-generation computing, the role of cloud, internet, and smart devices will be capacious. Nowadays we all are familiar with the word smart. This word is used a number of times in our daily life. The Internet of Things (IoT) will produce remarkable different kinds of information from different resources. It can store and process big data in the cloud. The fogging acts as an interface between cloud and IoT. The IoT nodes are also known as fog nodes, these nodes are able to access anywhere within the range of the network. The blockchain is a novel approach to record the transactions in a sequence securely. Developing new blockchains based integrated framework in the architecture of the IoT is one of the emerging approaches to solving the issue of communication security among the IoT public nodes. This research explores a novel approach to integrate blockchain technology with the fog and IoT networks and provides communication security to the internet of smart devices. The framework is tested and implemented in the IoT network. The results are found positive.
On the Security of Proofs of Sequential Work in a Post-Quantum World
Authors: Jeremiah Blocki, Seunghoon Lee, Samson Zhou
Abstract: A proof of sequential work allows a prover to convince a resource-bounded verifier that the prover invested a substantial amount of sequential time to perform some underlying computation. Proofs of sequential work have many applications including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Mahmoody, Moran, and Vadhan (ITCS 2013) gave the first construction of proofs of sequential work in the random oracle model though the construction relied on expensive depth-robust graphs. In a recent breakthrough, Cohen and Pietrzak (EUROCRYPT 2018) gave a more efficient construction that does not require depth-robust graphs. In each of these constructions, the prover commits to a labeling of a directed acyclic graph $G$ with $N$ nodes and the verifier audits the prover by checking that a small subset of labels are locally consistent, e.g., $L_v = H(L_{v_1},\ldots,L_{v_δ})$, where $v_1,\ldots,v_δ$ denote the parents of node $v$. Provided that the graph $G$ has certain structural properties (e.g., depth-robustness), the prover must produce a long $\mathcal{H}$-sequence to pass the audit with non-negligible probability. An $\mathcal{H}$-sequence $x_0,x_1\ldots x_T$ has the property that $H(x_i)$ is a substring of $x_{i+1}$ for each $i$, i.e., we can find strings $a_i,b_i$ such that $x_{i+1} = a_i \cdot H(x_i) \cdot b_i$. In the parallel random oracle model, it is straightforward to argue that any attacker running in sequential time $T-1$ will fail to produce an $\mathcal{H}$-sequence of length $T$ except with negligible probability — even if the attacker submits large batches of random oracle queries in each round. (See the paper for the full abstract.)
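To unpack the $\mathcal{H}$-sequence definition quoted in the abstract, here is a minimal sketch (Python, with SHA-256 standing in for the random oracle $H$; the particular strings $a_i$ and $b_i$ are arbitrary):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Build an H-sequence x_0, ..., x_T: each x_{i+1} = a_i || H(x_i) || b_i.
T = 5
xs = [b"x0"]
for i in range(T):
    a_i, b_i = f"a{i}".encode(), f"b{i}".encode()
    xs.append(a_i + H(xs[-1]) + b_i)

# Verify the defining property: H(x_i) appears as a substring of x_{i+1}.
print(all(H(xs[i]) in xs[i + 1] for i in range(T)))  # True
```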
Stateless Distributed Ledgers
Authors: François Bonnet, Quentin Bramas, Xavier Défago
Abstract: In public distributed ledger technologies (DLTs), such as Blockchains, nodes can join and leave the network at any time. A major challenge occurs when a new node joining the network wants to retrieve the current state of the ledger. Indeed, that node may receive conflicting information from honest and Byzantine nodes, making it difficult to identify the current state. In this paper, we are interested in protocols that are stateless, i.e., a new joining node should be able to retrieve the current state of the ledger just using a fixed amount of data that characterizes the ledger (such as the genesis block in Bitcoin). We define three variants of stateless DLTs: weak, strong, and probabilistic. Then, we analyze this property for DLTs using different types of consensus.
GHAST: Breaking Confirmation Delay Barrier in Nakamoto Consensus via Adaptive Weighted Blocks
Authors: Chenxing Li, Fan Long, Guang Yang
Abstract: Initiated from Nakamoto's Bitcoin system, blockchain technology has demonstrated great capability of building secure consensus among decentralized parties at Internet-scale, i.e., without relying on any centralized trusted party. Nowadays, blockchain systems find applications in various fields. But the performance is increasingly becoming a bottleneck, especially when permissionless participation is retained for full decentralization. In this work, we present a new consensus protocol named GHAST (Greedy Heaviest Adaptive Sub-Tree) which organizes blocks in a Tree-Graph structure (i.e., a directed acyclic graph (DAG) with a tree embedded) that allows fast and concurrent block generation. GHAST protocol simultaneously achieves a logarithmically bounded liveness guarantee and low confirmation latency. More specifically, for maximum latency $d$ and adversarial computing power bounded away from 50\%, GHAST guarantees confirmation with confidence $\ge 1-\varepsilon$ after a time period of $O(d\cdot \log(1/\varepsilon))$. When there is no observable attack, GHAST only needs $3d$ time to achieve confirmation at the same confidence level as six-block-confirmation in Bitcoin, while it takes roughly $360d$ in Bitcoin.
Time-Variant Proof-of-Work Using Error-Correction Codes
Authors: Sangjun Park, Haeung Choi, Heung-No Lee
Abstract: The protocol for cryptocurrencies can be divided into three parts, namely consensus, wallet, and networking overlay. The aim of the consensus part is to bring trustless rational peer-to-peer nodes to an agreement to the current status of the blockchain. The status must be updated through valid transactions. A proof-of-work (PoW) based consensus mechanism has been proven to be secure and robust owing to its simple rule and has served as a firm foundation for cryptocurrencies such as Bitcoin and Ethereum. Specialized mining devices have emerged, as rational miners aim to maximize profit, and caused two problems: i) the re-centralization of a mining market and ii) the huge energy spending in mining. In this paper, we aim to propose a new PoW called Error-Correction Codes PoW (ECCPoW) where the error-correction codes and their decoder can be utilized for PoW. In ECCPoW, puzzles can be intentionally generated to vary from block to block, leading to a time-variant puzzle generation mechanism. This mechanism is useful in repressing the emergence of the specialized mining devices. It can serve as a solution to the two problems of recentralization and energy spending.
Democratising blockchain: A minimal agency consensus model
Authors: Marcin Abram, David Galindo, Daniel Honerkamp, Jonathan Ward, Jin-Mann Wong
Abstract: We propose a novel consensus protocol based on a hybrid approach that combines a directed acyclic graph (DAG) and a classical chain of blocks. This architecture allows us to enforce collective block construction, minimising the monopolistic power of the round-leader. In this way, we decrease the possibility of collusion between senders and miners, as well as among miners themselves, allowing the use of more incentive-compatible and fair pricing strategies. We investigate these possibilities alongside the ability to use the DAG structure to minimise the risk of transaction censoring. We conclude by providing preliminary benchmarks of our protocol and by exploring further research directions.
ProPoS: A Probabilistic Proof-of-Stake Protocol
Authors: Daniel Reijsbergen, Pawel Szalachowski, Junming Ke, Zengpeng Li, Jianying Zhou
Abstract: We present ProPoS, a Proof-of-Stake protocol dedicated, but not limited, to cryptocurrencies. ProPoS is a chain-based protocol that minimizes interactions between nodes through lightweight committee voting, resulting in a more simple, robust, and scalable proposal than competing systems. It also mitigates other drawbacks of previous systems, such as high reward variance and long confirmation times. ProPoS can support large node numbers by design, and provides probabilistic safety guarantees whereby a client makes commit decisions by calculating the probability that a transaction is reverted based on its blockchain view. We present a thorough analysis of ProPoS and report on its implementation and evaluation. Furthermore, our new technique of proving safety can be applied more broadly to other Proof-of-Stake protocols.
Smart Contract-based Computing Resources Trading in Edge Computing
Authors: Jinyue Song, Tianbo Gu, Yunjie Ge, Prasant Mohapatra
Abstract: In recent years, there has been an emerging trend of computing services moving from the cloud to the edge of the network. Compared to cloud computing, edge computing can provide services with faster response, lower expense, and greater security. The massive idle computing resources close to the edge also enhance the deployment of edge services. Instead of using cloud services from a few primary providers, edge computing gives people a real opportunity to actively join the market of computing resources. However, edge computing also has some critical impediments that we have to overcome. In this paper, we design an edge computing service platform that can receive and distribute computing resources from end-users in a decentralized way. Without centralized trade control, we propose a novel hierarchical smart contract-based decentralized technique to establish trading trust among users and provide flexible smart contract interfaces to satisfy users. Our system also considers and resolves a variety of security and privacy challenges by utilizing encryption and a distributed access control mechanism. We implement our system and conduct extensive experiments to show the feasibility and effectiveness of our proposed system.
DEPOSafe: Demystifying the Fake Deposit Vulnerability in Ethereum Smart Contracts
Authors: Ru Ji, Ningyu He, Lei Wu, Haoyu Wang, Guangdong Bai, Yao Guo
Abstract: Cryptocurrency has seen explosive growth in recent years, thanks to the evolution of blockchain technology and its economic ecosystem. Besides Bitcoin, thousands of cryptocurrencies have been distributed on blockchains, while hundreds of cryptocurrency exchanges are emerging to facilitate the trading of digital assets. At the same time, this has also attracted the attention of attackers. Fake deposit, as one of the most representative attacks (vulnerabilities) related to exchanges and tokens, has been frequently observed in the blockchain ecosystem, causing large financial losses. However, beyond a few security reports, our community lacks an understanding of this vulnerability, for example its scale and impact. In this paper, we take the first step towards demystifying the fake deposit vulnerability. Based on the essential patterns we have summarized, we implement DEPOSafe, an automated tool to detect and verify (exploit) the fake deposit vulnerability in ERC-20 smart contracts. DEPOSafe incorporates several key techniques including symbolic execution based static analysis and behavior modeling based dynamic verification. By applying DEPOSafe to 176,000 ERC-20 smart contracts, we have identified over 7,000 vulnerable contracts that may suffer from two types of attacks. Our findings demonstrate the urgency of identifying and preventing the fake deposit vulnerability.
What Is Blockchain and What Does It Have To Do With Money?
By Pete Hill
Cryptocurrency simply wouldn't exist without the blockchain. Its unique peer-to-peer verification framework is the backbone of digital money – and it's influencing a wealth of other industries too. Blockchain was invented by Satoshi Nakamoto in...
Is It Ethical To Mine Cryptocurrency?
Using your computer to mine cryptocurrency can, at first glance, feel a bit risky, even though the rewards look great. Is it too good to be true? Using your existing hardware to create digital money...
Cryptocurrency Myth Busters
Cryptocurrency isn't exactly shrouded in mystery; it's in the media every day. However, coverage is often sensationalist and deviates from the facts. Money is the foundation of our modern world. With cryptocurrency changing the landscape...
Is It Safe To Mine Cryptocurrency From My Computer?
Whether you're a business or a home user, the idea of lending your computer's power to a third party can seem intimidating. Does mining cryptocurrency pose a risk to network security? Cudo Miner uses a...
How to Make Cryptocurrency Mining Work For Your Business
In 2016, 83.7% of all businesses in the UK had internet access, with no sign of the number decreasing. Computing has become synonymous with business practice; no longer a luxury, but essential to keep up...
How Can I Use My Cryptocurrency?
Cryptocurrency may not be accepted in every High Street store, but its prominence is growing fast. Moreover, there are multiple ways to enjoy its value. It may not be as widely-recognised as fiat currency, but denying...
The Rise (and rise) of Monetising Idle Compute – Is It Worth My While?
The sharing economy is booming, just look at Uber or Airbnb. But can your computer work and earn for you in the same way? None, some or all of you (covering all bases) will be...
Cudo Miner Update 20/08/2018
We wanted to let you know what we have produced during the latest month of evolving our software, so here are some highlights. The most exciting new features are around GPU-optimised mining. Although we've been...
Quantification of the natural history of visceral leishmaniasis and consequences for control
Lloyd A C Chapman ORCID: orcid.org/0000-0001-7727-71021,
Louise Dyson1,
Orin Courtenay1,
Rajib Chowdhury2,3,
Caryn Bern4,
Graham F. Medley5 &
T. Deirdre Hollingsworth1
Visceral leishmaniasis has been targeted for elimination as a public health problem (less than 1 case per 10,000 people per year) in the Indian sub-continent by 2017. However, there is still a high degree of uncertainty about the natural history of the disease, in particular about the duration of asymptomatic infection and the proportion of asymptomatically infected individuals that develop clinical visceral leishmaniasis. Quantifying these aspects of the disease is key for guiding efforts to eliminate visceral leishmaniasis and maintaining elimination once it is reached.
Data from a detailed epidemiological study in Bangladesh in 2002–2004 was analysed to estimate key epidemiological parameters. The role of diagnostics in determining the probability and rate of progression to clinical disease was estimated by fitting Cox proportional hazards models. A multi-state Markov model of the natural history of visceral leishmaniasis was fitted to the data to estimate the asymptomatic infection period and the proportion of asymptomatic individuals going on to develop clinical symptoms.
At the time of the study, individuals were taking several months to be diagnosed with visceral leishmaniasis, leading to many opportunities for ongoing transmission. The probability of progression to clinical disease was strongly associated with initial seropositivity and even more strongly with seroconversion, with most clinical symptoms developing within a year. The estimated average durations of asymptomatic infection and symptomatic infection for our model of the natural history are 147 days (95 % CI 130–166) and 140 days (95 % CI 123–160), respectively, and are significantly longer than previously reported estimates. We estimate from the data that 14.7 % (95 % CI 12.6-20.0 %) of asymptomatic individuals develop clinical symptoms—a greater proportion than previously estimated.
Extended periods of asymptomatic infection could be important for visceral leishmaniasis transmission, but this depends critically on the relative infectivity of asymptomatic and symptomatic individuals to sandflies. These estimates could be informed by similar analysis of other datasets. Our results highlight the importance of reducing times from onset of symptoms to diagnosis and treatment to reduce opportunities for transmission.
Visceral leishmaniasis (VL) in the Indian sub-continent (ISC) is a disease caused by chronic infection with the protozoan parasite Leishmania donovani, transmitted by the Phlebotomus argentipes sandfly. Clinically manifest visceral leishmaniasis, also called kala-azar (KA), is progressive with a high mortality rate, and characterized by prolonged fever and an enlarged liver and/or spleen. Clinical and laboratory diagnostics are imprecise [1–4], partly because only a small proportion of infected individuals develop disease (so that the presence of infection alone is not diagnostic), and partly because the clinical features of VL overlap with those of other endemic diseases (e.g. hyperreactive malarial splenomegaly, typhoid fever and disseminated tuberculosis), so that clinical presentation alone is not diagnostic. The current diagnosis generally relies on clinical features, specifically a fever lasting at least 14 days and a palpable liver/spleen (hepatomegaly/splenomegaly), and elevated rK39 antibodies (based on immuno-chromatographic assay) rather than evidence of active current infection. This combination of the duration of infection and rK39 rapid testing has high sensitivity [5].
Epidemiologically, KA is spatially highly heterogeneous with focal 'hotspots' of infection that move over time, and periodic epidemics on a timescale of decades [6, 7]. The control campaign in the ISC, which has been running since 2005, has focused on elimination as a public health problem (less than 1 new case per 10,000 people per year), defined at local geographical scales (per subdistrict in both India and Bangladesh, known as an upazila in Bangladesh). Progress has been made towards the target by implementing novel case detection strategies, rapid diagnostic testing and vector control activities. In Bangladesh, KA incidence declined in all districts in the 8 years following the start of the control campaign (2006–2013) from the previous 8 years (1998–2005) [8]. The highest number of cases was reported in 2006, after which the annual number of cases decreased significantly. In the period from 2008 to 2013, only 16 upazilas had average incidence rates above the elimination target (ranging from 1.06-18.25/10,000 people/year) [8]. In Nepal, where KA was only endemic in south-eastern districts neighbouring the state of Bihar in India and incidence rates were much lower (1-10/10,000 people/year in 2007–2008 [9]), the elimination target has been reached for two consecutive years. However, Bihar, which accounts for 70-80 % of the KA cases in India [10], is still far from the target with an estimated incidence of 22–29.8/10,000 people/year in 2006–2007 [11], and more recent estimates of 1–5 cases/10,000 people/year [10, 12].
Although vector control activities (in particular indoor residual spraying (IRS)) are a pillar of the elimination programme [13, 14], they appear to have uncertain and variable effectiveness, likely due to sub-optimal implementation and, in some areas, insecticide resistance [15–17]. However, a randomised control trial of IRS and insecticide-treated bed nets in Bangladesh from 2006 to 2007 showed a 70-80 % reduction in sandfly density up to 5 months post intervention [18], and recent modelling of IRS suggests that in low and medium endemicity settings (5–10 KA cases/10,000 people/year) effective IRS may be sufficient to reach the 1 case/10,000 people/year elimination target [19].
Progress in KA case reduction over the past decade has been largely attributed to improved timeliness of diagnoses and more effective treatment [20]. Given that the 'natural' epidemiology of the disease is typified by recurrent epidemics followed by long periods of low incidence, and noting that the current control is dependent on substantial external resources, effort and clinical awareness, there is considerable potential for future resurgence without a sustained elimination effort [20, 21]. Other major issues for the elimination programme include high levels of under-reporting (the ratio of actual to reported KA cases in the ISC is estimated to range between 2:1 and 8:1 [22, 23]), and the unknown contribution of asymptomatically infected individuals, who potentially form a large infectious reservoir, to transmission [24, 25].
Mathematical and statistical modelling of infectious diseases has a successful history of combining epidemiological data, biological understanding and clinical knowledge into quantitative frameworks that can be used to both interpret disease incidence (in terms of infection patterns) and predict the impact of proposed interventions. Visceral leishmaniasis is unusual in that there have been relatively few previous modelling attempts, mostly driven by the lack of quantitative data [26]. There are, to our knowledge, only two recent, high quality, longitudinal epidemiological studies: the KALANET bed net trial in India and Nepal between 2006 and 2009 [27, 28], and the studies of Bern et al. in Bangladesh from 2002 to 2010 [29–31]. Consequently, there is still a large amount of uncertainty about the natural history of the disease, in particular about its incubation period and the proportion of asymptomatically infected individuals that develop KA. From here on we treat the incubation period as being synonymous with the duration of asymptomatic infection, since in our modelling we do not initially distinguish between asymptomatically infected individuals that develop KA and those that do not, but we note that the duration of asymptomatic infection may be different for the two groups and also test this hypothesis (see Additional file 1). Previous estimates for the incubation period have ranged from 2 to 6 months [32–34], while estimates for the proportion of asymptomatic individuals that progress to KA have varied hugely, from 0.33 % [33] to 25 % [31]. Better quantification of these aspects of the disease is critical for developing effective models, guiding efforts to eliminate VL and maintaining elimination once it is reached.
Towards this end, we analyse the prospective, longitudinal data from a 3-year study in Bangladesh in the period 2002–2004 (details of the study and epidemiological analyses have been reported elsewhere [29, 31]). We use annual data on rK39 positivity and positivity of the leishmanin skin test (LST), together with KA diagnosis, to estimate the rates of progression between different disease states. The aim is to provide preliminary, quantitative estimates of waiting times (i.e. times spent in each state) and paths of progression that will feed into future transmission model development.
The study took place in a single community in Fulbaria upazila, Mymensingh district, Bangladesh between January 2002 and June 2004. Fulbaria was chosen due to its high reported KA incidence for the three years prior to the study. In 2002 the community had a population of approximately 12,000 people and was divided into 9 'paras' (sections), of 100–500 houses each. Cross-sectional household surveys of the 3 paras with the highest reported KA incidence were conducted from January to April in 2002, 2003 and 2004. All individuals who lived at least 6 months in the study area in the 3 years before the first survey in 2002 were included. The protocol was approved by the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) Research and Ethical Review Committees and the Institutional Review Board of the Centers for Disease Control and Prevention (CDC).
The data recorded included demographic information (age and sex), present and past KA cases (back to 1999), and risk factors (such as sleeping location, bed net use, diet, and animal ownership). For participants ≥ 3 years of age, capillary blood samples were taken for serology testing and the leishmanin skin test (LST) was applied intradermally. The blood samples were tested by an enzyme-linked immunosorbent assay (ELISA) with recombinant K39 (rK39) antigen [35, 36] and a modified protocol that included a standard titration curve of a pool of known positive sera on each plate of blood samples [37]. Concentration units (CU) were assigned to the standard titration curve (with the highest concentration on the curve assigned a value of 1000CU), and the optical density of the serum specimens converted into CU. The positive ELISA cut-off was set at 20CU—the 99th percentile of the distribution of ELISA readings from 38 individuals living in a non-VL-endemic region of Bangladesh. A second cut-off of 61CU was introduced for diagnosis of active KA, with a sensitivity and specificity of 97 % and 98.9 % respectively for sera from the study population based on receiver-operator-characteristic analysis [37].
The antigen for the LST was a suspension of 5 × 10^6 promastigotes/mL of the WHO-approved MHOM/TN/80/IPT1 strain of L. infantum. The test was applied following the standard protocol: 0.1 mL of antigen was injected intradermally on the inside of the forearm and 48–72 h later the induration of the skin was measured in two perpendicular directions [38, 39]. In accordance with international consensus, the LST result was deemed positive if the mean of the two measurements was ≥ 5 mm [31, 40]. There was evidence of loss of leishmanin potency in this study in the 2003 and 2004 survey rounds [30], and so at later time points the number of individuals with positive LST reactivity is likely to be an underestimate (testing showed that the L. infantum antigen had a sensitivity of 70 % compared with L. amazonensis antigen in 2004).
A past case of KA was defined as an illness with ≥ 2 weeks of fever and at least one of: weight loss, abdominal swelling or skin darkening, with clinical improvement after 20 days of intramuscular injections of sodium stibogluconate (SSG) (the treatment for KA prescribed by national guidelines at the time). A present case of KA was defined as one that fulfilled the same definition plus splenomegaly and/or hepatomegaly and a positive rK39 ELISA result or rK39 dipstick test [29].
In total, data was collected on 2,410 out of 2507 individuals living in 509 houses in the 3 paras during the study. Of these individuals, 47 % were male and 53 % were female, and 2,152 had at least one rK39 ELISA or LST reading between 2002 and 2004. There were 182 cases of KA from the start of 1999 to the end of the study in June 2004: 125 cases with onset before 2002 and 57 with onset from 2002 to the end of the study (see Table 1). There were only 5 relapses to active KA following treatment, and the incidence of post kala-azar dermal leishmaniasis (PKDL) was very low, with only 4 confirmed cases out of the 182 KA cases (all of which were in 2004). Consequently, we have not included development of PKDL in our modelling.
Table 1 Summary of the data
Following the identification of delays between onset of symptoms and diagnosis and treatment [20, 41], a descriptive analysis of the key time periods in the data was performed. To investigate the impact of serological status on progression to disease, we analysed the risk of progression to KA for those with a particular sero-status at baseline, and those who seroconverted during the study. Kaplan-Meier curves were plotted and Cox proportional hazards regression models fitted to test for associations between (i) initial rK39 seropositivity and KA and (ii) rK39 seroconversion and KA, following a previous analysis by Hasker et al. [28].
Hasker et al. analysed serology data from four different cohorts in two large studies—two from the KALANET trial in India and Nepal, and two from the Tropical Medicine Research Centre (TMRC) project run in Bihar, India since 2008 [42, 43]—to determine the association between rK39 and direct agglutination test (DAT) antibody titres and progression to KA. For the KALANET trial, rK39 ELISA results were available for 2006 only, so only the risk of KA as a function of baseline seropositivity could be assessed, but for the TMRC surveys blood samples were tested using rK39 ELISA at each survey, so seroconversion was also investigated. Hasker et al. found that there was a strong association between high rK39 titres at baseline and progression to KA, and an even stronger association between seroconversion to a high titre and subsequent progression.
In our analysis, we take the 2002 serology survey as the baseline survey and the 2003 survey as the follow-up survey. We use the rK39 ELISA cut-offs described above to define seronegativity (rK39 ELISA reading < 20CU), moderate seropositivity (20CU ≤ rK39 ELISA < 61CU) and strong seropositivity (rK39 ELISA ≥ 61CU). These differ slightly from the cut-offs used in the Hasker study, in which the cut-off for seropositivity was given by the mean optical density of known negative sera plus two standard deviations, and the cut-off for strong seropositivity was determined by the percentage point optical density with the highest combined sensitivity and specificity for identifying individuals diagnosed with KA in the last 2 years. Since the cut-off for strong seropositivity for our data was determined using samples from individuals with active KA as positive controls, it is likely that it corresponds to a higher rK39 titre and is more specific for KA. Nevertheless, the cut-offs for the second TMRC cohort in [28], which was from a higher endemicity region, correspond closely to those used in our analysis.
For the analysis of KA progression risk with seroconversion, we classified seroconvertors from the 2002 survey to the 2003 survey into different groups. Individuals that sero-deconverted from being either strongly seropositive or moderately seropositive to seronegative were grouped together (sero-deconvertors), as were those that remained either seronegative or seropositive (non-convertors), who were taken as the reference group. Individuals whose titre increased between surveys were grouped into seroconvertors (who went from being seronegative to seropositive) and strong seroconvertors (who went from being seronegative or seropositive to strongly seropositive). People that were strongly seropositive at both surveys were treated as a separate group.
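As a concrete illustration of the survival analysis just described, the following minimal R sketch uses the survival package; the data frame cohort and the columns days, ka and serogroup are hypothetical placeholders, not the variable names used in the study.

library(survival)

# Hypothetical columns: 'days' = time from baseline to KA onset or censoring,
# 'ka' = 1 if the individual developed KA during follow-up, 0 otherwise,
# 'serogroup' = factor giving the baseline sero-status or seroconversion group
#               (reference level = seronegative / non-convertors).
km <- survfit(Surv(days, ka) ~ serogroup, data = cohort)   # Kaplan-Meier curves by group
plot(km, conf.int = TRUE)                                  # progression risk with confidence bands

cox <- coxph(Surv(days, ka) ~ serogroup, data = cohort)    # Cox proportional hazards regression
summary(cox)                                               # hazard ratios and 95 % CIs vs the reference group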
Multi-state Markov model of natural history of VL
Multi-state Markov models provide an informative way of analysing the natural history of a disease, by describing how an individual moves through a series of disease states (e.g. healthy, infected, recovered, dead) in continuous time. The movement of individuals between states is governed by a set of transition intensities, \(q_{rs}\) (\(r, s = 1, \ldots, R\)), each of which represents the instantaneous risk of moving from state r to state s for \(r \ne s\) (\(q_{rr} := -\sum_{s \ne r} q_{rs}\)), where R is the number of states. The transition intensities may depend on time t and a set of (potentially individual-specific) explanatory variables z (i.e. \(q_{rs} = q_{rs}(t, z)\)), and are summarised in an \(R \times R\) matrix, Q, whose rows sum to zero. The aim of fitting the multi-state model to data on observations of individuals' disease states is to estimate the transition intensity matrix Q.
Multi-state Markov models are particularly useful for modelling panel data on disease progression, such as that described above, where individuals are observed at approximately regular intervals, but the exact times between follow-up visits vary and limited information is available about the individuals between follow-up visits. This means that changes in individuals' disease states generally occur at unknown times.
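For a model with time-constant intensities (as assumed below), the link between the intensities and such interval-censored observations is standard: the probability of occupying state s a time u after being observed in state r is the (r, s) entry of the matrix exponential of uQ,

$$ P(u) = \exp (uQ), \qquad P_{rs}(u) = \Pr \left(\mathrm{state}\ s\ \mathrm{at}\ t+u \mid \mathrm{state}\ r\ \mathrm{at}\ t\right), $$

and the likelihood of the panel data is a product of such entries over successive pairs of observations of each individual.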
Following Stauch et al. [33] and with a view to developing a transmission model, we model the natural history of VL as shown in Fig. 1. Individuals are classified into 5 different disease states—susceptible, asymptomatically infected, symptomatically infected (active KA), recovered/dormant, and dead—according to their KA status and the results of the rK39 ELISA and LST as shown in Table 2 (see Table A1 in Additional file 1 for the full classification including censored states for missing tests). Susceptible individuals (state 1) have negative rK39 ELISA and LST readings, are not currently symptomatic, and their most recent rK39 ELISA test was negative. If individuals have a positive ELISA, but a negative LST and have not previously suffered KA they are classed as being asymptomatically infected (state 2). On development of symptoms an individual is recorded as having KA (state 3) and remains in this state from the date of fever onset to the end of treatment or, if the individual is untreated, one year after the onset of symptoms (98 % of patients that have KA during the study have a date of onset of symptoms and 91 % have a date of start of treatment; prior to and after the study period these data are often missing and so these dates are recorded as unknown or uncertain). However, asymptomatically infected individuals may also progress to a 'recovered/dormant' state (state 4) without developing symptoms. Recovered/dormant individuals are classified as those who are LST positive, have recovered from their KA symptoms or have sero-deconverted from positive to negative rK39 ELISA. Recovered/dormant individuals may relapse to KA or return to being susceptible. Deaths due to KA and other causes (state 5) are included in the dataset, so individuals may be absorbed from any of states 1 to 4 into state 5.
Flow diagram for multi-state Markov model of natural history of VL
Table 2 Classification of individuals into different disease states in multi-state model
The transition intensity matrix for this 5-state model of VL is
$$ Q=\left(\begin{array}{ccccc} q_{11} & q_{12} & 0 & 0 & q_{15} \\ 0 & q_{22} & q_{23} & q_{24} & q_{25} \\ 0 & 0 & q_{33} & q_{34} & q_{35} \\ q_{41} & 0 & q_{43} & q_{44} & q_{45} \\ 0 & 0 & 0 & 0 & 0 \end{array}\right). $$
We will assume that the \(q_{rs}\) are independent of the number of individuals in each state, and thus are time-independent. For this model, the proportion of those asymptomatically infected who develop symptoms (excluding individuals that die) is the ratio of the transition rate from asymptomatic infection to KA to the total rate of progression to KA or recovery:
$$ \mathrm{Probability}\ \mathrm{of}\ \mathrm{developing}\ \mathrm{symptoms}=\frac{q_{23}}{q_{23}+{q}_{24}}. $$
The model was fitted using the multi-state modelling package msm in R [44]. This package allows for the fact that some observations are exact (such as dates of death) and others are censored (such as the date of seroconversion) (see Additional file 1 for further details). The package estimates the transition intensity matrix Q and its confidence intervals by maximising the likelihood of the model given the data (see Additional file 1 for full details). The model also estimates the durations of the asymptomatic and symptomatic stages (the waiting times in states 2 and 3). The BFGS (Broyden-Fletcher-Goldfarb-Shanno) optimisation method (a quasi-Newtonian hill-climbing method that uses analytic derivatives for the optimisation [45]) in the optim function was used for finding the maximum likelihood. The confidence interval for the proportion of asymptomatic individuals that develop KA was calculated by bootstrap resampling of the data and refitting of the model with 1000 bootstrap samples [44].
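A minimal sketch of this fitting step with the msm package is shown below; the data frame vl_panel, its columns (id, years, state) and the starting values in Q.init are hypothetical placeholders, and the handling of censored states described in Additional file 1 is omitted for brevity.

library(msm)

# Hypothetical long-format panel data: one row per person per observation,
# with columns 'id', 'years' (time of observation) and 'state' (1-5 as in Table 2).
Q.init <- rbind(c(0,   0.1, 0,    0,   0.01),   # 1 susceptible
                c(0,   0,   0.1,  0.1, 0.01),   # 2 asymptomatically infected
                c(0,   0,   0,    0.1, 0.01),   # 3 KA
                c(0.1, 0,   0.01, 0,   0.01),   # 4 recovered/dormant
                c(0,   0,   0,    0,   0))      # 5 dead (absorbing)

fit <- msm(state ~ years, subject = id, data = vl_panel,
           qmatrix = Q.init, deathexact = 5, method = "BFGS")

qmatrix.msm(fit)   # estimated transition intensities with confidence intervals
sojourn.msm(fit)   # mean waiting times in the transient states

The non-zero entries of Q.init specify which transitions are allowed (Fig. 1); their values are only starting points for the optimisation.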
Delays to treatment
Figure 2(a)-(c) show the distributions of onset-to-treatment, onset-to-diagnosis and diagnosis-to-treatment times for all KA patients for whom these times were recorded, along with the median, mean and standard deviation of each distribution. The median time of 120 days from onset to start of treatment is much longer than figures recently reported for Bangladesh (58 days) [20] and Nepal (55 days), but is comparable with those for Bihar (104 days) [41]. The discrepancy between our data and recently reported figures likely reflects the poorer state of the health care system and the greater cost of treatment in Bangladesh at the time, and the fact that there was no active detection programme for KA cases before 2005 [8, 46]. However, methodological differences between the studies may also account for some of the difference; for example, the dates of symptom onset and treatment from 1999 to 2002 were retrospectively ascertained in our study, so may be subject to recall bias.
Delays to treatment. Distributions of (a) onset-to-treatment time, (b) onset-to-diagnosis time, and (c) diagnosis-to-treatment time. Sample sizes (n), medians, means and standard deviations (SDs): (a) n = 147, median = 120 days, mean = 133 days, SD = 90 days; (b) n = 67, median = 90 days, mean = 111 days, SD = 94 days; (c) n = 64, median = 12 days, mean = 24 days, SD = 35 days
Risk of progression to KA
In the analysis of the association between rK39 sero-status and progression to KA, 1,515 individuals who had not had KA previously and who had a serological measurement at baseline were included, amongst whom there were a total of 43 KA cases. The Kaplan-Meier curve in Fig. 3(a) illustrates that being highly seropositive at baseline was much more predictive of progression to clinical symptoms than moderate seropositivity or negative serology—29 % of strongly seropositive individuals progressed to KA compared to 3 % and 2 % of seropositive and seronegative individuals. Testing these associations using Cox regression modelling showed that there was relatively little difference in the risk of progression for seronegative and moderately seropositive individuals, but a much higher hazard ratio (HR) for those with high seropositivity (HR 17.7, 95 % CI 8.05-38.8, Table 3). This matches the analysis of Hasker et al. [28], which found hazard ratios for progression to KA ranging from 1.6 to 4.9 for seropositive individuals and from 7.7 to 39.6 for strongly seropositive individuals compared to seronegative individuals in 4 cohorts studied in Bihar, India and Terai, Nepal. However, the proportion of strongly seropositive individuals progressing to KA in the Bangladesh study was much higher than in all of the cohorts in [28] (29 % compared to 1.1-7.7 %) except for the TMRC cohort in Muzaffarpur, Bihar that was selected based on high reported KA incidence in the year prior to the study (where the proportion was 23.3 %).
Kaplan-Meier curves for risk of progression to KA. Progression risk (with censoring) by (a) serology status at baseline, and (b) seroconversion from baseline survey (2002) to second survey (2003). Dots show where individuals were lost to follow-up; dashed lines show 95 % confidence intervals
Table 3 Progression to KA depending on baseline rK39 sero-status. Hazard ratios and p-values estimated from fitted Cox proportional hazards regression models
As well as sero-status at baseline, seroconversion was an important marker for progression to KA. The seroconversion analysis was performed using 1,372 individuals that had rK39 ELISA readings from both the 2002 and 2003 surveys, 33 of whom developed KA. As expected from the figures shown in Table 3, a transition to strong seropositivity from either negative or moderately positive serology at baseline was associated with a high progression rate to KA compared with no seroconversion (HR 165, 95 % CI 74.6-365). Individuals that were strongly seropositive at both surveys also had a high risk of developing clinical symptoms (HR 61.5, 95 % CI 19.3-196). Seroconversion to moderate seropositivity was associated with an approximately 5-fold increase in risk of KA over no seroconversion, but the difference in progression for sero-deconvertors and non-convertors was not significant (Table 4). These results are similar to those for the highly endemic villages in Muzaffarpur, Bihar in [28], which showed a hazard ratio for KA of 123.9 for individuals that became strongly seropositive compared with those that remained seronegative. However, a far greater proportion of high-titre seroconvertors progressed to KA in our data than in the previous study (13/18, 72 % as opposed to 9/37, 24.3 %). Also, unlike in this analysis, there was no significant association between moderate-titre seroconversion and progression to KA in the previous study. This is likely to be partly due to the differences, described in the Methods section, in the definitions of the cut-off values for seropositivity and strong seropositivity between the two studies.
Table 4 Progression to KA depending on change in serology status from first survey to second survey. Hazard ratios and p-values estimated from fitted Cox proportional hazard models. Sero-status: seronegative (−), seropositive (+), strongly seropositive (++)
The strong association between seroconversion to a high rK39 antibody titre and progression to KA in the study data suggests that rK39 ELISA could be used to predict KA prior to the onset of symptoms. However, since most high-titre seroconvertors who developed clinical symptoms did so before the second survey (Fig. 3(b)), more frequent testing would be required to use rK39 ELISA to pre-diagnose KA. Furthermore, even if practical constraints allowed for such testing, there is no suitable prophylactic treatment available at present, due to the toxicity of drugs currently used to treat KA.
Natural history of VL
We first fitted the Markov model in its simplest form with constant transition intensities, i.e. treating individuals in the same state as having the same risk of infection and disease. The estimated transition intensity matrix for this 5-state model is
$$ Q=\left(\begin{array}{ccccc} -0.22 & 0.21 & 0 & 0 & 0.005 \\ 0 & -2.49 & 0.36 & 2.11 & 0.02 \\ 0 & 0 & -2.61 & 2.48 & 0.13 \\ 0.31 & 0 & 0.01 & -0.33 & 0.006 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right), $$
and the negative log-likelihood of the model is − log L = 1760.5 for the fitted intensities. The movement of individuals through the different disease states in the model with the estimated transition intensities is simulated in Additional file 2.
Proportion of asymptomatic individuals that develop KA
The estimated intensities show that for asymptomatically infected individuals the probability of developing clinical symptoms (from (1)) was
$$ \frac{q_{23}}{q_{23}+{q}_{24}}=\frac{0.36}{0.36+2.11}=0.147, $$
with a 95 % bootstrap confidence interval (CI) of 0.126-0.200, and that recovery from KA with correct treatment was nearly 20 times as likely as dying from KA (\(q_{34}/q_{35} = 19.5\)). This estimate for the proportion of asymptomatic individuals that develop KA, 14.7 % (95 % CI 12.6-20.0 %), is much larger than the figure of 0.33 % (95 % CI 0.22-0.49 %) estimated by Stauch et al. [33] from their SIRS model of VL transmission fitted to the KALANET trial data, and the figure of 4 % used in the transmission model of Medley et al. [20] based on the average progression to KA from baseline seropositivity in the KALANET data [28]. It is, however, similar to the proportion of DAT seroconvertors that developed KA in high-endemicity villages in the KALANET trial (10.1 %) [47], and the KA progression rate of asymptomatics identified by rK39 and PCR positivity in a study in two highly endemic villages in Bihar in 2005–2006 (23 %) [24]. It also agrees reasonably well with the 4:1 ratio of cases of seroconversion to KA reported by Bern et al. [31] for the data from 2002 to 2004. The variation in these estimates may reflect differences in various factors between the different study locations and periods, including parasite virulence, the immune and nutritional status of the host population, and the part of the epidemic curve the study population was on (incidence was climbing steeply in Fulbaria in 2000–2004 [7]). However, Stauch et al.'s estimate may be a considerable underestimate of the actual proportion that progress to KA due to the rapid cycling of individuals through asymptomatic infection in their model caused by fitting to cross-sectional data. It is likely, therefore, that Stauch et al. underestimate the contribution of KA patients to transmission relative to that of asymptomatic individuals.
Average durations of asymptomatic infection and KA
The mean waiting times in the different disease states for the estimated transition intensities are shown in Table A2 in Additional file 1. The mean duration of asymptomatic infection was 147 days (95 % CI 130–166 days) and that of symptomatic infection was 140 days (95 % CI 123–160 days). Both these estimates are much longer than estimates from previous models. Stauch et al. [33] estimated that asymptomatic infection lasted on average 72 days (95 % CI 69–75 days) and that KA patients that received successful first-line treatment were cleared of parasites after 31 days and took 105 days on average from developing symptoms to become DAT negative and LST positive (i.e. to fully recover). Medley et al. [20] assumed that the duration of asymptomatic infection is 80 days in their model. Clearly, longer infection and disease durations can lead to increased transmission, as there are more opportunities for sandflies to become infected through feeding on humans. Based on our estimates, asymptomatic individuals are likely to contribute significantly to transmission even if their infectivity to sandflies is low relative to symptomatic individuals, due to the long asymptomatic infection period and the large ratio of asymptomatic to symptomatic individuals (although this proportion is smaller than that estimated by Stauch et al. [33]). If our estimates of the infection durations are representative of current high endemicity areas, more effort should be focused on reducing time to treatment through active case detection and early diagnosis, and greater surveillance of asymptomatic infection is required to identify individuals that are likely to develop KA and to estimate their contribution to transmission.
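These mean durations follow directly from the fitted intensity matrix in (2): for a time-homogeneous Markov model the mean sojourn time in a transient state r is \(-1/q_{rr}\), so, taking the intensities in (2) to be per year (as the reported durations imply),

$$ -\frac{1}{q_{22}} = \frac{1}{2.49}\ \mathrm{years} \approx 147\ \mathrm{days}, \qquad -\frac{1}{q_{33}} = \frac{1}{2.61}\ \mathrm{years} \approx 140\ \mathrm{days}. $$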
The estimated mean waiting time in the dormant/recovered stage was 1110 days (95 % CI 988–1247 days), which is much longer than that in the model of Stauch et al., where individuals remained DAT positive after asymptomatic infection or KA treatment for 74 days (95 % CI 65–84 days) and LST-positive for 307 days (95 % CI 260–356 days) on average, corresponding to a total time in the dormant/recovered stage of 381 days. We note that our estimate for the total time spent immune or with dormant infection is also likely to be an underestimate, due to the decrease in the sensitivity of the LST over the course of the study. This suggests that cellular immunity to the parasite can last for multiple years after asymptomatic infection or successful treatment for KA as reported elsewhere [48], rather than being lost within a year as suggested by Stauch et al. Indeed the age distribution of LST positivity in 2002 (when the LST had highest sensitivity) (Figure A1 in Additional file 1) shows an increase in the proportion of LST positive individuals with age, and Bern et al. [31] calculated from the data that there was a 48 % (95 % CI 38-59 %) increase in the chance of being LST positive with each 10-year increase in age, which suggests that cellular immunity wanes slowly. Of 530 individuals that tested LST positive in 2002, only one developed KA over the next two years, compared with 43 of the 1000 individuals that tested negative (relative risk = 0.04, 95 % CI 0.006-0.32), indicating that LST positivity represents effective immunity against KA. Given the increasing prevalence of LST positivity with age, its potentially long duration and the strong protection it offers against KA, control efforts should strive for 100 % detection and treatment of KA cases, particularly among individuals below 30 years of age who are less likely to be immune. The LST should also form a routine part of epidemiological studies to enable effective monitoring of levels of immunity within the population and better prediction of the risk of KA outbreaks, though this will require the production of sufficient quality leishmanin antigen to avoid issues associated with low antigen sensitivity [30].
Model fit
To assess how well the multi-state model fits the data with the estimated transition intensity matrix in (2), we compared the observed number and prevalence of individuals in each state during the study period from 2002 to 2004 to the expected number and prevalence from the model. As shown in Table A7 and Figure A3 in Additional file 1, the overall fit of the model is good, with the observed and expected prevalences matching very closely for susceptible individuals, KA patients and recovered/dormant individuals, and fairly closely for asymptomatically infected individuals and dead individuals.
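In the msm framework this comparison can be generated directly from the fitted model; a brief sketch, reusing the hypothetical fit object from the earlier code:

prevalence.msm(fit, times = seq(0, 2.5, by = 0.5))   # observed vs expected numbers and prevalences
plot.prevalence.msm(fit)                             # observed and expected prevalences over time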
We also compared the model to a 6-state model in which asymptomatically infected individuals were split into two separate states—one for those who subsequently progressed to KA ('pre-symptomatics'), and one for those who recovered without developing symptoms ('asymptomatics')—to determine whether there was a difference in the duration of asymptomatic infection for the two groups (see Additional file 1 for full details). Fitting the 6-state model to the data gave similar results to the 5-state model for the mean durations of the different disease stages (127 days for KA, 95 % CI 113–143 days, and 1108 days for the time spent immune or with dormant infection, 95 % CI 987–1244 days) and the proportion of infected individuals that develop symptoms (13.8 %, 95 % CI 9.7–19.4 %), and fairly similar asymptomatic infection durations of 135 days (95 % CI 109–167 days) and 159 days (95 % CI 138–183 days) for pre-symptomatics and asymptomatics.
Covariates
A number of factors may be associated with altered KA risk, including sex, age and consistent use of bed nets (Table A3 and Figure A2 in Additional file 1). Individuals aged between 0 and 14 were at highest risk of KA (9.9 %), with a significantly decreased KA incidence in adults aged over 45, with only 2.8 % developing KA over the course of the study. Males were found to have a slightly higher incidence of KA than females (9.2 % compared to 7.8 %), while the use of bed nets more than halved the risk of KA (6.7 % compared to 14.7 %).
To further investigate the effects of these variables on risk of infection and disease, we fitted the model with each variable included as a covariate on the transition intensities. The results are summarised in Table A4 in Additional file 1, which gives hazard ratios for each covariate with 95 % confidence intervals. This analysis allows us to investigate which parts of the disease progression are affected by each covariate. For example, the hazard ratios for \(q_{12}\) and \(q_{34}\) for bed net use are 0.72 (95 % CI 0.52–1.00) and 1.44 (95 % CI 1.06–1.96), suggesting that bed net use reduces the risk of leishmanial infection by 28 % and increases the chance of recovery from KA by 44 % over no bed net use. This is potentially due to bed nets preventing infected sandflies from biting humans and either infecting or reinoculating them, and suggests that, with proper and widespread use, bed nets could form an effective part of VL control.
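A sketch of this covariate analysis in msm, again with a hypothetical 0/1 bednet column added to the panel data:

fit_bn <- msm(state ~ years, subject = id, data = vl_panel,
              qmatrix = Q.init, deathexact = 5, method = "BFGS",
              covariates = ~ bednet)

hazard.msm(fit_bn)   # hazard ratios (with 95 % CIs) on each allowed transition intensity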
Performing a similar analysis on sex indicates that females have a lower rate of progression from asymptomatic infection to recovery/dormant infection (HR for \(q_{24}\) 0.73, 95 % CI 0.57–0.94) and a higher rate of return from recovery/dormant infection to susceptibility (HR for \(q_{41}\) 1.36, 95 % CI 1.07–1.72). While Table A3 and Figure A2 in Additional file 1 suggest that the risk of KA generally decreases with age, the risk does not decrease linearly. Individuals aged 15–45 appear to be at increased risk of infection compared with those aged 0–14 (HR for \(q_{12}\) 1.31, 95 % CI 0.99–1.73), but more likely to recover from asymptomatic infection without developing KA (HR for \(q_{24}\) 1.35, 95 % CI 1.03-1.77) and less likely to recover from KA (HR for \(q_{34}\) 0.75, 95 % CI 0.56-1.00). The risk of death from KA is higher for adults aged over 45 than children aged 0–14 (HR for \(q_{35}\) 5.19, 95 % CI 1.28–21.0). As expected, the risk of death due to other causes is much higher for adults aged over 45 than children aged 0–14 (HRs for \(q_{15}\), \(q_{25}\) and \(q_{45}\) 15.9, 95 % CI 4.5-55.4). Table A5 in Additional file 1 shows the probability of developing KA from asymptomatic infection for the different groups for each covariate. The probability is fairly similar across all groups and covariates, at approximately 0.15-0.16, apart from for 0–14-year-olds, who have a higher probability of developing symptoms of 0.17, and those aged over 45, who have a much lower probability of symptoms of 0.06. These differences have implications for design of surveillance systems; for example, they suggest that children are likely to be a more sensitive indicator of continued transmission, whereas most infection in adults is asymptomatic. Comparison of the model with each of the covariates to the model without any covariates using the likelihood ratio test and Akaike information criterion (Table A6 in Additional file 1) reveals that including each of the covariates significantly improves the fit of the model to the data. The largest improvement in the fit is given by including age-group as a covariate, which decreases the negative log-likelihood for the model to − log L = 1720.7 (p = 1.8 × 10^−10 for the likelihood ratio test against the model with no covariates).
By reanalysing a detailed dataset on the development of clinical VL in Bangladesh in 2002–2004, we have been able to provide an independent estimate for the proportion of asymptomatically infected individuals who progress to KA of 14.7 % and an estimate for the asymptomatic infection period of 147 days. Both these estimates are similar to those reported in the literature by other means [32, 34], but much higher and longer respectively than those used in the main previous modelling studies [20, 33, 49].
Our analysis also shows that high rK39 levels, and in particular seroconversion to a high rK39 titre, are good predictors of progression to clinical VL, providing independent support for the results from a previous study [28]. This suggests that it may be possible to screen individuals to identify those who are likely to progress to clinical VL, improve their access to treatment and potentially reduce their infectious period and onward transmission through targeted IRS.
The role of seroconverting and symptomatic individuals in transmission depends not only on the proportion of individuals in each state and the lengths of time they are in each state, but also on their infectivity to sandflies. The relative infectivity of asymptomatics and symptomatics has rarely been studied [50, 51], although one xenodiagnostic study in Ethiopian patients and vectors suggests that there may be marked changes in infectivity with parasitaemia [52], a measure which was not noted in this study.
We have also highlighted that the time from onset of symptoms to treatment in this area of Bangladesh in the early 2000s was considerably longer than in recent times [41]. During the 2002–2004 study period, Bangladesh experienced a major shortage of sodium stibogluconate, the only KA treatment drug in use at the time [53]. The shortage led to lack of supply in government health facilities and price gouging in the private marketplace [54]. At the time the study began, the median time from onset to treatment was 6 months and the only available drug was provided by the project; this shortened to 3–4 months over the course of the study. Improvements in drug availability after 2005, and especially after the implementation of active case detection and treatment with short course liposomal amphotericin B in Fulbaria [55], may have helped to drive reductions in incidence in Bangladesh by shortening the period symptomatic individuals spent in the community prior to treatment [20].
Despite our limited understanding of the natural history of VL [26], there are only two detailed epidemiological studies of the progression of leishmanial infection and disease. If we are to further refine control strategies to bring VL to local elimination, such studies will be invaluable, particularly if they can assist in identifying individuals who will develop KA or who may contribute most substantially to transmission.
Our intention is to use the results of our statistical modelling to develop transmission dynamics models of VL to evaluate the effect of potential interventions and the feasibility of achieving the 2020 elimination goals. At present, our estimate of the rate of infection, \(q_{12}\), is independent of time and of the prevalence of infection, but should be a function of the number of infectious sandflies, which itself is dependent on the infectiousness of the human population. Within such a framework, we can also consider the spatial kernel of transmission and the impact of individual circumstances such as livestock ownership, nutritional status and sleeping location. To refine our estimates of the asymptomatic infection period and proportion of infected individuals that develop KA, and to assess the sensitivity and specificity of the rK39 ELISA and LST used in the study, we need to account for misclassification of individuals' disease states in the multi-state model due to errors in the test results. This can be achieved using a hidden Markov model, in which individuals' observed states can be misclassifications of their true, underlying disease states.
The results of this study and the potential for future development highlight the importance of detailed, longitudinal studies for improving understanding of VL and creating datasets that can be used for the design of interventions. As our understanding of the disease develops, the requirements for such data change, indicating that datasets such as these must be continually gathered.
Boelaert M, Bhattacharya S, Chappuis F, El Safi SH, Hailu A, Mondal D, et al. Evaluation of rapid diagnostic tests: Visceral leishmaniasis. Nat Rev Microbiol. 2007;5:S30–9.
Chappuis F, Sundar S, Hailu A, Ghalib H, Rijal S, Peeling RW, et al. Visceral leishmaniasis: What are the needs for diagnosis, treatment and control? Nat Rev Microbiol. 2007;5(11):873–82.
Maia Z, Lírio M, Mistro S, Mendes C, Mehta SR, Badaro R. Comparative study of rK39 leishmania antigen for serodiagnosis of visceral leishmaniasis: Systematic review with meta-analysis. PLoS Negl Trop Dis. 2012;6(1), e1484.
de Ruiter C, Van der Veer C, Leeflang M, Deborggraeve S, Lucas C, Adams E. Molecular tools for diagnosis of visceral leishmaniasis: Systematic review and meta-analysis of diagnostic test accuracy. J Clin Microbiol. 2014;52(9):3147–55.
Boelaert M, Verdonck K, Menten J, Sunyoto T, van Griensven J, Chappuis F, et al. Rapid tests for the diagnosis of visceral leishmaniasis in patients with suspected disease. Wiley Online Library: The Cochrane Library; 2014.
Dye C, Wolpert DM. Earthquakes, influenza and cycles of Indian kala-azar. Trans R Soc Trop Med Hyg. 1988;82(6):843–50.
Islam S, Kenah E, Bhuiyan MAA, Rahman KM, Goodhew B, Ghalib CM, et al. Clinical and immunological aspects of post-kala-azar dermal leishmaniasis in Bangladesh. Am J Trop Med Hyg. 2013;89(2):345–53.
Chowdhury R, Mondal D, Chowdhury V, Faria S, Alvar J, Nabi SG, et al. How far are we from visceral leishmaniasis elimination in Bangladesh? An assessment of epidemiological surveillance data. PLoS Negl Trop Dis. 2014;8(8), e3020.
Rijal S. WHO Visceral Leishmaniasis country data. Nepal: World Health Organisation; 2010.
Bhunia GS, Kesari S, Chatterjee N, Kumar V, Das P. The burden of visceral leishmaniasis in India: Challenges in using remote sensing and GIS to understand and control. ISRN Infect Dis. 2012. doi:10.5402/2013/675846.
Mondal D, Singh SP, Kumar N, Joshi A, Sundar S, Das P, et al. Visceral leishmaniasis elimination programme in India, Bangladesh, and Nepal: Reshaping the case finding/Case management strategy. PLoS Negl Trop Dis. 2009;3(1):e355.
Sharma SN, Batthacharya S, Sundar S. WHO Visceral Leishmaniasis country data. India: World Health Organisation; 2010.
Regional strategic framework for elimination of kala-azar from the South-East Asia region (2005–2015). New Delhi: WHO Regional Office for South-East Asia. World Health Organization; 2005.
Sundar S, Chakravarty J. Leishmaniasis: Challenges in the control and eradication. In: Fong I, editor. Challenges in infectious diseases. Springer: New York; 2013. p. 247–64.
Chowdhury R, Huda M, Kumar V, Das P, Joshi A, Banjara M, et al. The Indian and Nepalese programmes of indoor residual spraying for the elimination of visceral leishmaniasis: Performance and effectiveness. Ann Trop Med Parasitol. 2011;105(1):31–5.
Coleman M, Foster GM, Deb R, Singh RP, Ismail HM, Shivam P, et al. DDT-based indoor residual spraying suboptimal for visceral leishmaniasis elimination in India. Proc Natl Acad Sci. 2015;112(28):8573–8.
Hasker E, Singh SP, Malaviya P, Picado A, Gidwani K, Singh RP, et al. Visceral leishmaniasis in rural Bihar, India. Emerg Infect Dis. 2012;18(10):1662–4.
Chowdhury R, Dotson E, Blackstock AJ, McClintock S, Maheswary NP, Faria S, et al. Comparison of insecticide-treated nets and indoor residual spraying to control the vector of visceral leishmaniasis in Mymensingh district, Bangladesh. Am J Trop Med Hyg. 2011;84(5):662–7.
Rutte EA le, Coffeng LE, Bontje DM, Hasker EC, Postigo JAR, Dagne DA, et al. Feasibility of eliminating visceral leishmaniasis from the Indian subcontinent: Explorations with a deterministic transmission model. Parasites and Vectors (Submitted). 2015.
Medley GF, Hollingsworth TD, Olliaro PL, Adams ER. Visceral leishmaniasis control: Health-seeking, diagnostics and transmission. Nature supplement (Under review). 2015.
Malaviya P, Picado A, Singh S, Hasker E, Singh R, Boelaert M, et al. Visceral leishmaniasis in Muzaffarpur district, Bihar, India from 1990 to 2008. PLoS One. 2010;6(3), e14751.
Alvar J, Vélez ID, Bern C, Herrero M, Desjeux P, Cano J, et al. Leishmaniasis worldwide and global estimates of its incidence. PLoS One. 2012;7(5), e35671.
Singh VP, Ranjan A, Topno RK, Verma RB, Siddique NA, Ravidas VN, et al. Estimation of under-reporting of visceral leishmaniasis cases in Bihar, India. Am J Trop Med Hyg. 2010;82(1):9–11.
Das V, Siddiqui N, Verma R, Topno R, Singh D, Das S, et al. Asymptomatic infection of visceral leishmaniasis in hyperendemic areas of Vaishali district, Bihar, India: A challenge to kala-azar elimination programmes. Trans R Soc Trop Med Hyg. 2011;105(11):661–6.
Das S, Matlashewski G, Bhunia GS, Kesari S, Das P. Asymptomatic Leishmania infections in northern India: A threat for the elimination programme? Trans R Soc Trop Med Hyg. 2014;108(11):679–84.
Rock KS, Rutte EA le, Vlas SJ de, Adams ER, Medley GF, Hollingsworth TD. Uniting mathematics and biology for control of visceral leishmaniasis. Trends Parasitol. 2015.
Picado A, Kumar V, Das M, Burniston I, Roy L, Suman R, et al. Effect of untreated bed nets on blood-fed Phlebotomus argentipes in kala-azar endemic foci in Nepal and India. Mem Inst Oswaldo Cruz. 2009;104(8):1183–6.
Hasker E, Malaviya P, Gidwani K, Picado A, Ostyn B, Kansal S, et al. Strong association between serological status and probability of progression to clinical visceral leishmaniasis in prospective cohort studies in India and Nepal. PLoS Negl Trop Dis. 2014;8(1).
Bern C, Hightower A, Chowdhury R, Ali M, Amann J, Wagatsuma Y, et al. Risk factors for kala-azar in Bangladesh. Emerg Infect Dis. 2005;11(5):655–62.
Bern C, Amann J, Haque R, Chowdhury R, Ali M, Kurkjian KM, et al. Loss of leishmanin skin test antigen sensitivity and potency in a longitudinal study of visceral leishmaniasis in Bangladesh. Am J Trop Med Hyg. 2006;75(4):744–8.
Bern C, Haque R, Chowdhury R, Ali M, Kurkjian KM, Vaz L, et al. The epidemiology of visceral leishmaniasis and asymptomatic leishmanial infection in a highly endemic Bangladeshi village. Am J Trop Med Hyg. 2007;76(5):909–14.
Rees PH, Kager PA. Visceral leishmaniasis and post-kala-azar dermal leishmaniasis. In: Peters W, Killick-Kendrick R, editors. The leishmaniases in biology and medicine. Volume II. Clinical aspects and control. London: Academic Press; 1987. p. 583–615.
Stauch A, Sarkar RR, Picado A, Ostyn B, Sundar S, Rijal S, et al. Visceral leishmaniasis in the Indian subcontinent: Modelling epidemiology and control. PLoS Negl Trop Dis. 2011;5(11):1–12.
Mubayi A, Castillo-Chavez C, Chowell G, Kribs-Zaleta C, Siddiqui NA, Kumar N, et al. Transmission dynamics and underreporting of kala-azar in the Indian state of Bihar. J Theor Biol. 2010;262(1):177–85.
Houghton RL, Petrescu M, Benson DR, Skeiky YA, Scalone A, Badaró R, et al. A cloned antigen (recombinant k39) of Leishmania chagasi diagnostic for visceral leishmaniasis in human immunodeficiency virus type 1 patients and a prognostic indicator for monitoring patients undergoing drug therapy. J Infect Dis. 1998;177(5):1339–44.
Badaro R, Benson D, Eulalio M, Freire M, Cunha S, Netto E, et al. RK39: A cloned antigen of Leishmania chagasi that predicts active visceral leishmaniasis. J Infect Dis. 1996;173(3):758–61.
Kurkjian K, Vaz L, Haque R, Cetre-Sossah C, Akhter S, Roy S, et al. Application of an improved method for the recombinant K39 enzyme-linked immunosorbent assay to detect visceral leishmaniasis disease and infection in Bangladesh. Clin Diagn Lab Immunol Am Soc Microbiol. 2005;12(12):1410–5.
Sokal JE. Editorial: Measurement of delayed skin-test responses. N Engl J Med. 1975;293(10):501–2.
Gramiccia M, Bettini S, Gradoni L, Ciarmoli P, Verrilli M, Loddo S, et al. Leishmaniasis in Sardinia 5. Leishmanin reaction in the human population of a focus of low endemicity of canine leishmaniasis. Trans R Soc Trop Med Hyg. 1990;84(3):371–4.
Weigle KA, Valderrama L, Arias AL, Santrich C, Saravia NG. Leishmanin skin test standardization and evaluation of safety, dose, storage, longevity of reaction and sensitization. Am J Trop Med Hyg. 1991;44(3):260–71.
Boettcher JP, Siwakoti Y, Milojkovic A, Siddiqui NA, Gurung CK, Rijal S, et al. Visceral leishmaniasis diagnosis and reporting delays as an obstacle to timely response actions in Nepal and India. BMC Infect Dis. 2015;15(1):43.
Picado A, Singh SP, Rijal S, Sundar S, Ostyn B, Chappuis F, et al. Longlasting insecticidal nets for prevention of Leishmania donovani infection in India and Nepal: Paired cluster randomised trial. BMJ. 2010;341:c6760.
Hasker E, Kansal S, Malaviya P, Gidwani K, Picado A, Singh RP, et al. Latent infection with Leishmania donovani in highly endemic villages in Bihar, India. PLoS Negl Trop Dis. 2013;7(2), e2053.
Jackson C. Multi-state modelling with R: the msm package. MRC Biostatistics Unit. 2014. https://cran.r-project.org/web/packages/msm/vignettes/msm-manual.pdf. Accessed 1 Jul 2015.
Fletcher R. Practical methods of optimization. Chichester: John Wiley & Sons; 1987.
Das A, Harries A, Hinderaker S, Zachariah R, Ahmed B, Shah G, et al. Active and passive case detection strategies for the control of leishmaniasis in Bangladesh. Public Health Action. 2014;4(1):15–21.
Ostyn B, Gidwani K, Khanal B, Picado A, Chappuis F, Singh S, et al. Incidence of symptomatic and asymptomatic Leishmania donovani infections in high-endemic foci in India and Nepal: A prospective study. PLoS Negl Trop Dis. 2011;5(10):e1284.
Pampiglione S, Manson-Bahr P, La Placa M, Borgatti M, Musumeci S. Studies in mediterranean leishmaniasis: 3. The leishmanin skin test in kala-azar. Trans R Soc Trop Med Hyg. 1975;69(1):60–8.
Stauch A, Duerr H-P, Picado A, Ostyn B, Sundar S, Rijal S, et al. Model-based investigations of different vector-related intervention strategies to eliminate visceral leishmaniasis on the Indian subcontinent. PLoS Negl Trop Dis. 2014;8(4), e2810.
Knowles R, Napier LE, Smith R. On a herpetomonas found in the gut of the sandfly Phlebotomus argentipes, fed on kala-azar patients: A preliminary note. Spink: Thacker; 1926.
Shortt HE, Craighead AC, Barraud PJ, Kala-azar Commission, et al. Note on a massive infection of the pharynx of Phlebotomus argentipes with Herpetomonas donovani. Indian J Med Res. 1926;13(3):441–4.
Miller E, Warburg A, Novikov I, Hailu A, Volf P, Seblova V, et al. Quantifying the contribution of hosts with different parasite concentrations to the transmission of visceral leishmaniasis in Ethiopia. PLoS Negl Trop Dis. 2014;8(10), e3288.
Ahluwalia IB, Bern C, Costa C, Akter T, Chowdhury R, Ali M, et al. Visceral leishmaniasis: Consequences of a neglected disease in a Bangladeshi community. Am J Trop Med Hyg. 2003;69(6):624–8.
Ozaki M, Islam S, Rahman KM, Rahman A, Luby SP, Bern C. Economic consequences of post–kala-azar dermal leishmaniasis in a rural Bangladeshi community. Am J Trop Med Hyg. 2011;85(3):528–34.
Lucero E, Collin SM, Gomes S, Akter F, Asad A, Kumar Das A, et al. Effectiveness and safety of short course liposomal amphotericin B (amBisome) as first line treatment for visceral leishmaniasis in Bangladesh. PLoS Negl Trop Dis. 2015;9(4), e0003699.
The authors gratefully acknowledge funding of the NTD Modelling Consortium by the Bill and Melinda Gates Foundation in partnership with the Task Force for Global Health. The original data collection was funded by a grant from the U.S. Centers for Disease Control and Prevention Emerging Infections Initiative. The authors thank Chris Jackson for developing the multi-state modelling R package (msm), and Marinella Capriati and Sarah Jervis for very useful discussions. The views, opinions, assumptions or any other information set out in this article are solely those of the authors.
School of Life Sciences, University of Warwick, Gibbet Hill Campus, Coventry, CV4 7AL, UK
Lloyd A C Chapman, Louise Dyson, Orin Courtenay & T. Deirdre Hollingsworth
Country Programme Manager - Bangladesh, KalaCORE Programme, Dhaka, Bangladesh
Rajib Chowdhury
Department of Medical Entomology, National Institute of Preventive and Social Medicine (NIPSOM), Mohakhali, Dhaka, Bangladesh
UCSF School of Medicine, 550 16th Street, San Francisco, CA, 94158, USA
Caryn Bern
London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
Graham F. Medley
Lloyd A C Chapman
Louise Dyson
Orin Courtenay
T. Deirdre Hollingsworth
Correspondence to Lloyd A C Chapman.
LACC, LD, GFM and TDH drafted the manuscript. RC and CB collected the data and helped with the drafting of the manuscript. LACC, OC and TDH conceived and designed the study. LACC, LD and TDH developed the R code and performed the data analysis. All authors read and approved the final manuscript.
Graham F. Medley and T. Deirdre Hollingsworth contributed equally to this work.
Further explanation of the multi-state Markov model and fitting, tables of state classifications and results from the model, and comparison of the model with a 6-state model that splits the asymptomatic group into those that progress to KA and those that do not. (PDF 301 kb)
Matlab file for numerically solving the system of ordinary differential equations (ODEs) that can be used to represent the 5-state model for a given set of parameter values. (M 1 kb)
Chapman, L.A.C., Dyson, L., Courtenay, O. et al. Quantification of the natural history of visceral leishmaniasis and consequences for control. Parasites Vectors 8, 521 (2015). https://doi.org/10.1186/s13071-015-1136-3
Visceral leishmaniasis
Multi-state Markov model
Discovery of prognostic biomarkers for predicting lung cancer metastasis using microarray and survival data
Hui-Ling Huang1,2,
Yu-Chung Wu3,
Li-Jen Su4,
Yun-Ju Huang5,
Phasit Charoenkwan1,
Wen-Liang Chen2,
Hua-Chin Lee1,2,
William Cheng-Chung Chu6 &
Shinn-Ying Ho1,2
BMC Bioinformatics volume 16, Article number: 54 (2015)
Few studies have investigated prognostic biomarkers of distant metastases of lung cancer. One of the central difficulties in identifying biomarkers from microarray data is the availability of only a small number of samples, which results in overtraining. Recently obtained evidence reveals that epithelial–mesenchymal transition (EMT) of tumor cells causes metastasis, which is detrimental to patients' survival.
This work proposes a novel optimization approach to discovering EMT-related prognostic biomarkers to predict the distant metastasis of lung cancer using both microarray and survival data. The proposed weighted objective function maximizes both the accuracy of prediction of distant metastasis and the area between the disease-free survival curves of the non-distant and distant metastases. Seventy-eight patients with lung cancer and a follow-up time of 120 months are used to identify a set of gene markers, and an independent cohort of 26 patients is used to evaluate the identified biomarkers. The medical records of the 78 patients show a significant difference between the disease-free survival times of the 37 non-distant- and the 41 distant-metastasis patients. The experimental results thus obtained are as follows. 1) The use of disease-free survival curves can compensate for the shortcoming of insufficient samples and greatly increase the test accuracy by 11.10%; and 2) the support vector machine with a set of 17 transcripts, such as CCL16 and CDKN2AIP, can yield a leave-one-out cross-validation accuracy of 93.59%, a test accuracy of 76.92%, a large disease-free survival area of 74.81%, and a mean survival prediction error of 3.99 months. The identified putative biomarkers are examined using related studies and signaling pathways to reveal the potential effectiveness of the biomarkers in prospective confirmatory studies.
The proposed new optimization approach to identifying prognostic biomarkers by combining multiple sources of data (microarray and survival) can facilitate the accurate selection of biomarkers that are most relevant to the disease while solving the problem of insufficient samples.
Primary lung cancer is very heterogeneous in its clinical presentation, histopathology, and treatment response [1]. Differentiating between an occurrence of a new primary lung cancer and a recurrence of lung cancer is often difficult. Conventionally, lung cancers have been divided into non-small-cell lung cancer (NSCLC) and small-cell lung cancer (SCLC). The stage of each cancer is the most significant predictor of survival. Cancer metastasis and the emergence of drug resistance are the major causes of the failure of treatment for lung cancer. Thus, therapy for lung cancer that takes into account distant metastasis and drug resistance is an emerging field of research. Prognostic biomarkers are expected to be useful in predicting the probable course of lung cancer metastases, and they importantly affect the aggressiveness of therapy. Some new promising strategies for biomarker discovery include microarray-based profiling at the DNA and mRNA levels, and mass-spectrometry-based profiling at the protein and peptide levels [2]. The combination of multiple biomarkers is generally agreed to increase diagnostic sensitivity and specificity over the use of individual markers.
During cancer progression, some tumor cells acquire new characteristics, such as over-expression of epithelial-mesenchymal transition (EMT) markers, and undergo profound morphogenetic changes. EMT is a process in which epithelial cells lose their cell polarity and cell-cell adhesion, and gain migratory and invasive properties, becoming mesenchymal cells [3,4]. EMT plays an important role in cancer progression and provides a new basis for understanding the progression of carcinoma towards dedifferentiated and more malignant states [3,4]. Additionally, EMT affects cancer cell invasion, resistance to apoptosis, and stem cell features [5]. Growth factors [6,7], ligand-dependent nuclear receptors [8], transcription regulators [3,9], cytokines [7,10], and kinases [11,12] have been identified in the literature as potential regulators related to EMT. Signaling pathways that are activated by intrinsic or extrinsic stimulation converge on these transcription factors and regulate phenotypic changes of cancer cells [9].
DNA microarrays perform the simultaneous interrogation of thousands of genes and provide an opportunity to measure a tumor from multiple perspectives. Microarray-based techniques generally provide detailed observations of gene activities in tumors and generate opportunities for finding therapeutic targets. As a high-throughput technology at the molecular level, DNA microarray-based methods have clear advantages over traditional histological examinations and have been extensively used in cancer research to predict clinical outcomes more accurately and potentially improve patient management. Studies indicate that microarray techniques greatly facilitate accurate tumor classification and outcome prediction in terms of, for example, tumor stage, metastatic status, and patient survival, offering some hope for personalized medicine [13-15].
Methods for predicting lung cancer metastasis involve feature (gene) selection and classifier design. Feature selection identifies a subset of differentially-expressed genes that are potentially relevant to distinguishing different classes of samples [16]. One of the central difficulties in investigating microarray classification and gene selection is the availability of only a small number of samples, compared to the large number of genes in a sample [16]. Hierarchical clustering [17] is one of the most commonly used approaches in microarray studies. However, hierarchical clustering (or any purely correlative technique) cannot alone provide a rational biological basis for disease classification [18]. Generally, univariate analysis is conducted to reduce the feature size, and then a support vector machine (SVM) [19] or maximum likelihood classification [20] with an effective feature selection method is used to identify a small set of informative genes. The most challenging task is to avoid overfitting to the small number of samples, which results in poor performance on independent tests.
In this work, the medical records of 78 patients with lung cancer and a follow-up time of 120 months reveal a significant difference between the disease-free survival times of the 37 non-distant- and the 41 distant-metastasis patients. To solve the problem of insufficient samples, this work proposes a novel optimization approach to discovering EMT-related prognostic biomarkers for predicting the distant metastasis of lung cancer using both microarray and survival data. The proposed optimal gene selection method incorporates patients' gene expression profiles and their corresponding disease-free survival curves to design a fitness function for an intelligent genetic algorithm [21]. The set of 78 samples is used to identify a set of gene markers, and an independent cohort of 26 samples is used to evaluate the identified biomarkers. The experimental results show that disease-free survival curves can compensate for the insufficiency of samples, and that the SVM with a set of 17 transcripts can yield high prediction accuracies of distant metastasis and disease-free survival time. The putative biomarkers for predicting the distant metastasis of lung cancer are examined using relevant signaling pathways to reveal the potential of the biomarkers.
RNA isolation and microarray platform
Illumina Sentrix-6 Whole-Genome Expression BeadChips are relatively new microarray platforms that have been used in many microarray studies in the past few years [22]. Physically, each Sentrix-6 BeadChip consists of 12 equally-spaced strips of beads. Each pair of adjacent strips comprises a single microarray and is hybridized with a single RNA sample [22]. The microarray used in this work is the Illumina HumanWG-6 BeadChip. Fresh-frozen specimens were removed from liquid nitrogen and homogenized using a TissueLyser in the RLT buffer of the RNeasy isolation kit (both from Qiagen). Total RNA was extracted from fresh-frozen tumors following the manufacturer's instructions, purified with the RNeasy mini kit, and checked with a NanoDrop spectrophotometer and an Agilent Bioanalyzer for quantity and quality. Biotin-labeled cRNA was prepared with the Illumina TotalPrep RNA amplification kit (Life Technologies). One and a half micrograms (1.5 μg) of cRNA was hybridized to the Illumina Multi-sample Human WG-6 v3.0 chip according to the manufacturer's instructions. Global normalization was used to normalize the signal intensity of the chips. Each sample's microarray contains 48,803 transcripts.
Cohorts of lung cancer
The dataset of 78 lung cancer samples, comprising 37 non-distant- and 41 distant-metastasis samples as well as their corresponding disease-free survival times, comes from Taipei Veterans General Hospital (TVGH) in Taiwan. This work developed prognostic models using these 78 samples. To evaluate the potential of putative biomarkers and the gene-discovery method for identifying a small set of genes that can be used to predict distant metastasis of lung cancer, a cohort of 26 samples, comprising 6 non-distant- and 20 distant-metastasis patients of TVGH, is utilized as an independent test dataset. This study as well as the tissue procurement protocol were approved by the Institutional Review Board of TVGH (VGHIRB No. 2013-04-015 AC), and written informed consent was obtained from all patients. The datasets of 78 and 26 samples, consisting of microarray data and disease-free survival times with patient identifiers removed, are given in Additional file 1: Table S1 and Additional file 2: Table S2, respectively.
Characteristics of cohorts
Patient survival is a major clinical parameter that is used to evaluate the efficacy of a particular therapy. Disease-free survival, used herein, is defined as the time between surgery and the occurrence of an event (death or distant metastasis). Data are censored when the event did not occur; in that case, the survival time is the time between surgery and the last follow-up date. Figure 1 shows statistics concerning disease-free survival times for 78 lung cancer patients with a follow-up time of 120 months. The mean times of disease-free survival for non-distant and distant metastasis are 73.08 and 14.02 months, respectively. A t-test with p-value = 7.99E-22 reveals a significant difference (p < 0.001) between the disease-free survival times of the 37 non-distant- and the 41 distant-metastasis patients. The result suggests that distant metastasis is strongly correlated with patients' survival. Some characteristics of these 78 patients are summarized in Table 1. From the results of Fisher's exact test, there was no significant association between distant metastasis and the factors of interest, such as age, sex, smoking, tumor size, pN and pM status, histologic type, and differentiation (p > 0.05). Notably, there was a strong association between distant metastasis and pathologic stage (p = 3.0E-5).
Disease-free survival times for 78 lung cancer patients with a follow-up time of 120 months. (A) The mean times of disease-free survival for non-distant and distant metastasis are 73.08 and 14.02 months, respectively. (B) The box plots of non-distant and distant metastasis. The p-value of the t-test is 7.99E-22 (p < 0.001), suggesting that distant metastasis is highly correlated with patients' survival.
Table 1 Selected characteristics of participants according to NSCLC
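The two statistical checks described above can be reproduced with standard routines. The following is a minimal sketch using SciPy; the survival times and the 2×2 contingency table are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder disease-free survival times (months) for the two classes.
non_distant_months = np.array([70.0, 85.5, 64.0, 90.0])
distant_months = np.array([12.0, 18.5, 9.0, 16.5])

# t-test between the disease-free survival times of the two classes.
t_stat, p_value = stats.ttest_ind(non_distant_months, distant_months)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")

# Fisher's exact test for the association between a clinical factor and
# distant metastasis, e.g. a 2x2 table of pathologic stage x metastasis.
table = [[25, 10],   # illustrative counts only
         [12, 31]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.2e}")
```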
EMT-related transcripts
Ingenuity Knowledge Base (IKB; http://www.ingenuity.com) is a repository of biological interactions and functional annotations that are created from millions of individually modeled relationships among proteins, genes, complexes, cells, tissues, metabolites, drugs, and diseases. IKB provides comprehensive, species-specific knowledge about the function and regulation of genes, tissue and cell line expression patterns, clinical biomarkers, subcellular locations, mutations, and disease associations. First, EMT-related genes annotated as "growth factor", "ligand-dependent nuclear receptor", "transcription regulator", "cytokine", or "kinase" in the IKB are identified. As a result, 4,314 transcripts are identified in the HumanWG-6 microarray data, comprising 233 transcripts of growth factor, 103 transcripts of ligand-dependent nuclear receptor, 2,465 transcripts of transcription regulator, 276 transcripts of cytokine, and 1,237 transcripts of kinase. Second, univariate t-test analysis is used to calculate the p-values of individual transcripts between the two classes. Genes with very small p-values are useful in predicting distant metastasis. The 474 top-ranked transcripts according to p-value are selected from the 4,314 EMT-related transcripts. The training dataset of 78 samples with the 474 EMT-related transcripts is used to identify a small set of prognostic biomarkers that is predictive of the distant metastasis of lung cancer.
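A minimal sketch of this univariate filtering step is given below, assuming an expression matrix X (samples × transcripts) and binary metastasis labels y; the random data merely stand in for the 78 × 4,314 EMT-related matrix.

```python
import numpy as np
from scipy import stats

def top_transcripts_by_ttest(X, y, k=474):
    """Rank transcripts by two-class t-test p-value and keep the k smallest."""
    _, p_values = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(p_values)[:k]

# Illustrative call on random data standing in for the EMT-related matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(78, 4314))
y = np.array([0] * 37 + [1] * 41)          # 0 = non-distant, 1 = distant
selected = top_transcripts_by_ttest(X, y)  # indices of the 474 top transcripts
```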
Disease-free survival area
The Kaplan-Meier survival curves reveal a significant difference between the survival times of patients in the two classes. The disease-free survival area is calculated between the two survival curves from zero to the maximum disease-free survival time (117 months in this work). The disease-free survival area is represented as a percentage, which is the ratio of this area to the maximum area. The gene set with a large disease-free survival area is expected to have a strong ability to distinguish a non-distant metastasis from a distant metastasis of lung cancer.
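A sketch of one way to compute this quantity follows, assuming the lifelines package for the Kaplan-Meier estimator and a monthly evaluation grid; the paper does not specify its numerical integration scheme, so the trapezoidal rule below is an assumption.

```python
import numpy as np
from lifelines import KaplanMeierFitter  # assumed dependency

def disease_free_survival_area(times_a, events_a, times_b, events_b, t_max=117):
    """Area between two Kaplan-Meier curves on [0, t_max], as a percentage of
    the maximum possible area (t_max x 1.0)."""
    grid = np.arange(0, t_max + 1)                      # monthly grid
    km_a = KaplanMeierFitter().fit(times_a, events_a)
    km_b = KaplanMeierFitter().fit(times_b, events_b)
    s_a = km_a.survival_function_at_times(grid).values
    s_b = km_b.survival_function_at_times(grid).values
    area = np.trapz(np.abs(s_a - s_b), grid)            # area between curves
    return 100.0 * area / t_max
```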
Fitness function
Microarray data contain valuable information about a huge gene set but often suffer from a small number of available samples. In the design of classifiers with gene selection, there exist numerous candidate sets of genes that can achieve high training accuracies for predicting distant metastasis. However, owing to overtraining, most of the candidate gene sets have relatively low accuracies on independent tests. The right sets of biomarkers should have high prediction accuracies for both training and test datasets. To cope with the overtraining problem, it is essential to identify the right sets of biomarkers by discarding the overtrained gene sets. Recently obtained evidence reveals that distant metastasis from lung cancer is detrimental to patients' survival [23]. Moreover, Figure 1 suggests that distant metastasis is strongly related to patients' survival. Thus, it is hypothesized that the right set of biomarkers can also be used to effectively predict patients' survival. This work proposes a hybrid approach that uses two kinds of resources, microarray data and disease-free survival data, to identify a set of biomarkers.
The fitness function provides the only means by which genetic algorithms (GAs) optimize all system parameters that are encoded in a GA-chromosome. The three objectives of designing the predictor of lung cancer metastasis and discovering a set of genes using GA-based optimization methods are as follows. The first objective is to maximize the classification accuracy (denoted as Acc) of the SVM classifier; the second is to maximize the disease-free survival area (denoted as Asurv); and the last is to identify a small set of informative genes. The values of Acc and Asurv are in the range [0, 1]. The two maximization objectives, which do not conflict with each other, can be combined into a weighted objective function f(G) as follows.
$$ \mathrm{Max}\; f(G) = w \times Acc + \left(1 - w\right) \times Asurv $$
where w denotes a positive weight in the range [0, 1], which is determined according to the preferences for individual objectives, and G denotes the selected gene set. Generally, maximizing f(G) is the major objective and the number of selected genes is restricted within a relatively small range (as will be discussed in the next section). If w = 1.0, then the fitness function degenerates to the conventional one that uses no clinical outcome (disease-free survival curve). The fitness function using the overall accuracy Acc is suitable for balanced datasets (almost equal populations of metastatic positive and negative samples). When the method is applied to imbalanced datasets, over-sampling methods such as SMOTE (Synthetic Minority Over-sampling Technique) [24] can be used to balance them.
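The following sketch shows how f(G) might be evaluated for a candidate transcript subset, combining the LOOCV accuracy of an RBF-kernel SVM with the disease-free survival area. Computing Asurv from the LOOCV-predicted class labels (reusing disease_free_survival_area from the sketch above) is our reading of the text rather than a stated detail, and the function assumes both predicted classes are non-empty.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def fitness(X, y, gene_idx, times, events, w=0.8, gamma=1.0, C=1.0):
    """f(G) = w * Acc + (1 - w) * Asurv for the transcript subset gene_idx."""
    clf = SVC(kernel="rbf", gamma=gamma, C=C)
    y_pred = cross_val_predict(clf, X[:, gene_idx], y, cv=LeaveOneOut())
    acc = np.mean(y_pred == y)
    asurv = disease_free_survival_area(
        times[y_pred == 0], events[y_pred == 0],
        times[y_pred == 1], events[y_pred == 1]) / 100.0
    return w * acc + (1.0 - w) * asurv
```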
Gene selection using IBCGA
Selecting a minimal number of informative genes while maximizing the prediction performance of distant metastasis is a bi-objective 0/1 combinatorial optimization problem. This work proposes a novel method for identifying a small number m of informative genes from a large number n of candidate genes for prediction and biomarker discovery based on an inheritable bi-objective combinatorial genetic algorithm (IBCGA) [25] with SVM classifiers. IBCGA has been previously used to identify a small set of properties from 531 physicochemical properties for predicting the immunogenicity of MHC class I binding peptides [26]. Feature selection is a combinatorial optimization problem C(n, m) with a huge search space of size C(n, m) = n!/(m!(n−m)!). The IBCGA uses an intelligent genetic algorithm (IGA) [21] with an inheritance mechanism [25] to search efficiently for the solutions Sr to C(n, r) and Sr+1 to C(n, r+1) by inheriting the good solution Sr. The IGA, which is based on orthogonal experimental design, uses a divide-and-conquer strategy and a systematic reasoning method rather than a conventional generate-and-go method to efficiently solve the large-scale combinatorial optimization problem. The SVM-based training model uses the prediction performance of leave-one-out cross-validation (LOOCV) as the fitness function when using the IBCGA with the whole training set.
The input for the SVM-based model design procedure is a training dataset that is composed of two classes (distant and non-distant metastases). The output of the procedure includes a set of m selected transcripts and an SVM classifier with associated parameter settings of γ and C. A radial basis kernel function exp(−γ‖xi − xj‖^2) is adopted, where xi and xj are training samples, and γ is a kernel parameter. In this work, γ ∈ {2^−7, 2^−6, …, 2^8} and C ∈ {2^−7, 2^−6, …, 2^8}. Each sample is represented using an n-dimensional feature vector P = [p1, p2, …, pn]. In this work, n = 474. The IGA-chromosome consists of n binary IGA-genes fi to select features and two 4-bit genes for encoding γ and C. The corresponding feature pi (the i-th transcript) is excluded from the SVM classifier if fi = 0, and is included if fi = 1. Let m be the sum of fi. The IBCGA with the fitness function f(G) that uses LOOCV can simultaneously obtain a set of solutions Sr, where r = rstart, rstart+1, …, rend, in a single run. In this work, the parameter settings are rstart = 10, rend = 30, Npop = 60, pc = 0.8, pm = 0.05, and Gmax = 60. The customized IBCGA for transcript selection is given below.
Step 1 (Initiation) Randomly generate an initial population of Npop individuals. All n binary genes fi have r 1s and n−r 0s, where r = rstart.
Step 2 (Evaluation) Evaluate the fitness values of all individuals using f(G).
Step 3 (Selection) Use a conventional method of tournament selection that selects the winner from two randomly selected individuals to generate a mating pool.
Step 4 (Crossover) Select pc · Npop parents from the mating pool to perform orthogonal array crossover [25] on selected pairs of parents, where pc is the probability of crossover operations.
Step 5 (Mutation) Apply a conventional mutation operator to the randomly selected pm · Npop individuals (except the best individual) in the new population, where pm is the probability of mutation operations.
Step 6 (Termination test) If the stopping condition (reaching Gmax generations) for obtaining the solution Sr is satisfied, then output the best individual as Sr. Otherwise, go to Step 2.
Step 7 (Inheritance) If r < rend, then randomly change one bit in the binary genes fi for each individual from 0 to 1; increase r by one, and go to Step 2.
Step 8 (Non-deterministic) Perform Steps 1–7 for R (= 30 in this work) independent runs and obtain the best of the R solutions. The best solution can be determined by considering the most accurate one (Sa) with the highest fitness value or the robust one (Sb) with the highest score of appearance [27]. The appearance score considers both the fitness value and the mean number of times individual genes are selected in the R runs.
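To make the chromosome encoding above concrete, the sketch below decodes a chromosome into a transcript subset and the (γ, C) pair; mapping each 4-bit gene to an index into the 16-value grid {2^−7, …, 2^8} is an assumption about the encoding, not a detail stated in the paper.

```python
import numpy as np

PARAM_GRID = [2.0 ** e for e in range(-7, 9)]   # 16 values -> one 4-bit gene

def decode_chromosome(chrom, n):
    """Split a chromosome into n binary feature genes and two 4-bit genes
    that select gamma and C from PARAM_GRID."""
    f = np.asarray(chrom[:n], dtype=int)        # f_i = 1 keeps transcript i
    gamma = PARAM_GRID[int("".join(str(b) for b in chrom[n:n + 4]), 2)]
    C = PARAM_GRID[int("".join(str(b) for b in chrom[n + 4:n + 8]), 2)]
    return np.flatnonzero(f), gamma, C

# Example: decode a random chromosome over n = 474 candidate transcripts.
rng = np.random.default_rng(1)
chrom = list(rng.integers(0, 2, size=474 + 8))
selected, gamma, C = decode_chromosome(chrom, 474)
```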
Notably, all genetic algorithms search for globally optimal solutions, but their outputs are non-deterministic because of randomization. Therefore, the common approach to dealing with this non-determinism is to perform a number of independent runs and evaluate the final answer. In this work, the answers obtained in all R runs are utilized efficiently, as described in the next section.
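The control flow of Steps 1–7 can be sketched as follows. This is only an illustration: orthogonal array crossover is replaced by plain uniform crossover, the published operators that keep exactly r transcripts selected are not enforced, and elitism is omitted, so this is not a re-implementation of the IBCGA. The evaluate callback is assumed to return f(G) for a binary transcript mask (for example, by wrapping the fitness sketch above).

```python
import numpy as np

def ibcga_sketch(evaluate, n, r_start=10, r_end=30, n_pop=60,
                 p_c=0.8, p_m=0.05, g_max=60, rng=None):
    """Simplified IBCGA loop returning one solution S_r for each r."""
    rng = rng or np.random.default_rng()
    # Step 1: each individual starts with exactly r_start ones among the n genes.
    pop = np.zeros((n_pop, n), dtype=int)
    for ind in pop:
        ind[rng.choice(n, size=r_start, replace=False)] = 1

    solutions = {}
    for r in range(r_start, r_end + 1):
        for _ in range(g_max):
            fit = np.array([evaluate(ind) for ind in pop])            # Step 2
            # Step 3: binary tournament selection into a mating pool.
            pool = np.array([pop[max(rng.choice(n_pop, 2), key=lambda i: fit[i])]
                             for _ in range(n_pop)])
            # Step 4: uniform crossover (stand-in for orthogonal array crossover).
            for i in range(0, int(p_c * n_pop) - 1, 2):
                swap = rng.random(n) < 0.5
                pool[i, swap], pool[i + 1, swap] = pool[i + 1, swap], pool[i, swap].copy()
            # Step 5: bit-flip mutation on a fraction p_m of the individuals.
            for i in rng.choice(n_pop, size=max(1, int(p_m * n_pop)), replace=False):
                j = rng.integers(n)
                pool[i, j] = 1 - pool[i, j]
            pop = pool
        fit = np.array([evaluate(ind) for ind in pop])
        solutions[r] = pop[np.argmax(fit)].copy()                     # Step 6
        # Step 7: inheritance - flip one 0 to 1 in every individual, r <- r + 1.
        for ind in pop:
            zeros = np.flatnonzero(ind == 0)
            if zeros.size:
                ind[rng.choice(zeros)] = 1
    return solutions
```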
Gene selection using a sequential backward selection method
An IGA-based gene selection method with SVM (known as ESVM) [19] can obtain a high prediction accuracy of 96.88% with a mean number of 10.0 selected genes on 11 benchmark datasets concerning various cancers using 10-fold cross-validation (10-CV). For two two-class tumor datasets, ESVM yields a mean number of 4.65 selected genes and a classification accuracy of 97.82%. Another IGA-based gene selection method uses a maximum likelihood (MLHD) classifier [20] to select a minimal number of relevant genes for accurate classification of tumor samples. The experimental results show that the hybrid method IGA/MLHD outperforms existing methods in terms of the number of selected genes (9.86 on average), classification accuracy (mean accuracy of 96.20%), and robustness of the selected genes based on 11 human cancer-related gene expression datasets.
In this work, the IBCGA, based on the IGA with an SVM, selects a small set of genes that are relevant to the distant metastasis of lung cancer while maximizing the fitness function. Since the number of samples (78) is very small, the IBCGA can identify a very small number m of genes and obtain a very high training accuracy with m < 10. However, a very small number of genes can provide a very high training accuracy (either LOOCV or 10-CV) but the independent accuracy is not always satisfactory owing to overtraining. Moreover, a feasible set of biomarkers for yielding a high tumor prediction accuracy on an independent test dataset often has more than 10 genes. Therefore, this work proposes a novel methodological approach to alleviating the overtraining problem in two ways: 1) by utilizing additional clinical data, i.e. the disease-free survival curve, and 2) by using a sequential backward selection (SBS) method to select the best set of transcripts from all of the selected transcripts of the R = 30 runs of the IBCGA. For each run, the IBCGA selects at least 10 transcripts (rstart = 10). Sequential backward selection starts from the full set of the 30 transcripts with the highest numbers of appearances, and sequentially removes the transcript x from G (yielding the set G-x) whose removal results in the smallest decrease of the value of the objective function f(G-x). Notably, the removal of a feature may actually increase the value of the objective function such that f(G-x) > f(G).
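A sketch of this backward elimination is given below; objective stands for the fitness f(G) defined earlier (for example, a wrapper around the fitness sketch above), and returning the best subset encountered along the trajectory is our interpretation of how the final transcript set is chosen.

```python
def sequential_backward_selection(transcripts, objective):
    """Drop, at each step, the transcript whose removal hurts f(G) the least
    (removal may even increase f), and remember the best subset seen."""
    current = list(transcripts)
    best_set, best_score = list(current), objective(current)
    while len(current) > 1:
        scored = [(objective([t for t in current if t != x]), x) for x in current]
        score, worst = max(scored, key=lambda sx: sx[0])
        current.remove(worst)
        if score >= best_score:
            best_set, best_score = list(current), score
    return best_set, best_score
```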
Performance evaluation with various weights
To determine the best value of the weight w and prevent overtraining in the subsequent design of gene selection methods, all 78 samples are randomly divided into five groups, of which four are used as a training set and the other serves as an independent test set. Each group of samples serves as a test set in turn. Four experiments with w values of 1.0, 0.8, 0.5, and 0.2 are conducted. Table 2 shows the mean accuracies of the independent test for various weights. The results reveal that the best performance is achieved using w = 0.8 with a test accuracy of 74.82%. The conventional method has an accuracy of 61.62% when the disease-free survival area is not used (w = 1.0). The additional use of a survival curve can compensate for the shortcomings of insufficient microarray samples and improve the test accuracy by 13.20%.
Table 2 Accuracies of the independent test for various weights in the fitness function
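The weight-selection experiment can be sketched as a five-fold split over the 78 training samples, repeated for each candidate w. Here select_genes is a hypothetical wrapper around the IBCGA-based selection described above, and X, y, times and events denote the training cohort; the random seed and fold assignment are illustrative only.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def evaluate_weights(X, y, times, events, select_genes,
                     weights=(1.0, 0.8, 0.5, 0.2)):
    """Mean independent-test accuracy over a 5-fold split for each weight w."""
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    results = {}
    for w in weights:
        accs = []
        for tr, te in kf.split(X):
            genes, gamma, C = select_genes(X[tr], y[tr], times[tr], events[tr], w=w)
            clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X[tr][:, genes], y[tr])
            accs.append(clf.score(X[te][:, genes], y[te]))
        results[w] = float(np.mean(accs))
    return results
```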
Identifying a gene set using IBCGA and sequential backward selection
The IBCGA identifies a small number m of transcripts from n = 474 candidate transcripts while maximizing the fitness function f(G). The results show that m = 10 has a very high value of f(G). The IBCGA with R = 30 runs yields 30 sets of transcripts. The most accurate solution Sa and the most robust solution Sb are recorded. The number of appearances of each selected transcript in the 30 runs is recorded. Table 3 lists statistical results concerning the number of appearances for the 30 transcripts with the highest appearance frequency. The genes with rank 1 are FBJ murine osteosarcoma viral oncogene homolog B (FOSB) and microtubule-associated serine/threonine kinase 1 (MAST1), which were selected 12 times. The gene forkhead box E1 (FOXE1) has two transcripts with ID numbers 6250309 and 3450692, named FOXE1-1 and FOXE1-2, at ranks 9 and 13, respectively. Similarly, the gene protein kinase C beta (PRKCB1) has two transcripts with ID numbers 3460564 and 5090563, named PRKCB1-1 and PRKCB1-2, at ranks 23 and 27, respectively. The set of 30 transcripts was used in the sequential backward selection (SBS) method for further identifying a set of prognostic biomarkers that are effective and stable in predicting lung cancer metastasis. Figure 2 shows the results of the SBS method for w = 0.8 and 1.0. Table 4 shows the performance of the IBCGA and SBS methods for w = 0.8 and 1.0.
Table 3 The 30 top-ranked transcripts in terms of selected times from 30 runs
The prediction accuracy and disease-free survival area obtained using the sequential backward selection method. (A) w = 0.8. (B) w = 1.0.
Table 4 The performance of the IBCGA and sequential backward selection (SBS) methods
From Figure 2, the SBS method with 17 transcripts and w = 0.8 has the largest disease-free survival area (Asurv = 74.81%) and prediction accuracy (Acc = 93.59%). For w = 1.0, the largest Asurv is 70.92%, with Acc = 91.03%, when a set of 15 transcripts is adopted. From Table 4, the Sa solution has a set of 10 transcripts and is associated with a performance of Acc = 95.95% and Asurv = 70.01% for w = 0.8. The Sb solution has a set of 10 transcripts and is associated with a performance of Acc = 92.02% and Asurv = 63.81% for w = 0.8. Considering that 1) the 30 transcripts with the highest frequencies are selected, 2) the test accuracy for w = 0.8 is higher than that for w = 1.0, and 3) the set of 17 transcripts yields the largest disease-free survival area, Table 5 presents the 17 transcripts of the 16 genes obtained using the SBS method with w = 0.8. The 16 genes, regarded as a candidate set of biomarkers (the 16-gene set), are analyzed further below.
Table 5 The 17 transcripts obtained by the sequential backward selection method with the disease-free survival area
Analyzing identified 16-gene set
To investigate the abilities of individual genes to predict distant metastasis, the rankings by classification ability, disease-free survival area, and p-value using the training dataset are analyzed. The classification ability ranking is derived from the classification accuracy of an SVM classifier using a single gene, with parameters set using a grid search method. Table 5 lists the rankings of classification ability, disease-free survival area, and p-value for every transcript in the 16-gene set. Table 5 reveals that the distributions of rankings for the three metrics are diverse. All of the p-values of the 17 transcripts are very small, so a comparison of p-value ranking is not meaningful. The Pearson correlation coefficient between the ranks based on the two metrics (accuracy and survival area) is not very high (R = 0.4755). The gene chemokine (C-C motif) ligand 16 (CCL16) provides the highest classification accuracy (74.36%) and the second largest area of disease-free survival (43.58%). The gene CDKN2A-interacting protein (CDKN2AIP) has a disease-free survival area of 46.00% (rank 1) and a classification accuracy of 61.54% (rank 16). The significant variation among the rankings of genes based on the three metrics is discussed below.
The effective discrimination between distant and non-distant metastases can be achieved using a set of interacting genes that are involved in various signaling pathways, rather than a set of mutually independent genes. The LOOCV accuracy of 93.59% that is obtained using the 16-gene set substantially exceeds that obtained using a single gene. On the other hand, when all of the 17 transcripts are used to predict distant metastasis, the disease-free survival area (Asurv) is 74.81%, which is very close to the real area of 73.21%. Moreover, this area (74.81%) is much larger than that obtained using a single gene. The expression levels of individual genes measured using the microarray technique cannot be used reliably to discriminate between samples of distant and non-distant metastases. Several factors determine patient survival, including gene expression and the nature of the therapy. Therefore, survival curves provide valuable information, but care must be taken in using the survival curves of individual genes. In brief, the rankings of individual genes for the three metrics based on univariate analysis can be used in initial screening in coarse-to-fine gene selection. In this work, the discovery of biomarkers in the fine stage takes into account a set of genes by combining both the classification ability and disease-free survival area.
Numerous prediction methods discover biomarkers by searching for a set of genes that can provide highly accurate performance of LOOCV or 10-CV [19,20]. Many sets may comprise a small number of genes that can achieve the same goal. The highest LOOCV accuracy that can be achieved using SVM with ten genes without considering the disease-free survival curves is 97.43% (w = 1.0, Table 4). The proposed method of applying SBS to a combination of promising gene sets aims to identify a set of reliable biomarkers for predicting distant metastasis. Notably, some genes in the set of 30 transcripts (Table 3) but not in the identified 16-gene set may also be potential biomarkers.
Evaluation of biomarkers using an independent cohort
To evaluate the generalizability of the identified gene sets, an independent cohort of 26 samples was utilized. Table 6 shows the performance of the IBCGA and SBS methods, obtained from 30 runs, on the 26 test samples. The training accuracies for w = 1.0 and 0.8 are 88.80% and 88.88%, respectively, which are very close to each other. However, the test accuracies for w = 1.0 and 0.8 are 50.41% and 61.08%, respectively. The test accuracy obtained using disease-free survival curves (w = 0.8) is larger than that obtained using no survival curve (w = 1.0), an improvement of 10.67%. The test accuracies of the SBS method for w = 1.0 and 0.8 are 53.84% and 65.38%, respectively. The improvement in the test accuracy is 11.54%. The improvement in the mean test accuracy using the disease-free survival curves is 11.10%. The results also reveal that the SBS method that includes the IBCGA outperforms the IBCGA method alone for both w = 1.0 and 0.8.
Table 6 The test accuracies of the IBCGA and sequential backward selection (SBS) methods from 30 runs
The SBS method with w = 0.8 is the best method, yielding an LOOCV accuracy of 93.59% and a predicted disease-free survival area of 74.81%, which is very close to the real disease-free survival area of 73.21%. The accuracy of the independent test using an SVM classifier with 17 transcripts is 65.38%. An SVM ensemble of 30 SVM classifiers, each of which uses five transcripts randomly selected from the 17 transcripts, can yield a test accuracy of 76.92%. The intractable problem of overtraining has been alleviated by the additional use of survival curves in the proposed optimization method. However, this performance alone is not sufficient to confirm a putative set of biomarkers. Increasing the number of samples can further mitigate the overtraining problem.
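A sketch of such an ensemble is given below; the paper does not state how the 30 member predictions are combined, so majority voting is an assumption, and the γ and C values are placeholders rather than the tuned settings.

```python
import numpy as np
from sklearn.svm import SVC

def svm_ensemble_predict(X_train, y_train, X_test, marker_idx,
                         n_members=30, n_genes=5, gamma=1.0, C=1.0, seed=0):
    """Majority vote of 30 RBF-kernel SVMs, each trained on 5 transcripts
    drawn at random from the identified markers (labels assumed 0/1)."""
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_members, len(X_test)), dtype=int)
    for m in range(n_members):
        cols = rng.choice(marker_idx, size=n_genes, replace=False)
        clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X_train[:, cols], y_train)
        votes[m] = clf.predict(X_test[:, cols])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```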
Performance comparison of various gene sets
Table 7 shows the type of regulator and location of its protein product, as well as the related cancer genes in the 16-gene set. The 16 genes are as follows: 1) endothelin 1 (EDN1), 2) casein kinase 1, alpha 1 (CSNK1A1), 3) chemokine (C-C motif) ligand 15 (CCL15), 4) splicing factor 1 (SF1), 5) gene protein kinase C beta (PRKCB1), 6) microtubule-associated serine/threonine protein kinase 1 (MAST1), 7) zeta-chain associated protein kinase 70 kDa (ZAP70), 8) chemokine (C-C motif) ligand 16 (CCL16), 9) E74-like factor 5 (ELF5), 10) CDKN2A-interacting protein (CDKN2AIP), 11) histone deacetylase 9 (HDAC9), 12) GLI family zinc finger 3 (GLI3), 13) forkhead box E1 (FOXE1), 14) male germ cell-associated kinase (MAK), 15) tubby protein homolog (TUB), and 16) cellular repressor of E1A-stimulated genes 1 (CREG1). The 16-gene set can be categorized into two subsets. One subset has five known lung-cancer-related genes (named 5-gene set), including EDN1, CSNK1A1, CCL15, SF1 and PRKCB1. The other subset (11-gene set) consists of nine cancer-related genes and two potential biomarkers (TUB and CREG1).
Table 7 The type of regulator and location of its protein product, as well as the related cancer of the genes in the 16-gene set
A number of lung cancer-related genes in the literature are considered for further comparison and analysis. Since EMT is known to be involved in tumor malignancy, some mesenchymal-related genes of gastric cancer, including WNT5A, CDH2, PDGFRB, EDNRA, ROBO1, ROR2, and MEF2C, which are activated by an EMT regulator, are also examined [28]. Table 8 presents the 32 genes and their relevant papers [28-39]. The same sequential backward selection (SBS) method is applied to the 32-gene set, yielding a set of nine genes that can be used to accurately predict distant metastasis of lung cancer. The 9-gene set consists of MEF2C, MMP-2, ID2, CDH2, WNT5A, CDH1, TGFB1, MMP-9, and TWIST2. Notably, cigarette smoke induces WNT5A-coupled PKC activity during lung carcinogenesis, which causes Akt activity and anti-apoptosis in lung cancer [40]. The two mesenchymal-related genes, CDH2 and MEF2C, were also selected for the 9-gene set.
Table 8 The 32 lung cancer-related genes and their relevant papers
Table 9 presents performance comparisons of various gene sets in terms of the prediction accuracy. The training accuracy, disease-free survival area, and independent test accuracy obtained using the 32-gene set are 65.38%, 34.47%, and 50.00%, respectively. The performance of the 32-gene set is not good, especially for the independent test. This lung cancer-related gene set was not designed specifically to predict distant metastasis of lung cancer. The 9-gene set yields a training accuracy of 73.07%, a disease-free survival area of 55.70%, and an independent test accuracy of 80.76%. As a result, the 9-gene set that is obtained using the SBS method is more effective in identifying distant metastasis than the 32-gene set, and is worthy of further validation.
Table 9 Performance comparison among various gene sets
The 16-gene set for predicting distant metastasis is identified using microarray and survival data at the same time. The test accuracy is 76.92% for the 16-gene set using an SVM ensemble classifier. To analyze the 16-gene set further, its two subsets (5- and 11-gene sets) are independently evaluated using the same prediction method as that used to evaluate the 16-gene set, and the results are shown in Table 9. The training accuracy, disease-free survival area, and independent test accuracy achieved using the 5-gene set with six transcripts are 78.25%, 52.33%, and 76.92%, respectively, which are close to those achieved using the 9-gene set. Compared with the 5-gene set, the 11-gene set of EMT- and cancer-related genes has a higher training accuracy of 87.18%, a nearly equal survival area of 53.37%, and a smaller test accuracy of 69.23%. To compare individual genes in terms of their prediction performance, two representative genes in the 16-gene set were selected. The top genes with the highest training accuracy and disease-free survival area are CCL16 (74.36%) and CDKN2AIP (46.00%), respectively. Figure 3 plots the Kaplan-Meier survival curves and their corresponding disease-free survival areas for real and predicted classes obtained using various gene sets.
The disease-free survival areas of various gene sets using Kaplan-Meier survival curves. There are 37 non-distant- (in blue) and 41 distant-metastasis samples (in red). (A) real class (73.21%), (B) 16-gene set (74.81%), (C) 11-gene set (53.37%), (D) 5-gene set (52.33%), (E) CCL16 (43.58%) and (F) CDKN2AIP (46.00%).
The large disease-free survival area (74.81%) of predicted classes, obtained using the 16-gene set, is remarkably close to that of the real classes (73.21%). To investigate the ability of this set to predict patients' disease-free survival times, the support vector regression in LIBSVM [41] is used to establish a survival prediction model. Figure 4 shows that the correlation coefficient between the real and predicted disease-free survival times is R = 0.9672. The mean survival prediction error is 3.99 months. This result reveals that the 16-gene set is also effective in predicting disease-free survival times of patients.
The predicted disease-free survival time using the support vector regression with the 16-gene set.
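The survival-time regression can be sketched as follows; scikit-learn's SVR wraps LIBSVM's epsilon-support vector regression, but the hyper-parameters below are placeholders and the in-sample evaluation is only for illustration, since the paper does not specify how the regression model was validated. X17 denotes the expression matrix restricted to the 17 transcripts and t the observed disease-free survival times.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR

def fit_survival_regression(X17, t):
    """Fit an RBF-kernel support vector regression of survival time on the
    17-transcript expression profile and report R and mean absolute error."""
    svr = SVR(kernel="rbf", gamma=1.0, C=1.0)   # placeholder hyper-parameters
    t_pred = svr.fit(X17, t).predict(X17)
    r, _ = pearsonr(t, t_pred)
    return r, float(np.mean(np.abs(t - t_pred)))
```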
Examination of the 16 putative biomarkers
The 16 putative biomarkers herein are examined with reference to the relevant papers. Five of the 16 genes have been shown to be related to lung cancer progression. Decreased expression of EDN1 has been reported in primary lung cancers, possibly owing to the high methylation in the CpG island of its intron 1 and exon 2 junction [42]. Overexpression of EDN1 and EDNRA has been reported to be associated with impaired survival in human breast cancer [43]. Expression analysis reveals that the hypoxia-induced lung cancer-related biomarker HIF and its modulating proteins, including CSNK1A1, are significantly down-regulated [44]. The CCL15 level correlates with response to combination therapy with erlotinib and celecoxib in patients with NSCLC [45]. Chemokine CCL15 is the most significant marker that is associated with increased odds of short survival [46]. Splicing factor SF1 participates in the ATP-dependent formation of the spliceosome complex [47]. Down-regulation of the oncogenic serine/arginine-rich splicing factor 1 (SF1) leads to the skipping of an exon that is overexpressed in primary lung tumors [48]. PRKCB belongs to a family of serine/threonine-specific kinases and is predominantly activated by diacylglycerol, calcium, and phorbol ester. The two splice-variants are called PRKCB1 and PRKCB2. PRKCB1 exists in lung cancer cell lines in the context of enzastaurin-induced proliferation and kinase inhibition, as determined by exon sequencing, immunoblotting, and cytotoxicity assays in NSCLC and SCLC cell lines [49].
The nine cancer-related genes are briefly described below. Breast cancer cell lines that harbor Notch gene rearrangements are uniquely sensitive to the inhibition of Notch signaling, and the overexpression of MAST1 or MAST2 gene fusions has a proliferative effect [50]. The overexpressed gene ZAP70 significantly relates to prognostic factors such as tumor size, advanced stage, invasive depth, lymph node metastasis and differentiation [51]. An adenovirus that encodes CCL16, when injected into established nodules, significantly delayed tumor growth [52]. ELF5 can transactivate through the sequences of ETS transcription factors. ELF5 is localized to human chromosome 11p13-15, which is a region that frequently undergoes loss of heterozygosity in several types of carcinoma, including those of the breast, kidney and prostate [53]. CDKN2AIP has a known role in tumorigenesis [54]. A functional role of HDAC5 and HDAC9 in tumor cell growth in medulloblastoma cell lines has been reported [55]. Three sonic hedgehog effectors, GLI1, GLI2, and GLI3, regulate the transcription of diverse genes that are involved in cell growth and cell proliferation [56]. FOXE1 (TTF-2) is a thyroid-specific transcription factor and a marker in thyroid tumors. The lung is the most common distant metastatic site for thyroid carcinomas [57]. MAK is a direct transcriptional target of the androgen receptor, which plays an important role in the normal development of the prostate as well as in the progression of prostate cancer [58]. Phylogenetically, TULP3 is the family member that is most closely related to TUB. TULP3 is detected at high levels in human RNA from testes, ovaries, the thyroid, and the spinal cord [59]. TUB encodes a protein of 561 amino acids that is highly expressed in a number of tissues examined, including the heart, brain, ovary, thyroid, spinal cord, and retina, and it maps to chromosome 11p15.4 [60]. The overexpression of CREG1 reduces cell proliferation in immortal LFS and cancer cell lines. CREG1 has been identified as a potent inhibitor of apoptosis [61]. The cooperation of CREG1 and p16(INK4a) inhibits the expression of cyclin A and cyclin B by inhibiting promoter activity, reducing mRNA and protein levels, and these proteins are required for S-phase entry and G2/M transition [61,62].
After these putative biomarkers are mapped into the KEGG pathway database (http://www.genome.jp/kegg/), several signaling pathways that involve lung cancer metastasis are identified. The mitogen-activated protein kinase signaling pathway, the NF-kappa B signaling pathway, and the immune-response IL-1 signaling pathway have been reported in lung cancer metastasis. While regulating proliferation, gene expression, differentiation, mitosis, cell survival, and apoptosis, the mitogen-activated protein kinase signaling pathway has long been viewed as an attractive pathway for anticancer therapies [63]. A higher nuclear factor-kappa B expression pattern is associated with more advanced stages of oncogenesis and expands related pathways for invasion and metastasis in lung cancer [64]. Abnormal IL-1 expression and its related pathway seem to be related to IKK alpha/beta activation, p65 translocation and transcription activity, and are associated with the migration of cancer cells [65]. We suggest that the pathways that are associated with these biomarkers might play an important role in NSCLC metastasis. This inference offers an opportunity to greatly expand our knowledge of the expression patterns in NSCLC. Considering the expression status of the biomarkers, related signaling pathways, and clinical outcomes together will further reveal the roles of the biomarkers in lung cancer metastasis. An enhanced understanding of the basic biological mechanisms of NSCLC will likely facilitate development of improved methods for survival prediction.
This work proposes a novel methodological approach to discovering a set of prognostic biomarkers for predicting the distant metastasis of lung cancer through simultaneous utilization of microarray and survival data. The presented optimization method uses an objective function to maximize both prediction accuracy and the disease-free survival area to identify a set of biomarkers. The additional use of clinical disease-free survival time can greatly facilitate the discovery of biomarkers and the prediction of survival time. The proposed method that combines both microarray and survival data can also alleviate the problem of overtraining that arises from the insufficiency of samples. The experimental results herein show that a combination of multiple biomarkers may increase diagnostic sensitivity and specificity over those obtained using individual biomarkers. The proposed method has high generalizability in the discovery of prognostic biomarkers not only for the distant metastasis of lung cancer, but also for other cancers using microarray data and clinical outcomes.
Few studies have investigated the discovery of prognostic biomarkers for predicting the distant metastasis of lung cancer. This work identified a set of 16 prognostic biomarkers that can be used to predict distant metastasis with high accuracies, a leave-one-out cross-validation accuracy of 93.59%, an independent test accuracy of 76.92%, a large predicted disease-free survival area of 74.81% (close to 73.21% for real survival area), and a mean survival prediction error of 3.99 months. Closely examining the 16-gene set by mapping these genes into the KEGG pathway database reveals some signaling pathways that are involved in lung cancer metastasis. Future work will incorporate these findings of signaling pathways with related biomarkers and clinical outcomes to develop novel methods for predicting distant metastasis and the survival times of patients with early-stage NSCLC.
Yang P. Epidemiology of lung cancer prognosis: quantity and quality of life. Methods Mol Biol. 2009;471:469–86.
Kulasingam V, Diamandis EP. Strategies for discovering novel cancer biomarkers through utilization of emerging technologies. Nat Clin Pract Oncol. 2008;5(10):588–99.
Thiery JP. Epithelial-mesenchymal transitions in tumour progression. Nat Rev Cancer. 2002;2(6):442–54.
Lyons JG, Lobo E, Martorana AM, Myerscough MR. Clonal diversity in carcinomas: its implications for tumour progression and the contribution made to it by epithelial-mesenchymal transitions. Clin Exp Metastasis. 2008;25(6):665–77.
Lee JM, Dedhar S, Kalluri R, Thompson EW. The epithelial-mesenchymal transition: new insights in signaling, development, and disease. J Cell Biol. 2006;172(7):973–81.
Walser T, Cui X, Yanagawa J, Lee JM, Heinrich E, Lee G, et al. Smoking and lung cancer: the role of inflammation. Proc Am Thorac Soc. 2008;5(8):811–5.
Chow G, Tauler J, Mulshine JL. Cytokines and growth factors stimulate hyaluronan production: role of hyaluronan in epithelial to mesenchymal-like transition in non-small cell lung cancer. J Biomed Biotechnol. 2010;2010:485468.
Shimamura T, Imoto S, Shimada Y, Hosono Y, Niida A, Nagasaki M, et al. A novel network profiling analysis reveals system changes in epithelial-mesenchymal transition. PLoS One. 2011;6(6):e20804.
Shih JY, Yang PC. The EMT regulator slug and lung carcinogenesis. Carcinogenesis. 2011;32(9):1299–304.
Kawata M, Koinuma D, Ogami T, Umezawa K, Iwata C, Watabe T, et al. TGF-beta-induced epithelial-mesenchymal transition of A549 lung adenocarcinoma cells is enhanced by pro-inflammatory cytokines derived from RAW 264.7 macrophage cells. J Biochem. 2012;151(2):205–16.
Thomson S, Petti F, Sujka-Kwok I, Epstein D, Haley JD. Kinase switching in mesenchymal-like non-small cell lung cancer lines contributes to EGFR inhibitor resistance through pathway redundancy. Clin Exp Metastasis. 2008;25(8):843–54.
Fuchs BC, Fujii T, Dorfman JD, Goodwin JM, Zhu AX, Lanuti M, et al. Epithelial-to-mesenchymal transition and integrin-linked kinase mediate sensitivity to epidermal growth factor receptor inhibition in human hepatoma cells. Cancer Res. 2008;68(7):2391–9.
Sun Z, Yang P. Gene expression profiling on lung cancer outcome prediction: present clinical value and future premise. Cancer Epidemiol Biomarkers Prev. 2006;15(11):2063–8.
Chen HY, Yu SL, Chen CH, Chang GC, Chen CY, Yuan A, et al. A five-gene signature and clinical outcome in non-small-cell lung cancer. N Engl J Med. 2007;356(1):11–20.
Lau SK, Boutros PC, Pintilie M, Blackhall FH, Zhu CQ, Strumpf D, et al. Three-gene prognostic classifier for early-stage non small-cell lung cancer. J Clin Oncol. 2007;25(35):5562–9.
Dudoit S, Fridlyand J, Speed TP. Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002;97(457):77–87.
Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci U S A. 1998;95(25):14863–8.
Liu HY, Kho AT, Kohane IS, Sun Y. Predicting survival within the lung cancer histopathological hierarchy using a multi-scale genomic model of development. PLoS Med. 2006;3(7):1090–102.
Huang HL, Chang FL. ESVM: evolutionary support vector machine for automatic feature selection and classification of microarray data. Bio Systems. 2007;90(2):516–28.
Huang HL, Lee CC, Ho SY. Selecting a minimal number of relevant genes from microarray data to design accurate tissue classifiers. Bio Systems. 2007;90(1):78–86.
Ho SY, Shu LS, Chen JH. Intelligent evolutionary algorithms for large parameter optimization problems. IEEE Trans Evol Comput. 2004;8(6):522–41.
Shi W, Banerjee A, Ritchie ME, Gerondakis S, Smyth GK. Illumina WG-6 BeadChip strips should be normalized separately. BMC Bioinformatics. 2009;10:372.
Sugiura H, Yamada K, Sugiura T, Hida T, Mitsudomi T. Predictors of survival in patients with bone metastasis of lung cancer. Clin Orthop Relat Res. 2008;466(3):729–36.
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: Synthetic minority over-sampling technique. J Artif Intell Res. 2002;16:321–57.
Ho SY, Chen JH, Huang MH. Inheritable genetic algorithm for biobjective 0/1 combinatorial optimization problems and its applications. IEEE Trans Syst Man Cybern B Cybern. 2004;34(1):609–20.
Tung CW, Ho SY. POPI: predicting immunogenicity of MHC class I binding peptides by mining informative physicochemical properties. Bioinformatics. 2007;23(8):942–9.
Huang HL, Lin IC, Liou YF, Tsai CT, Hsu KT, Huang WL, et al. Predicting and analyzing DNA-binding domains using a systematic approach to identifying a set of informative physicochemical and biochemical properties. BMC Bioinformatics. 2011;12 Suppl 1:47.
Ohta H, Aoyagi K, Fukaya M, Danjoh I, Ohta A, Isohata N, et al. Cross talk between hedgehog and epithelial-mesenchymal transition pathways in gastric pit cells and in diffuse-type gastric cancers. Br J Cancer. 2009;100(2):389–98.
Gemmill RM, Roche J, Potiron VA, Nasarre P, Mitas M, Coldren CD, et al. ZEB1-responsive genes in non-small cell lung cancer. Cancer Lett. 2011;300(1):66–78.
Feng J, Zhang XS, Zhu HJ, Wang XD, Ni SS, Huang JF. FoxQ1 overexpression influences poor prognosis in non-small cell lung cancer, associates with the phenomenon of EMT. PLoS One. 2012;7(6):e39937.
Yoshikawa M, Hishikawa K, Marumo T, Fujita T. Inhibition of histone deacetylase activity suppresses epithelial-to-mesenchymal transition induced by TGF-beta 1 in human renal epithelial cells. J Am Soc Nephrol. 2007;18(1):58–65.
Ward C, Forrest IA, Murphy DM, Johnson GE, Robertson H, Cawston TE, et al. Phenotype of airway epithelial cells suggests epithelial to mesenchymal cell transition in clinically stable lung transplant recipients. Thorax. 2005;60(10):865–71.
Qi J, Rice SJ, Salzberg AC, Runkle EA, Liao J, Zander DS, et al. MiR-365 regulates lung cancer and developmental gene thyroid transcription factor 1. Cell Cycle. 2012;11(1):177–86.
Singh A, Greninger P, Rhodes D, Koopman L, Violette S, Bardeesy N, et al. A Gene expression signature associated with "K-Ras addiction" reveals regulators of EMT and tumor cell survival. Cancer Cell. 2009;15(6):489–500.
Thomson S, Petti F, Sujka-Kwok I, Mercado P, Bean J, Monaghan M, et al. A systems view of epithelial-mesenchymal transition signaling states. Clin Exp Metastasis. 2011;28(2):137–55.
Takeyama Y, Sato M, Horio M, Hase T, Yoshida K, Yokoyama T, et al. Knockdown of ZEB1, a master epithelial-to-mesenchymal transition (EMT) gene, suppresses anchorage-independent cell growth of lung cancer cells. Cancer Lett. 2010;296(2):216–24.
Xiang X, Zhuang X, Ju S, Zhang S, Jiang H, Mu J, et al. miR-155 promotes macroscopic tumor formation yet inhibits tumor dissemination from mammary fat pads to the lung by preventing EMT. Oncogene. 2011;30(31):3440–53.
Matsuno Y, Coelho AL, Jarai G, Westvvick J, Hogaboam CM. Notch signaling mediates TGF-beta 1-induced epithelial-mesenchymal transition through the induction of Snail. Int J Biochem Cell Biol. 2012;44(5):776–89.
Pallier K, Cessot A, Cote JF, Just PA, Cazes A, Fabre E, et al. TWIST1, a new determinant of epithelial to mesenchymal transition in EGFR mutated lung adenocarcinoma. PLoS One. 2012;7(1):e29954.
Whang YM, Jo U, Sung JS, Ju HJ, Kim HK, Park KH, et al. Wnt5a is associated with cigarette smoke-related lung carcinogenesis via protein kinase C. PLoS One. 2013;8(1):e53012.
Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST). 2011;2(3):27.
Takai D, Yagi Y, Wakazono K, Ohishi N, Morita Y, Sugimura T, et al. Silencing of HTR1B and reduced expression of EDN1 in human lung cancers, revealed by methylation-sensitive representational difference analysis. Oncogene. 2001;20(51):7505–13.
Wiesmann F, Veeck J, Galm O, Hartmann A, Esteller M, Knuchel R, et al. Frequent loss of endothelin-3 (EDN3) expression due to epigenetic inactivation in human breast cancer. Breast Cancer Res. 2009;11:R34.
Srivastava M, Khurana P, Sugadev R. Lung cancer signature biomarkers: tissue specific semantic similarity based clustering of digital differential display (DDD) data. BMC Res Notes. 2012;5:617.
Reckamp KL, Gardner BK, Figlin RA, Elashoff D, Krysan K, Dohadwala M, et al. Tumor response to combination celecoxib and erlotinib therapy in non-small cell lung cancer is associated with a low baseline matrix metalloproteinase-9 and a decline in serum-soluble E-cadherin. J Thorac Oncol. 2008;3(2):117–24.
Bodelon C, Polley MY, Kemp TJ, Pesatori AC, McShane LM, Caporaso NE, et al. Circulating levels of immune and inflammatory markers and long versus short survival in early-stage lung cancer. Ann Oncol. 2013;24(8):2073–9.
Rino J, Desterro JM, Pacheco TR, Gadella Jr TW, Carmo-Fonseca M. Splicing factors SF1 and U2AF associate in extraspliceosomal complexes. Mol Cell Biol. 2008;28(9):3045–57.
de Miguel FJ, Sharma RD, Pajares MJ, Montuenga LM, Rubio A, Pio R. Identification of alternative splicing events regulated by the oncogenic factor SRSF1 in lung cancer. Cancer Res. 2014;74(4):1105–15.
Lee SH, Chen TA, Zhou J, Hofmann J, Bepler G. Protein kinase C-beta gene variants, pathway activation, and Enzastaurin activity in lung cancer. Clinical Lung Cancer. 2010;11(3):169–75.
Robinson DR, Kalyana-Sundaram S, Wu YM, Shankar S, Cao XH, Ateeq B, et al. Functionally recurrent rearrangements of the MAST kinase and Notch gene families in breast cancer. Nat Med. 2011;17(12):1646–U1163.
Huang MY, Wang JY, Chang HJ, Kuo CW, Tok TS, Lin SR. CDC25A, VAV1, TP73, BRCA1 and ZAP70 gene overexpression correlates with radiation response in colorectal cancer. Oncol Rep. 2011;25(5):1297–306.
Guiducci C, Di Carlo E, Parenza M, Hitt M, Giovarelli M, Musiani P, et al. Intralesional injection of adenovirus encoding CC chemokine ligand 16 inhibits mammary tumor growth and prevents metastatic-induced death after surgical removal of the treated primary tumor. J Immunol. 2004;172(7):4026–36.
Zhou J, Ng AYN, Tymms MJ, Jermiin LS, Seth AK, Thomas RS, et al. A novel transcription factor, ELF5, belongs to the ELF subfamily of ETS genes and maps to human chromosome 11p13-15, a region subject to LOH and rearrangement in human carcinoma cell lines. Oncogene. 1998;17(21):2719–32.
Yi CH, Zheng TZ, Leaderer D, Hoffman A, Zhu Y. Cancer-related transcriptional targets of the circadian gene NPAS2 identified by genome-wide ChIP-on-chip analysis. Cancer Lett. 2009;284(2):149–56.
Milde T, Oehme I, Korshunov A, Kopp-Schneider A, Remke M, Northcott P, et al. HDAC5 and HDAC9 in medulloblastoma: novel markers for risk stratification and role in tumor cell growth. Clin Cancer Res. 2010;16(12):3240–52.
Krauss S, Foerster J, Schneider R, Schweiger S. Protein phosphatase 2A and rapamycin regulate the nuclear localization and activity of the transcription factor GLI3. Cancer Res. 2008;68(12):4658–65.
Nonaka D, Tang Y, Chiriboga L, Rivera M, Ghossein R. Diagnostic utility of thyroid transcription factors Pax8 and TTF-2 (FoxE1) in thyroid epithelial neoplasms. Mod Pathol. 2008;21(2):192–200.
Ma AH, Xia L, Desai SJ, Boucher DL, Guan Y, Shih HM, et al. Male germ cell-associated kinase, a male-specific kinase regulated by androgen, is a coactivator of androgen receptor in prostate cancer cells. Cancer Res. 2006;66(17):8439–47.
Nishina PM, North MA, Ikeda A, Yan YZ, Naggert JK. Molecular characterization of a novel tubby gene family member, TULP3, in mouse and humans. Genomics. 1998;54(2):215–20.
North MA, Naggert JK, Yan YZ, NobenTrauth K, Nishina PM. Molecular characterization of TUB, TULP1, and TULP2, members of the novel tubby gene family and their possible relation to ocular diseases. Proc Natl Acad Sci U S A. 1997;94(7):3128–33.
Moolmuang B, Tainsky MA. CREG1 enhances p16(INK4a)-induced cellular senescence. Cell Cycle. 2011;10(3):518–30.
Peng F, Han YL, Jie D, Yan CH, Jian K, Bo L, et al. Overexpression of cellular repressor of E1A-stimulated genes inhibits TNF-alpha-induced apoptosis via NF-kappa B in mesenchymal stem cells. Biochem Biophys Res Commun. 2011;406(4):601–7.
Sebolt-Leopold JS, Herrera R. Targeting the mitogen-activated protein kinase cascade to treat cancer. Nat Rev Cancer. 2004;4(12):937–47.
Batra S, Balamayooran G, Sahoo MK. Nuclear factor-kappaB: a key regulator in health and disease of lungs. Arch Immunol Ther Exp. 2011;59(5):335–51.
Cheng CY, Hsieh HL, Sun CC, Lin CC, Luo SF, Yang CM. IL-1 beta induces urokinase-plasminogen activator expression and cell migration through PKC alpha, JNK1/2, and NF-kappaB in A549 cells. J Cell Physiol. 2009;219(1):183–93.
This work was funded by National Science Council of Taiwan under the contract number NSC-103-2221-E-009-117-, and "Center for Bioinformatics Research of Aiming for the Top University Program" of the National Chiao Tung University and Ministry of Education, Taiwan, R.O.C. for the project 103 W962. This work was also supported in part by the UST-UCSD International Center of Excellence in Advanced Bioengineering sponsored by the Taiwan National Science Council I-RiCE Program under Grant Number: NSC-103-2911-I-009-101-. This work was also supported by Taipei Veterans General Hospital in Taiwan with project VGHIRB No. 2013-04-015 AC.
Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Hsinchu, Taiwan
Hui-Ling Huang, Phasit Charoenkwan, Hua-Chin Lee & Shinn-Ying Ho
Department of Biological Science and Technology, National Chiao Tung University, Hsinchu, Taiwan
Hui-Ling Huang, Wen-Liang Chen, Hua-Chin Lee & Shinn-Ying Ho
Division of Thoracic Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan
Yu-Chung Wu
Institute of Systems Biology and Bioinformatics, National Central University, Taoyuan, Taiwan
Li-Jen Su
Institute of Molecular Medicine and Bioengineering, National Chiao Tung University, Hsinchu, Taiwan
Yun-Ju Huang
Department of Computer Science, Tunghai University, Taichung, Taiwan
William Cheng-Chung Chu
Correspondence to Hui-Ling Huang or Shinn-Ying Ho.
HLH carried out the system design, participated in the analysis and discussion of the study, and drafted the manuscript. YCW carried out the data collection, participated in the design and discussion of biomarker discovery. LJS participated in the analysis of biomarkers and their pathways, and helped to draft the manuscript. YJH participated in the survey of related work, the experimental analysis, and manuscript preparation. PC carried out the program implementation, and participated in the computational analysis. WLC, HCL and WCC participated in the statistical analysis and examination of biomarkers. SYH conceived of the study, and participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Additional file 1: Table S1.
The data set of 78 lung cancer patients for training.
Additional file 2: Table S2.
The data set of 26 lung cancer patients for test.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Huang, HL., Wu, YC., Su, LJ. et al. Discovery of prognostic biomarkers for predicting lung cancer metastasis using microarray and survival data. BMC Bioinformatics 16, 54 (2015). https://doi.org/10.1186/s12859-015-0463-x
Distant metastasis
Genetic algorithm
Prognostic biomarker
Survival curve | CommonCrawl |
Jonathan Haughton
Suffolk University · Department of Economics
Natural Resource Economics
Measuring the impact of unconditional cash transfers on consumption and poverty in Rwanda
Dominique Habimana
Joseph Nkurunziza
Dominique Haughton
This study estimates the causal effect of Rwanda's unconditional cash transfer program (VUP-Direct Support) on the incidence of poverty, the poverty gap, and household food and non-food expenditure for direct support recipients. Our empirical analysis applies four matching methods to data from the 2013/14 household survey in order to estimate the p...
Beware of outliers!
When one of Jonathan Haughton's students set out to understand the recent rise in inflation in Venezuela, they first had to wrangle with some outliers that threatened to create a misleading impression.
Visualizing France with Cartograms
France has a long tradition of using statistical (choropleth) maps, which use shading to represent the spatial distribution of a variable, such as population, by department. Such maps lead the observer to underestimate the importance of urban areas, especially Paris. A solution that complements the choropleth map is to create a cartogram, which del...
Cause-and-Effect Business Analytics
Victor S. Y. Lo
Business analytics is the application of statistical and quantitative analysis, as well as formal modeling, to decision making. This book examines under what circumstances and with which techniques one can reasonably infer cause and effect in a business setting and use the insight to drive business decisions. The book is rooted in realistic and imp...
Discrimination against Migrants in Urban Vietnam
Wendi Sun
Le Thi Thanh Loan
In 2009, migrant workers in the two major cities of Vietnam, Hanoi and Ho Chi Minh City, earned 42% less per hour than did non-migrant ("resident") workers. We seek to explain this gap using data from a carefully-designed urban poverty survey undertaken in 2009 by the General Statistics Office. We use the method proposed by Brown, Moon, and Zoloth,...
Inequality and entrepreneurial thresholds
Soumodip Sarkar
Carlos Rufín
We explore the relationship between inequality and entrepreneurial activity. Drawing on cross-sectional data from a large-scale survey of the economic conditions of individuals across India, we develop a number of dimensions of inequality to explore empirically how inequality interacts with entrepreneurship, operationalized as self-employment or as...
The Distributional Effects of the Trump and Clinton Tax Proposals
Paul Bachman
Keshab Bhattarai
David G. Tuerck
Hillary Clinton and Donald Trump, the Democratic and Republican candidates for President of the U.S. in 2016, proposed several changes in the federal tax code. Hillary Clinton would add a personal income tax surcharge of 4% on high annual incomes, limit the tax benefits of non-charitable deductions, set a minimum tax rate of 30% on taxpayers earnin...
Tax plan debates in the US presidential election: A dynamic CGE analysis of growth and redistribution trade-offs
Frank Conte
The two major candidates in the 2016 presidential election made sharply different proposals for reforming the Federal tax code. Donald Trump proposed cutting taxes to provide "tax relief for middle-class Americans", and lowering corporation taxes to boost economic growth, while Hillary Clinton proposed modest increases in taxes on high-income Ameri...
Simulating Corporate Income Tax Reform Proposals with a Dynamic CGE Model
Michael A. Head
Opinion leaders and policy makers in the United States have turned their focus to the corporate income tax, which now has the highest statutory rate in the developed world. Using a dynamic computable general equilibrium model (the "NCPA-DCGE Model"), we simulate alternative policies for reducing the U.S. corporate income tax. We find that reduction...
Historical Background: 1690 to Present
The economic effects of the fair tax: analysis of results of a dynamic CGE model of the US economy
By replacing the current income tax with a national sales tax, the FairTax proposal would end the double taxation of saving inherent in the existing tax code and, by doing so, raise output, employment, investment and capital stock relative to the benchmark economy. While these positive effects would be felt almost immediately, the FairTax is very m...
The Distributional Effects of Government Responses to the 2009 Recession in Thailand
Somchai Jitsuchon
Pungpond Rukumnuaykit
This paper explores the extent to which measures taken by the government of Thailand helped households cope with shocks caused by the 2009 recession. A counterfactual was created by: quantifying the effects of the shocks (to tourism and exports); applying a Social Accounting Matrix multiplier analysis to simulate the indirect and induced effects; and mappin...
Fiscal Policy, Growth and Income Distribution in the UK
Income and income inequality increased substantially in the UK during the industrial revolution. Income inequality was the highest around 1880. This triggered the enactment of a more egalitarian tax and transfer system, which halved income inequality by the 1960s. Inequality has risen again with fiscal system reforms in the last five decades. By analysin...
Student Performance in an Introductory Business Statistics Course: Does Delivery Mode Matter?
Alison Kelly Hawke
Approximately 600 undergraduates completed an introductory business statistics course in 2013 in one of two learning environments at Suffolk University, a mid-sized private university in Boston, Massachusetts. The comparison group completed the course in a traditional classroom-based environment, whereas the treatment group completed the course in...
Microcredit on a Large Scale: Appraising the Thailand Village Fund
The Thailand village fund (VF) is the second-largest microcredit scheme in the world. Nearly 80 000 elected local VF committees administer loans that reach 30 percent of all households. The value of VF loans has remained steady since 2006, even without new infusions of government funds, and loans go disproportionately to the poor. Based mainly on a...
The Politics of Local Government Stabilization Funds
Douglas Snow
Gerasimos A. Gianakis
The adoption, maintenance, and prudent use of budgetary stabilization funds are fundamental financial management precepts, yet the variables that influence the size of these funds are poorly understood. This article contributes to the stabilization fund literature by examining the extent to which variation in stabilization fund balances across muni...
The dynamics of economic change
The competitiveness of Vietnam's three largest cities
Khuong Vu
Ten Puzzles and Surprises Economic and Social Change in Vietnam, 1993-1998
Between 1993 and 1998 Vietnam's GDP grew by 8.9% annually. Recently-available household survey data of high quality show several apparently surprising changes: the total fertility rate fell rapidly from 3.2 to 1.8, son preference has almost disappeared, child stunting fell from 53% to 34% among under-fives, the open unemployment rate is just 1.6%,...
Does the Village Fund matter in Thailand? Evaluating the impact on incomes and spending
Jirawan Boonperm
Launched in 2001, the Thailand Village and Urban Community Fund (VF) provided almost US$2 billion – a million baht for each of Thailand's 78,000 villages and wards – to provide working capital for locally-run rotating credit associations. Using data from the Thailand Socioeconomic Surveys of 2002 and 2004, we find that VF borrowers were disproporti...
Why is agricultural trade within ECOWAS so high?
Lassana Cissokho
Kossi Makpayo
Abdoulaye Seck
It is widely believed that the countries of Africa trade relatively little with the outside world, and among themselves, despite an extensive network of regional trade agreements. We examine this proposition by focusing on agricultural trade. Specifically, we ask whether non-tariff barriers (NTBs) are stunting agricultural trade within the Economic...
The Surprising Effects of the Great Recession: Losers and Winners in Thailand in 2008-2009
In 2009, buffeted by the great recession, Thai gross domestic product fell by 2.3 percent. Using monthly data from the socio-economic surveys of 2007-2010, this paper finds, after controlling for household variables, that real consumption per capita rose in 2009 relative to 2008 for most groups, including the poor, urban and rural households, men,...
Appraising the Thailand village fund
The Thailand Village Fund is the second-largest microcredit scheme in the world. Nearly 80,000 elected local Village Fund committees administer loans that reach 30 percent of all households. The value of Village Fund loans has remained steady since 2006, even without new infusions of government funds, and loans go disproportionately to the poor. Bas...
Household coping and response to government stimulus in an economic crisis : evidence from Thailand
Gayatri B. Koolwal
The crash of global financial markets in 2008 caused a ripple effect on economic demand and growth worldwide. Export-oriented economies were hit particularly hard, and many governments stepped in quickly with broad-ranging stimulus programs to lessen the effects on households of rising unemployment and falling income. To better understand the role...
It is rarely sufficient simply to present estimates of means, coefficients, or poverty rates that have been calculated based on survey data. We also need measures of the variability of these measures, so that we may judge how much confidence to have in them.
Spatial Models
Economists and statisticians are rediscovering geography. Until relatively recently, most economic models essentially ignored spatial variations in data and in relationships; these were not at the heart of the issues that were considered to be interesting.
Most household survey data come from a single cross-section of households surveyed at a single point in time. This is useful if the purpose is to get a snapshot of income or poverty, and it does allow for a detailed analysis – for instance, of the proximate determinants of health or malnutrition or income. However, it is rarely possible to get an a...
Graphical Methods
It is tempting, but wrong, to believe that graphical techniques have little to offer for serious researchers in economics, statistics, or policy analysis. Their true power comes from the ability of the eye to discern patterns in a graph that are not clearly evident from lists of numbers or tabulated statistics. In Tufte's pithy phrase, "graphics re...
Bayesian Analysis
In the classical (or frequentist) approach to statistical methods, the analyst uses a sample of data to make inferences about the value of fixed but unknown population parameters. Among other things, this allows one to construct confidence intervals. Suppose, for example, we wish to estimate a proportion p, say a poverty rate, from a simple random...
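To make the contrast with the frequentist confidence interval concrete, here is the standard conjugate result that this type of example typically builds toward (the specific prior and data values below are illustrative assumptions, not taken from the chapter): if x of n randomly sampled households are poor and the prior for the poverty rate p is Beta(a, b), then the posterior is
$$ p \mid x \sim \mathrm{Beta}(a + x,\; b + n - x), \qquad \mathrm{E}[p \mid x] = \frac{a + x}{a + b + n}, $$
so, for instance, a Beta(1, 1) (uniform) prior with 37 poor households out of n = 200 gives a Beta(38, 164) posterior with mean about 0.188, from which a credible interval for the poverty rate can be read directly.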
Well over 200 years ago, Adam Smith wrote his classic An Inquiry into the Nature and Causes of the Wealth of Nations. Of course interest in causality goes back much further: Democritus, the pre-Socratic "laughing philosopher," wrote, "I would rather discover one causal law than be King of Persia."
Household survey data are generated by sampling, and cannot be interpreted successfully unless the sampling has been done correctly.
Multilevel Models and Small-Area Estimation
Household surveys can provide a great deal of information about incomes, spending, crops grown, and other household and individual characteristics. This detail comes at a cost: given the expense of surveying each household, the number of households sampled is typically fairly modest, rarely exceeding 10,000. Samples of this size are adequate for es...
Duration Models
We are often interested in modeling the time that elapses between one event and another – for instance, between one birth and the next, between a medical treatment and recovery, or between losing a job and finding the next one. Duration models are concerned with describing and explaining these spells.
A government sets up a scheme for extending microcredit to farmers; or builds an irrigation canal; or provides free textbooks to 10-year-olds; or introduces supplemental nutrition for pregnant mothers; or strengthens the social security net with a food-for-work program.
Grouping Methods
We are often interested in grouping observations. Whenever we report statistics broken down by expenditure quintile, or by region, or by household size, we are gathering observations into clusters. The purpose is to help make more sense of the data, to create more order out of a potentially chaotic mass of information.
Beyond Linear Regression
Nearly all regression analysis begins by estimating a linear model of the form: $$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + \varepsilon_i = \mathbf{x}_i'\beta + \varepsilon_i \quad (4.1) $$ where \( \beta = (\beta_0, \beta_1, \ldots, \beta_k)' \)...
Measuring Poverty and Vulnerability
The measurement of poverty and inequality is surprisingly intricate. The purpose of this chapter is to provide a self-contained overview of the issues that arise when trying to measure poverty. The virtue of this chapter is concision; for more extensive treatments, one might start with the Handbook by Haughton and Khandker ( 2009), or the classic e...
This chapter reviews the essentials of regression analysis. For most readers it will be a refresher that can be skimmed quickly; it provides a concise, self-contained coverage of topics that are the staple of any good course on econometrics.
Reply to comment by Duvendack and Palmer-Jones
Living Standards Analytics
Evaluating the impact of Egyptian Social Fund for Development programmes
Hala Abou-Ali
Shahid Khandker
Since its inception in 1991, the Egyptian Social Fund for Development (SFD) has spent about US$600 million supporting microcredit, and financing community development and infrastructure. Applying propensity-score matching using household survey data for 2004/05, this paper finds that SFD programmes have had clear and measurable effects, in the expe...
How Important Are Non-Tariff Barriers to Agricultural Trade within ECOWAS?
It is widely believed that the countries of Africa trade relatively little with the outside world, and among themselves, despite an extensive network of regional trade agreements. We examine this proposition by focusing on agricultural trade. Specifically, we ask whether non-tariff barriers (NTBs) are stunting agricultural trade within ECOWAS, a gr...
10. Insecurity and Business Development in Rural Indonesia
John M. MacDougall
Is the FairTax Fair?
Among recent proposals for a radical overhaul of the U.S. tax system, the FairTax plan is perhaps the most prominent. It would replace most current federal taxes on income and transfers of wealth with a national retail sales tax levied at a rate of 23 percent of the tax-inclusive price, and would avoid taxing poor households by providing a cash "pr...
Evaluating the impact of Egyptian social fund for development programs
Ali Hesham
El-Azony Heba
The Impact Evaluation Series has been established in recognition of the importance of impact evaluation studies for World Bank operations and for development in general. The series serves as a vehicle for the dissemination of findings of those studies. Papers in this series are part of the Bank's Policy Research Working Paper Series. The papers car...
Handbook on Poverty and Inequality
Poverty lines
S.R. Khandker
Insecurity and business development in rural Indonesia
J.M. MacDougall
Vulnerability to poverty
S. Khandker
Understanding the determinants of poverty
Does the village fund matter in Thailand ?
This paper evaluates the impact of the Thailand Village and Urban Revolving Fund on household expenditure, income, and assets. The revolving fund was launched in 2001 when the Government of Thailand promised to provide a million baht (about $22,500) to every village and urban community in Thailand as working capital for locally-run rotating credit...
Tackling rural poverty: an assessment of alternative strategies for mixed-farming areas in Peninsular Malaysia
This paper asks how the standard of living of the large number of poor farmers living in the small rice-growing areas scattered throughout Peninsular Malaysia might best be improved. The analysis is based on the results of a series of sample surveys, undertaken by the Malaysian Department of Agriculture during 1976-80. In the 1960s and 1...
Ethnic Minority Development in Vietnam
Bob Baulch
Truong Thi Kim Chuyen
This study examines the disparities in living standards between and among the different ethnic groups in Vietnam. Using data from the Vietnam Living Standards Surveys and 1999 Census, we show that 'majority' Kinh and Hoa households have substantially higher living standards than 'minority' households from Vietnam's 52 other ethnic groups. While the...
A Distributional Analysis of Adopting the FairTax: A Comparison of the Current Tax System and the FairTax Plan
MSIE Ngo
The FairTax and Charitable Giving
The Economic Effects of the FairTax: Results from the Beacon Hill Institute CGE Model
Taxing Sales Under the FairTax: What Rate Works?
Laurence J. Kotlikoff
As specified in Congressional bill H.R. 25/S. 25, the FairTax is a proposal to replace the federal personal income tax, corporate income tax, payroll (FICA) tax, capital gains, alternative minimum, self-employment, and estate and gifts taxes with a single-rate federal retail sales tax. The FairTax also provides a prebate to each household based on...
The Incidence of State Taxes on Oil and Gas
The rising economic value of Outer Continental Shelf (OCS) waters, for oil and gas as well as wind farms, has attracted the attention of abutting states. Thus Louisiana, faced with dwindling revenues from the oil and gas severance tax and high costs of reconstruction after Hurricane Katrina, is again considering the merits of an oil and gas process...
Tax Incidence in Vietnam *
Nguyen The Quan
Hoang Chi Bao
This paper examines the incidence of taxation in Vietnam, using data from the Living Standards Survey of 1997-1998 and an input-output matrix for 1997. The tax system in 1998 was slightly progressive, taking the equivalent of 7.8 percent of spending for households in the lowest, and 10.3 percent from households in the highest expenditure quintile. Th...
The Competitiveness of Vietnam's Three Largest Cities
Living with Environmental Change: Social Vulnerability, Adaptation and Resilience in Vietnam
Velocity Effects of Increased Variability in Monetary Growth in Egypt: A Test of Friedman's Velocity Hypothesis
Mina Baliamoune‐Lutz
This paper tests Friedman's hypothesis that increased variability in the growth of money supply causes velocity to decline, using Egyptian data from the period 1960-99. The monetary aggregates M1 and M2 are decomposed into anticipated and unanticipated components and the variability of money growth is computed as the standard deviation of five year...
Rural Vietnam in Transition
Quy-Toan Do
Lakshmi Iyer
Simon Johnson
This paper examines the impact of the 1993 Land Law of Vietnam which gave households the power to exchange, transfer, lease, inherit and mortgage their land-use rights. We use household surveys before and after the law was passed, together with the considerable variation across provinces in the speed of implementation of the reform to identify the...
The Effects of Rice Policy on Food Self-Sufficiency and on Income Distribution in Vietnam
An Economic Analysis of a Wind Farm in Nantucket Sound
Douglas Giuffre
Living Standards during an Economic Boom: The Case of Vietnam
Vân Kình, H.
Ðang, L.Q,
Phong Nguyen
Explaining the Pattern of Regional Unemployment: The Case of the Midi-Pyrénées Region
Yves Aragon
Christine M Thomas-Agnan
Unemployment rates vary widely at the sub-regional level. We seek to explain why such variation occurs, using data for 174 districts in the Midi-Pyrénées region of France for 1990–1991. A set of explanatory variables is derived from theory and the voluminous literature. The best model includes a correction for spatially autocorrelated errors. Unemp...
Trade Liberalization and Foreign Direct Investment in Vietnam
Nguyen Nhu Binh
ABSTRACT: In December 2001, a bilateral trade agreement (BTA) between Vietnam and the United States
Ethnic Minority Development in Vietnam: A Socioeconomic Perspective
Truong Thi
Kim Chuyen
This study examines the latest quantitative evidence on disparities in living standards between and among the different ethnic groups in Vietnam. Using data from the 1998 Vietnam Living Standards Survey and 1999 Census, we show that Kinh and Hoa ("majority") households have substantially higher living standards than "minority" households from Vietn...
Household enterprises in Vietnam: Survival, growth, and living standards
Wim Vijverberg
In Vietnam almost a quarter of adults worked in nonfarm household enterprises in 1998. Based on household panel data from the Vietnam Living Standards Surveys of 1993 and 1998, the authors find some evidence that operating an enterprise leads to greater affluence. The data show that nonfarm household enterprises are most likely to be operated by ur...
Household Enterprises in Vietnam:
this paper.
18. Reconstruction of War-torn Economies: Lessons for East Timor: Development Challenges for the World's Newest Nation
The use of frontier estimation in direct marketing. (Thanks are due to Paul Jones for research assistance.)
Tim Moriarty
Vietnam and Foreign Direct Investment: Speeding Economic Transition or Prolonging the Twilight Zone?
Since it committed itself to reform in late 1986, Vietnam has made a rapid transition from a planned to a market-driven economy. One of the first concrete steps towards reform was to promulgate a foreign investment law in 1988. The effect has been remarkable; in 1996, foreign investors committed themselves to projects worth $8.8 billion, in an econ...
The Use of Frontier Estimation in Direct Marketing
A common problem in direct marketing is to identify which physicians are the best prospects for an intervention that would encourage them to prescribe a drug. The standard procedure is to measure how far their prescribing behavior falls short of the level predicted by a regression line. We suggest that a better approach is to determine how far they...
Are simple tests of son preference useful? An evaluation using data from Vietnam
Son preference is widespread although not universal. Where it occurs it may lead to higher fertility rates. Ideally son preference should be measured in the context of a hazards or parity progression model of fertility, or a logistic model of contraceptive use. Such models require large amounts of survey data, particularly to measure the covariates...
Explaining the Pattern of Regional Unemployment: the Case of the Midi-Pyrenees Region.
Y. Aragon
Unemployment rates vary widely at the sub-regional level. We seek to explain why such variation occurs, using data for 174 districts in the Midi-Pyrenees region of France for 1990-91. A set of explanatory variables is derived from theory and the voluminous literature.
Tax Policy in Sub-Saharan Africa: Examining The Role of Excise Taxation
Bruce Bolnick
ABSTRACT: Excise taxes, notably on tobacco and petroleum products and on alcoholic beverages, raise revenue equivalent to 1.9 percent of gross domestic product in Sub-Saharan Africa. Their importance varies widely and inexplicably across countries, and shows no trend over time. In principle, excise taxes are good revenue sources, cheap to administer a...
The Reconstruction of a War-Torn Economy: The Next Steps in the Democratic Republic of Congo
Falling Fertility in Vietnam
According to data collected by the Vietnam Living Standards Survey 1992-93, total fertility was 3.2. This level is low for such a poor country, and reflects a continued fall from 5.6 in 1979, uninterrupted by the rapid transition from a planned to a market economy. Oddly, the proximate causes of the low total fertility, including contraceptive user...
Explaining Child Nutrition in Vietnam*
UNICEF has written that widespread malnutrition in Vietnam stems not from the insufficient production of food but from problems of availability, distribution and demand. The authors estimated two models of child nutrition using data from a representative and relatively large sample of Vietnamese households surveyed in 1992-93. No evidence was found...
Using a mixture model to detect son preference in Vietnam
Son preference is strong in Vietnam, according to attitudinal surveys and studies of contraceptive prevalence and birth hazards. These techniques assume a single model is valid for all families, but it is more plausible that son preference is found for some, but not all, families. Heterogeneous preferences may be addressed with a mixture model. Thi...
Gasoline Tax as a Corrective Tax: Estimates for the United States, 1970-1991
Gasoline consumption creates externalities, through pollution, road congestion, accidents, and import dependence. What effect would a higher gasoline tax have on the related magnitudes: gasoline consumption, miles driven, and road fatalities? In this paper, separate models are estimated for gasoline use per mile, miles driven per driver, and fatalit...
Designing VAT Systems: Some Efficiency Considerations
Ali Agha
This paper undertakes a cross-country analysis of the determinants of VAT compliance, using data from a sample of 17 OECD countries for 1987. An index of compliance is constructed and regressed against variables which represent characteristics of the countries and their VAT rates. It is found that (a) a higher VAT rate is associated with lower comp...
Son Preference in Vietnam
This article assesses the strength of son preference in Vietnam, as reflected in fertility behavior. It formulates and estimates a proportional hazards model applied to birth intervals, and a contraceptive prevalence model, using household survey data from 2,636 ever-married women aged 15-49 with at least one living child who were interviewed for t...
Adding mystery to the case of the missing currency
The Federal Reserve Bank reports that approximately $300 billion worth of United States currency was in circulation in 1992. Yet based on household survey and other information, households and firms in the country claim to hold only a fifth of this total. A recent article by Sprenkle argues that the "missing currency" is largely held outside the Un...
Model Selection in Harmonic Non-Linear Regression
Alan Izenman
In this paper we are concerned with the twin problems of fitting a harmonic model to a time series, and then using information criteria to determine how many harmonic components to include in the series. Let Y(t), t = 0, ±1, ±2,… be a time series. A harmonic model has the form $$ Y(t) = \alpha_0 + \sum_{j = 1}^{k} \alpha_j \cos\left(...
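For reference, the standard harmonic (trigonometric) regression model that this excerpt appears to be introducing pairs each cosine term with a sine term; a conventional form, stated here as background rather than quoted from the truncated abstract, is
$$ Y(t) = \alpha_0 + \sum_{j=1}^{k} \left[ \alpha_j \cos(\omega_j t) + \beta_j \sin(\omega_j t) \right] + \varepsilon(t), $$
where the \( \omega_j \) are the (possibly unknown) frequencies and k is the number of harmonic components to be selected by an information criterion.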
Should OPEC use dollars in pricing oil?
Information criteria and harmonic models in time series analysis
In this paper we present a novel methodology for estimating harmonic models in time series. The amplitude density function is derived from an inversion of the spectral representation of the series and is found to have the property of strong consistency. It is used in conjunction with non-linear least squares regression in an iterative procedure to...
Farm price responsiveness and the choice of functional form : An application to rice cultivation in West Malaysia
The estimated responsiveness of farmers to changes in the prices of output or inputs is in general very sensitive to the econometric model used. Using cross-section survey data for farms in marginal rice-growing districts in West Malaysia, price elasticities based on estimated input demand equations derived from a quadratic restricted profit functio...
Toward an economic theory of voluntary resignation by dictators
David Laband
The Sources And Determinants Of Income Of Single Cropping Padi Farmers In Malaysia
Glenn P. Jenkins
Despite the belief that single-cropping padi farmers constitute one of the very poorest groups in Peninsular Malaysia, little is known in detail about the level and sources of income of this group, or about the determinants of this income. Such knowledge is of considerable practical importance, for in its absence it is impossible to gauge the effec...
Do Project Labor Agreements Raise Construction Costs?
A Project Labor Agreement (PLA) is a form of "pre-hire" collective bargaining agreement between building trades unions and the construction clients that typically requires any firm that bids on a project to hire workers through union halls and follow union rules on pensions, work conditions and dispute resolution. In return, unions agree not to strik...
A Comparison of the FairTax Base and Rate with Other National Tax Reform Proposals
Alfonso Sanchez-Penalver
Free but Still Costly: The Costs and Benefits of Offshore Wind Power in Massachusetts
Free But Costly: An Economic Analysis of a Wind Farm in Nantucket Sound
THE RECONSTRUCTION OF WAR-TORN ECONOMIES Technical Paper
Trade Agreement and Tax Incentives: The Irish Experience
Trade liberalization played an important role in the transformation of the Irish economy, starting in the early 1960s. Tax relief and industrial subsidies were also significant. This document explains why the Irish case is relevant for Latin America; summarize the main explanations for Ireland's recent economic success; provide a chronology of Iris... | CommonCrawl |
Nutrient Cycling in Agroecosystems
September 2017, Volume 109, Issue 1, pp 77–102
Soil nutrient maps of Sub-Saharan Africa: assessment of soil nutrient content at 250 m spatial resolution using machine learning
Tomislav Hengl
Johan G. B. Leenaars
Keith D. Shepherd
Markus G. Walsh
Gerard B. M. Heuvelink
Tekalign Mamo
Helina Tilahun
Ezra Berkhout
Matthew Cooper
Eric Fegraus
Nketia A. Kwabena
Spatial predictions of soil macro and micro-nutrient content across Sub-Saharan Africa at 250 m spatial resolution and for the 0–30 cm depth interval are presented. Predictions were produced for 15 target nutrients: organic carbon (C) and total (organic) nitrogen (N), total phosphorus (P), and extractable—phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), sulfur (S), sodium (Na), iron (Fe), manganese (Mn), zinc (Zn), copper (Cu), aluminum (Al) and boron (B). Model training was performed using soil samples from ca. 59,000 locations (a compilation of soil samples from the AfSIS, EthioSIS, One Acre Fund, VitalSigns and legacy soil data) and an extensive stack of remote sensing covariates in addition to landform, lithologic and land cover maps. An ensemble model was then created for each nutrient from two machine learning algorithms—random forest and gradient boosting, as implemented in the R packages ranger and xgboost—and then used to generate predictions in a fully-optimized computing system. Cross-validation revealed that, apart from S, P and B, significant models can be produced for most targeted nutrients (R-square between 40 and 85%). Further comparison with the OFRA field trial database shows that soil nutrients are indeed critical for agricultural development, with Mn, Zn, Al, B and Na appearing as the most important nutrients for predicting crop yield. Limiting factors for mapping nutrients using the existing point data in Africa appear to be (1) the high spatial clustering of sampling locations, and (2) the lack of more detailed parent material/geological maps. Logical steps towards improving prediction accuracies include: further collection of input (training) point samples, further harmonization of measurement methods, addition of more detailed covariates specific to Africa, and implementation of a full spatio-temporal statistical modeling framework.
Macro-nutrients Micro-nutrients Random forest Machine learning Soil nutrient map Spatial prediction Africa
Sub-Saharan Africa (SSA) has over 50% of the world's potential land for cultivation, yet only a small portion of this land satisfies conditions for agricultural production from cropping (Lal 1987; Jayne et al. 2010). Although the proportion of arable land in SSA has been steadily growing since the 1950's, currently only 9% of SSA is arable land and only 1% is permanently cultivated1. Current cropping yields in Sub-Saharan Africa are low, often falling well short of water-limited yield potentials (Jayne et al. 2010). This underperformance is due to a number of factors: soil nutrient deficiencies, soil physical constraints, pests and diseases, and sub-optimal management. Whilst it is well established that nutrient deficiencies are constraining yields in SSA (Giller et al. 2009), only limited information is available on soil nutrient contents and nutrient availability. Only very general (approximate) maps of soil micro-nutrients are currently available for the whole continent (see e.g. Kang and Osiname 1985; Roy et al. 2006 and/or Alloway 2008).
The Africa Soil Information Services project has recently developed a gridded Soil Information System of Africa at 250 m resolution showing the spatial distribution of primary soil properties of relatively stable nature, such as depth to bedrock, soil particle size fractions (texture), pH, contents of coarse fragments, organic carbon and exchangeable cations such as Ca, Mg, Na, K and Al and the associated cation exchange capacity (Hengl et al. 2015, 2017). These maps were derived from a compilation of soil profile data collected from current and previous soil surveys. There is now a growing interest in applying similar spatial prediction methods to produce detailed maps of soil nutrients (including micro-nutrients) for SSA, in order to support agricultural development, intensification and monitoring of the soil resource (Kamau and Shepherd 2012; Shepherd et al. 2015; Wild 2016). Detailed maps of soil nutrients, including micro-nutrients, are now possible due to the increasing inflow of soil samples collected at field point locations by various government and/or NGO funded projects: e.g. by projects supported by the National Governments of Ethiopia, Tanzania, Kenya, Uganda, Nigeria, Ghana, Rwanda, Burundi and others; and by organizations such as the Bill and Melinda Gates Foundation (Leenaars 2012; Shepherd et al. 2015; Towett et al. 2015; Vågen et al. 2016) and similar, as well as by the private sector.
We present here results of assessment of nutrient content for a selection of soil nutrients for Sub-Saharan Africa at a relatively detailed spatial resolution (250 m). Our overarching objective was to map general spatial patterns of soil nutrient distribution in Sub-Saharan Africa. This spatial distribution could then potentially be used as:
inputs for pan-continental soil-crop models,
inputs for large scale spatial planning projects,
inputs for regional agricultural decision support systems,
general estimates of total nutrient content against which future human-induced or natural changes may be recognized and measured, and as
prior information to guide more detailed soil sampling surveys.
As the spatial prediction framework we use an ensemble of random forest (Wright and Ziegler 2016) and gradient boosting (Chen and Guestrin 2016) machine-learning techniques, combined using the weighted average formula described in Sollich and Krogh (1996). As inputs to model building we use the most complete compilation of soil samples obtainable and a diversity of soil covariates (primarily based on remote sensing data).
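A minimal sketch of this ensemble idea in R is given below. It is illustrative only: the object names (train, newdata, covs), the example response (log-transformed extractable K), the hyperparameters, and the weights w_rf and w_xgb are assumptions rather than the authors' actual settings, and the calls assume reasonably recent versions of the ranger and xgboost packages.

library(ranger)
library(xgboost)

# 'train' holds the response in column 'logK' plus the covariate columns listed in 'covs';
# 'newdata' holds the same covariates for the 250 m prediction grid cells.
rf  <- ranger(logK ~ ., data = train[, c("logK", covs)], num.trees = 500)
xgb <- xgboost(data = as.matrix(train[, covs]), label = train$logK,
               nrounds = 100, objective = "reg:squarederror", verbose = 0)

pred_rf  <- predict(rf, data = newdata[, covs])$predictions
pred_xgb <- predict(xgb, as.matrix(newdata[, covs]))

# Weighted-average ensemble; the weights (summing to 1) would in practice be set
# inversely proportional to each learner's cross-validation error.
pred_ens <- w_rf * pred_rf + w_xgb * pred_xgb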
We generate predictions of individual nutrients, then look at the possibilities of delineating nutrient management zones using automated cluster analysis. At the end, we analyze whether the produced predictions of soil nutrients (maps) are correlated with field-measured crop yields based on field trials.
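As an illustration of the management-zone step, the sketch below uses one common choice of automated cluster analysis, k-means; the paper's own clustering procedure may differ. It assumes the per-nutrient predictions have already been assembled into a numeric matrix nutrient_grid (one row per grid cell, one column per nutrient); both that name and the choice of eight zones are hypothetical.

# Standardize each predicted nutrient layer so that no single nutrient
# dominates the Euclidean distances used by k-means.
X <- scale(nutrient_grid)

set.seed(1)
# Partition the grid cells into eight candidate nutrient management zones.
zones <- kmeans(X, centers = 8, nstart = 25)$cluster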
Soil nutrient samples
As input data, we used a compilation of georeferenced soil samples (ca 59,000 unique locations) processed and analyzed consistently using the Mehlich 3 method and/or equivalent (Eckert and Watson 1996; Roy et al. 2006). Data sets used for model building include:
AfSIS (Africa Soil Information Service) Sentinel Sites: 18,000 soil samples at 9600 locations i.e. 60 sites of 10 by 10 km (Walsh and Vågen 2006; Vågen et al. 2010). Samples were taken in the period 2008–2016 at 0–20 and 20–50 cm soil depth intervals; analyzed by mid-infrared (MIR) diffuse reflectance spectroscopy based on calibration points from 960 samples (10%) analyzed by conventional wet chemistry including Mehlich-3, and thermal oxidation for org. C and total N. Sentinel Sites were designed to cover all of the agro-ecological regions in SSA and therefore should provide a good range of covariates at each location.
EthioSIS (Ethiopia Soil Information Service): 15,000 topsoil samples (0–20 cm) from Ethiopia analyzed by conventional wet chemistry including Mehlich-3. The majority of samples was collected in the period 2012–2015.
The Africa Soil Profiles database compiled for AfSIS: over 60,000 samples of 18,500 soil profiles collected from on average four depth intervals to on average 125 cm depth in period 1960–2010 (mainly 1980–1990) and 40 countries, with C, N, K, Ca and Mg available for nearly all points, P for one third of the points and micro-nutrients for ca 20% of points (Leenaars 2012).
International Fertilizer Development Center (IFDC) projects co-funded by the government of The Netherlands: 3500 topsoil samples (0–20 cm) for Uganda, Rwanda and Burundi also analyzed using soil spectroscopy. Majority of samples was collected in the period 2009–2014.
One Acre Fund: some 2400 topsoil samples (0–20 cm) for Uganda and Kenya, collected in the period 2010–2016.
University of California, Davis: some 1800 topsoil samples (0–20 cm) for Kenya.
VitalSigns: 1374 soil samples from Ghana, Rwanda, Tanzania and Uganda also analyzed using mid-infrared spectroscopy, collected in the period 2013–2016.
Combined histograms (at log-scale) for the soil macro-nutrients based on a compilation of soil samples for Sub-Saharan Africa
Combined histograms (at log-scale) for the soil micro-nutrients based on a compilation of soil samples for Sub-Saharan Africa
We focused on producing spatial predictions for the following 15 nutrients (all concentrations are expressed as mass fractions using mg per kg soil fine earth, i.e. ppm): organic carbon (C) and total nitrogen (N), total phosphorus (P), and extractable: phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), sulfur (S), sodium (Na), iron (Fe), manganese (Mn), zinc (Zn), copper (Cu), aluminum (Al) and boron (B). Although C, Na and Al are commonly not classified as soil nutrients, their spatial distribution can help in the assessment of soil nutrient constraints. For example, extractable Al can be an important indicator of soil production potential: high exchangeable Al levels can reduce the growth of sensitive crops as soil pH (H2O) drops below 5.3 and become toxic to the majority of plants below 4.5 (White 2009).
Histograms of nutrients based on the data compilation are depicted in Figs. 1 and 2. Although the soil data sources used for model calibration are quite diverse, the majority of soil samples had been analyzed using the MIR technology by the (same) soil-plant spectral diagnostics laboratory at the World Agroforestry Centre (ICRAF), Nairobi, and Crop Nutrition Laboratory Services, Nairobi, and are hence highly compatible.
The time span of field data collection is wide and legacy soil data points have diverse origins, often referring to field work done in the last 20+ years (1980–2016). Apart from the legacy soil profile data set, all other soil samples (>60%) are relatively up-to-date and refer to the period 2008–2016. The following two assumptions, therefore, must be borne in mind:
the produced spatial predictions presented in this paper might not everywhere reflect current status of nutrients on the field, i.e. they should only be used as long-term, average estimates, and
the temporal variation in soil nutrients is ignored—or in other words, dynamics of soil nutrients over the 1980–2016 span is not discussed in this work.
Note also that nutrient status, in terms of total amount of extractable nutrients (kg/ha) in the soil, is only partially reflected by relative nutrient contents (g/kg) in a limited depth interval of e.g. 0–20 cm. Thus the available amount of nutrients is only a fraction of the measured amount. Additionally, bulk density would be necessary for conversion to kg/ha. Regardless, concentrations are still highly relevant as most fertilizer recommendations are based on nutrient concentrations, rather than nutrient stocks.
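For orientation, converting a measured content to a stock requires the bulk density and the thickness of the depth interval; the relation below is a standard conversion, and the example numbers are illustrative assumptions rather than values from the data set:
$$ \mathrm{stock}\;(\mathrm{kg\,ha^{-1}}) = c\;(\mathrm{mg\,kg^{-1}}) \times \rho_b\;(\mathrm{kg\,m^{-3}}) \times d\;(\mathrm{m}) \times 10^{4}\;\mathrm{m^{2}\,ha^{-1}} \times 10^{-6} = 0.01\,c\,\rho_b\,d. $$
For example, 50 mg/kg of extractable K in a 0–20 cm layer with a bulk density of 1300 kg/m³ corresponds to about 0.01 × 50 × 1300 × 0.2 ≈ 130 kg/ha.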
Unfortunately, not all soil nutrients were available at all sampling locations. Figure 3 shows an example of the differences in the spatial spread of points for extr. P, K, Mg and Fe. For micro-nutrients such as Fe, it is obvious that points are spatially clustered and available only in selected countries. Large gaps in geographical coverage also often occur due to limitations on sampling such as accessibility and safety issues, so that especially tropical forests and wetlands are under-represented in the sampling designs. Nevertheless, in most of main sampling campaigns such as the AfSIS sentinel sites, locations were purposely selected to represent the main climatic zones (Vågen et al. 2010), so in this sense coverage of sampling locations can be considered satisfactory for most nutrients.
Comparison of spatial coverage of sampling locations for four nutrients: ext. P, ext. K, ext. Mg and ext. Fe. Data sources: AfSIS Sentinel Sites soil samples, EthioSIS soil samples, Africa Soil Profiles DB soil samples, IFDC-PBL soil samples, One Acre Fund soil samples, University of California soil samples and Vital Signs soil samples. See text for more details
Covariates
As spatial covariates, a large stack of GIS layers as proxies for soil forming processes (climate, landform, lithology and vegetation) was used:
DEM-derived surfaces—slope, profile curvature, Multiresolution Index of Valley Bottom Flatness (VBF), deviation from mean elevation value, valley depth, negative and positive Topographic Openness and SAGA Wetness Index, all derived using SAGA GIS at 250 m resolution (Conrad et al. 2015);
Long-term averaged monthly mean and standard deviation of the MODIS Enhanced Vegetation Index (EVI) at 250 m;
Long-term averaged monthly mean and standard deviation of the MODIS land surface temperature (daytime and nighttime) based on the 1 km resolution data;
Land cover map of the world at 300 m resolution for the year 2010 prepared by the European Space Agency (http://www.esa-landcover-cci.org/);
Monthly precipitation images at 1 km spatial resolution based on the CHELSA climate data set obtained from http://chelsa-climate.org (Karger et al. 2016);
Global cloud dynamics images at 1 km resolution obtained from http://www.earthenv.org/cloud (Wilson and Jetz 2016);
Geologic age of surficial outcrops from the USGS map (at general scale) showing geology, oil and gas fields and geological provinces of Africa (Persits et al. 2002);
Kernel density maps based on the Mineral Resources Data System (MRDS) points (McFaul et al. 2000), for mineral resources mentioning Fe, Cu, Mn, Mg, Al and Zn;
Groundwater storage map, depth to groundwater and groundwater productivity map provided by the British Geological Survey (MacDonald et al. 2012);
Landform classes (breaks/foothills, flat plains, high mountains/deep canyons, hills, low hills, low mountains, smooth plains) based on the USGS Map of Global Ecological Land Units (Sayre et al. 2014);
Global Water Table Depth in meters based on Fan et al. (2013);
Landsat bands red, NIR, SWIR1 and SWIR2 for years 2000 and 2014 based on the Global Forest Change 2000–2014 data v1.2 obtained from http://earthenginepartners.appspot.com/science-2013-global-forest (Hansen et al. 2013);
Global Surface Water dynamics images: occurrence probability, surface water change, and water maximum extent (Pekel et al. 2016), obtained from https://global-surface-water.appspot.com/download;
Distribution of Mangroves derived from Landsat images and described in Giri et al. (2011);
Predicted soil pH (\(\hbox {H}_2\hbox {O}\)) maps at 250 m produced within the SoilGrids project (https://soilgrids.org);
DEM derivatives were based on the global merge of the SRTMGL3 DEM and GMTED2010 data sets (Danielson and Gesch 2011). Long-term estimates of EVI seasonality were derived using a stack of MOD13Q1 EVI images (Savtchenko et al. 2004); the long-term MODIS LST day-time and night-time images were likewise derived from a stack of MOD11A2 LST images (Wan 2006). Both MODIS products were based on data for the period 2000–2015. The Global Surface Water dynamics images refer to the period 1984–2015 and the CHELSA climate images to the period 1979–2015.
Remote sensing data had been previously downloaded and prepared via ISRIC's massive storage server for the purpose of the SoilGrids project (Hengl et al. 2017). The majority of covariates cover the time period 2000–2015, i.e. they match the time span for most of the newly collected soil samples.
Prior to modeling, all covariates have been stacked to the same spatial grids of 250 m, as the best compromise between computational load and average resolution of all covariates. To downscale climatic images and similar coarser resolution images we used the bicubic spline algorithm as available in the GDAL software (Mitchell and Developers 2014).
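For illustration, downscaling one coarse climatic layer to the 250 m grid can be done with a single gdalwarp call from R; the sketch below assumes the GDAL command line tools are installed and uses hypothetical file names and a hypothetical bounding box.

in_tif  <- "CHELSA_prec_11_1km.tif"          # hypothetical input at ca. 1 km resolution
out_tif <- "CHELSA_prec_11_250m.tif"         # output aligned with the 250 m prediction grid
bbox    <- c(-20, -35, 55, 25)               # hypothetical xmin, ymin, xmax, ymax (degrees)
cmd <- paste("gdalwarp", in_tif, out_tif,
             "-r cubicspline",               # bicubic spline resampling
             "-tr 0.002083333 0.002083333",  # ca. 250 m expressed in decimal degrees
             "-te", paste(bbox, collapse = " "),
             "-overwrite")
system(cmd)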
Spatial prediction framework
Model fitting and prediction were undertaken using an ensemble of two Machine Learning algorithms (MLAs) (Hengl et al. 2017): ranger (random forest) (Wright and Ziegler 2016) and xgboost (gradient boosted trees) (Chen and Guestrin 2016), as implemented in the R environment for statistical computing. Both random forest and gradient boosting have already proven to be efficient in predicting chemical and physical soil properties at the continental and global scale (Hengl et al. 2017). The packages ranger and xgboost were selected also because both are highly suitable for dealing with large data sets and support parallel computing in R.
For all target variables we use depth as a covariate, so that the resulting models make depth-specific predictions of target variables:
$$\begin{aligned} \mathrm {Y}(xyd) = d + X_1 (xy) + X_2 (xy) + \cdots + X_p (xy) \end{aligned}$$
where Y is the target variable, usually nutrient concentration in ppm, d is the depth of observation, \(X_p (xy)\) are covariates and xy are easting and northing. Note that there is some bias in sampling representation towards the topsoil, as a large portion of the samples only has values for the 0–20 cm depth, i.e. represents a single depth. On the other hand, almost all of the legacy soil profiles (18,500 locations) contain measurements for all horizons, so that modeling the soil variable–depth relationship is still possible.
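A minimal R sketch of this two-learner approach is given below; it uses a small synthetic regression matrix with hypothetical column names, and the simple averaging of the two predictions is an assumption for illustration rather than the exact ensemble rule used operationally.

library(ranger)
library(xgboost)

set.seed(1)
d <- data.frame(logK   = rnorm(1000),                                  # log-transformed nutrient value
                depth  = sample(c(0, 10, 30, 60), 1000, replace = TRUE),
                ph     = runif(1000, 4, 8),
                prec11 = runif(1000, 0, 300),
                evi    = runif(1000))
covs <- c("depth", "ph", "prec11", "evi")

m.rf <- ranger(x = d[, covs], y = d$logK, num.trees = 500, mtry = 2)
m.gb <- xgboost(data = as.matrix(d[, covs]), label = d$logK, nrounds = 100,
                max_depth = 4, eta = 0.1, objective = "reg:squarederror", verbose = 0)

new  <- d[1:5, covs]
pred <- (predict(m.rf, new)$predictions + predict(m.gb, as.matrix(new))) / 2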
We make predictions at four standard depths: 0, 5, 15, and 30 cm (at point support), and aggregate these to the 0–30 cm standard depth interval using the trapezoidal rule for numerical integration:
$$\begin{aligned} \int _{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum _{k=1}^{N-1} \left( x_{k+1} - x_{k} \right) \left( f(x_{k+1}) + f(x_{k}) \right) \end{aligned}$$
where N is the number of depths in the [a, b] interval where predictions were made, x is depth (\(a=x_{1}<x_{2}< \cdots <x_{N}=b\)) and f(x) is the value of nutrient content at depth x. Although we could have made predictions for every 1 cm, for practical reasons (computational intensity and storage) four depths were considered sufficient to represent changes of soil variables with depth. The depths 0, 5, 15 and 30 cm were chosen also because they are the standard depths used in the SoilGrids project (Hengl et al. 2017). For several soil nutrients, especially organically bound nutrients such as nitrogen, carbon, sodium and, to a lesser extent, phosphorus, modeling the soil variable–depth relationship is important because concentrations generally show distinct changes with depth.
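A minimal R sketch of this aggregation step, assuming predictions at the four standard depths are already available (the example values are hypothetical):

agg_0_30 <- function(pred, depths = c(0, 5, 15, 30)) {
  # trapezoidal rule over the prediction depths, divided by the interval width
  # to obtain a depth-weighted mean concentration for 0-30 cm
  stopifnot(length(pred) == length(depths))
  integral <- sum(diff(depths) * (head(pred, -1) + tail(pred, -1)) / 2)
  integral / diff(range(depths))
}
agg_0_30(c(1200, 1100, 950, 800))   # predictions (ppm) at 0, 5, 15 and 30 cm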
We initially considered kriging of the remaining residuals, but eventually this was not considered worth the effort for two reasons. First, most of the observation points are far apart, so kriging would have had little effect on the output predictions. Second, the variograms of the residuals all had a nugget-to-sill ratio close to 1, meaning that the residual variation lacked spatial structure and would not benefit from spatial interpolation, e.g. by the use of kriging (Hengl et al. 2007). From our work so far on this and other soil-related projects, a rule of thumb seems to be that once a machine learning model explains over 60% of the variation in the data, kriging of the residuals is usually not worth the computational effort.
To optimize fine-tuning of the Machine Learning model parameters, the caret::train function (Kuhn 2008) was used consistently for all nutrients. This helped especially with fine-tuning of the xgboost model parameters and the mtry parameter used in the random forest models. Optimization and fine-tuning of the Machine Learning algorithms was computationally demanding and hence time consuming, but in our experience it often led to a 5–15% improvement in overall accuracy.
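A minimal R sketch of such tuning via caret::train is shown below; the synthetic data and the tuning grid are illustrative assumptions, not the exact settings used in the project.

library(caret)
library(ranger)

set.seed(1)
d <- data.frame(logK = rnorm(500), depth = runif(500, 0, 50),
                ph = runif(500, 4, 8), prec = runif(500, 0, 300))

ctrl <- trainControl(method = "cv", number = 5)
rf.tune <- train(x = d[, c("depth", "ph", "prec")], y = d$logK,
                 method = "ranger", trControl = ctrl, num.trees = 150,
                 tuneGrid = expand.grid(mtry = c(1, 2, 3),
                                        splitrule = "variance",
                                        min.node.size = 5))
rf.tune$bestTune   # e.g. the mtry value carried forward to the final model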
All processing steps and the data conversion and visualization functions have been documented via ISRIC's institutional GitHub account (https://github.com/ISRICWorldSoil/AfricaSoilNutrients). Access to the legacy data from the Africa Soil Profiles database and other data sets produced by the AfSIS project is public; for access to the other data sets, consider contacting the corresponding agencies.
Accuracy assessment
For accuracy assessment, 5-fold cross-validation was used: each model was re-fitted five times using 80% of the data and then used to predict the remaining 20% (Kuhn 2008). Predictions were compared with the withheld observations. For each soil nutrient, the coefficient of determination (\(R^2\), the amount of variation explained by the model) and the root mean squared error (RMSE) were derived. The amount of variation explained by the model was derived as:
$$\begin{aligned} {\varSigma }_{\%} = \left[ 1 - \frac{\mathrm {SSE}}{\mathrm {SST}} \right] = \left[ 1 - \frac{\mathrm {RMSE}^2}{\sigma _z^2} \right] \quad [0{-}100\%] \end{aligned}$$
where SSE is the sum of squares of residuals at the cross-validation points (i.e. \(\mathrm {RMSE}^2 \cdot n\)) and SST is the total sum of squares. A coefficient of determination close to 1 indicates a perfect model, i.e. 100% of the variation is explained by the model. As all soil nutrients had a near log-normal distribution, we report the amount of variation explained by the model after log-transformation. For the cross-validation correlation plots (observed vs predicted; see further Fig. 9), a log scale was likewise used to ensure equal emphasis on low and high values.
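The computation can be summarized in a short R sketch; obs and pred below are hypothetical vectors of observed and cross-validated predicted concentrations, and log1p is used here as one possible log-transform that also guards against zero values.

set.seed(1)
obs  <- exp(rnorm(200, mean = 5))           # hypothetical observed values (ppm)
pred <- obs * exp(rnorm(200, sd = 0.4))     # hypothetical cross-validated predictions
log.obs  <- log1p(obs)
log.pred <- log1p(pred)
rmse <- sqrt(mean((log.pred - log.obs)^2))
var.explained <- 1 - rmse^2 / var(log.obs)  # equivalent to 1 - SSE/SST
round(100 * var.explained, 1)               # reported as a percentage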
Multivariate and cluster analysis
In addition to fitting models per nutrient, we also ran multivariate and cluster analyses to determine cross-correlations and groupings in the values. First, we analyzed the correlation between nutrients by running a principal component analysis. Second, we allocated individual sampling locations to clusters using unsupervised classification to determine areas with relatively homogeneous concentrations of nutrients. For this we used the fuzzy k-means algorithm as implemented in the h2o package (Aiello et al. 2016).
Both the principal component analysis and the unsupervised fuzzy k-means clustering were run on transformed variables using Aitchison compositions as implemented in the compositions package (van den Boogaart and Tolosana-Delgado 2008). Note that transforming the original nutrient values into compositions is important: without it, the application of statistical methods that assume a free Euclidean space (e.g. PCA and unsupervised fuzzy k-means clustering) gives a highly skewed view of the variable space (van den Boogaart and Tolosana-Delgado 2008).
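A minimal R sketch of this pre-processing and clustering chain is given below; the nutrient matrix is synthetic, k = 20 is taken from the results reported further below, and h2o.kmeans is used here as the clustering call, so the fuzzy variant and its exact settings are simplified.

library(compositions)
library(h2o)

set.seed(1)
nut <- abs(matrix(rnorm(300 * 5, mean = 10), ncol = 5))
colnames(nut) <- c("N", "P", "K", "Ca", "Mg")

comp    <- acomp(nut)                 # Aitchison composition (rows sum to 1)
ilr.mat <- unclass(ilr(comp))         # isometric log-ratio coordinates
pc      <- prcomp(ilr.mat)            # PCA on the transformed values
summary(pc)                           # proportion of variance per component

h2o.init(nthreads = 2)
hf <- as.h2o(as.data.frame(ilr.mat))
km <- h2o.kmeans(training_frame = hf, x = colnames(hf), k = 20, seed = 1)
cl <- as.data.frame(h2o.predict(km, hf))$predict   # cluster allocation per sample
h2o.shutdown(prompt = FALSE)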
After clusters in nutrient values were determined, they were correlated with the same stack of covariates used to model individual nutrients, and a random forest classification model was fit and used to generate predictions for the whole of SSA (see further Fig. 10). As probability maps were produced for each cluster, we also calculated a map of the Scaled Shannon Entropy Index (SSEI) to provide a measure of the classification uncertainty (Shannon 1949; Borda 2011):
$$\begin{aligned} \mathsf {H}_{s}(x) = -\sum _{k=1}^K{ p_k(x) \cdot \log _{K} (p_k(x))} \end{aligned}$$
where K is the number of clusters, \(\log _{K}\) is the logarithm to base K and \(p_{k}\) is the probability of cluster k. The scaled Shannon Entropy Index (\(\mathsf {H}_s\)) ranges from 0 to 100%, where 0 indicates no ambiguity (one of the \(p_{k}\) equals one and all others are zero) and 100% indicates maximum confusion (all \(p_{k}\) equal \(\frac{1}{K}\)). The \(\mathsf {H}_s\) thus indicates where the allocation to the 'true' cluster is most uncertain.
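The index can be computed per pixel with a few lines of R; the probability matrix in this sketch is hypothetical.

ssei <- function(p, eps = 1e-12) {
  # p: matrix of predicted cluster probabilities (rows = pixels, columns = K clusters)
  K <- ncol(p)
  p <- pmax(p, eps)                        # avoid log(0)
  100 * (-rowSums(p * log(p, base = K)))   # 0% = no ambiguity, 100% = maximum confusion
}
probs <- rbind(c(1, 0, 0, 0), c(0.25, 0.25, 0.25, 0.25), c(0.6, 0.2, 0.1, 0.1))
round(ssei(probs), 1)                      # ca. 0, 100 and 79%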
In summary, the process of generating maps of nutrient clusters consists of five major steps:
Transform all nutrient values from ppm's to compositions using the compositions package (van den Boogaart and Tolosana-Delgado 2008).
Determine the optimal number of classes for clustering using the mclust package (Fraley et al. 2012) i.e. by using the Bayesian Information Criterion for expectation-maximization.
Allocate sampling points to clusters using unsupervised classification with fuzzy k-means using the h2o package (Aiello et al. 2016).
Fit a spatial prediction model using the ranger package based on clusters at sampling points and the same stack of covariates used to predict nutrients.
Predict clusters over the whole area of interest and produce probabilities per cluster.
Derive scaled Shannon Entropy Index (SSEI) map and use it to quantify spatial prediction uncertainty.
Importance of soil nutrient maps for crop yield data
In order to evaluate the importance of these soil nutrient maps for actual agricultural planning, we used the publicly available Optimising Fertilizer Recommendations in Africa (OFRA) field trials database. OFRA, a project led by CABI (Kaizzi et al. 2017), contains 7954 legacy rows from over 600 trials collected in the period 1960–2010. The field trials include crop yields and field conditions for the majority of crops, including maize, cowpea, sorghum, (lowland, upland) rice, groundnut, bean, millet, soybean, wheat, cassava, pea, climbing bean, barley, sunflower, (sweet, irish and common) potato, cotton and similar. The OFRA database covers only 13 countries in Sub-Saharan Africa and hence does not represent all combinations of climatic and land conditions under which crops are grown. It is, nevertheless, the most extensive field trial database publicly available for SSA.
We model the relationship between crop yield, mapped climatic conditions (monthly temperatures and rainfall based on the CHELSA data set) and the mapped soil nutrients using a single model of the form:
$$\begin{aligned} \mathrm {cropyield} [\mathrm {t/ha}] = f \begin{Bmatrix} \mathrm {croptype}, \mathrm {variety}, \mathrm {application}, \mathrm {nutrients}, \mathrm {climate} \end{Bmatrix} \end{aligned}$$
where \(\mathrm {cropyield}\), \(\mathrm {croptype}\), \(\mathrm {variety}\) and \(\mathrm {application}\) are defined in the OFRA database, \(\mathrm {nutrients}\) are the maps we produced, and \(\mathrm {climate}\) are the CHELSA climatic images for SSA. The variable \(\mathrm {croptype}\) is a factor-type variable (e.g. "maize", "cowpea", "sorghum", "rice" etc), and so are \(\mathrm {variety}\) (e.g. "H625", "Glp 2", "Maksoy2&4" etc) and \(\mathrm {application}\) ("2 splits", "2/3 applied basally", "all fertilizer applied along the furrows" etc). We also fit this model using random forest, so that crop yields can be differentiated for various crop types, covariates and applications via a single model. Once the model in Eq. (5) is fitted, it can be used to generate predictions for various combinations of these inputs, which could lead to an almost infinite number of possible maps.
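A minimal R sketch of such a single random forest model is shown below; the OFRA-style training table is synthetic and the factor levels and covariate names are hypothetical.

library(ranger)

set.seed(1)
n <- 800
ofra <- data.frame(
  yield       = runif(n, 0.5, 8),                                    # t/ha
  croptype    = factor(sample(c("maize", "cowpea", "sorghum"), n, replace = TRUE)),
  variety     = factor(sample(c("H625", "Glp 2", "local"), n, replace = TRUE)),
  application = factor(sample(c("2 splits", "basal"), n, replace = TRUE)),
  ext_Mn      = runif(n, 10, 400),                                   # nutrient maps sampled at trial points
  ext_Zn      = runif(n, 0.2, 10),
  prec06      = runif(n, 0, 250),                                    # climatic covariates
  temp        = runif(n, 15, 30)
)

m.yield <- ranger(yield ~ ., data = ofra, num.trees = 500, importance = "impurity")
m.yield$prediction.error                                 # out-of-bag mean squared error
sort(m.yield$variable.importance, decreasing = TRUE)     # variable importance as in Fig. 11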
Here we primarily concentrate on testing whether soil nutrients are an important factor controlling crop yield. Note also that fitting one model for all crop types is statistically elegant (one multivariate model explains all crop yields): one can then explore all interactions, e.g. between crop types, varieties and treatments, and produce predictions for all their combinations, which would have been very difficult, if not impossible, if models were fitted per crop type.
The results of the principal component analysis (Fig. 4) show that there are two positively correlated groups of nutrients (K, Mg, Na and Ca; and org. C, N and total P). Negatively correlated nutrients are: Na, Mg and Ca vs Fe, Zn, Cu, Mn and S (i.e. high iron content commonly results in low Na, Mg and Ca content), and B vs org. C, N and total P. Because most nutrients were inter-correlated, >75% of the variation in values can be explained with the first five components: PC1 (48.8%), PC2 (19.4%), PC3 (6.7%), PC4 (5.2%) and PC5 (3.8%).
Principal component analysis plots generated using sampled data: (left) biplot using first two components, (right) biplot using the third and fourth component. Prior to PCA, original values were transformed to compositions using the compositions package. P is the extractable phosphorus, and P.T is the total phosphorus
Model fitting results
The model fitting and cross-validation results are shown in Table 1 and the most important covariates per nutrient are shown in Table 2. For most nutrients, successful models can be fitted using the current set of covariates, with R-square at cross-validation points ranging from 0.40 to 0.85. For extractable S and P, the models are significantly weaker and hence the maps produced using them are associated with wide prediction uncertainty (and are probably not yet ready for operational mapping).
List of target soil macro- and micro-nutrients of interest and summary results of model fitting and cross-validation. Rows: org. N (total organic N, extractable by wet oxidation), tot. P, and the nutrients extractable by Mehlich 3: ext. K, ext. Ca, ext. Mg, ext. Na, ext. S, ext. Al, ext. P, ext. B, ext. Cu, ext. Fe, ext. Mn and ext. Zn. Columns: N, R-square and RMSE, where all values are expressed in ppm, N = "Number of samples used for training", R-square = "Coefficient of determination" (amount of variation explained by the model based on cross-validation) and RMSE = "Root Mean Square Error". Underlined cells indicate poorer models (or too small sample sizes)
The model fitting results show that the most important predictors of soil nutrients are usually soil pH, climatic variables (precipitation and temperature), MODIS EVI signatures and water vapor images. The order of importance varies from nutrient to nutrient: soil pH is clearly the most important covariate for Na, K, Ca, Mg and Al; it is somewhat less important for N. Precipitation (especially for the months November, December and January) distinctly comes out as the most important covariate overall. Considering that soil pH, at the global scale, is largely correlated with precipitation (Hengl et al. 2017), this further indicates that precipitation is, overall, the most important covariate.
Top ten most important covariates per nutrient, as reported by the ranger package (each of the following rows lists the ten most important covariates for one nutrient):
Depth, LSTD November, TWI (DEM), LSTD October, precipitation November, soil pH, DEM, water vapor January-February, precipitation December, mean annual temperature
Precipitation July, density of mineral exploration sites (Al), precipitation August, September, lithology, precipitation February, LSTD August, mean annual precipitation, water vapor January-February, precipitation June
Soil pH, water vapour July-August, DEM, precipitation January, std. EVI April, precipitation February, water vapor January-February, depth, cloud fraction February, water vapor November-December
Soil pH, water vapour January-February, water vapour November-December, cloud fraction March, DEM, mean EVI May-June, Landsat NIR, std. LSTD November, mean EVI July-August, Landsat SWIR1
Soil pH, water vapor January-February, Landsat NIR, Landsat SWIR1, cloud fraction February, Landsat SWIR2, water vapor November-December, LSTD March, water vapor March-April, Landsat SWIR1
Soil pH, depth, cloud fraction seasonality, cloud fraction March, LSTN December, mean EVI January-February, slope (DEM), std. LSTN April, mean EVI May-June, LSTD July
Lithology, Landsat SWIR2, cloud fraction December, precipitation October, May, TWI (DEM), precipitation November, std. EVI July-August, LSTD November
Soil pH, LSTD November, precipitation November, TWI, LSTD December, cloud fraction November, DEM, cloud fraction December, precipitation total, precipitation February
Valley depth (DEM), precipitation July, Deviation from mean (DEM), precipitation November, DEM, std. EVI May-June, precipitation January, positive openness (DEM), mean EVI July-August, mean EVI May-June
Precipitation August, January, depth, precipitation November, soil pH, DEM, std. EVI July-August, precipitation September, positive openness (DEM), precipitation December
Water vapor May-June, precipitation December, water vapor November-December, July-August, September-October, depth, water vapor January-February, precipitation July, cloud fraction November, precipitation August
Water vapor January-February, density of mineral exploration sites (Phosphates), water vapor September-October, July-August, cloud fraction seasonality, water vapor May-June, March-April, depth, DEM, cloud fraction mean annual
Depth, precipitation November, April, cloud fraction January, land cover, DEM, precipitation February, January, water vapor January-February, precipitation December
Precipitation January, December, mean EVI May-June, precipitation March, std. EVI March-April, precipitation February, November, April, TWI
Explanation of codes: depth = depth from soil surface, LSTD = MODIS mean monthly Land Surface Temperature day-time, LSTN = MODIS mean monthly Land Surface Temperature night-time, EVI = MODIS Enhanced Vegetation Index, TWI = topographic wetness index, DEM = Digital Elevation Model, NIR = Landsat Near Infrared band, SWIR = Landsat Shortwave Infrared band. Underlined covariates indicate distinct importance
The fact that Landsat bands also come out as important covariates for a number of nutrients (Na, Ca, Mg, S) is a promising discovery for those requiring higher-resolution maps (Landsat bands are available at resolutions of 30–60 m). Nevertheless, for the majority of nutrients, the most important covariates are the various climatic images, especially the precipitation images. Although climatic images are only available at a coarse resolution of 1 km or coarser, it seems that climate is the key factor controlling the formation and evolution of nutrients in soil.
Model fitting results also show that apart from org. C and N, and ext. Mn, Fe, B and P, the majority of nutrient values do not change significantly with depth. For the majority of soil macro- and micro-nutrients, it is probably enough to sample nutrients at a single depth. For C, N, P, Mn, Fe and B, depth is relatively high on the list of important covariates and hence should not be ignored.
Spatial predictions
Final spatial predictions for nutrients with significant models are shown in Figs. 5 and 6. The spatial patterns produced match our expert knowledge and previously mapped soil classes in general, which is true especially for Fe, org. C and N, and Ca and Na. Our predictions also indicate that the highest deficiencies for B and Cu are in sub-humid zones, which corresponds to the results of Kang and Osiname (1985). As several of the micro-nutrients have been mapped for the first time for the whole of Sub-Saharan Africa, many of the produced spatial patterns will need to be validated both locally and regionally.
Some artifacts, in the form of sharp gradients that could not occur naturally, are visible in the output maps, primarily due to the very coarse resolution of the geological layer used for model building. Unfortunately, the lithological map (Persits et al. 2002) and the map of groundwater resources (MacDonald et al. 2012) were available only at a relatively general scale, corresponding to spatial resolutions of 10 km or coarser; consequently, artifacts due to the resolution mismatch and manually drawn geomorphological boundaries are also visible in the output predictions.
Predicted soil macro-nutrient concentrations (0–30 cm) for Sub-Saharan Africa. All values are expressed in ppm
Predicted soil micro-nutrient concentrations (0–30 cm) and extractable Al for Sub-Saharan Africa. All values are expressed in ppm
Examples of nutrient deficiency maps based on our results: zoom in on the town of Bukavu at the border between the eastern Democratic Republic of the Congo (DRC) and Rwanda. Points indicate samples used for model training. The threshold levels are based on Roy et al. (2006, p. 78), ranging from very low (<50% expected yield) to medium (80–100% yield) to very high (100% yield). All values are in ppm. Background data source: OpenStreetMap
Examples of locally defined nutrient deficiency maps based on our results: Eastern Africa. The adopted threshold levels are based on Roy et al. (2006, p.78) ranging from very low (<50% expected yield) to medium (80–100% yield) to very high (100% yield). All values are in ppm's. Background data source: OpenStreetMap
Prediction of all soil nutrients at four depths took approximately 40 h on ISRIC's dedicated server with 256 GiB RAM and 48 cores (the whole of Sub-Saharan Africa is about 7500 by 7000 km, i.e. covers about 23.6 million square kilometers). Fitting of models on the dedicated server running R is efficient, and models can be generated within one hour even though there were, on average, >50,000 measurements per nutrient. With some minor additional investments in computing infrastructure, spatial predictions could in the future be updated within 24 hours (assuming all covariates are ready and harmonization of the nutrient data is already implemented).
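Prediction over such a large grid is typically organized by splitting the area into tiles that are processed in parallel; a schematic R sketch is given below, in which the toy model, the list of per-tile covariate tables and the number of cores are all hypothetical (reading and writing the GeoTIFF tiles is omitted).

library(parallel)
library(ranger)

set.seed(1)
d <- data.frame(y = rnorm(200), ph = runif(200, 4, 8), prec = runif(200, 0, 300))
m.rf <- ranger(y ~ ., data = d)                                 # toy stand-in for a fitted nutrient model
tiles <- replicate(4, d[, c("ph", "prec")], simplify = FALSE)   # covariates per tile

predict_tile <- function(i, tiles, model) {
  predict(model, data = tiles[[i]])$predictions                 # writing the tile to GeoTIFF is omitted
}
out <- mclapply(seq_along(tiles), predict_tile, tiles = tiles, model = m.rf,
                mc.cores = 2)                                   # use mc.cores = 1 on Windows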
Figures 7 and 8 show the level of spatial detail of the output maps and demonstrate how these maps could be used for delineation of areas potentially deficient in key soil nutrients, i.e. a somewhat more useful/interpretable form of summary information for agronomists and ecologists. In this case, determination of deficient and suitable nutrient content was based on the soil fertility classes of Roy et al. (2006), ranging from very low (<50% expected yield) to medium (80–100% yield) to very high (100% yield) and assuming a soil of medium CEC. Crop-specific threshold levels can be set by users to quickly map areas of nutrient deficiency/high potential fertility and to spatially target suitable agronomic interventions. Similarly, threshold values beyond which a crop does not respond to fertilizer nutrient application can be diversified and mapped at the regional scale based on the spatial diversity of measured or calculated attainable yield levels.
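In practice, such deficiency maps reduce to a reclassification of the predicted concentration rasters; a minimal R sketch is given below, where both the input raster and the class limits are illustrative placeholders rather than the Roy et al. (2006) thresholds.

library(raster)

set.seed(1)
ext_K <- raster(nrows = 100, ncols = 100, vals = runif(1e4, 0, 400))  # hypothetical ext. K map (ppm)

rcl <- matrix(c(  0,  50, 1,    # 1 = very low
                 50, 120, 2,    # 2 = low
                120, 200, 3,    # 3 = medium
                200, Inf, 4),   # 4 = adequate to high
              ncol = 3, byrow = TRUE)
K_class <- reclassify(ext_K, rcl)
plot(K_class)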
Accuracy assessment results
The cross-validation results are reported in Table 1 and in Fig. 9. For org. C and N, and extractable K, Ca, Na, Mg, Fe, Mn, Cu and Al, the cross-validation R-square was above 50%, which is often considered a solid result in soil mapping projects (Hengl et al. 2015). For S and ext. P we could not fit highly significant models (R-square <30%). It could be very difficult, if not impossible, to make significant maps of extractable soil P and S with the existing data, hence these maps should be used with caution. However, maps of other nutrients, and also of properties such as pH and CEC, could be useful for indicating the potential for low contents of these elements, for example the association of high P fixation and low extractable P with high Fe and Mn, and of low sulfur with soils low in C.
Accuracy assessment plots for all nutrients. Predictions derived using 5–fold cross-validation. All values expressed in ppm and displayed on a log-scale
Spatial distribution of soil nutrient clusters
The results of the cluster analysis show that the optimal number of clusters, based on the Bayesian Information Criterion for expectation-maximization, initialized by hierarchical clustering for parameterized Gaussian mixture models as implemented in the mclust package function Mclust (Fraley et al. 2012), can be set at 20. It appears, however, that the optimal number of clusters cannot be determined unambiguously, as the majority of points were not split into distinct clouds; somewhat smaller or larger numbers than 20 could probably have been derived with other, similar cluster analysis packages.
A random forest model fitted using the 20 clusters shows significance, with an average out-of-bag classification accuracy, as reported by the ranger package, of 65%. Class centers and corresponding interpretations are shown in Table 3, while the spatial distribution of the clusters of soil nutrients is shown in Fig. 10. Note that, although it might seem difficult to assign meaningful names to clusters, it is clear that, for example, cluster c1 can be associated with high organic C and N, and cluster c11 with high K content. The cluster analysis shows that especially classes 8, 12 and 13 have systematic deficiencies in most micro-nutrients, while classes 2 and 7 show specific nutrient deficiencies in K and Mg.
Class centers for the 20 clusters determined using unsupervised fuzzy k-means clustering, with one column per nutrient (org. C, P tot., etc.). Underlined numbers indicate the highest values per nutrient; italic indicates the top two lowest concentrations per class
Predicted spatial distribution of the determined clusters (20) (above), and the corresponding map of scaled Shannon Entropy Index (below). High values in scaled Shannon Entropy Index indicate higher prediction uncertainty. Cluster centers are given in Table 3
The map in Fig. 10 confirms that the produced clusters, in general, match combinations of climate and lithology. A map of the scaled Shannon Entropy Index (SSEI) for the produced clusters is also shown in Fig. 10 (right). The differences in uncertainty between different parts of Sub-Saharan Africa are high. Especially large parts of Namibia, the Democratic Republic of the Congo, Botswana, Somalia and Kenya have a relatively high SSEI, hence higher uncertainty. In general, the SSEI map closely corresponds to extrapolation effects, i.e. uncertainty primarily reflects the density of points: as we get further away from the main sampling locations, the SSEI grows to >80% (high uncertainty). In that sense, further soil sampling campaigns, especially in areas where the SSEI is >80%, could help decrease the uncertainty of mapping soil nutrients in Sub-Saharan Africa. The SSEI map is provided via the data download section.
Correlation with crop yield data
The results of modeling the relationship between crop yield and the nutrient and climatic maps (Eq. 5) show that a potentially accurate model can be fitted using random forest: this model explains 78% of the variation in the crop yield values with an Out-Of-Bag (OOB) RMSE of \(\pm 2.4\,\hbox {t ha}^{-1}\). The variable importance plot (Fig. 11) further shows that the most influential predictors of crop yield are: crop type, a selection of nutrients and micro-nutrients (Mn, Zn, Al, B, Na) and, from the climatic data, primarily monthly rainfall for June, October, September, May and July. This indicates that producing maps of soil nutrients is indeed valuable for modeling agricultural productivity.
Note, however, that although some micro-nutrients such as Mn and Zn come highest on the variable importance list, this does not necessarily make them the most important nutrients in Africa. Figure 11 only indicates that these nutrients matter the most for the crop yield estimated at the OFRA points. Also note that, because the soil nutrients are heavily cross-correlated (Fig. 4), the relatively high importance of Mn and Zn could also indicate a high importance of P, B, Fe and/or S.
Variable importance plot for prediction of the crop yield using the model from Eq. (5). Training points include 7954 legacy rows for 606 trials
Examples of predicted potential crop yield for the land mask of SSA (excluding: forests, semi-deserts and deserts, tropical jungles and wetlands). Circles indicate the OFRA field trials database points used to train the model
Again, although the potential yield modeling results are promising, and although maps of potential crop yield can even be generated using this model (Fig. 12), we need to emphasize that these results should be taken with reserve, especially considering the following limitations:
The distribution of the OFRA field trials is clustered and limited to 606 actual trials/locations (Fig. 12), hence probably not representative of the whole of SSA.
Most of the field trials are legacy trials (often over 20 years old), hence correlating them with the current soil conditions probably increases uncertainty in the models.
The model ignores weather conditions for specific years (instead, long-term estimates of rainfall and temperature are used). Matching the exact weather conditions per year would probably be more appropriate.
If all OFRA training data were temporally referenced (day, or at least month, of the year known), we could perhaps have produced even more accurate maps of potential crop yield, e.g. by using a model of the form:
$$\begin{aligned} \mathrm {cropyield} (t) [\mathrm {t/ha}] = f \left\{\begin{array}{l} \mathrm {croptype} (t), \mathrm {variety} (t), \mathrm {application} (t), \\ \mathrm {nutrients} (t), \mathrm {weather} (t), \mathrm {av.water} (t) \end{array}\right\} \end{aligned}$$
so that very dry and very wet months and their impact during the growing season could also be incorporated into the model. Unfortunately, the temporal reference (begin/end date of application) in the OFRA trials is often missing. Also, weather maps for specific months for the African continent are only available at relatively coarse resolutions (e.g. 10 km or coarser) and are often not available at all for periods before the year 2000.
In the following sections we address some open issues and suggest approaches to overcome them. This is mainly to emphasize the limitations of this work and to outline future research directions.
Harmonization problems
One of the biggest problems of mapping soil nutrients over large areas is the diversity of laboratory and field measurement methods. There is large complexity in the methods and approaches used to measure soil nutrients (Barber 1995). At the farm scale this might not pose too serious a problem, but for pan-continental data modeling efforts it is certainly something that cannot be ignored. In principle, many extractable soil nutrient determination methods are highly correlated and harmonization of values is typically not a problem. For example, phosphorus can be determined using the Bray-1, Olsen and Mehlich-3 methods, which are all highly correlated depending on the pH range considered (Bray-1 and Mehlich-3 could in fact be considered equivalent). Conversion from one method to another, however, also depends on soil conditions, such as soil pH and soil type (Roy et al. 2006), and requires data which are not readily available.
Some nutrient measurements might come from the X-ray fluorescence method (XRF), especially where plant-available nutrient levels relate to total element concentrations (Towett et al. 2015). In this project we did not invest in harmonization of measurement methods, as this was well beyond the project budget. It is, for example, well known that extractable P, K and micro-nutrients are not predicted well from MIR, hence there are still many limitations to using nutrient concentrations derived from soil spectroscopy. Improving harmonization, improving the geolocation accuracy of samples and standardizing sufficiently large measurement support sizes (some samples were taken at fixed depths restricted to the topsoil, others were taken per soil horizon over the soil depth) could possibly help improve the accuracy of predictions.
Computational challenges
Machine learning methods have already proven effective in representing complex relationships with large stacks of variables (Strobl et al. 2009; Biau 2012). However, MLAs can demand excessive computing time. Even though possibly more accurate or more generic algorithms than ranger and xgboost exist, they might require computing time beyond the feasibility of this project. For example, we also tested the bartMachine (Bayesian additive regression trees) (Kapelner and Bleich 2013) and cubist (Kuhn et al. 2013) packages for generating spatial predictions, but due to excessive computing times (even with full parallelization) we had no choice but to limit prediction modeling to ranger and xgboost. Computing time becomes a limiting factor especially when the number of training points is \(\gg\)10,000 and the number of prediction locations goes beyond a few million. In our case, the whole of Sub-Saharan Africa at 250 m is an image of ca. 29,000 by 28,000 pixels, of which about 382 million pixels represent the land mask.
Critically poor predictions for P and S
Although the preliminary results presented in this paper are promising and many significant correlations have been detected, for nutrients such as ext. P, S and B we obtained relatively low accuracies. It could very well be that these nutrients will be very difficult to map with significant accuracy using this mapping framework. To address these shortcomings, in the near future one could test developing spatial predictions at higher spatial detail, e.g. at 100 m resolution, and/or test developing spatiotemporal models for mapping the space-time dynamics of soil nutrients over Africa. Drechsel et al. (2001), for example, recognized that much of the soil of Sub-Saharan Africa is in fact constantly degrading, hence spatiotemporal modeling of nutrients could probably lead to higher accuracy in many areas. In addition, all soil tests need calibration with crop response trials for different soil types and climates, and future efforts may be better directed at more accurate calibration of crop responses to soil test data.
Since this study focused on predictions of soil nutrients using soil samples from a long period (1980–2016), we cannot tell from the current data what the rate of soil nutrient depletion is, nor where it is most serious. As nutrient contents can also be quite dynamic and controlled by the land use system (especially for nitrogen and organic carbon, and potentially phosphorus depending on fertilization history), spatiotemporal models which take into account changes in land use could help increase mapping accuracy. Although we already have preliminary experience with developing spatiotemporal models for soil data, there are still many methodological challenges that need to be addressed, especially considering the poor representation of time within the given sampling plans.
Missing covariates
The accuracy of spatial predictions of nutrients could also be improved by investing in new and/or more detailed covariates. Unfortunately, no better parent material (i.e. surface lithology) map was available to us than the most general map of surface geology provided by the USGS (Persits et al. 2002). Kang and Osiname (1985) suggest that micro-nutrient deficiencies are especially connected with the type of parent material, hence the lack of a detailed parent material/lithology map of Africa is clearly a problem. Using gamma-radiometric images in the future would likely help increase the accuracy of nutrient maps (especially for P and K). In Australia, for example, a national agency has collected and publicly released gamma-radiometric imagery for the whole of the continent (Minty et al. 2009); similar imagery is also available for the whole of the conterminous USA (Duval et al. 2005). Although it is not realistic to expect that the African continent will soon have an equivalent, gamma-radiometric imagery could contribute substantially to regional soil nutrient mapping due to its ability to differentiate topsoil mineralogy. Recent initiatives, such as the World Bank's Sustainable Energy, Oil, Gas and Mining Unit (SEGOM) programme "The Billion Dollar Map" (Ovadia 2015), could help with bridging these gaps.
Another opportunity for increasing the accuracy of the nutrient maps is to utilize Landsat 8 and Sentinel-2 near- and mid-infrared imagery to derive proxies of surface mineralogy. Several research groups are now working on integrating airborne/satellite sensing with ground-based soil sensing into a single framework (see e.g. the work of Stevens et al. (2008) and Ben-Dor et al. (2009)). The newly launched SHALOM hyperspectral space mission (Feingersh and Dor 2016) could be another source of possibly crucial remote sensing data for nutrient mapping and monitoring.
Other soil nutrient databases of interest
The accuracy and value of the produced predictions could be improved if more sampling points were added to the training dataset, especially those funded and/or collected by national government agencies and NGOs. Relevant additional soil data sets (not currently available for spatial prediction, or with unknown use restrictions) include: the AfSIS data recently generated in collaboration with Ethiopia, Tanzania, Ghana and Nigeria; additional data generated in collaborative projects by the ICRAF (World Agroforestry Centre) and CIAT (International Center for Tropical Agriculture) institutes; private sector funded data (e.g. MARS in Ivory Coast and others in Ivory Coast and Nigeria); data from the USAID-funded IFDC projects (https://ifdc.org/) in West Africa; CASCAPE project (http://www.cascape.info/) data sets; N2Africa project samples (http://www.n2africa.org/); and data generated by various national initiatives.
As more soil samples are gradually added, especially in the (extrapolation) areas with the highest spatial prediction error, it is reasonable to expect that the models and derived maps will also gradually become better: if not more accurate, then at least more representative of the main lithologic, climatic and land cover conditions in SSA.
Usability of produced maps
There is a critical need for agricultural and ecological data in Africa, where an expected 3.5-fold population increase this century (Gerland et al. 2014) will place immense demand on the soil nutrients that form the basis of food production. Researchers and policy makers have repeatedly called for data and monitoring systems to track the state of the world's agriculture (Sachs et al. 2010). In response to this need, this soil nutrient data set provides both a useful tool for researchers interested in the role that soil nutrients play in ecological, agricultural and social outcomes in Sub-Saharan Africa and a general estimate of soil nutrient stocks at a time when the continent is facing significant climate and land-use change.
As the resolution of the maps is relatively detailed, it is possible to spatially identify regional areas (Figs. 7, 8) that are 'naturally': (1) deficient, (2) adequate or (3) in excess relative to specific land-use requirements, and to pair these with the nutrient-specific agronomic interventions required to achieve critical crop thresholds. Such usage could help optimize the use of the soil resource and possibly (major) agronomic interventions across African countries (Vanlauwe et al. 2014). These agronomic interventions could consist of: targeting degraded areas that are suitable for restoration projects, targeting areas for agricultural intensification and investment by modeling crop suitability and yield gaps at the regional scale (Nijbroek and Andelman 2016), and/or assessing the nutrient gaps to predict fertilizer nutrient use efficiency.
Although we have only estimated long-term nutrient contents using relatively scarce data, the maps produced could be used to derive various higher-level data products, such as nutrient mass balance maps, when combined with soil bulk density data, Soil Fertility Index maps (Schaetzl et al. 2012) and/or nutrient gap (deficiency) maps. Such maps can be beneficial for non-specialist audiences who are nevertheless interested in spatial distributions of soil nutrients. The maps from this research could also be used as prior estimates that could be updated with more intensive local level sampling.
In addition to deriving higher-level products from this data set, combining these soil nutrient data with other continent-wide data sets will also yield insights. For example, data sets on weather (multiple years), farm management and root depth soil water (Leenaars et al. 2015) combined with data sets on crop distribution and yield, both actual and potential, will lead to insights about edaphic and agronomic drivers of yields gaps and associated nutrient gaps, or help policy makers target areas likely to undergo future nutrient depletion through crop removal and prevent areas that would otherwise fall below some critical nutrient level in the near to medium future. Other socio-economic data sets, such as health or income surveys, could be paired with these data to demonstrate how soil nutrient depletion can affect livelihoods and health outcomes, as well as to model the effects of predicted soil nutrient changes. Finally, this dataset could be combined with ecological data, such as biophysical inventories or NDVI data sets to refine our understanding of the role soil nutrients play in the heterogeneous and seemingly stochastically shifting plant community regimes of the semi-arid tropics, the underlying dynamics of which are still poorly understood (Murphy and Bowman 2012).
As we have already noted, probably the most serious limitation of this project was the high spatial clustering of points, i.e. under-sampling in countries with security issues or poor road infrastructure (tropical jungles, wetlands and similar). Fitting models with (only) 60 sites could result in many parts of Africa containing only extrapolated areas, as topsoil data are predominantly collected/available for Eastern Africa (Ethiopia, Kenya, Uganda, Rwanda, Burundi, Tanzania), with large areas of relatively fertile soils developed from materials of volcanic origin located at relatively high altitude. More sampling points are certainly needed to improve the spatial prediction models (and also to make the cross-validation more reliable), especially in West African soils developed on basement complexes (granites, gneisses, schists) and deposits, which are generally much lower in soil nutrient contents. Because of the high spatial clustering of points and the consequent extrapolation problems, the maps presented in this work should be used with caution. In that context, for the purpose of pan-African mapping it would be important to further optimize the spreading of sampling locations, especially to increase the representation of the geological and particularly the pedological feature space. This would increase sampling costs, but it might be the most efficient way to improve the accuracy and usability of maps for the whole continent.
The subsoil could also be somewhat better represented. As the majority (>90%) of measurements refer to the topsoil, we unfortunately cannot tell whether the observed soil–depth relationships are also valid for the subsoil, i.e. beyond 50 cm of depth and including the soil C horizon in the weathering substrate. Collecting soil nutrient measurements for depths beyond 50 cm could therefore lead to interesting discoveries, especially when it comes to mapping organic carbon and nitrogen, soil alkalinity and similar.
Spatial predictions of the main macro- and micro-nutrients have been produced for the soils of Sub-Saharan Africa using an international compilation of soil samples from various projects. Our focus was mainly on producing spatial predictions of extractable concentrations of soil nutrients (thus relative nutrient content estimates based on Mehlich-3 and compatible methods). For phosphorus we also produced maps of the total P content, and for carbon and nitrogen we produced maps of the organic component of the two elements.
The results of cross-validation showed that, apart from S, P and B, which seemed to be more difficult to model spatially using the given framework, significant models can be produced for most targeted nutrients (R-square between 40 and 85%; Table 1). The produced maps of soil macro- and micro-nutrients (Figs. 5, 6) could potentially be used for delineating areas of nutrient deficiency/sufficiency relative to nutrient requirements and as an input to crop modeling. The results of the cluster analysis indicate that the whole of SSA can be represented with ca. 20 classes (Fig. 10), which could potentially serve as (objectively delineated) nutrient management zones.
The final predictions represent a long-term (average) status of soil nutrients for the period 1960–2016. The training data set could have been subset to the more recently collected soil samples (2008–2016) to try to produce baseline estimates of soil nutrients for e.g. 2010. We decided to use all available nutrient data instead, mainly to avoid large sampling gaps, but also because our covariates cover longer time spans.
A limiting factor for mapping nutrients using the existing point data in Africa is the high spatial clustering of sampling locations, with many countries and land cover/land use groups completely unrepresented (based on the Shannon Entropy Index map in Fig. 10). Logical steps towards improving prediction accuracies include: further collection of input (training) point samples, especially in areas that are under-represented or where the models perform poorly; harmonization of observations; addition of more detailed covariates specific to Africa; and implementation of full spatio-temporal statistical modeling frameworks (i.e. matching nutrient concentrations, crop yields and weather conditions more exactly in the time domain).
Overlaying the soil nutrient data with the crop yield trial data shows that soil nutrients are indeed important for agricultural development, with Mn, Zn, Al, B and Na in particular being listed among the most important variables for prediction of crop yield (Fig. 11). If both nutrient maps and climatic images of an area are available, crop yields can be predicted with an average error of \(\pm 2.4\,\hbox {t ha}^{-1}\). If a more up-to-date field trial database were available, the model from Eq. (5) could be used to produce more current maps of potential yield (as compared to Fig. 12). Because the model from Eq. (5) can be used to produce an almost infinite number of combinations of predictions, it would also be interesting to serve the model as a web service, so that users can inspect potential yields on demand (for an arbitrarily chosen combination of crop type, variety and application).
The gridded maps produced in this work are available under the Open Database License and can be downloaded from http://data.isric.org. These maps will be gradually incorporated into web services for soil nutrient data, so that users in the field can also access the data in real time (i.e. through mobile phone apps and cloud services).
http://data.worldbank.org/indicator/AG.LND.ARBL.ZS?locations=ZG.
https://github.com/ISRICWorldSoil/AfricaSoilNutrients.
This study has been conducted primarily upon request of the Netherlands Environmental Assessment Agency (PBL). Acknowledgments are due to the various projects and organizations that made soil data collected from various countries available for this study, including projects partially or completely funded by the Bill and Melinda Gates Foundation (BMGF), such as the AfSIS (Africa Soil Information Service) project, which was co-funded by the Alliance for a Green Revolution in Africa (AGRA); the Vital Signs project, with interventions in Ghana, Rwanda, Tanzania and Uganda; and the EthioSIS project, funded primarily by the Ethiopian government and co-funded by the Bill and Melinda Gates Foundation and the Netherlands government through the CASCAPE project. Also co-funded by the Netherlands government are projects of the International Fertilizer Development Center (IFDC) in collaboration with the governments of Burundi, Rwanda and Uganda. The One Acre Fund made the collection of soil samples possible in Rwanda and Kenya, and the University of California, Davis, in Kenya. We are grateful to these organizations for providing soil sample data and for commenting on the first drafts of the manuscript. ISRIC—World Soil Information is a non-profit foundation primarily funded by the Dutch government. The authors are thankful to the two anonymous reviewers for thoroughly reading the manuscript and helping to improve various sections of the results and discussion.
Aiello S, Kraljevic T, Maj P, with contributions from the H2O.ai team (2016) h2o: R interface for H2O. https://CRAN.R-project.org/package=h2o, R package version 3.8.1.3
Alloway BJ (2008) Micronutrients and crop production: an introduction. In: Micronutrient deficiencies in global crop production. Springer, pp 1–39
Barber SA (1995) Soil nutrient bioavailability: a mechanistic approach. Wiley, Hoboken
Ben-Dor E, Chabrillat S, Demattê J, Taylor G, Hill J, Whiting M, Sommer S (2009) Using imaging spectroscopy to study soil properties. Remote Sens Environ 113:S38–S55
Biau G (2012) Analysis of a random forests model. J Mach Learn Res 13:1063–1095
Borda M (2011) Fundamentals in information theory and coding. Springer, Berlin
Chen T, Guestrin C (2016) XGBoost: a scalable tree boosting system. ArXiv e-prints arXiv:1603.02754
Conrad O, Bechtel B, Bock M, Dietrich H, Fischer E, Gerlitz L, Wehberg J, Wichmann V, Böhner J (2015) System for Automated Geoscientific Analyses (SAGA) v. 2.1.4. Geosci Model Dev 8(7):1991–2007. doi:10.5194/gmd-8-1991-2015
Danielson J, Gesch D (2011) Global multi-resolution terrain elevation data 2010 (GMTED2010). Open-File Report 2011-1073, U.S. Geological Survey
Drechsel P, Kunze D, de Vries FP (2001) Soil nutrient depletion and population growth in Sub-Saharan Africa: a Malthusian nexus? Popul Environ 22(4):411–423. doi:10.1023/A:1006701806772
Duval JS, Carson JM, Holman PB, Darnley AG (2005) Terrestrial radioactivity and gamma-ray exposure in the United States and Canada. Open-File Report 2005-1413, U.S. Geological Survey
Eckert D, Watson M (1996) Integrating the Mehlich-3 extractant into existing soil test interpretation schemes. Commun Soil Sci Plant Anal 27(5–8):1237–1249
Fan Y, Li H, Miguez-Macho G (2013) Global patterns of groundwater table depth. Science 339(6122):940–943
Feingersh T, Dor EB (2016) SHALOM - a commercial hyperspectral space mission. In: Qian S (ed) Optical payloads for space missions. Wiley, Hoboken
Fraley C, Raftery AE, Murphy TB, Scrucca L (2012) Package mclust: normal mixture modeling for model-based clustering, classification, and density estimation. Technical Report No. 597, Department of Statistics, University of Washington
Gerland P, Raftery AE, Ševčíková H, Li N, Gu D, Spoorenberg T, Alkema L, Fosdick BK, Chunn J, Lalic N et al (2014) World population stabilization unlikely this century. Science 346(6206):234–237
Giller KE, Witter E, Corbeels M, Tittonell P (2009) Conservation agriculture and smallholder farming in Africa: the heretics' view. Field Crops Res 114(1):23–34
Giri C, Ochieng E, Tieszen LL, Zhu Z, Singh A, Loveland T, Masek J, Duke N (2011) Status and distribution of mangrove forests of the world using earth observation satellite data. Glob Ecol Biogeogr 20(1):154–159
Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, Thau D, Stehman SV, Goetz SJ, Loveland TR et al (2013) High-resolution global maps of 21st-century forest cover change. Science 342(6160):850–853
Hengl T, Heuvelink G, Rossiter DG (2007) About regression-kriging: from equations to case studies. Comput Geosci 33(10):1301–1315
Hengl T, Heuvelink GB, Kempen B, Leenaars JG, Walsh MG, Shepherd KD, Sila A, MacMillan RA, Mendes de Jesus J, Tamene L, Tondoh JE (2015) Mapping soil properties of Africa at 250 m resolution: random forests significantly improve current predictions. PLoS ONE 10:e0125814. doi:10.1371/journal.pone.0125814
Hengl T, Mendes de Jesus J, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, Blagotić A, Shangguan W, Wright MN, Geng X, Bauer-Marschallinger B, Guevara MA, Vargas R, MacMillan RA, Batjes NH, Leenaars JGB, Ribeiro E, Wheeler I, Mantel S, Kempen B (2017) SoilGrids250m: global gridded soil information based on machine learning. PLoS ONE 12(2):1–40. doi:10.1371/journal.pone.0169748
Jayne TS, Mather D, Mghenyi E (2010) Principal challenges confronting smallholder agriculture in sub-Saharan Africa. World Dev 38(10):1384–1398
Kaizzi KC, Mohammed MB, Nouri M (2017) Fertilizer use optimization: principles and approach. Vol 17, CAB International, Nairobi, Kenya, pp 9–19
Kamau M, Shepherd K (2012) Mineralogy of Africa's soils as a predictor of soil fertility. In: RUFORUM third biennial conference, 24–28 September 2012, Entebbe, Uganda
Kang B, Osiname O (1985) Micronutrient problems in tropical Africa. Fertil Res 7(1):131–150. doi:10.1007/BF01048998
Kapelner A, Bleich J (2013) bartMachine: machine learning with Bayesian additive regression trees. arXiv preprint arXiv:1312.2171
Karger DN, Conrad O, Böhner J, Kawohl T, Kreft H, Soria-Auza RW, Zimmermann N, Linder HP, Kessler M (2016) Climatologies at high resolution for the earth's land surface areas. World Data Center for Climate
Kuhn M (2008) Building predictive models in R using the caret package. J Stat Softw 28(1):1–26. doi:10.18637/jss.v028.i05
Kuhn M, Weston S, Keefer C, Coulter N, Quinlan R (2013) Cubist: rule- and instance-based regression modeling. http://CRAN.R-project.org/package=Cubist, R package version 0.0.13
Lal R (1987) Managing the soils of sub-Saharan Africa. Science 236(4805):1069–1076
Leenaars J (2012) Africa Soil Profiles Database, version 1.0: a compilation of georeferenced and standardised legacy soil profile data for Sub-Saharan Africa (with dataset). Technical report, ISRIC — World Soil Information
Leenaars J, Hengl T, González MR, de Jesus JM, Heuvelink G, Wolf J, van Bussel L, Claessens L, Yang H, Cassman K (2015) Root zone plant-available water holding capacity of the Sub-Saharan Africa soil, version 1.0. Technical report, ISRIC — World Soil Information
MacDonald A, Bonsor H, Dochartaigh BÉÓ, Taylor R (2012) Quantitative maps of groundwater resources in Africa. Environ Res Lett 7(2):024009
McFaul E, Mason G, Ferguson W, Lipin B (2000) US Geological Survey mineral databases; MRDS and MAS/MILS. 52, US Geological Survey
Minty B, Franklin R, Milligan P, Richardson M, Wilford J (2009) The radiometric map of Australia. Explor Geophys 40(4):325–333
Mitchell T, GDAL Developers (2014) Geospatial power tools: GDAL raster & vector commands. Locate Press, Barcelona
Murphy BP, Bowman DM (2012) What controls the distribution of tropical forest and savanna? Ecol Lett 15(7):748–758
Nijbroek RP, Andelman SJ (2016) Regional suitability for agricultural intensification: a spatial analysis of the Southern Agricultural Growth Corridor of Tanzania. Int J Agric Sustain 14(2):231–247
Ovadia DC (2015) Improving access to Africa's geological information through the Billion Dollar Map project. Miner Econ 28(3):117–121. doi:10.1007/s13563-015-0074-z
Pekel JF, Cottam A, Gorelick N, Belward AS (2016) High-resolution mapping of global surface water and its long-term changes. Nature 540:418–422
Persits F, Ahlbrandt T, Tuttle M, Charpentier R, Brownfield M, Takahashi K (2002) Map showing oil and gas fields and geologic provinces of Africa. Open File Report 97-470A, U.S. Geological Survey, Denver, CO
Roy R, Finck A, Blair G, Tandon H (2006) Plant nutrition for food security: a guide for integrated nutrient management. Fertilizer and Plant Nutrition Bulletin, vol 16. FAO
Sachs J, Remans R, Smukler S, Winowiecki L, Andelman SJ, Cassman KG, Castle D, DeFries R, Denning G, Fanzo J et al (2010) Monitoring the world's agriculture. Nature 466(7306):558–560
Savtchenko A, Ouzounov D, Ahmad S, Acker J, Leptoukh G, Koziana J, Nickless D (2004) Terra and Aqua MODIS products available from NASA GES DAAC. Adv Space Res 34(4):710–714. doi:10.1016/j.asr.2004.03.012
Sayre R, Dangermond J, Frye C, Vaughan R, Aniello P, Breyer S, Cribbs D, Hopkins D, Nauman R, Derrenbacher W et al (2014) A new map of global ecological land units–an ecophysiographic stratification approach. USGS / Association of American Geographers, Washington, DCGoogle Scholar
Schaetzl RJ, Krist FJ Jr, Miller BA (2012) A taxonomically based ordinal estimate of soil productivity for landscape-scale analyses. Soil Sci 177(4):288–299CrossRefGoogle Scholar
Shannon CE (1949) Communication in the presence of noise. Proceed IRE 37(1):10–21CrossRefGoogle Scholar
Shepherd KD, Shepherd G, Walsh MG (2015) Land health surveillance and response: a framework for evidence-informed land management. Agric Syst 132:93–106. doi: 10.1016/j.agsy.2014.09.002 CrossRefGoogle Scholar
Sollich P, Krogh A (1996) Learning with ensembles: how over-fitting can be useful. In: Proceedings of the 1995 conference, vol 8, p 190Google Scholar
Stevens A, van Wesemael B, Bartholomeus H, Rosillon D, Tychon B, Ben-Dor E (2008) Laboratory, field and airborne spectroscopy for monitoring organic carbon content in agricultural soils. Geoderma 144(1):395–404CrossRefGoogle Scholar
Strobl C, Malley J, Tutz G (2009) An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychol Methods 14(4):323CrossRefPubMedPubMedCentralGoogle Scholar
Towett EK, Shepherd KD, Tondoh JE, Winowiecki LA, Lulseged T, Nyambura M, Sila A, Vågen TG, Cadisch G (2015) Total elemental composition of soils in sub-saharan africa and relationship with soil forming factors. Geoderma Reg 5:157–168CrossRefGoogle Scholar
Vågen T, Shepherd KD, Walsh MG, Winowiecki L, Desta LT, Tondoh JE (2010) AfSIS technical specifications: soil health surveillance. Africa Soil Information Service (AfSIS), World Agroforestry Centre, Nairobi, KenyaGoogle Scholar
Vågen TG, Winowiecki LA, Tondoh JE, Desta LT, Gumbricht T (2016) Mapping of soil properties and land degradation risk in Africa using MODIS reflectance. Geoderma 263:216–225CrossRefGoogle Scholar
van den Boogaart KG, Tolosana-Delgado R (2008) Compositions: a unified R package to analyze compositional data. Comput Geosci 34(4):320–338CrossRefGoogle Scholar
Vanlauwe B, Descheemaeker K, Giller K, Huising J, Merckx R, Nziguheba G, Wendt J, Zingore S (2014) Integrated soil fertility management in sub-Saharan Africa: unraveling local adaptation. Soil Discuss 1:1239–1286CrossRefGoogle Scholar
Walsh MG, Vågen T (2006) LDSF Guide to Field Sampling and Measurement Procedures. The Land Degradation Surveillance Framework, World Agroforestry Centre, Nairobi, KenyaGoogle Scholar
Wan Z (2006) MODIS land surface temperature products users' guide. ICESS, University of CaliforniaGoogle Scholar
White R (2009) Principles and Practice of soil science: the soil as a natural resource. Wiley, HobokenGoogle Scholar
Wild S (2016) Mapping Africa's soil microbiome. Nature 539:152CrossRefPubMedGoogle Scholar
Wilson AM, Jetz W (2016) Remotely sensed high-resolution global cloud dynamics for predicting ecosystem and biodiversity distributions. PLOS Biol 14(3):1–20. doi: 10.1371/journal.pbio.1002415 Google Scholar
Wright MN, Ziegler A (2016) ranger: A fast implementation of random forests for high dimensional data in C++ and R. J Stat Softw. doi: 10.18637/jss.v077.i01 Google Scholar
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Email authorView author's OrcID profile
1.ISRIC — World Soil Information / Wageningen UniversityWageningenThe Netherlands
2.World Agroforestry Centre (ICRAF)NairobiKenya
3.The Earth InstituteColumbia UniversityNew YorkUSA
4.Selian Agricultural Research InstArushaTanzania
5.Ethiopian Agricultural Transformation Agency (ATA)Addis AbabaEthiopia
6.PBL Netherlands Environmental Assessment AgencyThe HagueThe Netherlands
7.Conservation InternationalArlingtonUSA
8.Envirometrix IncWageningenThe Netherlands
9.CSIR-Soil Research InstitutePMB KwadasoKumasiGhana
Hengl, T., Leenaars, J.G.B., Shepherd, K.D. et al. Nutr Cycl Agroecosyst (2017) 109: 77. https://doi.org/10.1007/s10705-017-9870-x
Received 19 January 2017
Accepted 28 July 2017
DOI https://doi.org/10.1007/s10705-017-9870-x
Publisher Name Springer Netherlands | CommonCrawl |
Could a non-relativistic universe be logically consistent?
As I understand it, the principle of relativity was accepted from Galileo until Maxwell, whereupon the equations which predict a constant speed of light imply a preferred reference frame in Euclidean space, suddenly making relativity testable.
The Michelson–Morley experiment showed that light speed was constant in all reference frames, ergo Maxwell, relativity, or Euclidean space had to be wrong.
As it turns out, Euclidean space is what broke, and moving the equations of physics to Minkowski space allowed us to preserve constant light speed and the principle of relativity (although relativity implies some surprising things in Minkowski space, nevertheless it's merely a logical consequence of the geometry).
Suppose we changed something else instead?
Would physics still make sense if the Michelson–Morley experiment had shown that light speed was not constant in all reference frames?
i.e. would everything else still be logically consistent, perhaps with some tweaks, and hence the universe would have a sense of absolute space and time? What would be the consequences of this?
I guess another model universe could have no preferred reference frame but that would imply non-constant lightspeed. Is this a logically consistent hypothetical physics?
I imagine that would imply a change in the assumptions underlying Maxwell's equations, but my physics isn't good enough to follow this line of reasoning through. What would we have to adjust to make this work, or is it nonsense?
physics electricity relativity
spraff
Related (and possible duplicate)
– overactor
Are you asking whether we could have a consistent theory of electromagnetism and classical physics if we dropped the principle of relativity, and assumed Maxwell's equations are only correct in a preferred frame and have to be modified by a Galilei transformation in other frames? If so the answer is yes, this is just the luminiferous ether theory. But if you're asking if we could preserve relativity but have a non-constant light speed, I think the answer is no.
– Hypnosifl
I discovered I could write down valid kinematics in which this is true; but the headaches are real. Maxwell holds if you assume that it's valid in the source reference frame rather than the observation reference frame. The ugliness comes in elsewhere.
From what I have been led to believe, there was a non-relativistic universe model which we used after Maxwell. It had a large number of "correction" terms which resulted in varying speeds of light. The proving power of Minkowski space was not that it was "more correct," but that it baked all of those correction terms into the space. It was found that this was a more convenient way of thinking about the problem, so we ran with it. Our universe is not defined to be a Minkowski space, but rather we have found the laws to be simplest if we map our models into a Minkowski space. This is the same argument for saying "the Earth was never flat, but we found the laws of navigation to be simplest if we map our models onto a flat surface."
As for a world where there actually is a "preferred" frame of reference for light, it would likely have little impact on the world. Given that we could not measure the speed of light until Maxwell's era, and the fact that the universe doesn't even seem to be very dependent on this speed, I would not expect much to change.
As technology advances, however, this ether-like behavior might have a large effect. As we become more and more enamored with things that move at the speed of light, being able to measure things with precise timing would be an issue. Atomic clocks would have to synchronize to account for their rotation within the ether. There would be a preference for building structures in the ether frame whenever possible, making calculations easier. Eventually there could be a galactic building crunch, as everyone tries to align themselves with the ether.
A bit of a nitpick, but mapping from the surface of a sphere to a plane makes some things more complicated. Distances get distorted, and the shortest path between two points is usually not a straight line. But I suppose this could actually extend the analogy, since it works well enough on "everyday" scales, just like classical mechanics.
– KSmarts
"we could not measure the speed of light until Maxwell's era" Not true, Ole Rømer measured (with limited accuracy, of course) the speed of light almost 200 years before Maxwell's Laws, and even before that Galileo had proposed an experiment to measure it.
– Bosoneando
@KSmarts yes, it does make things more complicated, in cases where your process pushes the model to the limit, which is true of any model which is designed to be simpler than reality. If it seems more accessible, I might instead change that to talking about rotating vs. fixed frames, which is probably a hair closer to a "similar" situation. I just felt maps were more accessible than explaining the Coriolis effect.
– Cort Ammon
@Bosoneando From what I understand, it was not understood that the speed of light was constant in all reference frames until roughly 1900, which is the strange behavior that led those like Einstein towards a relativistic model. If I add verbiage to point towards this being the special part, will that help clarify what you are seeing?
@CortAmmon Having the ability to measure the speed of light and knowing that it is constant in all reference frames are two completely different things, and you seem to mix them up. And in his 1905 paper, Einstein talked only about thought experiments; he didn't reference any experimental results. It is not clear if he knew the Michelson–Morley experiment and was influenced by it.
By "non-relativistic" I assume you mean Galilean relativity. That is, a universe with no weird high-speed effects, where changing reference frames is as simple as adding velocities and there is only one universal time.
The reason Galilean relativity was the prevailing theory at one time is that it is an extremely close approximation to reality at the low speeds we deal with. In order to make it exactly true, all speeds must be small compared to the speed of light. In other words, the speed of light is infinite.
Just take all our physical laws, and take the limit as $c$ goes to infinity. Then the factor $\beta=v/c$ is always zero, and the factor $\gamma=1/\sqrt{1-\beta^2}$ is always 1. This means there is no time dilation or length contraction.
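To see how small the correction already is at everyday speeds, take an airliner moving at $v \approx 300\text{ m/s}$, so $\beta = 10^{-6}$:
$$ \gamma =\frac{1}{\sqrt{1-\beta ^2}}\approx 1+\tfrac{1}{2}\beta ^2 = 1+5\times 10^{-13}, $$
so taking the limit $c\to\infty$ just turns an already negligible correction into exactly zero.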
This universe is logically consistent, but due to the infinite speed of light there would be no electromagnetic radiation as we know it. I'm not sure what implications this has, but you can probably handwave those problems away.
Details (Update)
Maxwell's equations implicitly involve the speed of light (they must, since they are what govern electromagnetic radiation i.e. light). The key one to consider is Ampere's law:
$$ \nabla\times\mathbf{B}=\mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t} $$
It so happens that the product of the constants in that equation is related to the speed of light:
$$ \nabla\times\mathbf{B}=\frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t} $$
This means that if $c$ is infinite, this equation becomes:
$$ \nabla\times\mathbf{B}=0 $$
That is to say, instead of electrodynamics we have magnetostatics, under which conditions no waves exist.
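The same conclusion follows from the wave equation: combining Ampère's law above with Faraday's law (in vacuum, where $\nabla\cdot\mathbf{E}=0$) gives
$$ \nabla^2\mathbf{E}=\frac{1}{c^2}\frac{\partial^2\mathbf{E}}{\partial t^2}, $$
and as $c$ goes to infinity the right-hand side vanishes, leaving $\nabla^2\mathbf{E}=0$: a static equation with no propagating solutions.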
Another way to think of this is with the relationship between frequency and wavelength:
$$ \lambda f=c $$
If $c$ is infinite, this equation can only be satisfied if the frequency or wavelength (or both) are infinite. Since infinite frequency makes no physical sense, we must conclude that all waves are infinitely long: that is, that there are no waves at all.
2012rcampion
Why do you say "no electromagnetic radiation" and not "infinitely fast electromagnetic radiation"?
– spraff
@spraff Does that help?
– 2012rcampion
That would imply that mass has infinite energy by e = mc^2. It seems like that might result in... interesting side effects to the output of fission and fusion.
– Dan Smolinske
@Dan or it might imply that all matter has zero mass. Either way, I bet the same sort of troubles would happen with the strong force, so there may not even be composite baryons like protons and neutrons. Setting an infinite speed of light breaks all known physics. But the question was about whether it could be logically consistent, not consistent with our current models of physics, so I left those points alone initially.
@DanSmolinske Fission and fusion? You get mass loss even with chemical reactions!
– Loren Pechtel
Open Access 23-09-2022 | Regular Article
Do innovation and financial constraints affect the profit efficiency of European enterprises?
Authors: Graziella Bonanno, Annalisa Ferrando, Stefania Patrizia Sonia Rossi
Published in: Eurasian Business Review
2 Theoretical background and research hypotheses
2.1 Performance, technical efficiency and financial constraints
2.2 Performance, technical efficiency and innovation
3 Empirical setting
3.1 Stochastic frontier approach
4 Econometric results
4.1 The impact of innovation and finance constraints in industry and services sectors
4.2 Sectoral heterogeneity: high- and low-tech sectors
4.3 Further analysis: the impact of firm indebtedness
4.4 Robustness analysis: a focus on micro-small firms
5 Discussion and conclusions
This paper investigates the relationship between profit efficiency, finance and innovation. By adopting stochastic frontiers, we pioneer the use of a novel dataset merging firm level survey data with balance sheet information for a large sample of European companies. We find that firms having difficulties in access to finance as well as firms introducing product innovation display an incentive to improve their efficiency. While innovation produces benefit for firms' profitability, financial constraints impose a discipline to the firms forcing them to cut unproductive costs that reduce the profitability. We document nuanced differences between firms in industry and services, while they are more pronounced when we look at disaggregation across High-Tech and Low-Tech companies. From a policy perspective, our results enrich the understanding on the link between innovation, financial constraints and efficiency, which goes beyond the idea that easier access to finance is the panacea to get higher performance.
Supplementary file1 (DOCX 135 KB)
The online version contains supplementary material available at https://doi.org/10.1007/s40821-022-00226-z.
This study assesses how firms' performance can stem from innovation efforts and financial frictions in credit access, using the perspective of the economic efficiency approach.
From a broader perspective, the relation between performance, innovation growth and their links with finance has been well documented in the literature (Acemoglu et al., 2006 ; Aghion et al., 2010 ; Bartelsman et al., 2013 ; Love & Roper, 2015 ). Technological innovation is indeed a critical element in enhancing and fostering firm performance and therefore is considered a conductor of economic development (Acemoglu et al., 2006 ; Aghion & Howitt, 1998 ; Archibugi & Coco, 2004 ; Goedhuys & Veugelers, 2012 ; Grossman & Helpman, 1991 ; Romer, 1990 ). Through research and development (R&D) activities, firms are able to launch new products and services for the market, which allow them to attain a strategic advantage over competitors (see among others, Dosi et al., 2015 ; Love & Roper, 2015 ; Ferreira & Dionísio, 2016 ).
The literature has also underlined that the growth ambitions of firms are often compromised or weakened by the presence of financial constraints, which are particularly binding for small- and medium-sized enterprises (SMEs) that often suffer from a lack of transparency in their credit records, a lack of own capital and a limited ability to provide collateral (Acharya & Xu, 2017; Becker, 2015; Cowan et al., 2015; Pigini et al., 2016). When financial frictions are strong, such as during the recent great financial crisis (Agénor & Pereira da Silva, 2017; Carbo-Valverde et al., 2015), enterprises tend to counteract their adverse impact on profitability by reducing investment activities and, in particular, by abandoning innovation projects (García-Quevedo et al. 2018).
However, looking closely at the complex links of firm efficiency with innovation and financial constraints, we discover that they remained largely unexplored in the literature and they deserve additional scrutiny. More specifically, while few studies—closely related with our research target—have investigated the interplay between efficiency and financial constraints (Bhaumik et al., 2012 ; Maietta & Sena, 2010 ; Sena, 2006 ; Wang, 2003 ), the academic research directly focusing on the relation between innovation and firm profit efficiency is relatively scant. In fact, the literature has rather concentrated on the link between innovation and firm profitability or firm performance from one side (i.e., Koellinger, 2008 ; Lööf & Heshmati, 2006 ; Shao & Lin, 2016 ), and on the effect of R&D activities on innovation efficiency, from the other side (i.e., Yang et al., 2020 ; Zhang et al., 2018 ). Furthermore, by focusing on productivity, several studies have explored the effects that financial constraints will exert on firm productivity—without tackling the perspective of economic efficiency (Butler & Cornaggia, 2011 ; Ferrando & Ruggieri, 2018 ; Jin et al., 2019 ; Midrigan & Xu, 2014 ). Others have focused on the link between innovation efforts and firm productivity (Calza et al., 2018 ; Dabla-Norris et al., 2012 ; Dai & Sun, 2021 ; Kumbhakar et al., 2012 ), showing that enterprises, particularly European ones, display a low ability to translate R&D activities into productivity gains (Castellani et al., 2019 ; Ortega-Argilés et al., 2015 ).
Building on these research inputs, our paper aims at filling the gap in the literature by providing, in a unified framework, novel evidence on whether innovations efforts and obstacles in access to finance affect firms' profit efficiency. The empirical analysis relies on a unique firm-level dataset comprising a large sample of European SMEs and large enterprises over the period 2012–2017. The investigation compares firm profit performance across sectors taking into account also the technological and knowledge-intensive content of their activities.
Our main contributions to the literature move along the following three dimensions.
First, to the best of our knowledge, there are no other studies that test in a unified framework whether innovation efforts and financial constraints exert a significant effect on firms' performance. In particular, we enrich our understanding on firms' performance by adopting the stochastic frontier approach (SFA) to estimate profits functions and to obtain efficiency scores for a sample of European firms.
As well known in the literature, profit efficiency measures the distance between the current profit of a firm and the efficient profit frontier (Berger & Mester, 1997 ). Compared to cost/revenue efficiency and other measures based on financial ratios, profit efficiency is able to account for the overall firm performance (Arbelo et al., 2021 ; Chen et al., 2015 ; Pilar et al., 2018 ).
To estimate profit frontiers, we prefer to use the SFA, proposed by Battese and Coelli (1995), which offers several theoretical and empirical advantages. First, it allows us to formulate a model for inefficiency in terms of observable variables (Coelli et al., 2005; Kumbhakar & Lovell, 2000) and second, by exploiting the panel dimension of the data, it allows us to overcome the shortcomings of time-invariant firm-level inefficiency, while benefitting from easier identification and smaller bias (Cornwell & Smith, 2008; Greene, 2005, among others).
Second, we pioneer the use of a novel dataset that merges firms' survey-based replies derived from the European Central Bank Survey on access to finance for enterprises (ECB SAFE) with their financial statements—taken from AMADEUS by Bureau van Dijk (BvD). From the survey data, we retrieve harmonized and homogeneous information on several aspects of financial constraints and innovation for a large set of European countries. From the financial statements, we make use of output and input variables to be included in the production frontier as well as of other financial information useful to define firms' characteristics. In fact, this unique dataset allows us to rely on various indicators of financial constraints and innovation activities. The literature has underlined the difficulties in directly measuring the financial constraints and have relied on indirect proxies (Farre-Mensa & Ljungqvist, 2016 ). By contrast, we use both perceived and objective indicators of financial constraints based on the qualitative responses of surveyed firms and we complement them with a quantitative measure based on the cash flow available to firms (Fazzari et al., 1988 ).
Third, to consider the different technologies and production functions across sectors, we complement our dataset with the Eurostat classification on high-technology/knowledge-intensive sectors. We then estimate different frontiers for two main productive sectors—Industry and Services—and for two main technological sectors—High-technology and Low-technology sectors- as well as for their subsectors, up to 5 distinct macro sectors.
This approach allows us to exploit an alternative sectoral heterogeneity—in a similar fashion to Baum et al. (2017) and to Pellegrino and Piva (2020)—which might bring novel evidence on the topic. In fact, high-technology/knowledge-intensive companies in industry and services turn out to be more similar to each other than high-technology/knowledge-intensive and low-technology/knowledge-intensive companies within the two sectors.
Based on a variety of model specifications, we document that innovation has an important impact on firms' profit efficiency. Additionally, bearing in mind that policymakers and economists generally agree that well-functioning financial institutions and markets contribute to economic growth (Urbano & Alvarez, 2014 ), we provide evidence that, in the presence of market failure, financial constraints induce firms to improve efficiency. We also contribute to the literature by providing novel evidence on the effects of debt maturity and the cost of debt for the efficiency decisions of firms. We find that firms with long-term debt tend to increase efficiency. As for the cost of debt, the impact is different depending on the sector in which a firm operates. More precisely, firms operating in Services seem to have more stringent cost burden on the debt side impacting negatively on profit efficiency. An opposite picture emerges for firms in the manufacturing sector. This supposedly occurs because of a better bank-firm relation that seems to favor firms operating in the traditional manufacturing sector. Finally, we document the presence of heterogeneity across technology and knowledge-based sectors. Specifically, for high-technology and high-knowledge-intensive companies product innovation has a strong positive impact on profit efficiency, while for low-technology and low-knowledge-intensive companies product innovations negatively affect firms' efficiency.
The rest of the paper is organized as follows. The next section highlights the theoretical background and the research hypotheses. In Sect. 3 we present the methodological issues, and we describe the firm-level database as well as the empirical models. The estimated results are presented in Sect. 4, while the last section concludes.
Efficiency in production function focuses on the relationship between inputs and outputs, and a production plan is called efficient if it is not possible to produce more using the same inputs, or to reduce these inputs leaving the output unchanged (Farrell, 1957 ). The presence of frictions either in terms of agency problems, lags between the choice of the plan and its implementation or inertia in human behavior and bad management, can drive observable data away from the optimum production plan (Leibenstein, 1978 ) and create instances of technical inefficiency.
Focusing on the economic efficiency perspective to assess firms' performance, in this section we shortly refer to the theoretical and empirical evidence related to our research. From one side we consider the link between performance, profit efficiency and financial constraints; on the other side the link between performance, profit efficiency and innovation. Building on those links we develop our research hypotheses that provide the backbone of our empirical model.
It is well known that financial frictions influence firm performance (Farre-Mensa & Ljungqvist, 2016 ) and the issue of financial constraints has been investigated in literature using different theoretical perspectives, such as monetary policy (Bernanke et al., 1996 ), corporate finance (Hanousek et al., 2015 ) and entrepreneurship (Kerr & Nanda, 2009 , Kerr & Nanda, 2015 ).
Looking at the link between finance and productivity, some studies argue that lower financial constraints exert a positive effect on productivity and growth (Aghion et al., 2010 , and Aghion et al., 2012 , for France, and Manaresi & Pierri, 2017 , for Italy) as firms exposed to higher financial constraints lower their investment, in particular on assets that have a strong impact on productivity. By contrast, the strand of the literature focusing on the cleansing "Schumpeterian" effect of financial constraints points to the fact that the highest productive firms crowd out the least efficient ones. In the environment of low real interest rates and low financial constraints which characterized the period just before the financial crisis, the cleansing mechanism has been weakened with a detrimental impact on average productive growth (Cette et al., 2016 ; Gopinath et al., 2017 ). Interestingly, Jin et al. ( 2019 ) show that financial constraints might have two opposite effects on the firm productivity: from one side financial constraints increase productivity because they force to clean-out "sub-optimal investment"; on the other side they harm productivity, because the scarcity of financial resources reduce the "productivity-enhancing investment".
If we look at the effect that financial constraints exert on firm efficiency, it is indisputable that the increase in the cost of borrowing has a negative impact on firms' performance in terms of investment activity. Due to the presence of information asymmetries, borrowing external funds turns out to be more expensive for firms than using internal finance (Nickell & Nicolitsas, 1999). In the presence of binding financial frictions, enterprises tend to counteract their adverse impact on profitability by reducing investment activities and improving their efficiency in order to reduce the risk of failure (Bhaumik et al., 2012; Maietta & Sena, 2010; Sena, 2006).
The preceding arguments lead us to test whether being financially constrained has an impact on firms profit efficiency, by formulating the following hypothesis:
Binding finance constraints exert a positive effect on firms' efficiency, as debt constrained firms try to reduce their risk of failure.
While this hypothesis has already been researched in the literature, our empirical testing based on survey and balance sheet data is quite novel and could contribute to the actual debate.
Our second working hypothesis relates to the innovative activity of companies. While the central role of innovation as an engine of economic growth is easily recognized (Grossman & Helpman, 1991; Romer, 1990; Solow, 1957), the relationship between innovation and efficiency in production is more complex and contingent on several factors.
Starting from the seminal work of Griliches ( 1979 ), the literature has investigated the impact of R&D activities on productivity. Innovation can cause shifts outwards in the production frontier and this in turn might reduce the inefficiency of firms that do not lie on the technical efficient frontier (Aghion et al., 2005 ; Nickell, 1996 ). More recently, a stream of the literature has documented that R&D activities contribute—with different nuances—in improving firm productivity (Heshmati & Kim, 2011 ; Janz et al., 2004 ; Klette & Kortum, 2004 ; Lööf & Heshmati, 2006 ; Ugur et al., 2016 ), especially for high-tech sector (Castellani et al., 2019 ; Kumbhakar et al., 2012 ; Ortega-Argilés et al., 2015 ).
However, if innovation efforts are just inputs and do not generate innovation outputs they shall be considered as sunk costs that do not necessarily increase firm performance (Koellinger, 2008 ). Additionally, some scholars have also underlined the limited ability of firms, particularly European ones, in translating R&D efforts into productivity gains (Castellani et al., 2019 ; Ortega-Argilés et al., 2015 ).
Looking more closely at the stream of literature on the impact of innovation on profitability, many papers provide support to a positive relationship (Cefis & Ciccarelli, 2005 ; Geroski et al., 1993 ; Leiponen, 2000 ; Lööf & Heshmati, 2006 ), which may arise because either innovative firms are able to shield their new products from competition, or they display higher internal capabilities, compared to non-innovators (Love et al., 2009 ).
By contrast, other studies do not find a clear relationship between technological innovation and firm performance (Díaz-Díaz et al., 2008 ), when analyzing either short term effects (Deeds, 2001 ; George et al., 2002 ; Le et al., 2006 ) or long term indirect effects (Schroeder et al., 2002 ). Furthermore, the empirical evidence on the linkages between innovation and efficiency is mixed and, in some cases, it even documents a trade-off between them (Zorzo et al., 2017 ).
The link between innovation and performance is crucial also for the productivity literature that recognizes the strategic role of profit-seeking entrepreneurs investing on R&D activities for higher productivity, and, consequently, for higher economic growth (Bravo-Ortega & García-Marín, 2011 ; Castellani et al., 2019 ; Foster et al., 2008 ; Kumbhakar et al., 2012 ; O'Mahony & Vecchi, 2009 ; Ortega-Argilés et al., 2015 ).
We bring new evidence to the extant contributions by employing the economic efficiency approach to measure firm performance. As we focus on the output of innovation efforts (product innovation introduced by firms) rather than the inputs of innovations efforts (R&D investments), in our investigation we look at the subsequent efficiency gains stemming from increasing revenues.
Consequently, we would like to investigate whether undertaking innovation exerts an effect on firms' profit efficiency by formulating our second working hypothesis as follows:
Innovation output improves profit efficiency, as the developments of new products or services are aimed at attaining a strategic advantage over competitors and leveraging revenues.
In the previous section we introduced the theoretical reasoning of why the presence of financing constraints and the innovation efforts of firms may have a positive impact on profit efficiency. To test these hypotheses, we estimate the profit functions by employing the SFA, which is a stochastic method that allows companies to be distant from the frontier also because of randomness (Aigner et al., 1977; Meeusen & van den Broeck, 1977).1 The SFA is a parametric method, which means that it assigns a distribution function to the stochastic component of the model and, thus, allows inference. In our analysis we make use of the specification introduced by Battese and Coelli (1995), which permits the simultaneous estimation of the stochastic frontier and the inefficiency model, given appropriate distributional assumptions associated with panel data. This approach improves, in terms of consistency, on previous modeling based on two-step approaches.2
We estimate a two-inputs-one-output model described by the following Translog profit frontier.3
$$\mathrm{log}\left(\frac{{Profit}_{it}}{{w}_{kit}}\right)= {\beta }_{0}+{\beta }_{1}\mathrm{log}{y}_{it}+{\delta }_{1}\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}+\frac{1}{2}\left[{\beta }_{2}{\left(\mathrm{log}{y}_{it}\right)}^{2}+{\delta }_{2}{\left(\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}\right)}^{2}\right]+\alpha \left(\mathrm{log}{y}_{it}\right)\left(\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}\right)+\gamma {controls}_{cst}+{v}_{it}-{u}_{it}$$
where the dependent variable is the natural logarithm of value added over the cost of fixed asset calculated for each firm i at time t.4 Additionally, y represents the output and is equal to the operating revenues; wl is the cost of labor (measured as the ratio between the personnel expenses and the number of employees), wk is the cost of fixed assets (measured as the ratio between the depreciation and total amount of fixed assets); α, β, δ and \(\gamma\) are the parameters to be estimated; v is the random error; u is the inefficiency. In the profit frontier, the inefficiency tends to reduce the profit, thus the composite error is equal to (v-u).
Equation (1) is an alternative profit function since it depends on inputs and output, whereas actual profits depend on the prices of outputs. It uses the same variables as for a cost function, implying that output-prices are free to vary (Huizinga et al., 2001 ).5,6
The specification includes also a set of control dummies (controls) to guarantee that the efficiency scores are net of additional heterogeneity: at country level, c, to exclude any geographical and institutional fixed effect; at firm size level s, to take care of the possibility of different shifts in the frontier for different group of firms (Micro, Small and Medium)7 and at each survey round t to control for the dynamics over time.
We also take into account the different technologies and production functions across sectors by estimating different frontiers for several productive sectors: Industry and Services, and also two technological sectors—High-tech and Low-tech—using the Eurostat classification.8
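As a purely illustrative sketch of how the right-hand side of Eq. (1) might be assembled before estimation, the following Python snippet builds the Translog terms from firm-level data; the DataFrame layout and the column names (value_added, y, w_l, w_k) are hypothetical and simply mirror the variable definitions above.

```python
import numpy as np
import pandas as pd

def build_translog_terms(df: pd.DataFrame) -> pd.DataFrame:
    """Assemble the dependent variable and the Translog regressors of Eq. (1).

    Hypothetical columns: value_added, y (operating revenues),
    w_l (cost of labor) and w_k (cost of fixed assets).
    """
    out = pd.DataFrame(index=df.index)
    # Dependent variable: log of value added normalised by the cost of fixed assets
    out["dep"] = np.log(df["value_added"] / df["w_k"])
    # First-order terms
    out["log_y"] = np.log(df["y"])
    out["log_w"] = np.log(df["w_l"] / df["w_k"])
    # Second-order terms (the 1/2 factor is folded into the regressors,
    # so the estimated coefficients correspond to beta_2 and delta_2)
    out["log_y_sq"] = 0.5 * out["log_y"] ** 2
    out["log_w_sq"] = 0.5 * out["log_w"] ** 2
    # Interaction term
    out["log_y_log_w"] = out["log_y"] * out["log_w"]
    return out
```

Country, size-class and survey-wave dummies would then be appended to this design matrix before the frontier in Eq. (1) is estimated.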
From the Eq. (1), profit efficiency (PE) is the ratio between the observed firms' profit and the maximum level of profit achievable in case of full efficiency:
$${PE}_{it}=\frac{{F}_{p}\left({y}_{it},{w}_{it}\right){e}^{{v}_{itp}}{e}^{{-u}_{itp}}}{{F}_{p}\left({y}_{it},{w}_{it}\right){e}^{{v}_{itp}}}={e}^{-{u}_{itp}}$$
where Fp(.) indicates a generic profit function in which the profit is obtainable from producing y at input price w.
Finally, assuming that vit is normally distributed with mean zero and that uit follows a truncated Normal distribution, as proposed by Battese and Coelli (1995), we estimate the following inefficiency equation:
$${u}_{it}={\eta }_{1} {Finance \, constraints}_{it}+{\eta }_{2}{ Product \, innovation}_{it}+{\eta }_{3} {Firm \, controls}_{it}+ {\eta }_{4} {macroeconomic \, controls}_{jt}+{e}_{it}$$
where i indicates the ith firm, j the country, t is time and eit the random component.
Efficiency is time-varying, ensuring a change in the relative ranking among enterprises, which accommodates the case where an initially inefficient firm becomes more efficient over time and vice versa.
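To make the estimation mechanics concrete, the self-contained sketch below fits a deliberately simplified pooled frontier with a normal/half-normal composed error (eps = v - u) by maximum likelihood on simulated data and then predicts firm-level efficiency as E[exp(-u) | eps]. It is only an illustration of the logic, not the Battese and Coelli (1995) specification with truncated-normal inefficiency and observable determinants estimated in the paper; all names and the simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, X):
    """Negative log-likelihood of a pooled normal/half-normal frontier with eps = v - u."""
    k = X.shape[1]
    beta = theta[:k]
    sigma_v = np.exp(theta[k])       # exponential reparametrisation keeps variances positive
    sigma_u = np.exp(theta[k + 1])
    sigma = np.sqrt(sigma_v ** 2 + sigma_u ** 2)
    lam = sigma_u / sigma_v
    eps = y - X @ beta
    # Aigner-Lovell-Schmidt (1977) density of the composed error of a profit/production frontier
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def efficiency_scores(theta, y, X):
    """Point efficiency E[exp(-u) | eps] for each observation (Battese-Coelli type predictor)."""
    k = X.shape[1]
    beta = theta[:k]
    sigma_v = np.exp(theta[k])
    sigma_u = np.exp(theta[k + 1])
    sigma2 = sigma_v ** 2 + sigma_u ** 2
    eps = y - X @ beta
    mu_star = -eps * sigma_u ** 2 / sigma2
    sigma_star = np.sqrt(sigma_u ** 2 * sigma_v ** 2 / sigma2)
    z = mu_star / sigma_star
    return norm.cdf(z - sigma_star) / norm.cdf(z) * np.exp(-mu_star + 0.5 * sigma_star ** 2)

# Toy example on simulated data: one regressor plus a constant
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
v = rng.normal(scale=0.2, size=n)            # symmetric noise
u = np.abs(rng.normal(scale=0.3, size=n))    # half-normal inefficiency
y = X @ beta_true + v - u                    # inefficiency reduces profit

theta0 = np.zeros(X.shape[1] + 2)
res = minimize(neg_loglik, theta0, args=(y, X), method="BFGS")
pe = efficiency_scores(res.x, y, X)
print(f"estimated beta: {res.x[:2].round(3)}, mean profit efficiency: {pe.mean():.3f}")
```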
To test the effect of the determinants of firms' efficiency, we simultaneously estimate Eqs. (1) and (3), by employing the following covariates for the Eq. (3):
(i) Finance constraintsit includes a set of variables able to capture firms' experience in their access to finance. We consider three different alternative proxies of financial constraints. First, we use the ratio between cash flow and total assets (Cash flow/Total assetsit), as the dependence on internal finance represents a particularly binding constraint for firms to finance investment (Fazzari et al., 1988; Guariglia & Liu, 2014; Sasidharan et al., 2015). We are aware of the criticism of the subsequent literature on the use of this indicator (starting with Kaplan & Zingales, 1995, and recently summarized in Farre-Mensa & Ljungqvist, 2016). For this reason, we turn to the information derived from the survey to define financially constrained firms.
Our second proxy of financial constraint is derived directly from the survey information. Problem of Financeit captures firms' perception of potential financing constraints. It is a dummy equal to one if firms reported that access to finance represents the most relevant problem among a set of other problems (competition, finding customers, costs of production or labor, availability of skilled staffs and business regulation), and 0 otherwise.
Our third financial friction indicator—Finance obstaclesit—is an "objective" measure of credit constraints, also derived from the survey. This dummy variable indicates firms as financially constrained if they report that: (1) their loan applications were rejected; (2) only a limited amount of credit was granted; (3) they themselves rejected the loan offer because the borrowing costs were too high, or (4) they did not apply for a loan for fear of rejection (i.e. discouraged borrowers). The indicator is equal to one if at least one of the above conditions (1–4) is verified, and 0 otherwise.
As shown in Ferrando and Mulier ( 2015 ), firms that self-report finance as the largest obstacle for their business activity display often different characteristics compared with financially constrained firms. For instance, the authors find that more profitable firms are less likely to face actual financing constraints, while firms are more likely to perceive access to finance problematic when they have more debt with short term maturity. For this reason, we consider both indicators in our analysis.
(ii) Product innovationit: is a dummy equal to one if a firm declares in the survey to have undertaken product innovation, and 0 otherwise.9 It is worth noting that this variable has the advantage of providing direct information on the innovation undertaken by the firms, rather than the information on R&D investments, which do not necessarily turn into product innovation outcome. This is for us relevant as we need to assess how the innovation output will impact on revenues and therefore on profit efficiency.
(iii) In addition, we use some firm-varying covariates describing firms' market and debt conditions, Firm controlsit. To capture the change in profitability—in a similar fashion to Srairi ( 2010 ) and to Luo et al. ( 2016 )—we rely on two alternative measures. The first is Profit marginit, defined as net income divided by sales and the second Profit upit which is a dummy equal to one if the firm has experienced an increase in profit in the past six months, and 0 otherwise.10 We proxy firms' debt conditions using the dummy Leverage upit which is equal to one if the firm has experienced an increase in the ratio debt/assets in the past six months, and 0 otherwise. In some specifications, we consider also the maturity structure of indebtedness and the debt burden in terms of interest expenses paid on total debt. Both variables are derived from the financial statements as explained in detail in the next section.
Finally, we use the real GDP growth rate as an additional time-country varying control for the business cycle, named macroeconomic controls (Ferrando et al., 2017 ).
In order to test our hypotheses, we rely on a novel dataset that merges survey-based data derived from the ECB SAFE with detailed balance sheet and profit and loss information gathered from BvD AMADEUS. We also augmented firm level data with the firms' technological intensity provided by the Eurostat classification on high-tech industry and knowledge-intensive services.
SAFE gathers information about access to finance for non-financial enterprises in the European Union. It is an on-going survey conducted on behalf of the European Commission and the European Central Bank every 6 months since 2009. The sample of interviewed firms is randomly selected from the Dun and Bradstreet database and it is stratified by firm-size class, economic activity and country. The firms´ selection guarantees satisfactory representation at the country level.
The combined dataset has several advantages. First, we retrieve harmonized and homogeneous information on several aspects of financial constraints and innovation from the survey dimension of the dataset. Second, we can use output and input variables to be included in the production frontier as well as other firm-level information useful to pursue our research trajectory (e.g. leverage compositions and profitability measures). Third, we are also able to disentangle the different technological characteristics of the firms, high-, medium-, low-technology industries, and knowledge-intensive and less knowledge-intensive services.
Our investigation is based on firms belonging to the following eight European countries (Austria, Belgium, France, Finland, Germany, Italy, Portugal and Spain)11 observed from wave 8th (second part of 2012)12 until wave 17th (first part of 2017).
Moreover, we focus our analysis only on firms belonging to Industry and Services.13 Our choice is driven by the following considerations. (i) They are the largest sectors: Industry accounts for about 19% of European GDP, and Services accounts for about two thirds of European value added (Eurostat data, 2016). (ii) They have displayed divergent trends in recent years in terms of shares of value added to GDP, with a declining trend for Industry and an increasing one for Services (Stehrer et al., 2015). (iii) The two sectors also differ in their efficient allocation of resources, as shown by the allocative efficiency index, which is particularly low for Services (European Commission, 2013).
Our starting sample includes almost 30,000 observations, of which 53% are from the Industry sector and 47% from the Services sector. Once we take into consideration the variable Product innovation, the sample reduces more than half to 7279 observations for Industry and 5864 for Services. Table 1 displays some descriptive statistics of the variables used in defining the frontiers (Panel A) and the determinants of efficiency (Panel B) for both sectors of activities. All balance sheet data are deflated using HICP index.
Table 1 Descriptive statistics for profit functions and inefficiency equations (statistics omitted). Panel A (profit frontier): added value, operating revenues, labor cost, capital cost, log(added value/cost of fixed assets), log(operating revenues), log(labor cost) and number of employees. Panel B (inefficiency equation): cash flow to total assets, problem of finance, finance obstacles, product innovation, profit up, profit margin, leverage up, ROE, financial leverage, use of bank loans, use of credit lines, long-term debt, interest ratio, real GDP growth and the size dummies (micro, small, medium, large). Source: our elaboration on data from ECB/EC SAFE and BvD AMADEUS (survey-based variables from ECB/EC SAFE; balance-sheet variables from BvD AMADEUS).
Regarding the production factors in Panel A, firms in Industry report on average higher operating revenues with lower labor and capital costs than firms in Services. Moreover, they are also on average bigger than firms in Services in terms of numbers of employees. From Panel B, our sample is mostly composed by SMEs by construction of the survey.14 13.6% of firms in Industry and 14.3% in Services perceived access to finance as a major problem. A slightly lower percentage of firms (around 11%) are financially constrained according to the objective indicator Finance obstacles. Regarding the innovative activity, around 46% and 31% of firms have indicated that they introduced product innovation in the previous six months in Industry and Services, respectively.
Turning to firms' financial position, the average firm in our sample is profitable with a profit margin of 1.7% in both sectors, although looking at the distribution we see that at least 10% of all firms in our sample are reporting losses. On average, firms in our sample can generate internal funds (6.6% and 7.5% of total assets for Industry and Services, respectively). At the same time, at least 19% of companies are reporting increasing debt to total assets in the previous six months (Leverage up). In the table we report also some additional financial ratios to quantify better the financial conditions of firms in the sample: financial leverage and return on equity (ROE).15 On average, firms have a financial debt which is around 22% of total assets, while the return on equity ratios are just above 3%. This latter ratio, although relatively low, shows some efficiency by firms in using their equity. It also emerges from the survey that the use of bank loans and credit lines are more relevant for firms operating in Industry (37% and 47%) than in Services (28% and 38%).16 Looking at the debt maturity structure, firms in the manufacturing sector tend to use on average the same amount of short- and long-term debt (50%), while those in the service sector report a slightly higher percentage of long-term debt (60%).17 As for the interest burden, this is slightly higher for firms in the service sector (7.9%) than for those in the manufacturing sector (7.5%).
Looking at size classes, our sample has a total of 2876 (3626) wave-firm observations of micro/small companies (up to 49 employees) in the Industry (Services) sector. The remaining observations belong to medium and large companies (from 50 employees) for industry (4420) and for Services (2238). Common to other studies based on matched databases like ours, the sample composition in terms of size classes might not reflect the general distribution of the population of firms within the different countries. This is a caveat for our empirical results and, for this reason, we perform some additional robustness checks based on firm size classes in Sect. 4.4.
Finally, in the Supplementary material, Figure S1 reports the sample composition by country and by industries. Most firms are from Italy, France and Spain, covering almost three quarters of all observations, which reflects mostly the fact that the country information on financial statements from BvD Amadeus is much higher for those countries.
In this section, we discuss the results of the maximum likelihood estimations of the profit functions for both Industry and Services. Following the approach proposed by Battese and Coelli ( 1995 ), the coefficients are obtained by simultaneous estimates of the profit efficiency frontier (Eq. 1) and the inefficiency term, expressed as a function of a set of explanatory variables (Eq. 3). We point out that in the framework of Battese and Coelli ( 1995 ), we can interpret only the sign and the significance of the estimated coefficients.
Before presenting the estimated results, we report some information on model diagnostics. All estimations in Table 2 show the appropriateness of the Translog specification used in the analysis. In fact, it turns out that most of the second-order terms parameter estimates (σ2) of the profit function are significant. In addition, the high value of the estimation of the γ parameter, reflecting the importance of the inefficiency effects, strongly advocates the use of the stochastic frontier production function rather than the standard OLS method.18 Finally, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are used to provide some models diagnostics (Burnham & Anderson, 2004 ).19
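For reference, these diagnostics take the usual closed forms for this class of models: with k estimated parameters, n observations and maximized log-likelihood ln L,
$$\gamma =\frac{{\sigma }_{u}^{2}}{{\sigma }_{u}^{2}+{\sigma }_{v}^{2}}\in \left[0, 1\right], \qquad \mathrm{AIC}=2k-2\,\mathrm{ln}\,L, \qquad \mathrm{BIC}=k\,\mathrm{ln}\,n-2\,\mathrm{ln}\,L,$$
so values of γ close to one indicate that most of the variance of the composed error is attributable to inefficiency rather than to noise, while lower AIC and BIC values point to a preferable specification.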
Table 2 Estimation of profit functions and inefficiency equations for Industry and Services sectors (coefficient estimates omitted). The profit frontier block reports the constant, log(operating revenues), log(wl/wk), their squared terms and interaction, and country/size/time effects; the inefficiency equation block reports cash flow/total assets, problem of finance, finance obstacles, product innovation, profit up, profit margin, leverage up and macroeconomic controls, together with σ2, the log-likelihood and the mean PE. Source: our elaboration on data from ECB/EC SAFE & BvD AMADEUS. Significance levels: *** 0.01, ** 0.05, * 0.10
Table 2 displays several model specifications (columns 1–3 for Industry, and 4–6 for Services), which differ in the alternative inclusion of financial constraints as z-variables of the inefficiency equation.20
In the various specifications we take care of the likely high correlation between the cash flow ratio and the profit margin. Hence when we employ the continuous variable Cash flow ratio we use the dummy Profit up as our preferred profitability measure. When the financial constraints indicators are those derived from the survey (Problem of finance and Finance obstacles), we use the Profit margin, retrieved from balance sheet data.
Starting with Industry, all three measures of firms' external financial constraints display a negative and significant coefficient, signaling, in the context of the Battese and Coelli ( 1995 ) model, lower inefficiency scores. Most likely, when the availability of external finance decreases, financially constrained firms are forced to be more efficient in order to counter the potential adverse impact of financial frictions on their profitability. These findings provide support to our prediction (H1) and are in line with previous studies based on different countries and sample periods (Bhaumik et al., 2012 ; Maietta & Sena, 2010 ; Nickell & Nicolitsas, 1999 ; Sena, 2006 ).
Interestingly, we also find a negative and significant coefficient for the variable Product innovation. This indicates that the efforts of firms to develop new products or services—in order to attain a strategic advantage over competitors—produce some leverage on revenues. This evidence corroborates the prediction of our hypothesis (H2).21 Our analysis also shows the relevance of the performance indicators. In all specifications, the two alternative measures of profit (Profit up and Profit margins) and leverage (Leverage up) display a negative and significant coefficient, suggesting that the increase in profitability and leverage have a positive effect on efficiency.
Turning to the Services sector results are displayed in columns 4–6 (Table 2). Remarkably, some similarities emerge with the analysis performed for the Industry firms: the financial obstacles and the firm performance indicators show a negative sign indicating a positive effect exerted by those variables on efficiency. By contrast, and differently from the Industry case, the variable Product innovation is not statistically significant in most specifications, except for the last one (Column 6). A tentative interpretation of this outcome could be that firms operating in the Services sector, which is traditionally considered non-tradeable, are less exposed to the international competition. For this reason, the pressure for these firms to develop and launch new product and services for the market might be less cogent. However, our results also show that for firms that report access to finance as an acute obstacle for their activity, innovation positively affects their profit efficiency, as indicated by the negative sign displayed in Column 6.
To address potential endogeneity issues related to the link between efficiency and innovation, we implemented the instrumental variable approach proposed by Karakaplan and Kutlu ( 2017 ).22 By using R&D expenses as percentage of GDP by sector of activity as instrument for innovation, this approach allows us to test the endogeneity bias in the stochastic frontier estimation in both the frontier and efficiency determinants equations. The results rule out any evidence of endogeneity referred to Product innovation.23
In this section we further exploit the sectoral heterogeneity by aggregating firms according to the technological and knowledge-intensive content of their activities, in a similar fashion to Baum et al. (2017) and Pellegrino and Piva (2020).
Starting from the Eurostat classification on technological and knowledge intensity, we collapse the sectors into two main groups: High-Tech and Low-Tech. In the first group we include high-technology industries and knowledge-intensive services (HT and KIS, respectively); in the second, medium-low and low-technology industries and less knowledge-intensive services (MT, LT and less-KIS, respectively).
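The two-group aggregation just described can be summarised as a simple mapping from the five Eurostat-based groups to the two macro groups. The sketch below is purely illustrative (the labels follow the text; it is not the paper's actual code).

```python
# Illustrative sketch of the High-Tech / Low-Tech aggregation described above.
eurostat_group_to_macro = {
    "HT": "High-Tech",        # high-technology industries
    "KIS": "High-Tech",       # knowledge-intensive services
    "MT": "Low-Tech",         # medium-low-technology industries
    "LT": "Low-Tech",         # low-technology industries
    "less-KIS": "Low-Tech",   # less knowledge-intensive services
}

def macro_group(eurostat_group: str) -> str:
    """Map a firm's Eurostat technology/knowledge group to the two-group split."""
    return eurostat_group_to_macro[eurostat_group]

print(macro_group("KIS"))  # -> "High-Tech"
```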
Our assumption is that HT companies in Industry and KIS companies in Services are more similar to each other than high technology/knowledge-intensive and low technology/knowledge-intensive firms are within the two macro sectors.
The results of the simultaneous estimations of Eq. (1) and several specifications of Eq. (3) for High-Tech and Low-Tech sectors are displayed in Table 3, in columns 1–3, and columns 4–6, respectively.
Estimation of profit functions and inefficiency equations for High-tech and Low-tech sectors
[Table 3 reports the frontier coefficients (including Log(wl/wk)) and the z-variable estimates for the High-Tech and Low-Tech samples; the full table is not reproduced here.]
Note: High-tech includes high-technology industries and knowledge-intensive services; Low-tech includes medium-low and low-technology industries and less knowledge-intensive services
Results are noteworthy. First, the variables accounting for the financial constraints turn out to be significant with a negative sign in all specifications for LT firms. Conversely, this effect disappears for HT firms; the only exception is the negative coefficient on the cash flow ratio, which confirms that firms relying on internal sources are forced to be more efficient. In other words, when financial constraints are binding, LT firms are induced to become more efficient to sustain their profitability, more so than HT firms. Second, our evidence shows that for the HT sector Product innovation displays a negative sign, indicating that it produces a positive effect on profit efficiency. By contrast, for LT firms we obtain a positive sign, signaling a reduction in profit efficiency. This evidence might indicate that investments in product innovation by HT companies imply complementing different tasks, such as information technologies, which in turn produce efficiency gains. In the case of Low-Tech companies, instead, the business activities needed to introduce new products might divert funds and efforts that could otherwise be used more efficiently. Noticeably, the signs of all the other inefficiency determinants are stable across the High-Tech and Low-Tech disaggregation.
In a second step, we estimate our model by disaggregating firms into five sectors: HT, MT and LT for Industry, and KIS and less-KIS for Services. For the sake of brevity, Table S1 in the Supplementary material reports only the results of the specification including the Cash flow ratio as the financial constraints indicator.24 Noticeably, the variable Product innovation displays the expected negative sign for HT Industry and KIS Services, indicating that innovation output reduces profit inefficiency. By contrast, in the case of less-KIS companies, innovation efforts seem to increase profit inefficiency. For MT and LT, results are inconclusive (the coefficients are not significant). Overall, this evidence reinforces once more our assumption that the two groups (HT Industry and KIS Services) share more common characteristics in terms of the impact of innovation on efficiency than the other subgroups.
So far in our analysis we have not explicitly considered the role of firms' indebtedness in their performance (Maietta & Sena, 2004, 2010; Sena, 2006; Vermoesen et al., 2013). It is known that companies choose between short-term and long-term debt depending on their productive needs. While they typically rely on short-term financing for working capital, they turn to long-term debt to better align their capital structure with long-term strategic goals, including innovation plans. Hence, long-term financing affords companies more time to realize a return on investment and innovation activities and reduces the refinancing risks that come with shorter debt maturities. In this respect, we should expect a positive impact of long-term debt on profit efficiency.
To address the issue, we re-estimate our model including two additional variables in the inefficiency equation: (i) Long-term debt and (ii) Interest ratio.25 If our predictions are corroborated, we should find, ceteris paribus, a negative coefficient for the variable Long-term debt. For similar reasons, we expect a positive sign for the variable Interest ratio, as an increase in the debt burden will affect the firm's cost structure and, in turn, deteriorate profit efficiency. Panel A of Table 4 displays the results of the new specifications for both Industry (columns 1–3) and Services (columns 4–6), while Panel B reports the same specifications for the HT (columns 1–3) and LT sectors (columns 4–6).
The impact of indebtedness: estimation of inefficiency equations for the Industry and Services sectors (Panel A) and the High-tech and Low-tech sectors (Panel B)
[Table 4, Panels A and B: the z-variable coefficient estimates are not reproduced here.]
Source: Our elaboration on data from ECB/EC SAFE & BvD AMADEUS. Significance levels: '***' 0.01, '**' 0.05, '*' 0.10. For the sake of brevity, we report in this table only the z-variable coefficients. Note: High-tech includes high-technology industries and knowledge-intensive services; Low-tech includes medium-low and low-technology industries and less knowledge-intensive services
Starting with Panel A (Table 4), results can be summarized as follows. First, the inclusion of the two new variables in our specifications does not affect the baseline results for the key variables of the inefficiency equation (displayed in Table 2), thus providing support for the robustness of our analysis in terms of innovation and financial constraints.
Second, the estimated coefficients of the variable Long-term debt are always negative and statistically significant in both sectors, confirming the positive effect of the debt maturity structure on profit efficiency, as shown also by Sena (2006) and Maietta and Sena (2010). Interestingly, some differences between sectors arise for the estimated coefficients of the Interest ratio variable. Specifically, they turn out to be positive and significant across the three specifications (columns 4–6) for Services, while they are never statistically significant for Industry. This evidence may suggest that firms operating in the Industry sector are less affected by their debt burden as a consequence of stronger bargaining power with banks.26
The main takeaway from this additional analysis is that, while the effect of debt maturity does not differ between sectors, the debt burden appears to damage firms in Services, regardless of the macroeconomic context. However, we are cautious in interpreting this result, as we acknowledge that further investigation might be required to verify its robustness.
Turning to Panel B, where we present the new specifications (including the firm indebtedness indicators) for the HT and LT sectors, results show that the impact of innovation and financial constraints on efficiency is consistent with the estimates displayed in Table 3. The coefficients of Long-term debt are always negative and significant, indicating a positive impact on profit efficiency independently of the sectoral disaggregation. This evidence is consistent with the results presented in Panel A. As an additional check, we estimate models (1)–(6) of Table 4 (for both Panels A and B) with lagged innovation and finance constraints. Although this reduces the number of observations by roughly two-thirds, the results corroborate the main findings reported in Table 4.27
A different picture emerges when we look at the estimated coefficients of the Interest ratio. The coefficient is statistically significant only in the subgroup of HT firms, and only when firms face objective difficulties in their access to finance. The sign is positive, indicating that increases in the interest ratio reduce profit efficiency for firms in that group (Column 3).
Looking at our sample composition, the summary statistics show that close to 40% of the industrial companies and almost 60% of the service companies are classified as micro and small (see Table 1). Previous studies have shown that firm size matters for the decision to innovate (see, among others, Leal-Rodríguez et al., 2015). Two opposite perspectives are recalled here. According to the Schumpeterian point of view (Karlsson & Olsson, 1998; Schumpeter, 1942), large firms have an advantage in innovating vis-à-vis smaller companies, as innovation requires effort, long-term investment, know-how and resources that small firms often cannot afford. By contrast, it is also argued that smaller firms tend to display more innovative and efficient efforts than large firms in order to survive (Baumann & Kritikos, 2016; Cohen & Klepper, 1996; De Jong & Marsili, 2006; Laforet, 2008, 2013).
To address the issue, we re-estimate our model specifications for the sub-sample of micro and small firms (up to 49 employees). Results are displayed in Table S2 of the Supplementary Material, where we report only the z-variables of the several inefficiency term specifications. As far as Industry is concerned, while the sign and significance of the financial constraints covariates are consistent with the previous analysis, the variable Product innovation turns out not to be significant. We read these inconclusive results as a signal that the impact of innovation on small-sized firms should be investigated by focusing more on their innovative characteristics. By contrast, no relevant differences emerge for the micro and small firms compared to the full sample in the Services sector.
Indeed, the sector disaggregation based on technology and knowledge intensity yields more clear-cut findings (see Table S3 of the Supplementary Material). In detail, we observe that for micro and small HT enterprises product innovation matters in reducing firm inefficiency, while for LT firms the innovation efforts are negligible or even counterproductive, as they induce an increase in inefficiency. Once again, these results are largely consistent with the full-sample findings on the similarity between technological and knowledge-intensive companies, independently of firm size.
This paper contributes to the literature that investigates the interplay between firms' efficiency, innovation and access to finance. Despite the policy relevance of this topic, the fundamental assumptions underlying it have remained largely unexplored. Indeed, the related literature has mainly focused on the roles of financial constraints and innovation in productivity separately; these studies do not address the direct link between innovation and profit efficiency, nor the effect that limited access to credit may exert on profit efficiency.
We fill this gap through the lens of the economic efficiency perspective. To the best of our knowledge, no previous work has considered the effects that both innovation and credit access limitations may have on profit efficiency. To accomplish this task, we pioneer the use of a novel dataset that merges survey-based data with balance sheet information. This allows us to exploit the heterogeneity across firms' financial and financing positions.
Furthermore, we consider heterogeneous production functions across sectors by estimating different frontiers: first for the two main productive sectors (Industry and Services), and second for an alternative sectoral distribution based on technological and knowledge intensity. The empirical analysis confirms the hypothesis that technological and knowledge-intensive companies in the manufacturing and service sectors are more similar to each other than high- and low-technology/knowledge-intensive firms are within Industry and Services.
Our main findings support the prediction (H1) according to which firms that perceive difficulties in accessing external finance, or that are objectively financially constrained, tend to improve efficiency to reduce their risk of failure and to maintain profits, independently of the macro-sector disaggregation. This outcome is consistent with previous literature (Sena, 2006). Our analysis also documents that, when financial constraints are binding, LT firms are induced to become more efficient to sustain their profitability, more so than HT firms. Moreover, we find that debt maturity matters as well. Firms making more use of long-term debt are more efficient, as they have more time to realize a return on their business activities to cover their debt, independently of the sectoral disaggregation employed in our investigation. As for the cost of debt, the impact differs depending on the sector in which a firm operates. Enterprises operating in Services seem to face a more stringent cost burden on the debt side, which increases inefficiency. In the case of HT firms, an increase in the interest ratio reduces their efficiency, but only when they perceive difficulties in accessing finance.
Consistently with our second hypothesis (H2), we show that firms which stated in the survey that they had introduced product innovation have a higher likelihood of improving efficiency. This evidence is robust for firms in the manufacturing sector, and only weakly present for firms belonging to Services. We also find that product innovation is important for the profit efficiency of high-technology and knowledge-intensive companies, while it tends to diminish it for LT and less-KIS firms.
Finally, we also consider the role of firm size within sectors. Specifically, we show that micro and small HT firms are able to turn innovation into revenue gains, while this is not the case for LT and less-KIS companies, broadly confirming the full-sample findings on the relevance of the sectoral composition. This supports the idea that the different sectoral aggregations provided in our analysis are indeed relevant for detecting additional firm heterogeneity, and that this does not depend on firm size.
The implications of our results are not negligible. Fostering innovation and growth opportunities for enterprises is particularly relevant in times of economic slowdown and financial distress. Another outcome of our investigation is a set of public-policy recommendations to encourage long-term investment in innovation and to curb practices that particularly penalize enterprises investing in R&D. Hence, our results support firm-level policy interventions directly aimed at mitigating the underinvestment in R&D in Europe while considering firm heterogeneity across technology and knowledge intensity sectors. Though not explicitly analyzed in this paper, this line of reasoning goes beyond the idea that easier access to finance is a panacea for higher profit efficiency. What seems more important is the support provided to businesses to be competitive by encouraging them to adopt new business models and innovative practices.
Despite its numerous contributions to the literature, our study has some limitations, which can provide input for further research. First, while the results of our analysis seem robust to different econometric approaches and to endogeneity issues, introducing additional variables controlling, for instance, for different degrees of entrepreneurship could help to better understand firms' different efficiency scores. Beyond size and innovation, several studies have focused on age, ownership structure, skills and competencies (Binnui & Cowling, 2016; Falk & Hagsten, 2021).
Second, while the instrument-based approach proposed by Karakaplan and Kutlu (2017) can remove some potential sources of reverse causality, it does not eliminate other possible sources of endogeneity. In fact, identifying valid instruments has proved a difficult task for our financial constraints indicators, also because the majority of observable firm characteristics are already included in the inefficiency equation.
Third, one of the main advantages of our investigation is the use of a unique dataset that allows us to employ not only the qualitative survey-based information to measure financial constraints, but also balance sheet data to estimate production functions at the firm level. However, it could be argued that, by merging two different data sources, firms in our sample might not reflect the composition of the population of firms by size and sector of activity within the different countries. We are aware of this limitation, and a further step in our analysis would be to apply appropriate weights in order to obtain representative results. This is of particular importance for the novel results related to HT and LT companies. Fourth, we also recognise that differences in institutional settings might play a role in innovation policy at the national level, so we control for the country macroeconomic context in our estimates. However, we leave a more in-depth country-level analysis of this issue to future research.
Finally, though our analysis starts just after the great financial crisis, owing to the availability of our survey database, we would expect our results not to be specific to the period that we study. Once more data become available, further research should be devoted to substantiating our assessment over the business cycle, in particular considering the long-term impact of the Covid-19 pandemic on firms' efficiency.
Previous versions of this paper were presented at the World Finance Banking Symposium 2019 (Delhi), at the XVIII SIEPI Annual Conference, University Cà Foscari, Venezia, 30–31 January 2020, at the 27th Annual MFS conference, 28 June 2020, and at the Virtual North American Productivity Workshop (vNAPW XI), 8–12 June 2020. We thank the discussants and participants of these conferences for useful suggestions and comments. Furthermore, we thank the participants of the seminars—where the paper was presented—for helpful comments. This paper contains views of the authors and not necessarily those of the European Central Bank. Usual disclaimer applies. Grateful acknowledgements are due to the European Central Bank for making available the Survey on the Access to Finance of Enterprises (SAFE) data set. Graziella Bonanno gratefully acknowledges research grant [300399FRB20BONAN] from the University of Salerno. Stefania P.S. Rossi gratefully acknowledges research grant [FRA 2018] from the University of Trieste.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Econometric methods used to estimate economic efficiency are largely employed in several strands of literature (among others, Aiello and Bonanno, 2018, and Zhang et al., 2021, for the banking sector; Le et al., 2018, Hanousek et al., 2015, and Bonanno, 2016, for manufacturing and non-manufacturing firms).
In the two-step approach, inefficiency is first estimated using a frontier and, in the second step, the estimated efficiency scores are the dependent variable in a subsequent regression (Greene, 1993). As shown by Lensink and Meesters (2014) and Wang and Schmidt (2002), the two-step approach suffers from the fact that inefficiency is assumed to be identically and independently distributed in the main frontier equation, while it depends on other variables in the inefficiency equation.
We use the Translog function to model the form of the frontiers (expressed in log-linear form), which satisfies the assumptions of non-negativity, concavity and linear homogeneity (Kumbhakar and Lovell, 2000). We take into account the constraint of homogeneity with respect to input prices, which requires the input price elasticities to sum to one.
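As a purely illustrative sketch of how this restriction can be imposed (a two-input-price example, not the exact frontier estimated in the paper), profit and the labour price can be normalised by the capital price, so that the homogeneity constraint holds by construction:

$$
\ln\!\left(\frac{\pi}{w_k}\right) = \alpha_0 + \alpha_1 \ln\!\left(\frac{w_l}{w_k}\right) + \frac{1}{2}\,\alpha_{11}\left[\ln\!\left(\frac{w_l}{w_k}\right)\right]^{2} + (\text{other frontier regressors and interactions}) + v - u .
$$

Estimating the frontier in these normalised terms guarantees that the input price elasticities sum to one.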
We choose value added to proxy profit, as it includes both revenue and cost information. Moreover, by choosing this variable we overcome the practical problem of having too many negative values for the profit variable. Empirically, the correlation between value added and profit is very high.
Exhaustive discussions of alternative versus traditional profit efficiency are in Berger and Mester (1997) and Vander Vennet (2002).
As in Berger and Mester (1997), Fitzpatrick and McQuinn (2008) and Huizinga et al. (2001), we transform profits by adding the absolute value of the minimum profit plus one to actual profits. This ensures that \(\log(\text{Profit}) = \log\left[\pi + |\pi^{\min}| + 1\right]\) is defined in \([0, +\infty)\).
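A minimal numerical sketch of this transformation (with made-up profit values, not data from the paper) shows that the argument of the logarithm is always at least one, so the transformed variable is non-negative:

```python
import math

# Toy profit values in arbitrary units; some are negative, as in real samples.
profits = [-150.0, -3.2, 0.0, 12.5, 480.0]

# Shift by |min profit| + 1 so the log argument is >= 1 for every observation.
shift = abs(min(profits)) + 1.0
log_profits = [math.log(p + shift) for p in profits]

assert all(p + shift >= 1.0 for p in profits)   # hence every log value is >= 0
print([round(v, 3) for v in log_profits])
```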
Micro and Small firms register a turnover of less than 2 million euros and of between 2 and 10 million euros, respectively; Medium firms have a turnover of between 10 and 50 million euros; Large enterprises have a turnover of more than 50 million euros. In our analysis, Micro firms are the reference group.
See Eurostat: https://ec.europa.eu/eurostat/cache/metadata/Annexes/htec_esms_an3.pdf. Based on NACE Rev. 2 at the 2-digit level, Eurostat has compiled a classification of 12 sectors and subsectors according to their degree of technology and knowledge intensity. In the paper, we use the 5 main sectors. The Supplementary material also reports the estimations for those 5 sectors.
The information on this variable (question Q1 in the survey) is provided by SAFE every second wave and refers to the previous 12 months, i.e. two waves. As the SAFE survey is conducted every six months, in order to restore this information at the wave level, we replicate these data for firms present in consecutive waves.
In the empirical analysis we use this proxy when the measure of finance constraints is the Cash flow/Total assets ratio because of the high correlation with the profit margin ratio.
The selection of those countries is driven by the availability of the data after the merge of surveyed firms in SAFE with the financial statements in the dataset from BvD AMADEUS.
This is due to the availability of the variable Product innovation from the 8th wave onwards in SAFE.
We use the 2-digit NACE classification used in the survey to define the two sectors. Industry includes manufacturing, mining, electricity, gas and water supply, while Services include construction, wholesale and retail trade, transport, accommodation, food services and other services to business or persons. We exclude public administration, financial and insurance services.
Although the focus of SAFE is on SMEs, the survey also provides information on large firms. As for Services, 12% of our sample are large enterprises; in the case of Industry, they are 16%.
Financial leverage is defined as the ratio of short- and long-term debt, excluding trade credit and provisions, to total assets, while the return on equity (ROE) is the amount of net profit earned as a percentage of shareholders' equity.
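Restating these two definitions as formulas (no assumptions beyond the wording above):

$$
\mathit{Leverage} = \frac{\text{short-term debt} + \text{long-term debt (excluding trade credit and provisions)}}{\text{total assets}}, \qquad
\mathit{ROE} = \frac{\text{net profit}}{\text{shareholders' equity}} .
$$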
The variables Use bank loans, Use credit lines and Use bank credit capture firms' use of banking products as reported by firms in the survey. They are dummy variables equal to 1 if the corresponding financing source is used by the firm and 0 otherwise.
Long-term debt is defined as the ratio between long-term debt and total financial debt. Interest ratio is the ratio between the interest payable on short- and long-term debt, accrued during the period covered by the financial statements, and total financial debt.
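Equivalently, as formulas (a restatement of the definitions above):

$$
\mathit{Long\text{-}term\ debt} = \frac{\text{long-term debt}}{\text{total financial debt}}, \qquad
\mathit{Interest\ ratio} = \frac{\text{interest payable on short- and long-term debt}}{\text{total financial debt}} .
$$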
The variance \(\sigma^2\) is equal to the sum of the variances of the two error components: \(\sigma^2 = \sigma_u^2 + \sigma_v^2\). The parameter \(\gamma\) is equal to \(\sigma_u^2/\sigma^2\): a value of zero indicates that deviations from the frontier are due only to random error, while values close to one indicate that the distance from the frontier is due to inefficiency.
AIC is equal to \(2k - 2\,\text{log-likelihood}\), where \(k\) is the number of estimated parameters; BIC is equal to \(\ln(N)\,k - 2\,\text{log-likelihood}\), where \(N\) is the number of observations.
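A small sketch of these two information criteria, using purely illustrative numbers (not estimates from the paper):

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n_obs: int) -> float:
    """Bayesian information criterion: ln(N) * k - 2 * log-likelihood."""
    return math.log(n_obs) * k - 2 * log_likelihood

# Illustrative values only.
ll, k, n = -1000.0, 25, 5000
print(round(aic(ll, k), 1), round(bic(ll, k, n), 1))
```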
In order to exclude the possibility that our findings are driven by the contemporaneous presence of financial constraints and innovation, we run different specifications for Industry and Services, in which we introduce one by one the three proxies for financial constraints (without Product innovation)—Cash flow ratio, Problem of Finance, Finance obstacles—and the variable Product innovation (without the variables accounting for financial constraints). The main results on the impact of the different variables on efficiency are confirmed and are available upon request.
In order to disentangle possible combined effects of financial frictions and innovation efforts, we also estimated model specifications where we included interaction terms between Product innovation and each indicator of financial constraints. Estimates (available upon request) on the interactions did not provide conclusive results.
We are aware that our approach is far from being conclusive in eliminating other possible sources of endogeneity which might affect the relationship under scrutiny. Other approaches have been used for addressing the endogeneity of inputs and output in the SFA (Lien et al., 2018 ). We acknowledge this potential limitation of our study in the conclusion section of this paper.
Untabulated results are available upon request.
Similar results are obtained when we use the other two indicators of financial frictions. The results are available upon request.
The variable Long-term debt is the ratio between long-term debt and total financial debt. Interest ratio is measured as the ratio between the interest payable on short- and long-term debt, accrued during the period covered by the financial statements, and total financial debt.
We find analogous results, available on request, when we use alternative variables for measuring the interest burden, such as the interest coverage ratio, which is defined as the ratio between earnings before interest and taxes and the interest payments due within the same period.
Untabulated regressions are available upon request.
Acemoglu, D., Aghion, P., & Zilibotti, F. (2006). Distance to frontier, selection, and economic growth. Journal of the European Economic Association, 4(1), 37–74.
Acharya, V., & Xu, Z. (2017). Financial dependence and innovation: The case of public versus private firms. Journal of Financial Economics, 124(2), 223–243.
Agénor, P.-R., & Pereira da Silva, L. (2017). Cyclically adjusted provisions and financial stability. Journal of Financial Stability, 28, 143–162.
Aghion, P., Angeletos, G.-M., Banerjee, A., & Manova, K. (2010). Volatility and growth: Credit constraints and the composition of investment. Journal of Monetary Economics, 57, 246–265.
Aghion, P., Askenazy, P., Berman, N., Cette, G., & Eymard, L. (2012). Credit constraints and the cyclicality of R&D investment: Evidence from France. Journal of the European Economic Association, 10, 1001–1024.
Aghion, P., Bloom, N., Blundell, R., Griffith, R., & Howitt, P. (2005). Competition and innovation: An inverted-U relationship. Quarterly Journal of Economics, 120, 701–728.
Aghion, P., & Howitt, P. (1998). Endogenous growth theory. MIT Press.
Aiello, F., & Bonanno, G. (2018). On the sources of heterogeneity in banking efficiency literature. Journal of Economic Surveys, 32(1), 194–225.
Aigner, D., Lovell, C. K., & Schmidt, P. (1977). Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6(1), 21–37.
Arbelo, A., Arbelo-Pérez, M., & Pérez-Gómez, P. (2021). Profit efficiency as a measure of performance and frontier models: A resource-based view. BRQ Business Research Quarterly, 24(2), 143–159.
Archibugi, D., & Coco, A. (2004). A new indicator of technological capabilities for developed and developing countries (ArCo). World Development, 32(4), 629–654.
Bartelsman, E., Haltiwanger, J., & Scarpetta, S. (2013). Cross-country differences in productivity: The role of allocation and selection. American Economic Review, 103(1), 305–334.
Battese, G. E., & Coelli, T. J. (1995). A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics, 20(2), 325–332.
Baum, C. F., Lööf, H., Nabavi, P., & Stephan, A. (2017). A new approach to estimation of the R&D–innovation–productivity relationship. Economics of Innovation and New Technology, 26(1–2), 121–133.
Baumann, J., & Kritikos, A. S. (2016). The link between R&D, innovation and productivity: Are micro firms different? Research Policy, 45(6), 1263–1274.
Becker, B. (2015). Public R&D policies and private R&D investment: A survey of the empirical evidence. Journal of Economic Surveys, 29(5), 917–942.
Berger, A. N., & Mester, L. J. (1997). Inside the black box: What explains differences in the efficiencies of financial institutions? Journal of Banking & Finance, 21(7), 895–947.
Bernanke, B., Gertler, M., & Gilchrist, S. (1996). The financial accelerator and flight to quality. Review of Economics and Statistics, 78, 1–15.
Bhaumik, S. K., Das, P. K., & Kumbhakar, S. C. (2012). A stochastic frontier approach to modelling financial constraints in firms: An application to India. Journal of Banking & Finance, 36(5), 1311–1319.
Binnui, A., & Cowling, M. (2016). A conceptual framework for measuring entrepreneurship and innovation of young hi-technology firms. GSTF Journal on Business Review, 4(3), 32–47.
Bonanno, G. (2016). ICT and R&D as inputs or efficiency determinants? Analysing Italian manufacturing firms (2007–2009). Eurasian Business Review, 6(3), 383–404.
Bravo-Ortega, C., & García-Marín, A. (2011). R&D and productivity: A two way avenue? World Development, 39(7), 1090–1107.
Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304.
Butler, A. W., & Cornaggia, J. (2011). Does access to external finance improve productivity? Evidence from a natural experiment. Journal of Financial Economics, 99(1), 184–203.
Calza, E., Goedhuys, M., & Trifković, N. (2018). Drivers of productivity in Vietnamese SMEs: The role of management standards and innovation. Economics of Innovation and New Technology, 28(1), 1–22.
Carbo-Valverde, S., Degryse, H., & Rodríguez-Fernández, F. (2015). The impact of securitization on credit rationing: Empirical evidence. Journal of Financial Stability, 20, 36–50.
Castellani, D., Piva, M., Schubert, T., & Vivarelli, M. (2019). R&D and productivity in the US and the EU: Sectoral specificities and differences in the crisis. Technological Forecasting and Social Change, 138, 279–291.
Cefis, E., & Ciccarelli, M. (2005). Profit differentials and innovation. Economics of Innovation and New Technology, 14(1–2), 43–61.
Cette, G., Fernald, J., & Mojon, B. (2016). The pre-Great Recession slowdown in productivity. European Economic Review, 88, 3–20.
Chen, C. M., Delmas, M. A., & Lieberman, M. B. (2015). Production frontier methodologies and efficiency as a performance measure in strategic management research. Strategic Management Journal, 36(1), 19–36.
Coelli, T. J., Rao, D. S. P., O'Donnell, C. J., & Battese, G. E. (2005). An introduction to efficiency and productivity analysis. Springer Science & Business Media.
Cohen, W. M., & Klepper, S. (1996). A reprise of size and R&D. The Economic Journal, 106, 925–951.
Cornwell, C. M., & Smith, P. C. (2008). Stochastic frontier analysis and efficiency estimation. In P. Sevestre & L. Mátyás (Eds.), The econometrics of panel data (pp. 697–726). Springer Verlag.
Cowan, K., Drexler, A., & Yanez, Á. (2015). The effect of credit guarantees on credit availability and delinquency rates. Journal of Banking & Finance, 59, 98–110.
Dabla-Norris, E., Kersting, E. K., & Verdier, G. (2012). Firm productivity, innovation, and financial development. Southern Economic Journal, 79(2), 422–449.
Dai, X., & Sun, Z. (2021). Does firm innovation improve aggregate industry productivity? Evidence from Chinese manufacturing firms. Structural Change and Economic Dynamics, 56, 1–9.
European Commission. (2013). Product market review 2013: Financing the real economy. European Economy 8, 2013. ISBN 978-92-79-33667-6. https://doi.org/10.2765/58867.
de Jong, J. P., & Marsili, O. (2006). The fruit flies of innovations: A taxonomy of innovative small firms. Research Policy, 35(2), 213–229.
Deeds, D. L. (2001). The role of R&D intensity, technical development and absorptive capacity in creating entrepreneurial wealth in high technology start-ups. Journal of Engineering and Technology Management, 18(1), 29–47.
Díaz-Díaz, N. L., Aguiar-Díaz, I., & De Saá-Pérez, P. (2008). The effect of technological knowledge assets on performance: The innovative choice in Spanish firms. Research Policy, 37(9), 1515–1529.
Dosi, G., Grazzi, M., & Moschella, D. (2015). Technology and costs in international competitiveness: From countries and sectors to firms. Research Policy, 44(10), 1795–1814.
Falk, M., & Hagsten, E. (2021). Innovation intensity and skills in firms across five European countries. Eurasian Business Review, 11(3), 371–394.
Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society Series A (General), 120(3), 253–290.
Farre-Mensa, J., & Ljungqvist, A. (2016). Do measures of financial constraints measure financial constraints? The Review of Financial Studies, 29(2), 271–308.
Fazzari, S. M., Hubbard, R. G., Petersen, B. C., Blinder, A. S., & Poterba, J. M. (1988). Financing constraints and corporate investment. Brookings Papers on Economic Activity, 1988(1), 141–206.
Ferrando, A., & Mulier, K. (2015). Financial obstacles and financial conditions of firms: Do perceptions match the actual conditions? The Economic and Social Review, 46(1), 87–118.
Ferrando, A., Popov, A., & Udell, G. F. (2017). Sovereign stress and SMEs' access to finance: Evidence from the ECB's SAFE survey. Journal of Banking & Finance, 81, 65–80.
Ferrando, A., & Ruggieri, A. (2018). Financial constraints and productivity: Evidence from euro area companies. International Journal of Finance & Economics, 23(3), 257–282.
Ferreira, P. J. S., & Dionísio, A. T. M. (2016). What are the conditions for good innovation results? A fuzzy-set approach for European Union. Journal of Business Research, 69(11), 5396–5400.
Fitzpatrick, T., & McQuinn, K. (2008). Measuring bank profit efficiency. Applied Financial Economics, 18(1), 1–8.
Foster, L., Haltiwanger, J., & Syverson, C. (2008). Reallocation, firm turnover, and efficiency: Selection on productivity or profitability? American Economic Review, 98(1), 394–425.
García-Quevedo, J., Segarra-Blasco, A., & Teruel, M. (2018). Financial constraints and the failure of innovation projects. Technological Forecasting and Social Change, 127, 127–140.
George, G., Zahra, S. A., & Wood, D. R., Jr. (2002). The effects of business–university alliances on innovative output and financial performance: A study of publicly traded biotechnology companies. Journal of Business Venturing, 17(6), 577–609.
Geroski, P., Machin, S., & Van Reenen, J. (1993). The profitability of innovating firms. The RAND Journal of Economics, 24, 198–211.
Goedhuys, M., & Veugelers, R. (2012). Innovation strategies, process and product innovations and growth: Firm-level evidence from Brazil. Structural Change and Economic Dynamics, 23(4), 516–529.
Gopinath, G., Kalemli-Özcan, Ş., Karabarbounis, L., & Villegas-Sanchez, C. (2017). Capital allocation and productivity in South Europe. The Quarterly Journal of Economics, 132(4), 1915–1967.
Greene, W. H. (1993). The econometric approach to efficiency analysis. In The measurement of productive efficiency: Techniques and applications (pp. 92–250). Oxford University Press.
Greene, W. (2005). Fixed and random effects in stochastic frontier models. Journal of Productivity Analysis, 23(1), 7–32.
Griliches, Z. (1979). Issues in assessing the contribution of research and development to productivity growth. The Bell Journal of Economics, 10, 92–116.
Grossman, G., & Helpman, E. (1991). Innovation and growth in the global economy. MIT Press.
Guariglia, A., & Liu, P. (2014). To what extent do financing constraints affect Chinese firms' innovation activities? International Review of Financial Analysis, 36, 223–240.
Hanousek, J., Kočenda, E., & Shamshur, A. (2015). Corporate efficiency in Europe. Journal of Corporate Finance, 32, 24–40.
Heshmati, A., & Kim, H. (2011). The R&D and productivity relationship of Korean listed firms. Journal of Productivity Analysis, 36(2), 125–142.
Huizinga, H. P., Nelissen, J. H. M., & Vander Vennet, R. (2001). Efficiency effects of bank mergers and acquisitions in Europe. Working paper no. 088/3. Tinbergen Institute.
Janz, N., Lööf, H., & Peters, B. (2004). Firm level innovation and productivity: Is there a common story across countries? Problems and Perspectives in Management, 2, 184–204.
Jin, M., Zhao, S., & Kumbhakar, S. C. (2019). Financial constraints and firm productivity: Evidence from Chinese manufacturing. European Journal of Operational Research, 275(3), 1139–1156.
Kaplan, S. N., & Zingales, L. (1995). Do financing constraints explain why investment is correlated with cash flow? (No. w5267). National Bureau of Economic Research.
Karakaplan, M. U., & Kutlu, L. (2017). Endogeneity in panel stochastic frontier models: An application to the Japanese cotton spinning industry. Applied Economics, 49(59), 5935–5939.
Karlsson, C., & Olsson, O. (1998). Product innovation in small and large enterprises. Small Business Economics, 10(1), 31–46.
Kerr, W. R., & Nanda, R. (2009). Democratizing entry: Banking deregulations, financing constraints, and entrepreneurship. Journal of Financial Economics, 94(1), 124–149.
Kerr, W. R., & Nanda, R. (2015). Financing innovation. Annual Review of Financial Economics, 7, 445–462.
Klette, T. J., & Kortum, S. (2004). Innovating firms and aggregate innovation. Journal of Political Economy, 112(5), 986–1018.
Koellinger, P. (2008). The relationship between technology, innovation, and firm performance: Empirical evidence from e-business in Europe. Research Policy, 37(6), 1317–1328.
Kumbhakar, S. C., & Lovell, C. A. K. (2000). Stochastic frontier analysis. Cambridge University Press.
Kumbhakar, S. C., Ortega-Argilés, R., Potters, L., Vivarelli, M., & Voigt, P. (2012). Corporate R&D and firm efficiency: Evidence from Europe's top R&D investors. Journal of Productivity Analysis, 37(2), 125–140.
Laforet, S. (2008). Size, strategic, and market orientation affects on innovation. Journal of Business Research, 61(7), 753–764.
Laforet, S. (2013). Organizational innovation outcomes in SMEs: Effects of age, size, and sector. Journal of World Business, 48(4), 490–502.
Le, S. A., Walters, B., & Kroll, M. (2006). The moderating effects of external monitors on the relationship between R&D spending and firm performance. Journal of Business Research, 59(2), 278–287.
Le, V., Vu, X. B. B., & Nghiem, S. (2018). Technical efficiency of small and medium manufacturing firms in Vietnam: A stochastic meta-frontier analysis. Economic Analysis and Policy, 59, 84–91.
Leal-Rodríguez, A. L., Eldridge, S., Roldán, J. L., Leal-Millán, A. G., & Ortega-Gutiérrez, J. (2015). Organizational unlearning, innovation outcomes, and performance: The moderating effect of firm size. Journal of Business Research, 68(4), 803–809.
Leibenstein, H. (1978). General X-efficiency theory and economic development. Oxford University Press.
Leiponen, A. (2000). Competencies, innovation and profitability of firms. Economics of Innovation and New Technology, 9(1), 1–24.
Lensink, R., & Meesters, A. (2014). Institutions and bank performance: A stochastic frontier analysis. Oxford Bulletin of Economics and Statistics, 76(1), 67–92.
Lien, G., Kumbhakar, S. C., & Alem, H. (2018). Endogeneity, heterogeneity, and determinants of inefficiency in Norwegian crop-producing farms. International Journal of Production Economics, 201, 53–61.
Lööf, H., & Heshmati, A. (2006). On the relationship between innovation and performance: A sensitivity analysis. Economics of Innovation and New Technology, 15(4–5), 317–344.
Love, J. H., & Roper, S. (2015). SME innovation, exporting and growth: A review of existing evidence. International Small Business Journal, 33(1), 28–48.
Love, J. H., Roper, S., & Du, J. (2009). Innovation, ownership and profitability. International Journal of Industrial Organization, 27(3), 424–434.
Luo, Y., Tanna, S., & De Vita, G. (2016). Financial openness, risk and bank efficiency: Cross-country evidence. Journal of Financial Stability, 24, 132–148.
Maietta, O. W., & Sena, V. (2004). Profit sharing, technical efficiency change and finance constraints. In Employee participation, firm performance and survival (pp. 149–167). Emerald Group Publishing Limited.
Maietta, O. W., & Sena, V. (2010). Financial constraints and technical efficiency: Some empirical evidence for Italian producers' cooperatives. Annals of Public and Cooperative Economics, 81(1), 21–38.
Manaresi, F., & Pierri, N. (2017). Credit constraints and firm productivity: Evidence from Italy. Mo.Fi.R. Working Papers, 137.
Meeusen, W., & van den Broeck, J. (1977). Efficiency estimation from Cobb–Douglas production functions with composed error. International Economic Review, 18(2), 435–444.
go back to reference Midrigan, V., & Xu, D. Y. (2014). Finance and misallocation: Evidence from plant-level data. American Economic Review, 104(2), 422–458. CrossRef Midrigan, V., & Xu, D. Y. (2014). Finance and misallocation: Evidence from plant-level data. American Economic Review, 104(2), 422–458. CrossRef
go back to reference Nickell, S. (1996). Competition and corporate performance. Journal of Political Economy, 104, 724–746. CrossRef Nickell, S. (1996). Competition and corporate performance. Journal of Political Economy, 104, 724–746. CrossRef
go back to reference Nickell, S., & Nicolitsas, D. (1999). How does financial pressure affect firms? European Economic Review, 43(8), 1435–1456. CrossRef Nickell, S., & Nicolitsas, D. (1999). How does financial pressure affect firms? European Economic Review, 43(8), 1435–1456. CrossRef
go back to reference O'Mahony, M., & Vecchi, M. (2009). R&D, knowledge spillovers and company productivity performance. Research Policy, 38(1), 35–44. CrossRef O'Mahony, M., & Vecchi, M. (2009). R&D, knowledge spillovers and company productivity performance. Research Policy, 38(1), 35–44. CrossRef
go back to reference Ortega-Argilés, R., Piva, M., & Vivarelli, M. (2015). The productivity impact of R&D investment: Are high-tech sectors still ahead? Economics of Innovation and New Technology, 24(3), 204–222. CrossRef Ortega-Argilés, R., Piva, M., & Vivarelli, M. (2015). The productivity impact of R&D investment: Are high-tech sectors still ahead? Economics of Innovation and New Technology, 24(3), 204–222. CrossRef
go back to reference Pellegrino, G., & Piva, M. (2020). Innovation, industry and firm age: Are there new knowledge production functions? Eurasian Business Review, 10(1), 65–95. CrossRef Pellegrino, G., & Piva, M. (2020). Innovation, industry and firm age: Are there new knowledge production functions? Eurasian Business Review, 10(1), 65–95. CrossRef
go back to reference Pigini, C., Presbitero, A. F., & Zazzaro, A. (2016). State dependence in access to credit. Journal of Financial Stability, 27, 17–34. CrossRef Pigini, C., Presbitero, A. F., & Zazzaro, A. (2016). State dependence in access to credit. Journal of Financial Stability, 27, 17–34. CrossRef
go back to reference Pilar, P. G., Marta, A. P., & Antonio, A. (2018). Profit efficiency and its determinants in small and medium-sized enterprises in Spain. BRQ Business Research Quarterly, 21(4), 238–250. CrossRef Pilar, P. G., Marta, A. P., & Antonio, A. (2018). Profit efficiency and its determinants in small and medium-sized enterprises in Spain. BRQ Business Research Quarterly, 21(4), 238–250. CrossRef
go back to reference Romer, P. (1990). Endogenous technological change. Journal of Political Economy, 98, 72–102. CrossRef Romer, P. (1990). Endogenous technological change. Journal of Political Economy, 98, 72–102. CrossRef
go back to reference Sasidharan, S., Lukose, P. J., & Komera, S. (2015). Financing constraints and investments in R&D: Evidence from Indian manufacturing firms. The Quarterly Review of Economics and Finance, 55, 28–39. CrossRef Sasidharan, S., Lukose, P. J., & Komera, S. (2015). Financing constraints and investments in R&D: Evidence from Indian manufacturing firms. The Quarterly Review of Economics and Finance, 55, 28–39. CrossRef
go back to reference Schroeder, R. G., Bates, K. A., & Junttila, M. A. (2002). A resource-based view of manufacturing strategy and the relationship to manufacturing performance. Strategic Management Journal, 23(2), 105–117. CrossRef Schroeder, R. G., Bates, K. A., & Junttila, M. A. (2002). A resource-based view of manufacturing strategy and the relationship to manufacturing performance. Strategic Management Journal, 23(2), 105–117. CrossRef
go back to reference Schumpeter, J. A. (1942). Socialism, capitalism and democracy. Harper and Brothers. Schumpeter, J. A. (1942). Socialism, capitalism and democracy. Harper and Brothers.
go back to reference Sena, V. (2006). The determinants of firms' performance: Can finance constraints improve technical efficiency? European Journal of Operational Research, 172(1), 311–325. CrossRef Sena, V. (2006). The determinants of firms' performance: Can finance constraints improve technical efficiency? European Journal of Operational Research, 172(1), 311–325. CrossRef
go back to reference Shao, B. B., & Lin, W. T. (2016). Assessing output performance of information technology service industries: Productivity, innovation and catch-up. International Journal of Production Economics, 172, 43–53. CrossRef Shao, B. B., & Lin, W. T. (2016). Assessing output performance of information technology service industries: Productivity, innovation and catch-up. International Journal of Production Economics, 172, 43–53. CrossRef
go back to reference Solow, R. (1957). A contribution to the theory of economic growth. Quarterly Journal of Economics, 70, 65–94. CrossRef Solow, R. (1957). A contribution to the theory of economic growth. Quarterly Journal of Economics, 70, 65–94. CrossRef
go back to reference Srairi, S. A. (2010). Cost and profit efficiency of conventional and Islamic banks in GCC countries. Journal of Productivity Analysis, 34(1), 45–62. CrossRef Srairi, S. A. (2010). Cost and profit efficiency of conventional and Islamic banks in GCC countries. Journal of Productivity Analysis, 34(1), 45–62. CrossRef
go back to reference Stehrer, R., Baker, P., Foster, N., McGregor, N., Koenen, J., Leitner, S., Schricker, J., Strobel, T., Vieweg, H. G., Vermeulen, J., & Yagafarova, A. (2015). The relation between industry and services in terms of productivity and value creation. Wiiw Research Report 404. Stehrer, R., Baker, P., Foster, N., McGregor, N., Koenen, J., Leitner, S., Schricker, J., Strobel, T., Vieweg, H. G., Vermeulen, J., & Yagafarova, A. (2015). The relation between industry and services in terms of productivity and value creation. Wiiw Research Report 404.
go back to reference Ugur, M., Trushin, E., Solomon, E., & Guidi, F. (2016). R&D and productivity in OECD firms and industries: A hierarchical meta-regression analysis. Research Policy, 45(10), 2069–2086. CrossRef Ugur, M., Trushin, E., Solomon, E., & Guidi, F. (2016). R&D and productivity in OECD firms and industries: A hierarchical meta-regression analysis. Research Policy, 45(10), 2069–2086. CrossRef
go back to reference Urbano, D., & Alvarez, C. (2014). Institutional dimensions and entrepreneurial activity: An international study. Small Business Economics, 42(4), 703–716. CrossRef Urbano, D., & Alvarez, C. (2014). Institutional dimensions and entrepreneurial activity: An international study. Small Business Economics, 42(4), 703–716. CrossRef
go back to reference Vander-Vennet, R. (2002). Cost and profit efficiency of financial conglomerates and universal banks in Europe. Journal of Money, Credit, and Banking, 34(1), 254–282. CrossRef Vander-Vennet, R. (2002). Cost and profit efficiency of financial conglomerates and universal banks in Europe. Journal of Money, Credit, and Banking, 34(1), 254–282. CrossRef
go back to reference Vermoesen, V., Deloof, M., & Laveren, E. (2013). Long-term debt maturity and financing constraints of SMEs during the global financial crisis. Small Business Economics, 41(2), 433–448. CrossRef Vermoesen, V., Deloof, M., & Laveren, E. (2013). Long-term debt maturity and financing constraints of SMEs during the global financial crisis. Small Business Economics, 41(2), 433–448. CrossRef
go back to reference Wang, H.-J. (2003). A stochastic frontier analysis of financing constraints on investment: The case of financial liberalization in Taiwan. Journal of Business and Economic Statistics, 21, 406–419. CrossRef Wang, H.-J. (2003). A stochastic frontier analysis of financing constraints on investment: The case of financial liberalization in Taiwan. Journal of Business and Economic Statistics, 21, 406–419. CrossRef
go back to reference Wang, H. J., & Schmidt, P. (2002). One-step and two-step estimation of the effects of exogenous variables on technical efficiency levels. Journal of Productivity Analysis, 18(2), 129–144. CrossRef Wang, H. J., & Schmidt, P. (2002). One-step and two-step estimation of the effects of exogenous variables on technical efficiency levels. Journal of Productivity Analysis, 18(2), 129–144. CrossRef
go back to reference Yang, Z., Shao, S., Li, C., & Yang, L. (2020). Alleviating the misallocation of R&D inputs in China's manufacturing sector: From the perspectives of factor-biased technological innovation and substitution elasticity. Technological Forecasting and Social Change, 151, 119878. CrossRef Yang, Z., Shao, S., Li, C., & Yang, L. (2020). Alleviating the misallocation of R&D inputs in China's manufacturing sector: From the perspectives of factor-biased technological innovation and substitution elasticity. Technological Forecasting and Social Change, 151, 119878. CrossRef
go back to reference Zhang, D., Zheng, W., & Ning, L. (2018). Does innovation facilitate firm survival? Evidence from Chinese high-tech firms. Economic Modelling, 75, 458–468. CrossRef Zhang, D., Zheng, W., & Ning, L. (2018). Does innovation facilitate firm survival? Evidence from Chinese high-tech firms. Economic Modelling, 75, 458–468. CrossRef
go back to reference Zhang, J., Naseem, M. A., Ahmed, M. I., & Ali, R. (2021). Board independence and Chinese banking efficiency: A moderating role of ownership restructuring. Eurasian Business Review, 11(3), 517–536. CrossRef Zhang, J., Naseem, M. A., Ahmed, M. I., & Ali, R. (2021). Board independence and Chinese banking efficiency: A moderating role of ownership restructuring. Eurasian Business Review, 11(3), 517–536. CrossRef
go back to reference Zorzo, L. S., Diehl, C. A., Venturini, J. C., & Zambon, E. P. (2017). The relationship between the focus on innovation and economic efficiency: A study on Brazilian electric power distribution companies. RAI Revista De Administração e Inovação, 14(3), 235–249. CrossRef Zorzo, L. S., Diehl, C. A., Venturini, J. C., & Zambon, E. P. (2017). The relationship between the focus on innovation and economic efficiency: A study on Brazilian electric power distribution companies. RAI Revista De Administração e Inovação, 14(3), 235–249. CrossRef
July 2019, 18(4): 1827-1846. doi: 10.3934/cpaa.2019085
Second order non-autonomous lattice systems and their uniform attractors
Ahmed Y. Abdallah and Rania T. Wannan
Department of Mathematics, The University of Jordan, Amman 11942 Jordan
Received July 2018 Revised November 2018 Published January 2019
The existence of the uniform global attractor for a second order non-autonomous lattice dynamical system (LDS) with almost periodic symbols has been carefully studied. Considering the nonlinear operators $\left( f_{1i}\left( \dot{u}_{j}\mid j\in I_{iq_{1}}\right) \right)_{i\in \mathbb{Z}^{n}}$ and $\left( f_{2i}\left( u_{j}\mid j\in I_{iq_{2}}\right) \right)_{i\in \mathbb{Z}^{n}}$ of this LDS, to our knowledge this is the first investigation of the existence of uniform global attractors for such second order LDSs. There are some previous studies of first order autonomous and non-autonomous LDSs with similar nonlinear parts, cf. [3, 24]. Moreover, the LDS under consideration covers a wide range of second order LDSs: for specific choices of the nonlinear functions $f_{1i}$ and $f_{2i}$ we recover the autonomous and non-autonomous second order systems given by [1, 25, 26].
Keywords: Lattice dynamical system, absorbing set, uniform global attractor, almost periodic symbol.
Mathematics Subject Classification: Primary: 37L30; Secondary: 37L60.
Citation: Ahmed Y. Abdallah, Rania T. Wannan. Second order non-autonomous lattice systems and their uniform attractors. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1827-1846. doi: 10.3934/cpaa.2019085
[1] A. Y. Abdallah, Upper semicontinuity of the attractor for a second order lattice dynamical system, Discrete Contin. Dyn. Syst. Ser. B, 5 (2005), 899-916. doi: 10.3934/dcdsb.2005.5.899.
[2] A. Y. Abdallah, Long-time behavior for second order lattice dynamical systems, Acta Appl. Math., 106 (2009), 47-59. doi: 10.1007/s10440-008-9281-8.
[3] A. Y. Abdallah, Uniform global attractors for first order non-autonomous lattice dynamical systems, Proc. Amer. Math. Soc., 138 (2010), 3219-3228. doi: 10.1090/S0002-9939-10-10440-7.
[4] A. Y. Abdallah, Attractors for second order lattice systems with almost periodic symbols in weighted spaces, J. Math. Anal. Appl., 442 (2016), 761-781. doi: 10.1016/j.jmaa.2016.04.071.
[5] P. W. Bates, K. Lu and B. Wang, Attractors for lattice dynamical systems, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), 143-153. doi: 10.1142/S0218127401002031.
[6] V. Belleri and V. Pata, Attractors for semilinear strongly damped wave equation on $\mathbb{R}^3$, Disc. Cont. Dyn. Sys., 7 (2001), 719-735. doi: 10.3934/dcds.2001.7.719.
[7] T. Caraballo, F. Morillas and J. Valero, Random attractors for stochastic lattice systems with non-Lipschitz nonlinearity, J. Diff. Eqs. Appl., 17 (2011), 161-184. doi: 10.1080/10236198.2010.549010.
[8] T. Caraballo, F. Morillas and J. Valero, Attractors of stochastic lattice dynamical systems with a multiplicative noise and non-Lipschitz nonlinearities, J. Diff. Eqs., 253 (2012), 667-693. doi: 10.1016/j.jde.2012.03.020.
[9] H. Chate and M. Courbage (Eds.), Lattice systems, Phys. D, 103 (1997), 1-612. doi: 10.1016/S0167-2789(96)00256-4.
[10] V. V. Chepyzhov and M. I. Vishik, Non-autonomous dynamical systems and their attractors, Appendix in the book Asymptotic Behavior of Solutions of Evolutionary Equations (M. I. Vishik ed.), Cambridge University Press, 1992.
[11] V. V. Chepyzhov and M. I. Vishik, Non-autonomous evolution equations and their attractor, Russ. J. Math. Physics, 1 (1993), 165-190.
[12] V. V. Chepyzhov and M. I. Vishik, Attractors of non-autonomous dynamical systems and their dimension, J. Math. Pures Appl., 73 (1994), 279-333.
[13] S. N. Chow, Lattice Dynamical Systems, Dynamical System, Lecture Notes in Mathematics (Springer, Berlin), 2003, pp. 1-102. doi: 10.1007/978-3-540-45204-1_1.
[14] J. Huang, X. Han and S. Zhou, Uniform attractors for non-autonomous Klein-Gordon-Schrödinger lattice systems, Appl. Math. Mech. Engl. Ed., 30 (2009), 1597-1607. doi: 10.1007/s10483-009-1211-z.
[15] X. Jia, C. Zhao and X. Yang, Global attractor and Kolmogorov entropy of three component reversible Gray-Scott model on infinite lattices, Appl. Math. Comp., (2012). doi: 10.1016/j.amc.2012.03.036.
[16] B. M. Levitan and V. V. Zhikov, Almost Periodic Functions and Differential Equations, Cambridge Univ. Press, Cambridge, 1982.
[17] H. Li and L. Sun, Upper semicontinuity of attractors for small perturbations of Klein-Gordon-Schrödinger lattice system, Adv. Difference Equ., 2014 (2014), 300, 16 pp. doi: 10.1186/1687-1847-2014-300.
[18] J. Oliveira, J. Pereira and M. Perla, Attractors for second order periodic lattices with nonlinear damping, J. Diff. Eqs. Appl., 14 (2008), 899-921. doi: 10.1080/10236190701859211.
[19] R. Temam, Infinite-dimensional Dynamical Systems in Mechanics and Physics, 2nd edn., Appl. Math. Sci. 68, Springer, New York, 1997. doi: 10.1007/978-1-4612-0645-3.
[20] B. Wang, Asymptotic behavior of non-autonomous lattice systems, J. Math. Anal. Appl., 331 (2007), 121-136. doi: 10.1016/j.jmaa.2006.08.070.
[21] X. Yang, C. Zhao and J. Cao, Dynamics of the discrete coupled nonlinear Schrödinger-Boussinesq equations, Appl. Math. Comp., 219 (2013), 8508-8524. doi: 10.1016/j.amc.2013.01.053.
[22] C. Zhao, G. Xue and G. Lukaszewicz, Pullback attractors and invariant measures for discrete Klein-Gordon-Schrödinger equations, Discrete Cont. Dyn. Syst.-B, 23 (2018), 4021-4044.
[23] C. Zhao and S. Zhou, Compact uniform attractors for dissipative lattice dynamical systems with delays, Disc. Cont. Dyn. Sys., 21 (2008), 643-663. doi: 10.3934/dcds.2008.21.643.
[24] S. Zhou, Attractors for first order dissipative lattice dynamical systems, Physica D, 178 (2003), 51-61. doi: 10.1016/S0167-2789(02)00807-2.
[25] S. Zhou, Attractors and approximations for lattice dynamical systems, J. Diff. Eqs., 200 (2004), 342-368. doi: 10.1016/j.jde.2004.02.005.
[26] S. Zhou and M. Zhao, Uniform exponential attractor for second order lattice system with quasi-periodic external forces in weighted space, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 24 (2014), 9 pp. doi: 10.1142/S0218127414500060.
Performance and working mechanism of tension–compression composite anchorage system for earthen heritage sites
Donghua Wang1,
Kai Cui ORCID: orcid.org/0000-0002-6060-61301,2,
Guopeng Wu1,2,
Fei Feng1 &
Xiangpeng Yu1
Although the fully grouted tension anchorage system (T-anchorage system) has been widely used for anchoring earthen heritage sites, it has always suffered from debonding failure at the bolt–slurry interface and strong attenuation of the interfacial shear stress along the embedment length. In this article, a new type of fully grouted tension–compression composite anchorage system (TC-anchorage system) is proposed, formed by fixing a pressure-bearing body with the same diameter as the anchor hole at the end of the tension bolt. Based on a preliminary analysis and comparison of the structure and working mechanism of the two anchorage systems, in situ anchorage, pull-out tests and interfacial stress–strain monitoring were carried out for the T-anchorage systems and the TC-anchorage systems. The test results show that the average ultimate load of the TC-anchorage system is about 40% higher than that of the T-anchorage system. TC-anchorage systems are characterised by small elastic deformation and large plastic deformation. With increasing load, the strain at the bolt–slurry interface varies progressively along the embedment length from a positively skewed, through a bimodal, to a negatively skewed curve. The working mechanism of TC-anchorage systems includes three progressive stages: bolt–slurry interface debonding, conical slurry formation and slurry extrusion. The research conclusions provide a reference for the application of TC-anchorage systems in the protection and reinforcement of earthen heritage sites.
Earthen heritage sites are the cultural heritage with soil as the main building material. They are the direct physical evidence of the origin and evolution of human civilization and the important historical roots of excellent traditional culture. They have great historical, artistic, and scientific value. Earthen heritage sites are abundant in the mainly arid zones of Africa, Asia, North and South America, and the Middle East and are also quite rich in types [1]. For example, in China's arid environment and along the Silk Road, the Great Wall [2] and its subsidiary buildings from the Qin Dynasty to the Ming Dynasty, as well as Buddhist temple sites and ancient cities in different historical periods are representatives of earthen heritage sites [1]. However, earthen heritage sites have been eroded by various natural forces such as wind erosion, rain erosion, freezing and thawing, earthquakes and human activities for a long time. Collapse, fissure and excavation, which directly or indirectly lead to the destabilization and destruction of earthen heritage sites, have been widely developed and have become the primary problem endangering the safe preservation of earthen heritage sites [3]. Therefore, the protection methods of stability control of earthen heritage sites have become a research hotspot.
Since the 1960s, fully grouted tension bolts (T-bolts) have been used in civil and mining engineering as temporary and permanent reinforcement components, and considerable progress has been made in both practical experience and theoretical research [4,5,6]. The T-anchorage system fully reflects the principles of making maximum use of the existing structure, repairing it in a concealed way, and adjusting the strength of the repairs to an appropriate degree. As an early attempt, the T-anchorage system has been applied to the reinforcement of earthen heritage sites along the Silk Road since the 1990s [7]. However, anchorage materials widely used in conventional civil engineering, such as metal bolts [8], cement mortar [9], and resin slurry [10], are poorly compatible with earthen heritage sites in their physical, mechanical, and chemical properties. Consequently, such bolts and slurries have insufficient durability, and their deformation behaviour is inconsistent with that of earthen heritage sites [1]. Therefore, these anchorage materials cannot be applied directly to the reinforcement of earthen heritage sites, and the corresponding research results can only serve as a reference.
However, it should be noted that, guided by the conservation philosophy of "minimal intervention, maximum compatibility and unchanged status", current research focuses on the selection of anchorage materials, the optimization of anchorage technology, and the macroscopic anchorage performance and mechanism of the T-anchorage system. Research on bolt types has successively produced wood bolts [11], bamboo bolts [12], bamboo rebar composite bolts [13], geotechnical filaments [14], and GFRP bolts [15]. For grouting materials, modified grouts based on high modulus potassium silicate solution [16], glutinous rice paste [17] and modified polyvinyl alcohol [18] as the main cementing materials, as well as modified grout based on calcined ginger nuts [19], have appeared successively. However, in the T-anchorage system for earthen heritage sites, the physical and mechanical properties of bolt and slurry cannot be made perfectly compatible with those of the sites, so the dominant failure mode is debonding of the bolt–slurry interface, and the interfacial shear stress attenuates strongly along the embedment length. Therefore, it is difficult to improve the performance of the anchorage system only by improving the physical and mechanical indexes of the slurry and bolt materials. Moreover, with the continuous development of anchorage technology, excellent anchorage performance characterised by "short embedment length, high anchorage force, large deformation resistance" has become an inevitable trend in the development of anchorage systems for earthen heritage sites. Therefore, from the point of view of the interaction mechanism between bolt and slurry, the synergistic use of the compressive and shear resistance of the slurry can improve the anchorage performance of earthen heritage site anchorage systems more comprehensively.
In view of the above situation, and given the few reports of tension–compression composite anchorage in the field of earthen heritage site anchorage, a new type of fully grouted tension–compression composite anchorage system (TC-anchorage system) is proposed in this article, formed by consolidating a pressure-bearing body with the same diameter as the anchor hole at the end of the tension bolt. It is hoped that the TC-anchorage system can improve the anchorage force per unit embedment length and the large-deformation resistance of the structure, thereby reducing the protective damage caused by the existing T-anchorage system, increasing the safety reserve of the anchorage system against large deformation, and providing more reasonable spatio-temporal continuity of the stress process at the bolt–slurry interface. Therefore, the TC-anchorage system and T-anchorage system were selected as the research objects, and the structures and working mechanisms of the two anchorage systems were preliminarily expounded. In situ anchorage, pull-out tests, and stress–strain monitoring of the bolt–slurry interface were carried out for the TC-anchorage system and T-anchorage system. Then, based on the test results, the performance of the TC-anchorage system is analyzed. According to the interaction between bolt and slurry, the working mechanism of the TC-anchorage system is discussed. The applicability of the TC-anchorage system in reinforcing the mechanical stability of earthen heritage sites is evaluated. This study offers a reference and basis for further theoretical research and engineering application of the TC-anchorage system for earthen heritage sites.
Structure and working mechanism of TC-anchorage system
Research on the performance and working mechanism of the T-anchorage system has been carried out [11, 16, 17]: (1) Excavation of damaged T-anchorage systems shows that the site soil and slurry remained intact, the slurry–soil interface remained tight, and debonding and decoupling occurred at the bolt–slurry interface (Fig. 1). In the T-anchorage system, load is transmitted sequentially from bolt to slurry to soil under pull-out load, and the difference in elastic modulus between slurry and bolt is much larger than that between slurry and soil, which leads to the following situations: displacement of the anchor bolt head quickly passes to the end of the bolt; relative movement occurs between bolt and slurry in the tension anchorage section; and the bolt produces stronger axial shear and weaker radial extrusion on the thin-bedded slurry around it [17]. The debonding and decoupling failure modes of the bolt–slurry interface take two forms (Fig. 2): Slurry shearing—the bolt shears the thin-bedded slurry around it; Dilational slip—when the bolt is forced to move forward under pull-out load, if the confining pressure exerted by the slurry on the bolt is insufficient or uneven, the bolt ribs will not cut the slurry ribs but climb over them. The bolt ribs simultaneously extrude the surrounding slurry ribs radially, or dilate and extrude the slurry toward the side of weaker confining pressure, and the bolt undergoes undulating slip relative to the slurry. (2) The stress at the bolt–slurry interface is transmitted directly from the front end of the embedment length to the back end. It is distributed unevenly over the embedment length and attenuates sharply toward the back end. This results in obvious stress concentration at the bolt–slurry interface at the front end of the embedment length [17, 18], and the bond stress easily exceeds the ultimate bond strength of the interface, resulting in interface softening, which leads to debonding and decoupling of the bolt–slurry interface [20]. Therefore, although the compressive strength of the slurry is much greater than its shear strength, the pull-out performance of the T-anchorage system depends mainly on the shear strength of the thin-bedded slurry around the bolt, which fails to make full use of the compressive properties of the slurry.
Physical excavation after pull-out failure of T-anchorage system
Two types of debonding and decoupling of bolt–slurry interface in T-anchorage system
The fully grouted tension–compression composite bolt (TC-bolt) used in this study differs from the T-bolt conventionally used in earthen sites. Its structure is similar to a "T" shape; in other words, a pressure-bearing body is consolidated at the end of a T-bolt. The pressure-bearing body consists of a hollow wedge lock and a hollow truncated cone. The inner surface of the hollow truncated cone interlocks with the outer surface of the hollow wedge lock. The hollow wedge lock is fixed at the end of the bolt and the hollow truncated cone is fixed to the hollow wedge lock, which together form the TC-bolt. The maximum axial cross-section of the hollow truncated cone is equal to that of the cylindrical anchor hole. The tension section runs from the orifice to the front end of the pressure-bearing body, and the pressure-bearing section from the front end of the pressure-bearing body to the end of the bolt. The structure and stress diagram of the TC-bolt are shown in Fig. 3.
TC-bolt structure and interaction force between bolt and slurry. F: drawing force; τ: shear stress; P: pressure; L: embedment length; Lc: length of pressure-bearing section; blue dotted lines: boundary between tension section and pressure-bearing section
The working mechanism of the TC-anchorage system is more reasonable than that of the T-anchorage system. When the anchor head is under low load, the tension transfers from the front end of the embedment length to the back end, and a relative displacement trend or relative displacement occurs between bolt and slurry. The bolt in front of the hollow truncated cone is in a tension state, forming a tension anchorage section. The bolt–slurry interface of the tension section at the front end of the embedment length bears the main pull-out effect, and the main displacement also occurs here. With increasing load, the debonding and decoupling of the bolt–slurry interface transfer toward the back end of the embedment length and the displacement of the TC-anchorage system grows larger and larger. When the load is transferred to the hollow truncated cone at the end of the bolt, that is, when the bolt pulls the hollow truncated cone forward to produce a relative displacement trend or displacement, the hollow truncated cone extrudes the slurry located in front of it, which puts this slurry in compression and forms a pressure-bearing anchorage. As the drawing effect intensifies, the extrusion effect of the pressure-bearing body on the slurry becomes stronger and stronger. If the stiffness of the bolt structure is sufficient, the slurry in front of the hollow truncated cone will continue to yield and the main effective pull-out section will gradually move backward. Therefore, the bearing body formed by the tension section and the pressure-bearing section jointly carries the load on the anchor head. This load-sharing mode of the TC-bolt greatly reduces the load carried by the compressive anchorage section and the tension anchorage section respectively, and effectively reduces the stress concentration at the bolt–slurry interface. Therefore, the ultimate uplift bearing capacity of the TC-anchorage system is higher than that of the T-anchorage system, which can effectively save embedment length. Considering that the embedded length of the interface determines the main pull-out effect of the anchorage system, the stress process at the bolt–slurry interface of the TC-anchorage system has excellent space–time continuity and the interfacial stress concentration is small, which allows the compressive and shear properties of the slurry to be used synergistically and to the full.
According to the mechanical model of TC-anchorage system mentioned above (Fig. 3) and the equations of calculating T-anchorage system in the code [21, 22], it is considered that the ultimate anchorage force of TC-anchorage system consists of tension section anchorage force and pressure-bearing section anchorage force. The equations for calculating the ultimate anchorage force of TC-anchorage system are as follows:
$$P_{TC} = P_{t} + P_{c}$$
$$P_{t} = \mu \pi d(L - L_{c} )\tau_{f}$$
$$P_{c} = \frac{{\pi (D^{2} - d^{ 2} )}}{4}f_{cu}$$
where \(P_{TC}\) is the ultimate anchorage force of the TC-anchorage system (kN); \(P_{t}\) is the anchorage force of the tension section (kN); \(P_{c}\) is the anchorage force of the pressure-bearing section (kN); \(\mu\) is the influence factor of embedment length on bond strength; \(d\) is the diameter of the bolt (mm); \(L\) is the embedment length (mm); \(\tau_{f}\) is the standard bond value between bolt and slurry in the tension section (kPa); \(D\) is the diameter of the anchor hole (mm); \(L_{\text{c}}\) is the length of the pressure-bearing section (mm); \(f_{cu}\) is the uniaxial unconfined compressive strength of the slurry (kPa); \(\mu\), \(\tau_{f}\), \(L_{\text{c}}\) and \(f_{cu}\) can be obtained from the corresponding tests.
For the calculation Eq. (3) of anchorage force in pressure section, there are three aspects which do not match reality very well and need to be improved: (1) \(f_{cu}\) is the unconfined compressive strength of slurry, while slurry is in the state of three-dimensional stress, and the value calculated by \(f_{cu}\) is smaller than the actual compressive strength value; (2) only the crushing of slurry in pressure section is considered, but the pull-out effect formed by the cohesion between slurry and soil is not considered; (3) although the force between bolt and slurry in the pressure-bearing section is less than that in the tension section, there is also a certain pull-out resistance. Considering that the shear stress of bolt–slurry interface decreases along embedment length and the pressure-bearing section is at the end of embedment length, the pull-out force formed at the bolt–slurry interface in the pressure-bearing section is neglected. Therefore, the ultimate anchorage force of TC-anchorage system calculated by Eq. (1) needs to be improved more accurately.
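For readers who want to experiment with Eqs. (1)–(3), a minimal sketch implementing them directly is given below. The function only restates the formulas; in the example call, \(\tau_{f}\), \(L_{c}\) and \(f_{cu}\) follow the test values quoted later in this article, while the bolt diameter d and hole diameter D are assumed placeholders (the actual values are listed in Table 4, not reproduced here), so the printed number is purely illustrative.

```python
import math

def ultimate_anchorage_force_tc(d, D, L, Lc, tau_f, f_cu, mu=1.0):
    """Ultimate anchorage force of a TC-anchorage system, per Eqs. (1)-(3).

    d, D, L, Lc are in metres; tau_f and f_cu are in kPa; the result is in kN.
    """
    P_t = mu * math.pi * d * (L - Lc) * tau_f        # tension section, Eq. (2)
    P_c = math.pi * (D ** 2 - d ** 2) / 4.0 * f_cu   # pressure-bearing section, Eq. (3)
    return P_t + P_c                                 # Eq. (1)

# Illustrative call: tau_f = 0.19 MPa, Lc = 100 mm, f_cu = 1.92 MPa follow the tests
# reported below; d and D are assumed placeholders, not the Table 4 values.
print(ultimate_anchorage_force_tc(d=0.022, D=0.110, L=1.0, Lc=0.10,
                                  tau_f=190.0, f_cu=1920.0))
```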
Research objective
To explore the failure modes, ultimate loads, load–displacement curve characteristics, and the time–space law of stress transfer at the bolt–slurry interface of TC-anchorage system.
Based on the analysis of the performance and interface stress mechanism of the TC-anchorage system, the working mechanism of TC-anchorage system is preliminarily explored.
Comparing and analyzing the anchorage performance and working mechanism of TC-anchorage system and T-anchorage system, and evaluating the applicability of TC-anchorage system in earthen heritage sites anchorage.
First, the bolts and the samples of grout and heritage site soil were prepared in the laboratory. Next, their physical and mechanical indexes were tested. Finally, in situ pull-out tests were carried out for the TC-anchorage system and T-anchorage system. Two parallel bolt samples were set for each anchorage system, and strain gauges were used to monitor the stress–strain behaviour of the bolt–slurry interface.
Bolt preparation
The glass-fibre-reinforced plastic (GFRP) bolt is selected as the test bolt. The hollow truncated cone and hollow wedge lock are both GFRP materials. The inner spiral surface and outer surface of hollow wedge lock were coated with appropriate epoxy resin. Then, the hollow wedge lock was locked at the end of the bolt, and finally the hollow truncated cone was installed with hollow wedge lock. After the structure of the pressure-bearing body was consolidated, strain gauges were laid at a distance of 20 cm on the surface of the bolt. The strain gauge model is BE120-3BB(11)-X30 (resistance: 120.0 ± 0.3 Ω, sensitivity: 2.17 ± 1%) produced by AVIC. The physical objects and the locations of strain gauges of TC-bolt and T-bolt are shown in Fig. 4.
The physical objects and the locations of strain gauges of TC-bolt and T-bolt
The main materials used in this testing include water, SH solution, quicklime, fly ash, and heritage site soil. The water is tap water, whose quality meets the specifications of the "Concrete Water Standards (JGJ63-2006)" [23]. SH is a new type of polymer material, a liquid modified polyvinyl alcohol with a density of 1.09 g/cm³, and it has the advantages of non-toxicity, environmental protection, and excellent physical, chemical, and mechanical properties. The mass concentration of the original SH solution is 5% [24]. The mass concentration of the SH solution used in the test is 1.5%, obtained by diluting the original solution with laboratory water. Quicklime belongs to industrial grade I, in which the CaO content is 93.6%. The fly ash is also industrial grade I and has a particle size of 0.005–0.1 mm. Its chemical composition is shown in Table 1. The heritage site soil was retrieved from the in situ test site, and its particle size is less than 0.1 mm after mechanical grinding. The main physical parameters of the heritage site soil, determined according to the "Standard for Soil Test Method (GB/T50123-1999)" [25], are shown in Table 2.
Table 1 Chemical composition and content of fly ash
Table 2 Properties of heritage site soil
SH-(CaO + C + F) slurry was used as the grouting material. It was composed of quicklime (CaO), ground site soil (C) and fly ash (F) mixed with modified polyvinyl alcohol (SH) solution. The mass mixing ratio of the SH-(CaO + C + F) slurry was \(M_{CaO}:M_{C}:M_{F}\) = 3:2:5 and \(M_{SH}:M_{CaO+C+F}\) = 1:1. Slurry sample preparation was carried out in accordance with the "Technical Regulations for Soil Heritage Protection Test (WW/T0039-2012)" [26]. The slurry was shaped into cubic samples of size 70.7 mm × 70.7 mm × 70.7 mm. The ambient temperature and relative humidity of the laboratory were approximately 20 °C and 25%, respectively. The samples were numbered and demolded after being molded for 24 h, and then cured in the laboratory for 30 days.
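As a small illustration of the mix proportions just described, the sketch below converts the stated mass ratios into component masses for a batch. The 1 kg dry-mix batch size is chosen only for the example and is not taken from the article.

```python
def slurry_batch_masses(dry_mix_mass_g):
    """Component masses for the SH-(CaO + C + F) slurry.

    Dry mix ratio M_CaO : M_C : M_F = 3 : 2 : 5 by mass,
    and SH solution : dry mix = 1 : 1 by mass.
    """
    parts = {"quicklime CaO": 3, "ground site soil C": 2, "fly ash F": 5}
    total_parts = sum(parts.values())
    masses = {name: dry_mix_mass_g * p / total_parts for name, p in parts.items()}
    masses["SH solution (1.5 wt%)"] = dry_mix_mass_g  # equal mass of SH solution
    return masses

# Example: a 1 kg dry batch (batch size is arbitrary, chosen only for illustration)
for name, mass in slurry_batch_masses(1000.0).items():
    print(f"{name}: {mass:.0f} g")
```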
Test for physical and mechanical indicators of materials
The undisturbed heritage site soil was shaped into cubic test blocks of the same size as the slurry samples, in accordance with the abovementioned specification (WW/T0039-2012) [26]. Parameters such as the density, porosity, moisture content, strength, and elastic modulus of the slurry samples and of the undisturbed site soil were determined in accordance with the "Standard for Soil Test Method (GB/T50123-1999)" [25]. The physical and mechanical indexes of the GFRP bolts were determined by consulting the quality inspection report provided by the manufacturer. The test results for the physical and mechanical indexes of the anchorage materials are shown in Table 3.
Table 3 Physical and mechanical indexes of anchorage materials
In situ anchorage and pull-out test
An abandoned rammed earth wall near the Ming Dynasty Great Wall in the Linze section of Zhangye City, Gansu Province, China, was selected as the anchorage test site. The wall was rammed with red sand on a natural foundation. The thickness of each rammed layer is 18–22 cm. The wall profile is trapezoidal, with a 5 m bottom width, 3 m top width, and 3.5 m residual height. The anchorage systems were cured for more than 30 days after completion of the steps of hole formation, centred insertion of the bolt, high-pressure grouting and secondary grouting, and then the pull-out tests were carried out. The in situ test anchorage parameters are shown in Table 4.
Table 4 Arrangement of parameters of test groups in field test
The anchorage drawing instrument was the Beijing HiChance HCYL-60 comprehensive anchorage parameter measurement instrument. The hole diameter at the centre of the lifting jack was 60 mm, and the working stroke of the cylinder was 120 mm, with a measurement range of 0–500 kN. The field pull-out test system and equipment are shown in Fig. 5. The pull-out tests followed the "Code for the design and protection of dry earth sites for protection and reinforcement (WW/T 0038-2012)" [21] and the "Technical specification for rock and soil anchorage bolts and shotcrete support engineering (GB 50086-2015)" [27], and referred to the cyclic loading method of previous drawing tests [17, 18]. At each stage the preset drawing load is increased or decreased by 4 kN. The maximum load of each cycle is 4 kN larger than that of the previous cycle; that is, the pulling load follows the "0 → 4 → 8 → 4 → 8 → 16 → 8 → 4 → … (kN)" scheme. Loading stops once the preset load of the stage is reached. When the displacement reading changes by no more than 0.01 mm over three consecutive readings, the anchorage system is considered to have reached a stable state, and the next load stage can be applied. Ultimately, loading of all bolts was stopped because the displacement of the anchor head could no longer converge and the load could not be increased any further.
Field drawing test system and equipments. 1: bolt; 2: anchor head; 3: pressure sensor; 4: fixed bracket; 5: lifting jack; 6: displacement sensor; 7: backing plate; 8: leads; 9: catenary ropes
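The quoted stepwise cyclic scheme is written in compressed form. The sketch below generates one plausible reading of such a schedule (4 kN load steps, each cycle peaking 4 kN above the previous one, with unloading between cycles). It is offered only to make the shape of a stepwise cyclic schedule concrete; the exact load/unload path used in the tests is the one quoted in the text above.

```python
def cyclic_load_steps(final_peak_kn, step_kn=4):
    """One plausible stepwise cyclic loading schedule (illustrative reading only):
    each cycle peaks step_kn higher than the previous one, loading and unloading
    in step_kn increments."""
    steps = [0]
    peak = step_kn
    while peak <= final_peak_kn:
        steps += list(range(step_kn, peak + 1, step_kn))    # load up to the cycle peak
        steps += list(range(peak - step_kn, -1, -step_kn))  # unload back to zero
        peak += step_kn
    return steps

print(cyclic_load_steps(16))
# [0, 4, 0, 4, 8, 4, 0, 4, 8, 12, 8, 4, 0, 4, 8, 12, 16, 12, 8, 4, 0]
```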
Results and analysis
Ultimate load
As shown in Fig. 6, the average ultimate pull-out loads of the TC-anchorage system and T-anchorage system are 26.30 kN/m and 18.85 kN/m, respectively. Ignoring the difference in bolt structure and assuming that the shear stress is distributed uniformly along the embedment length, the average shear strengths of the bolt–slurry interface of the TC-anchorage system and T-anchorage system are 0.26 MPa and 0.19 MPa, respectively. Evidently, the average interfacial shear stress of the TC-anchorage system is about 40% higher than that of the T-anchorage system. This benefit comes from the positive role played by the TC-bolt structure in improving anchorage capacity, which is also supported by studies in other fields using different anchor materials and bolt structures, as well as by similar research on changing the stress mode between bolt and slurry by varying the bolt diameter [28,29,30]. When compared with the practical design value (3 kN/m) [31] from the perspective of stabilisation, the anchorage capacity provided by the TC-anchorage system better meets the protection requirements of earthen heritage sites.
Measured ultimate load and means of measured ultimate loads of two types of anchorage systems and the calculated value of ultimate load
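The 0.26 MPa and 0.19 MPa figures above follow from spreading the ultimate load uniformly over the bolt–slurry contact area, i.e. τ = q/(πd), with q the ultimate load per metre of embedment length. The sketch below reproduces that conversion; the bolt diameter used here is a placeholder back-figured to be consistent with the reported means, not necessarily the value given in Table 4.

```python
import math

def avg_interface_shear_mpa(load_per_metre_kn, bolt_diameter_m):
    """Average bolt-slurry interface shear stress (MPa), assuming a uniform
    distribution along the embedment length: tau = q / (pi * d)."""
    return load_per_metre_kn / (math.pi * bolt_diameter_m) / 1000.0  # kN/m^2 -> MPa

d_assumed = 0.032  # placeholder diameter (m), chosen to match the reported averages
print(round(avg_interface_shear_mpa(26.30, d_assumed), 2))  # TC-anchorage system -> 0.26
print(round(avg_interface_shear_mpa(18.85, d_assumed), 2))  # T-anchorage system  -> 0.19
```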
According to the results of the laboratory and field tests, the following parameters can be obtained: \(\tau_{f}\) is taken equal to the average shear strength of the bolt–slurry interface of the T-anchorage system, i.e., \(\tau_{f}\) = 0.19 MPa, \(\mu\) = 1, \(L_{\text{c}}\) = 100 mm and \(f_{cu}\) = 1.92 MPa. The value of \(P_{TC}\) calculated from Eq. (1) is 25.07 kN. From Fig. 6, it can be seen that \(P_{TC}\) is less than the actual mean ultimate load of the TC-anchorage system. This is mainly because the uniaxial unconfined compressive strength \(f_{cu}\) is used in calculating \(P_{c}\), whereas the slurry in the pressure-bearing section is in a three-dimensional stress state during the actual loading process; \(f_{cu}\) is therefore less than the actual compressive strength of the slurry, resulting in a calculated \(P_{TC}\) slightly smaller than the mean test value. For practical engineering, calculating the ultimate load of the TC-anchorage system using \(f_{cu}\) is biased towards safety.
Load–displacement
As shown in Fig. 7, there are three inflection points and four stages in load–displacement curve of TC-anchorage system during cyclic loading process. The load–displacement curve is in the elastic stage before the first inflection points (yield point), which is basically linear. The curve between yield point and the second inflection point (reinforcement point) is in the plastic stage, which shows a non-linear and non-uniform increasing relationship. In this stage, the bolt–slurry interface enters the stage of full plasticity, and the pressure-bearing body and slurry are compressed tightly into rigid body, which can be proved from each cyclic path of the load–displacement curve. No response is observed for displacement of each cyclic path regardless of load rise or fall. Only when the load exceeds the maximum cyclic load of the previous stage and increases into the next stage of larger cyclic load, the displacement can continue to increase. The curve between the reinforcement point and the third inflection point (peak point) is reinforcement service stage, and the load–displacement curve shows an obvious non-linear non-uniform increase relationship. When the load reaches peak point, load–displacement curve enters the stable stage. In the stable stage, the load value decreases slightly (1–2 kN) compared with the peak value, and then remains stable in the process of about 40 mm displacement after the peak value.
Load-displacement curves of two types of anchorage systems
During the cyclic loading process of T-anchorage system, there are two inflection points and three stages in the load–displacement curve. Similar to TC-anchorage system, the load–displacement curve is elastic before the first inflection point (yield point), which is basically linear. The curve between yield point and the second inflection point (peak point) is the plastic stage, which shows a non-linear and non-uniform increase relationship. In this stage, the bolt–slurry interface enters a stage of plastic deformation with a certain amount of elastic deformation. This phenomenon can be evidenced by the elastic and plastic strains in the load–displacement curves of the two cyclic loading processes. When the load reaches peak point, load–displacement curve enters the failure stage, which includes three processes. First, during the displacement of about 5 mm after the peak point, the load shows a sharp decline (8–10 kN). Then, the load increases slightly (2–3 kN) during the next 4 mm; Thereafter, during the next 70 mm displacement, the load (residual load) remained basically stable.
Moreover, according to the characteristics of load–displacement curves of two kinds of anchorage systems, TC-anchorage system shows greater ultimate load and ductility than T-anchorage system. This is manifested in: (1) although the load at the yield point of TC-anchorage system and T system are both 8 kN, the displacement at the yield point of TC-anchorage system is about 7.5 mm larger than that of T-anchorage system. (2) The load–displacement curve of TC-anchorage system has more reinforcement point and reinforcement service stage than that of T-anchorage system, and this reinforcement service stage has the characteristics of small increase of load and large increase of displacement. (3) The displacement of the peak point of TC-anchorage system is 7–14 times of that of T-anchorage system. (4) The post-peak load of T-anchorage system decreases sharply and the residual load is about 60–70% of the peak load. However, the residual load of TC-anchorage system anchorage system decreases only slightly, which is about 92–97% of the peak load and remains stable during the process of large displacement. Therefore, from the results and analysis of load–displacement curves of two kinds of anchorage systems, it can be concluded that T-anchorage system is a typical burst failure mode, while TC-anchorage system is an ideal progressive reinforced failure mode. The TC-anchorage system exhibits the expected characteristics of "high anchorage force, large deformation resistance". The TC-anchorage system still provides considerable anchorage force when large deformation (e.g., over 80 mm) occurs.
Strain distribution along the embedment length
Figure 8a, b shows the distribution of interfacial strain along the embedment length of each anchorage system under different levels of load. For TC1-anchorage system: (1) Elastic stage (0 → 8 kN). In 0 → 4 kN stage, the curve shows a positive skewness curve (a short curve on the left side and a long curve on the right side) with a peak value at L = 20 cm, in other words, the strain increases sharply from L = 0 to L = 20 cm, and decreases from L = 20 to L = 100 cm. In 4 → 8 kN stage, the strain of each monitoring point increases with the increase of load in varying degrees and the curve is still positive skewness distribution, but the peak value appears at L = 40 cm. This phenomenon shows that in 0 → 8 kN stage, the pull-out effect mainly occurs in the front of the embedment length. With the increase of load, the peak strain shifts back, that is, the pull-out stress of the interface transfers to the end of the embedment length. (2) Plastic stage (8 → 24 kN). In 8 → 12 kN stage, the curve still shows a positive skewed single peak distribution. Although the peak value is still at L = 40 cm, the strain increment at L = 40–100 cm is much larger than that at L = 0–20 cm. In 12 → 16 kN stage, the curve has a double peak shape, the higher peak value is L = 40 cm, the lower peak value is slightly prominent at L = 80 cm, and the strain increment at L = 60–100 cm is much larger than that at L = 0–40 cm. During the period of 16 → 20 kN, the double peak position of the curve remained unchanged, except that the strain at the peak value of L = 80 cm increased significantly. In 20 → 24 kN stage, the strain increment at L = 60–100 cm is much larger than that at L = 0–40 cm, and the peak value at L = 80 cm has exceeded that at L = 40 cm. During the 8 → 24 kN period, the curve experienced the typical stages of positive skewness, double peaks of front-high and back-low, and double peaks of front-low and back-high. The above phenomena show that: In this process, the interface strain at the back of the embedment length increases gradually, and the uplift resistance provided by the back of the embedment length increases gradually. The bolt–slurry interface at the front of the embedment length is gradually decoupled and the main pull-out action section is gradually transited from the front tension anchorage section to the back end of the tension anchorage section and the pressure-bearing body to bear together. (3) Reinforcement service stage (24 → 28kN): In this stage, the curve shows a single peak distribution with negative skewness, and the peak value is at L = 80 cm. The closer the measuring point is to L = 80 cm, the greater the strain increase is. This indicates that the main pull-out part has transited to L = 80–100 cm, in other words, the pressure-bearing body plays the main pull-out role.
Strain distribution curve of bolt–slurry interface along embedment length. a TC1, b TC2, c T1, d T2
The distribution of interfacial strain along the embedment length in the TC2-anchorage system is similar to that in the TC1-anchorage system: (1) Elastic stage (0 → 8 kN). The curve has a positively skewed single-peak distribution, and the peak shifts from L = 20 to L = 40 cm. (2) Plastic stage (8 → 16 kN). The curves show double peaks at L = 40 cm and L = 80 cm, and the strain increment at L = 60–100 cm is much larger than that at L = 0–40 cm. The curve likewise shows a transition from double peaks with a high front and low back to double peaks with a low front and high back. (3) Reinforcement service stage (16 → 20 kN). The curve shows a negatively skewed single-peak shape with the peak at L = 80 cm. These phenomena show that, with the continuous increase of the pull-out load, the strain at the front of the embedment length increases only slightly, while the strain at the back increases substantially; the position where the bolt–slurry interface provides the main pull-out resistance transfers to the end of the embedment length. The larger the load, the stronger the pull-out resistance provided by the back part of the embedment length of the TC-anchorage system.
As shown in Fig. 8c, d, the strain distribution curve of the bolt–slurry interface in the T-anchorage system decays in a negative exponential manner along the embedment length, and the peak strain is always at L = 0 cm. As the load increases, the strain at each measuring point increases; the closer the measuring point is to L = 0 cm, the greater its strain increase and the more pronounced the curve attenuation. The strain is mainly distributed within L = 0–40 cm. This behavior is similar to the results of previous studies [17, 18]: the strain distribution at the bolt–slurry interface is not uniform, and the stress is mainly concentrated at the front end of the embedment length.
The strain distribution along the embedment length of the T-anchorage system decays in a negative exponential manner at all load levels, and the strain increment is always largest in the range L = 0–40 cm, which shows that only the front end of the embedment length plays a major role in uplift resistance. By contrast, the strain distribution along the embedment length of the bolt–slurry interface in the TC-anchorage system is a positively skewed single-peak distribution under low load; as the load increases it undergoes a dynamic bimodal transition from a high front peak with a low back peak to a low front peak with a high back peak; and under the ultimate load it presents a negatively skewed single-peak distribution. The distribution of strain at the bolt–slurry interface along the embedment length of the TC-anchorage system shows that the whole embedment length, and especially the pressure-bearing section, contributes fully to the pull-out resistance. The TC-anchorage system therefore has more reasonable spatio-temporal continuity than the T-anchorage system. It is noteworthy, however, that the TC-anchorage system still has defects. Under the different load levels, the bolt–slurry interface of the TC-anchorage system is not loaded uniformly; instead, the load is carried locally and transfers in stages from the front end to the back end of the embedment length. Although this loading mode ensures the continuity of the force on the interface, it is not conducive to greatly improving the ultimate load of the TC-anchorage system.
Failure law
During the pull-out tests, no cracks appeared in the heritage site soil and the bolts were not broken. All tests were stopped when the displacement of the anchor head could not converge or the pull-out load could no longer increase. In the parallel field tests (Fig. 9a), the bolts of both anchorage systems were pulled out within the slurry, and a few radial cracks appeared in the slurry near the orifice (the front end of the embedment length). After the bolts were pulled out, the failure modes of the two anchorage systems showed obvious differences: (1) A certain amount of slurry remained bonded over the whole bolt surface of the T-anchorage system. The slurry–soil interface did not exhibit debonding or slippage. The slurry in the hole was not excessively damaged, and slurry powder remained at the bottom of the hole (Fig. 9b). Locally, the slurry in the hole showed thread-like impressions matching the shape of the bolt, with rough scratches in places. (2) A certain amount of slurry remained bonded to the bolt of the TC-anchorage system over the whole embedment length. The slurry formed a cone based on the pressure-bearing body at the end of the bolt (Fig. 9c); the surface of this conical slurry was smooth and bore friction marks. The slurry in the anchor hole was broken, and the hole wall was partly scratched but relatively intact. The residual slurry in the anchor hole had become small fragments or powder. A thin layer of slurry remained bonded in places on the inner wall of the hole, and parts of the thin soil layer on the hole wall showed fresh friction-fracture marks.
Failure law of the two types of anchorage system. a Phenomenon at the orifice of the failed anchorage system, b pulled-out T-bolt and failure hole, c pulled-out TC-bolt and failure hole, d the angle between the bottom of the conical slurry and the failure surface of the TC1-anchorage system
The measured heights of the conical slurry of the TC1-anchorage system and the TC2-anchorage system are 55 mm and 50 mm, respectively, and the calculated angles between the conical surface and the base are 66.43° and 64.36°, respectively (Fig. 9d). According to the Mohr–Coulomb criterion, the internal friction angle of the conical slurry can be obtained from Eq. (4):
$$\alpha = 45^\circ + \varphi / 2$$
where α is the angle between the conical surface and the plane of the hollow truncated cone (the base of the conical slurry), and φ is the internal friction angle of the conical slurry; both α and φ are measured in degrees (°).
The calculated values for the TC1-anchorage system and the TC2-anchorage system are 42.86° and 38.73°, respectively, which differ little from the internal friction angle measured in the laboratory (φ = 41.92°). This shows that the conical surface is a shear fracture surface formed by the combined action of the major principal stress provided by the hollow truncated cone and the minor principal stress provided by the hole wall of the heritage site soil.
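As a quick numerical check of Eq. (4), the internal friction angle implied by each measured cone angle can be recovered by inverting the relation as φ = 2(α − 45°). The short script below is an illustrative sketch added here for convenience (it is not part of the original study); it simply performs this inversion and compares the result with the laboratory value.

```python
# Invert Eq. (4): alpha = 45 + phi/2  =>  phi = 2 * (alpha - 45)
measured_cone_angles = {"TC1": 66.43, "TC2": 64.36}  # degrees, from Fig. 9d
phi_lab = 41.92  # internal friction angle of the slurry measured in the laboratory (degrees)

for system, alpha in measured_cone_angles.items():
    phi = 2.0 * (alpha - 45.0)
    print(f"{system}: alpha = {alpha:.2f} deg -> phi = {phi:.2f} deg "
          f"(lab value {phi_lab:.2f} deg, difference {phi - phi_lab:+.2f} deg)")
```

Running it reproduces the values quoted above (about 42.9° for TC1 and 38.7° for TC2), both within a few degrees of the laboratory measurement.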
From the above phenomena and results it can be concluded that slurry shearing and dilational slip lead to the debonding and decoupling of the bolt–slurry interface in the T-anchorage system. The failure mode of the tension section of the TC-anchorage system is consistent with that of the T-anchorage system, but the failure mode of the pressure-bearing section is completely different. The difference between the failure modes of the two anchorage systems is mainly related to the stress state of the confined structure in the pressure-bearing section of the TC-anchorage system, where the slurry is sheared into a conical body. According to Mohr strength theory, the shear strength of the failure surface is related to the properties of the slurry and to the normal stress acting on the failure surface. Under the confining pressure produced by the pressure-bearing body, the anchor hole wall and the slurry in the tension section, the normal stress on the slurry failure surface in the pressure-bearing section is effectively increased, which improves the shear strength of the slurry. After the conical slurry forms, the continuing pull-out load pushes it forward so that it compacts the surrounding slurry. The confining pressure of the hole wall and the compressive capacity of the slurry in the compaction zone are then fully mobilized, which once again produces an effective pull-out resistance on the conical slurry of the TC-anchorage system. This also reveals that the pull-out performance of the TC-anchorage system depends on the shear and compressive properties of the slurry in the pressure-bearing section.
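For reference, this argument can be stated in standard Mohr–Coulomb form (a textbook relation added here for clarity, not an equation given in the original derivation): the shear strength of the slurry failure surface is

$$\tau_{f} = c + \sigma_{n}\tan \varphi$$

where c is the cohesion of the slurry, σn is the normal stress acting on the failure surface, and φ is its internal friction angle. The confinement supplied by the pressure-bearing body, the hole wall and the tension-section slurry raises σn and therefore raises τf.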
Based on the results and analysis of the failure mode, ultimate load, load–displacement relationship and interface strain distribution along the embedment length obtained from the in situ pull-out tests, it can be concluded that the TC-anchorage system has excellent anchorage performance. In order to gain a clearer understanding of the working mechanism of the TC-anchorage system, a preliminary discussion of the experimental results, the failure mechanism of the anchorage system and the mechanism of mechanical interaction between the bolt and the slurry is given below.
According to existing research [32] and the present experimental results, the working mechanism of the TC-anchorage system includes three progressive stages. First, debonding–decoupling of the bolt–slurry interface: this stage is similar to the debonding–decoupling behavior of the bolt–slurry interface in the T-anchorage system. Weak chemical bonding, strong friction and mechanical interlocking act between the bolt and the slurry in the tension anchorage section. This stage occurs mainly within the elastic stage; the main pull-out resistance is provided by the frictional resistance of the tension anchorage section, while the pressure-bearing body is subjected only to a small static earth pressure (Fig. 10a). Viewed in terms of the bond along the whole tension section, the bond weakens first at the front end of the anchorage and the weakening keeps moving backwards, as demonstrated by the small strain increment at the front end of the embedment length, the large increment at the back end, and the backward shift of the peak-strain location.
Working mechanism of TC-anchorage system. a Tension section subjected to friction and pressure-bearing body subjected to static earth pressure, b conical surface penetrates gradually, c conical slurry formation, d compaction and crushing of slurry. F: drawing force; τ: shear stress; P: pressure; B: bolt; S: slurry; H: heritage site soil
Secondly, the formation of the conical slurry: after the peak static friction in the tension section is reached, if the load continues to increase the pressure-bearing body begins to move forward noticeably and its pressure on the slurry increases. The slurry in front of the pressure-bearing body begins to develop a local conical shear surface under the restraint of the hole wall and the constant compression of the bearing plate (Fig. 10b). If the external tension on the bolt continues to increase, the conical shear surface in the slurry expands from bottom to top and gradually joins up to form a continuous conical surface. Finally, the sheared slurry and the pressure-bearing body together form an enlarged body (Fig. 10c). This stage occurs mainly within the plastic stage, and a large displacement develops between the bolt and the slurry. At this stage, the mechanical deformation behavior of the bolt is determined by the friction of the tension section and by the shear and compressive strength of the slurry. These phenomena are confirmed by the progressive change of the strain distribution curve along the embedment length in the plastic stage, in which the curve changes from a positively skewed single-peak curve to a double-peak curve with a high front and low back, and then to a double-peak curve with a low front and high back.
Finally, compaction and crushing of the slurry: after the conical slurry forms, the compression and crushing deformation of the slurry becomes larger than the shear deformation; that is, the load–displacement curve shows an obvious inflection point (the reinforcement point) and the system then immediately enters the reinforcement service stage. As the load continues to increase, the conical slurry moves forward substantially. The slurry around the conical body, constrained by the confining pressure of the hole wall, is sheared, broken and continuously compacted under the compression of the conical slurry. The slurry pressed into powder is extruded through the gap between the conical slurry and the hole wall and fills the cavity behind the pressure-bearing body (Fig. 10d). This is consistent with the residual slurry powder observed at the bottom of the anchor hole after the TC-bolt was pulled out. When the pull-out effect produced by the cone reaches its limit state, the load also reaches its peak value. The compression of the surrounding slurry by the conical body then continues, and the conical effect reaches a dynamic, stable equilibrium, manifested in the load not decreasing over a considerable displacement (at least about 40 mm) in the post-peak residual stage. At this stage, the mechanical deformation behavior of the bolt is determined by the compaction and crushing behavior of the slurry in front of the conical body; the tension section provides less friction, and the pull-out resistance is mainly provided by the conical slurry at the rear. This is confirmed by the negatively skewed single-peak (peak at L = 80 cm) strain distribution curve along the embedment length in the reinforcement service stage.
Comparing the above analysis of the working mechanism of the TC-anchorage system with the expected behavior from the theoretical prediction, differences between the two can be seen. The test results indicate that in the initial debonding–decoupling stage the working mechanism of the bolt–slurry interface of the TC-anchorage system is the same as that of the "T" model (Fig. 3) predicted by theory. In the subsequent loading process, however, the TC-anchorage system does not completely crush the slurry in the pressure-bearing section as the theoretical "T" model predicts; instead, it shears the slurry in the pressure-bearing section into a conical body. Once the conical slurry has formed, it and the pressure-bearing body act as a single unit whose mechanical structure resembles a "Y" type (Fig. 10c) and is no longer the "T" model predicted by theory. The TC-anchorage system then compacts the slurry surrounding the conical body through this "Y"-type stress structure in the slurry compaction zone. Although the actual "Y" structure differs from the theoretical "T" model (vertical compression of the slurry), the "Y" structure effectively increases the shear surface area of the slurry and still makes good use of the confining pressure of the hole wall and the compressive capacity of the slurry. All in all, the anchorage force of the new TC-anchorage system is 40% higher than that of the T-anchorage system, which truly realizes "short anchorage, high anchorage force". In engineering practice, under the same anchorage force requirement the TC-bolt can effectively reduce the embedment length and the number of bolts compared with the T-bolt, thereby reducing the damage caused to earthen heritage sites by anchorage works. The peak-point displacement of the TC-anchorage system is about 7–14 times that of the T-anchorage system; that is, the TC-anchorage system can provide considerable anchorage force even under large deformation (80–91 mm). Moreover, the residual load after the peak point of the TC-anchorage system still reaches 92–97% of the peak load and remains stable over a further bolt displacement of more than 40 mm, which greatly increases the safe deformation reserve of the anchorage structure. As the pull-out load increases, the strain distribution at the bolt–slurry interface of the TC-anchorage system passes from positive skewness to bimodality and then to negative skewness, which reveals that the interfacial force gradually transfers from the front end to the back end of the embedment length and gradually reaches a stable state once the back end of the embedment length is stressed. The TC-anchorage system is therefore superior to the T-anchorage system, whose interface stress is concentrated at the front end of the embedment length; in other words, the TC-anchorage system shows more reasonable spatio-temporal continuity in the loading mode of the bolt–slurry interface.
In summary, the TC-anchorage system exhibits a more reasonable working mechanism than the T-anchorage system, which is what gives it its high anchorage force and strong ductility. This feature is consistent with the present conservation situation of earthen heritage sites: the scale and stability of earthen heritage sites containing reinforcing structures are clearly better than those of sites without them. For example, in the Great Wall and beacon towers of the Han and Ming Dynasties [33], the Juyan sites, the Kizil Gaha beacon tower, the Loulan sites, and the ancient city of Songshan (Fig. 11), although some of the earthen structures have deformed and cracked, they have not collapsed, owing to the presence of irregular reinforcing structures. The excellent large-deformation-resistant reinforcement mode of the TC-anchorage system can adjust the self-stability of the heritage site soil to a great extent and benefits the seismic performance of the soil structure itself. This ensures that the later reinforcement measures (reinforcement of earthen heritage sites with bolts) play a role similar to that of the original construction (wooden tendons embedded in the earthen structures), thus achieving compatibility in physical, structural and mechanical terms.
Application of irregular reinforced structures in the control of mechanical stability of earthen heritage sites. a Kizil Gaha beacon tower reinforced by wood in Tang Dynasty (Kuqa, Xinjiang, China), b wooden poles embedded in the walls of Songshan ancient city in Ming Dynasty (Tianzhu, Gansu, China)
The average ultimate pull-out load of the TC-anchorage system bolt is about 40% higher than that of the T-anchorage system, its displacement at the peak point is about 7–14 times that of the T-anchorage system, and its residual load after the peak is about 92–97% of the peak load. The TC-anchorage system is characterized by strong ductility, with small elastic deformation and large plastic deformation.
When the load is low, the strain distribution along the embedment length of the bolt–slurry interface in the TC-anchorage system is positively skewed with a single peak; as the load increases from low to high, it undergoes a dynamic bimodal transition from a high front peak with a low back peak to a low front peak with a high back peak; under the ultimate load, it presents a negatively skewed single-peak shape.
The working mechanism of the TC-anchorage system includes three stages: debonding of the bolt–slurry interface, formation of the conical slurry, and compaction and crushing of the slurry, which constitutes a typical progressive failure.
The mechanism of mechanical interaction between the bolt and the slurry in the TC-anchorage system fully realizes the synergistic use of the compressive and shear properties of the slurry. Compared with the T-anchorage system, the TC-anchorage system exhibits more reasonable time–space continuity and better anchorage performance.
According to the progressive working mechanism of the TC-anchorage system, which comprises the debonding–decoupling of the bolt–slurry interface, the formation of the conical slurry, and the compaction and crushing of the slurry, differential expressions and analytical solutions for the shear stress around the bolt and for the displacement and axial force of the TC-bolt should be constructed in future work to accurately explore the pull-out load-transfer characteristics of the TC-anchorage system in earthen heritage sites.
Most of the data on which the conclusions of the manuscript rely are published in this paper, and the full data are available for consultation upon request.
TC-anchorage system:
fully grouted tension–compression composite anchorage system
T-anchorage system:
fully grouted tension anchorage system
TC-bolt:
fully grouted tension–compression composite bolt
T-bolt:
fully grouted tension bolt
GFRP:
glass-fibre-reinforced plastic
SH:
modified polyvinyl alcohol
CaO:
quicklime
ground site soil
fly ash
Li ZX. Conservation of ancient sites along the silk road. Beijing: Science Press; 2010.
Du Y, Chen W, Cui K, Gong S, Pu T, Fu X. A model characterizing deterioration at earthen sites of the Ming Great Wall in Qinghai Province, China. Soil Mech Found Eng. 2017;53(6):426–34.
Pu T, Chen W, Du Y, Li W, Su N. Snowfall-related deterioration behavior of the Ming Great Wall in the eastern Qinghai-Tibet Plateau. Nat Hazards. 2016;84:1539–50.
Benmokrane B, Zhang B, Chennouf A. Tensile properties and pullout behaviour of AFRP and CFRP rods for grouted anchor applications. Constr Build Mater. 2000;14(3):157–70.
Martin LB, Tijani M, Hadj-Hassen F, Noiret A. Assessment of the bolt–grout interface behavior of fully grouted rock bolts from laboratory experiments under axial loads. Int J Rock Mech Min Sci. 2013;63(10):50–61.
Ma SQ, Nemcik J, Aziz Z. An analytical model of fully grouted rock bolts subjected to tensile load. Constr Build Mater. 2013;49(12):519–26.
Wang XD. Philosophy and practice of conservation of soil architecture sites: a case study of the Jiaohe ancient site in Xinjiang. Dunhuang Res. 2010;6:5–9 (in Chinese).
Hyett AJ, Moosavi M, Bawden WF. Load distribution along fully grouted bolts with emphasis on cable bolt reinforcement. Int J Numer Anal Meth Geomech. 1996;20(7):517–44.
Kilic A, Yasar E, Celik AG. Effect of grout properties on the pull-out load capacity of fully grouted rock bolt. Tunn Undergr Space Technol. 2002;17(4):355–62.
Cao C, Ren T, Cook C. Calculation of the effect of Poisson's ratio in laboratory push and pull testing of resin-encapsulated bolts. Int J Rock Mech Min Sci. 2013;64:175–80.
Zhang JK, Chen WW, Li ZX, Wang XD, He FG. Field tests on anchorage mechanism of wood bolts for conservation of soil sites. Chin J Geotech Eng. 2013;35(6):1166–71 (in Chinese).
Sun ML, Wang XD, Li ZX, Zhang JK. Study of the cohesional strength of reinforcing soil sites with a bamboo anchorage. Dunhuang Res. 2011;6:81–4 (in Chinese).
Zhang HY, Wang XD, Wang XD, Lü QF, Zhang YJ. Bond-slip model for bamboo-steel cable composite anchor. Rock Soil Mech. 2011;32(3):789–96 (in Chinese).
Mao XF, Zhao D, Chen P. Anchor theory and experimental study of geo-filament bolt. Mech Pract. 2008;30(2):74–7 (in Chinese).
Zhang JK, Chen WW, He FG, Li ZX, Sun ML. Field experimental study on anchorage performance of GFRP at conservation earthen sites. J Eng Geol. 2014;22(5):804–10 (in Chinese).
Zhang JK, Li ZX, Chen WW, Wang XD, Guo QL, Wang N. Pull-out behaviour of wood bolt fully grouted by PS-F slurry in rammed earth heritages. Geomech Geoeng. 2017;12(4):279–90.
Cui K, Wang DH, Chen WW, Ren XF, Liu J, Yang G. Comparative study of anchorage performance of three types of bolts fully grouted by modified glutinous rice mortar. Rock Soil Mech. 2018;39(2):498–506 (in Chinese).
Cui K, Wang DH, Chen WW, Ren XF, Guan XP, Huang JJ. Performance and mechanism of bolts fully grouted with SH (C + F) slurry. J Eng Geol. 2017;25(1):19–26 (in Chinese).
Zhang JK, Chen WW, Li ZX, Wang XD, Guo QL, Wang N. Study on workability and durability of calcined ginger nuts-based grouts used in anchoring conservation of earthen sites. J Cult Herit. 2015;16(6):831–7.
Yang ST, Wu ZM, Hu XZ, Zheng JJ. Theoretical analysis on pullout of anchorage from anchor-mortar-concrete anchorage system. Eng Fract Mech. 2008;75(5):961–85.
WW/T 0038-2012. Design specification for preservation and reinforcement engineering of arid soil sites. Beijing: State Administration of Cultural Heritage; 2012.
CECS 22:2005. Technical specification for ground anchors. Beijing: China Association for Engineering Construction Standardization; 2005.
JGJ63-2006. Standard of water for concrete. Beijing: Ministry of housing and urban-rural development of the People's Republic of China; 2006.
Chen WW, Guo ZQ, Xu YR, Chen PP, Zhang S, Ye F. Laboratory tests on rammed earth samples of earthen sites instilled by reinforcement material SH. Chin J Geotech Eng. 2015;37(8):1517–23 (in Chinese).
GB/T50123-1999. Standard for soil test method. Beijing: General Administration of quality supervision, inspection and Quarantine of the people's Republic of China, and Ministry of housing and urban-rural development of the People's Republic of China; 1999.
WW/T 0039-2012. Testing techniques specifications for preservation of soil sites. Beijing: State Administration of Cultural Heritage; 2012.
GB 50086-2015. Technical code for engineering of ground anchorages and shotcrete support. Beijing: China Metallurgical Construction Association; 2015.
Kılıc A, Yasar E, Atis CD. Effect of bar shape on the pull-out capacity of fully-grouted rockbolts. Tunn Undergr Space Technol. 2003;18(1):1–6.
Kim G, Ahn K, Min K, Jung C. Behavior characteristics of underreamed ground anchor through field test and numerical analysis. J Korean Math Soc. 2013;14(8):37–44 (In Korean).
Tu BX, Yu J, He JF, Cheng Q, Xu GP, Jia JQ. Analysis of anchorage performance on new tension-compression anchor II: model test. Chin J Geotech Eng. 2019;41(3):475–83 (in Chinese).
Li ZX, Sun ML, Wang XD. Conservation research on Jiaohe ruins. Beijing: Science Press; 2008.
Zhang N, Wu HN, Shen JSL, Hino T, Yin ZY. Evaluation of the uplift behavior of plate anchor in structured marine clay. Mar Georesour Geotechnol. 2017;35(6):758–68.
Chen W, Du Y, Cui K, Fu X, Gong S. Architectural forms and distribution characteristics of beacon towers of the Ming Great Wall in Qinghai Province. J Asian Archit Build Eng. 2017;16(3):503–10.
The research described in this paper was financially supported by the Natural Science Foundation of China (Grant Nos. 41562015 and 51208245) and the Program for Changjiang Scholars and Innovative Research Team in University of Ministry of Education of China (2017IRT17-51).
Western Center for Disaster Mitigation in Civil Engineering of Ministry of Education of China, School of Civil Engineering, Lanzhou University of Technology, Langongping Road No. 287, Lanzhou, 730050, People's Republic of China
Donghua Wang, Kai Cui, Guopeng Wu, Fei Feng & Xiangpeng Yu
Key Laboratory of Mechanics on Disaster and Environment in Western China, School of Civil Engineering and Mechanics, Lanzhou University, Tianshui South Road No. 222, Lanzhou, People's Republic of China
Kai Cui & Guopeng Wu
DW and KC were involved with the experimental designs and testing. GW was responsible for the sample collections and indoor experiments. DW, FF and XY performed the field experiments. This manuscript was written by KC and DW. KC also contributed to data analysis and processing. All authors read and approved the final manuscript.
Correspondence to Kai Cui.
Wang, D., Cui, K., Wu, G. et al. Performance and working mechanism of tension–compression composite anchorage system for earthen heritage sites. Herit Sci 7, 52 (2019). https://doi.org/10.1186/s40494-019-0297-3
Earthen heritage sites
Fully grouted tension–compression composite anchorage system (TC-anchorage system)
Fully grouted tension anchorage system (T-anchorage system)
Anchorage performance
Working mechanism | CommonCrawl |
organic chemistry – Do vinyl cations adopt a classical or non-classical structure?
Whilst reading this question today, I remembered something that I had seen previously here. In the second linked question, @Martin provided a reference to suggest that vinyl cations actually adopt a non-classical structure with $\ce{sp}$ hybridization and a bridging hydrogen, as shown below:
The non-classical structure can be viewed as a protonated alkyne, in the same way that the ethyl cation, which adopts a non-classical structure, can be viewed as a protonated alkene.
What is the correct structure of a vinyl cation? Does it depend on what the substituents are? For example, in the first linked question, do the two cations (shown below) actually equilibrate to the same non-classical structure?
I am looking forward to the answer to this question since it may change the way I teach addition reactions of alkynes.
Search "bridged vinyl cation" in Stang's, "Vinyl Cations". In the parent vinyl cation itself, calculations suggest a very small energy difference between the open and bridged form, with a very lower barrier to interconversion between the two. A few other vinyl cations may also be bridged depending on the substituents.
@ron Thanks for the excellent reading. Do you want to write up an answer yourself? If not I will probably answer my own question using that reference.
I think that the cations on the left side of both images should be linear.
TL;DR: there have been many theoretical investigations of the relative energies of these two forms for the parent vinyl cation, with more recent work indicating that the bridged form is slightly more stable (by about 1–3 kcal/mol) [4,5]. This prediction has received support from recent experimental work as well [6].
Taking such relatively small energy differences into account, I would not be surprised if the situation were reversed with a particular substituent. Thus, answering the second part of the question ("Does it depend on what the substituents are?") requires high-level calculations for a wide variety of substituents, which appear to be absent from the literature so far.
Classical structure: trigonal (sp2) or linear (sp)?
First, note (credit to Marko for drawing attention to this) that, as mentioned in Stang's "Vinyl Cations" book, the bent $\mathrm{sp^2}$-hybridized structure with a $\ce{C=C-H}$ bond angle of 120° (what the OP calls "classical") is about 50 kcal/mol higher in energy than the linear $\mathrm{sp}$-hybridized form in all of the theoretical calculations.
These calculations confirm the organic chemist's intuitive and qualitative feeling that carbenium ions prefer a trigonal planar geometry with an empty p orbital and that it takes energy to bend the methyl cation from its low-energy planar form.
Thus, in the studies below, the meaning of "classical" differs from the OP's: the linear $\mathrm{sp}$-hybridized structure is called classical in almost all of the studies.
So, Stang's "Vinyl Cations" references a CI/DZP study of the vinyl cation by Weber, Yoshimine, and McLean [1] as the most recent ab initio correlated calculation published at the time the book was written (the late 70s), so I'll start the review from this study.
Weber, Yoshimine, McLean, 1976
It was found that the linear and bridged conformations have energies equal to within 0.01 kcal/mol, with the latter being lower in energy. However, such a small difference is below the accuracy of the CI/DZP level of theory, and thus the authors did not claim that the bridged conformation is more energetically favourable, only that it may be ("probably" is the key word in the quote from the paper below).
While our CI calculation shows the linear and bridged conformations to have energies equal to within 0.01 kcal/mol, we clearly cannot claim this absolute accuracy. In fact, taking into account basis set deficiencies, both in the one- and $n$-particle spaces, along with optimization effects as discussed above, we interpret our calculations to predict the relative energies of the two structures to be within 1–2 kcal/mol with the bridged structure probably having the lower energy.
Besides, note that in this study the geometries were optimized at the SCF level rather than at a correlated level, while the CI calculations were done at the SCF-optimized geometries. Given that correlation effects were found to have a major impact on the relative stability of the two forms, it is especially interesting to look at the results obtained by Raghavachari, Whiteside, Pople, and Schleyer [2], who optimized the structures not only at the HF/6-31G* level but also at the MP2/6-31G* level, and also performed higher-level correlated calculations (MP4) with a bigger basis set (up to 6-311G**, of TZP quality).
Raghavachari, Whiteside, Pople, Schleyer, 1981
For the bridged structure the $\ce{C-C}$ bond length was found to increase from 1.21 Å at HF/6-31G* to 1.23 Å at MP2/6-31G* while the distance of the bridging hydrogen to the middle of the $\ce{C-C}$ bond stays the same (1.18 Å). For the classical structure the authors report no significant changes in the geometry. With respect to the relative stability it was found that use of the MP2 geometries results in a slight additional stabilization to the bridged form: the MP4(SDQ)/6-31G**//MP2/6-31G* energy difference is 1.3 kcal/mol while the MP4(SDQ)/6-31G**//HF/6-31G* energy difference is only 0.6 kcal/mol (in both cases in favour of the bridged form).
With the larger 6-311G** basis, the bridged structure is stabilized further, and our final result at MP4(SDQ)/6-311G**//MP2/6-31G* indicates 11 [bridged] to be more stable than 10 [classical] by 3.0 kcal/mol.
The authors also mention that these final results are quite in line with the CEPA study by Lischka and Kohler [3] done using a comparable basis set: the Lischka and Kohler geometries were very similar to the HF/6-31G* geometries obtained in the study, and the bridged form was likewise found to be 4.0 kcal/mol lower in energy than the classical form.
A further study by the Pople group is also in agreement with the findings quoted above: the bridged form is predicted to be more stable by 3.1 kcal/mol (including zero-point energies) [4]. A study by Lee and Schaefer [5], though, reports a somewhat smaller difference.
Lee, Schaefer, 1986
In the CI/DZP study by Lee and Schaefer, with the largest CI expansion considered being CISDT, the energy difference was found to be 0.68 kcal/mol, with the nonclassical structure again lying lower. The authors also considered the effect of zero-point vibrational energy corrections and found that they further lower the nonclassical structure by 0.29 kcal/mol.
Our best strictly ab initio estimate of $\Delta E$(classical-non- classical) for $\ce{C2H3+}$ is obtained by adding the above 0.29 kcal correction to the 0.68 kcal difference from CISDT. In this manner, $\Delta E$ is predicted to be 0.97 kcal/mol. We estimate that a proper variational treatment of quadruple excitations might increase this $\Delta E$ value by perhaps 0.5 kcal/mol.
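To put these computed gaps in perspective, here is a quick two-state Boltzmann estimate (my own illustrative sketch, not taken from any of the cited papers, and ignoring entropic and anharmonic contributions beyond ZPE) of how strongly even a ~1–3 kcal/mol preference favours the bridged form at room temperature:

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

# Classical-minus-nonclassical energy gaps (kcal/mol) reported in the studies above;
# positive values mean the bridged (non-classical) form lies lower.
reported_gaps = {
    "Raghavachari et al., MP4(SDQ)/6-311G**//MP2/6-31G*": 3.0,
    "Lischka & Kohler, CEPA": 4.0,
    "Curtiss & Pople, incl. ZPE": 3.1,
    "Lee & Schaefer, CISDT + ZPE": 0.97,
}

for study, dE in reported_gaps.items():
    # two-state Boltzmann population of the higher-lying (classical) form
    p_classical = 1.0 / (1.0 + math.exp(dE / (R * T)))
    print(f"{study}: dE = {dE:.2f} kcal/mol -> "
          f"classical form ~ {100 * p_classical:.1f}% at 298 K")
```

Even the smallest gap (about 1 kcal/mol) leaves the classical form as a minor component, and the ~3 kcal/mol gaps make it essentially negligible at equilibrium.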
Weber, J.; Yoshimine, M.; McLean, A. D. A CI study of the classical and nonclassical structures of the vinyl cation and their optimum path for rearrangement. J. Chem. Phys. 1976, 64 (10), 4159–4164. DOI: 10.1063/1.431986.
Raghavachari, K.; Whiteside, R. A.; Pople, J. A.; Schleyer, P. V. R. Molecular orbital theory of the electronic structure of organic molecules. 40. structures and energies of C1-C3 carbocations including effects of electron correlation. J. Am. Chem. Soc. 1981, 103 (19), 5649–5657. DOI: 10.1021/ja00409a004.
Lischka, H.; Koehler, H. J. Structure and stability of the carbocations C2H3+ and C2H4X+, X = hydrogen, fluorine, chlorine, and methyl. Ab initio investigation including electron correlation and a comparison with MINDO/3 results. J. Am. Chem. Soc. 1978, 100 (17), 5297–5305. DOI: 10.1021/ja00485a010.
Curtiss, L. A.; Pople, J. A. Theoretical study of structures and energies of acetylene, ethylene, and vinyl radical and cation. J. Chem. Phys. 1988, 88 (12), 7405–7409. DOI: 10.1063/1.454303.
Lee, T. J.; Schaefer, H. F. The classical and nonclassical forms of protonated acetylene, C2H3+. Structures, vibrational frequencies, and infrared intensities from explicitly correlated wave functions. J. Chem. Phys. 1986, 85 (6), 3437–3443. DOI: 10.1063/1.451828.
Crofton, M. W.; Jagod, M.; Rehfuss, B. D.; Oka, T. Infrared spectroscopy of carbo-ions. V. Classical vs nonclassical structure of protonated acetylene C2H3+. J. Chem. Phys. 1989, 91 (9), 5139–5153. DOI: 10.1063/1.457612.
Tags: carbocation, molecular-structure, organic-chemistry
| CommonCrawl
February 2019, 39(2): 1157-1170. doi: 10.3934/dcds.2019049
Characterization for the existence of bounded solutions to elliptic equations
Adnan Ben Aziza 1 and Mohamed Ben Chrouda 2,*
LAMMDA-ESST Hammam Sousse, Université de Sousse, Tunisie
LAMMDA-ISIM Monastir, Université de Monastir, Tunisie
* Corresponding author: [email protected]
Received May 2018 Revised August 2018 Published November 2018
We give necessary and sufficient conditions under which the elliptic equation
$\Delta u = \rho(x)\,\Phi(u) \quad \text{in } \mathbb{R}^d \;\;(d \ge 3)$
has nontrivial bounded solutions.
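For background (a standard fact recalled here for orientation, not a statement of the paper's theorem): criteria of this type are naturally phrased through the Newtonian potential of ρ, since the Green function of $-\Delta$ on the whole space $\mathbb{R}^d$, $d \ge 3$, is explicit,
$G(x,y) = c_d\,|x-y|^{2-d}, \qquad c_d = \frac{\Gamma(d/2-1)}{4\pi^{d/2}},$
and boundedness of the potential $\int_{\mathbb{R}^d} G(x,y)\,\rho(y)\,dy$, together with thinness of suitable sets at ∞ (cf. the keywords below), is the typical ingredient in such characterizations.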
Keywords: Elliptic equation, bounded solution, thinness at ∞, Green function.
Mathematics Subject Classification: Primary: 31B05, 35B08; Secondary: 35J08, 35J91.
Citation: Adnan Ben Aziza, Mohamed Ben Chrouda. Characterization for the existence of bounded solutions to elliptic equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 1157-1170. doi: 10.3934/dcds.2019049
| CommonCrawl
June 2019, 12(3): 665-684. doi: 10.3934/dcdss.2019042
Comparative study of a cubic autocatalytic reaction via different analysis methods
Khaled Mohammed Saad a,b,* and Eman Hussain Faissal AL-Sharif a
Department of Mathematics, College of Arts and Sciences, Najran University, 61441, Najran, Saudi Arabia
Department of Mathematics, Faculty of Applied Science, Taiz University, Taiz, Yemen
* Corresponding author: [email protected]
Received June 2017 Revised September 2017 Published September 2018
In this paper we discuss approximate solutions of the space-time fractional cubic autocatalytic chemical system (STFCACS) equations. The main objective is to find and compare approximate solutions of these equations obtained using the Optimal q-Homotopy Analysis Method (Oq-HAM), the Homotopy Analysis Transform Method (HATM), the Variational Iteration Method (VIM) and the Adomian Decomposition Method (ADM).
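To give a concrete flavour of one of these techniques, the sketch below applies the Adomian Decomposition Method to a scalar, integer-order toy problem (an illustrative example of the general method only, written for this summary; it is not the authors' fractional STFCACS system, whose treatment requires fractional operators). ADM expands the solution as u = u_0 + u_1 + u_2 + … and the nonlinearity as a series of Adomian polynomials A_n, then builds the series term by term:

```python
import sympy as sp

t = sp.symbols("t")

# Toy problem: u'(t) = u(t)**2, u(0) = 1  (exact solution: 1/(1 - t)).
# ADM: u = sum u_n, N(u) = u**2 = sum A_n (Adomian polynomials),
#      u_0 = u(0),  u_{n+1} = integral from 0 to t of A_n.

def adomian_polynomials(u_terms):
    """Adomian polynomials A_n for N(u) = u**2 via the lambda-expansion."""
    lam = sp.symbols("lambda")
    u_lam = sum(lam**i * ui for i, ui in enumerate(u_terms))
    N = sp.expand(u_lam**2)
    return [sp.diff(N, lam, n).subs(lam, 0) / sp.factorial(n)
            for n in range(len(u_terms))]

n_terms = 5
u = [sp.Integer(1)]                       # u_0 = initial condition
for n in range(n_terms - 1):
    A = adomian_polynomials(u)[n]         # A_n depends only on u_0 .. u_n
    u.append(sp.integrate(A, (t, 0, t)))  # u_{n+1}

approx = sp.expand(sum(u))                # 4-term ADM partial sum
print(approx)
print(sp.series(1/(1 - t), t, 0, 5).removeO())  # Taylor series of the exact solution, for comparison
```

The same recursive structure, with the ordinary integral replaced by a suitable fractional integral operator, underlies the fractional versions of these decomposition and iteration methods compared in the paper.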
Keywords: Optimal q-Homotopy Analysis Method, homotopy analysis transform method, variational iteration method and Adomian decomposition method, cubic isothermal autocatalytic system.
Mathematics Subject Classification: Primary: 35-XX, 35Kxx; Secondary: 35R11.
Citation: Khaled Mohammed Saad, Eman Hussain Faissal AL-Sharif. Comparative study of a cubic autocatalytic reaction via different analysis methods. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 665-684. doi: 10.3934/dcdss.2019042
K. Yabushita, M. Yamashita and K. Tsuboi, An analytic solution of projectile motion with the quadratic resistance law using the homotopy analysis method, J Phys A, 40 (2007), 8403-8416. doi: 10.1088/1751-8113/40/29/015. Google Scholar
M. Zurigat, Solving fractional oscillators using laplace homotopy analysis method, Annals of the University of Craiova, Mathematics and Computer Science Series, 38 (2011), 1-11. Google Scholar
[Figure captions: Figures 1-4 show the absolute error between the 3-term Oq-HAM (h = -3.055, n = 5), 3-term HATM (h = -0.64), second-approximation VIM, and 3-term ADM solutions of (4)-(5) and the numerical (Mathematica) solution for α = 1, β = 1, a = 0.001, b = 0.001; Figures 7-10 show the same comparisons for α = 0.9, β = 0.99. Figure 5 compares the Oq-HAM (dash-dotted), HATM (dotted), VIM (dashed), and ADM (solid) solutions with the numerical solution for x = 0.1, 5, 20, 40, 100 with α = β = 1, a = b = 0.001. Figures 11 and 13 plot the four methods for α = 0.4, β = 0.7 and α = 0.99, β = 0.99 with a = 0.4, b = 0.2. Figures 14-17 show the solution surfaces of Oq-HAM, HATM, VIM, and ADM for α = 0.5, 0.8, 1.00, β = 0.75, 0.90, 1.00, a = 0.4, b = 0.2.]
Buchak on risk and rationality III: the redescription strategy
This is the third in a series of three posts in which I rehearse what I hope to say at the Author Meets Critics session for Lara Buchak's tremendous* new book Risk and Rationality at the Pacific APA in a couple of weeks. The previous two posts are here and here. In the first post, I gave an overview of risk-weighted expected utility theory, Buchak's alternative to expected utility theory. In the second post, I gave a prima facie reason for worrying about any departure from expected utility theory: if an agent violates expected utility theory (perhaps whilst exhibiting the sort of risk-sensitivity that Buchak's theory permits), then her preferences amongst the acts don't line up with her estimates of the value of those acts. In this post, I want to consider a way of reconciling the preferences Buchak permits with the normative claims of expected utility theory.
I will be making a standard move. I will be redescribing the space of outcomes in such a way that we can understand any Buchakian agent as setting her preferences in line with her expectation (and thus estimate) of the value of that act.
Redescribing the outcomes
Moving from expected utility theory to risk-weighted expected utility theory involves an agent moving from evaluating an act in the way illustrated in Figure 1 below to evaluating it in the way illustrated in Figure 2 below.
Figure 1: As in previous posts, $h = \{u_1, F_1; u_2, F_2; u_3, F_3\}$. The expected utility $EU_{p, u}(h)$ of $h$ is given by the dark grey area. It is obtained by summing the areas of the three horizontal rectangles. Their areas are $p(F_1 \vee F_2 \vee F_3)u_1 = u_1$, $p(F_2 \vee F_3)(u_2 - u_1)$, and $p(F_3)(u_3 - u_2)$.
Figure 2: The risk-weighted expected utility $REU_{r_2, p, u}(h)$ of $h$ is given by the dark grey area, where $r_2(x) := x^2$.
In order to begin to see how we can redescribe the REU rule of combination as an instance of the EU rule of combination, we reformulate the REU rule in the way illustrated in Figure 3.
Figure 3: Again, the risk-weighted expected utility $REU_{r_2, p, u}(f)$ of $f$ is given by the grey area, where $r_2(x) = x^2$.
Thus, we can reformulate $REU_{r, p, u}(f)$ as follows:$$REU_{r, p, u}(f) = \sum^{k-1}_{j=1} (r(p(F_j \vee \ldots \vee F_k)) - r(p(F_{j+1} \vee \ldots \vee F_k)))u_j + r(p(F_k))u_k$$And we can reformulate this as follows: $REU_{r, p, u}(f) =$
$$\sum^{k-1}_{j=1} p(F_j) \frac{r(p(F_j \vee \ldots \vee F_k)) - r(p(F_{j+1} \vee \ldots \vee F_k))}{p(F_j \vee \ldots \vee F_k) - p(F_{j+1} \vee \ldots \vee F_k)}u_j + p(F_k)\frac{r(p(F_k))}{p(F_k)}u_k$$since the events $F_1, \ldots, F_k$ are pairwise disjoint, so that $p(F_j \vee \ldots \vee F_k) - p(F_{j+1} \vee \ldots \vee F_k) = p(F_j)$.
Now, suppose we let$$s_j = \begin{cases} \frac{r(p(F_j \vee \ldots \vee F_k)) - r(p(F_{j+1} \vee \ldots \vee F_k))}{p(F_j \vee \ldots \vee F_k) - p(F_{j+1} \vee \ldots \vee F_k)} & \mbox{if } j = 1, \ldots, k-1 \\ \frac{r(p(F_j))}{p(F_j)} & \mbox{if } j = k \end{cases}$$
Then we have:$$REU_{r, p, u}(f) = \sum^k_{j=1} p(F_j)s_ju_j$$
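For readers who like to check this sort of thing numerically, here is a small Python sketch (mine, not Buchak's; the utilities are made up, though the probabilities are those of the example act $h$ discussed below) which computes the risk-weighted expected utility both in the original form and in the reformulated form $\sum^k_{j=1} p(F_j)s_ju_j$, and confirms that the two agree:

```python
# Illustrative check of the reformulation above (not from the original post).
# Events F_1, ..., F_k are ordered by increasing utility u_1 <= ... <= u_k,
# and r is a risk function with r(0) = 0 and r(1) = 1.

def reu_original(p, u, r):
    """REU as in Figure 3:
    sum_{j<k} [r(P(F_j v ... v F_k)) - r(P(F_{j+1} v ... v F_k))] u_j + r(P(F_k)) u_k."""
    k = len(p)
    tail = [sum(p[j:]) for j in range(k)]  # tail[j]: probability of getting utility at least u[j]
    body = sum((r(tail[j]) - r(tail[j + 1])) * u[j] for j in range(k - 1))
    return body + r(tail[k - 1]) * u[k - 1]

def reu_reformulated(p, u, r):
    """The same quantity written as sum_j p(F_j) * s_j * u_j."""
    k = len(p)
    tail = [sum(p[j:]) for j in range(k)]
    s = [(r(tail[j]) - r(tail[j + 1])) / (tail[j] - tail[j + 1]) for j in range(k - 1)]
    s.append(r(tail[-1]) / tail[-1])  # s_k = r(p(F_k)) / p(F_k)
    return sum(p[j] * s[j] * u[j] for j in range(k))

p = [0.3, 0.3, 0.4]    # p(F_1), p(F_2), p(F_3), as in the example act h below
u = [1.0, 2.0, 5.0]    # made-up utilities with u_1 <= u_2 <= u_3
r2 = lambda x: x ** 2  # the risk-averse risk function r_2 from the post
print(reu_original(p, u, r2), reu_reformulated(p, u, r2))  # both print (approximately) 1.97
```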
Reformulating Buchak's rule of combination in this way suggests two accounts of it. On the first, utilities attach ultimately to outcomes $x_i$, and they are weighted not by an agent's probabilities but rather by a function of those probabilities that encodes the agent's attitude to risk (given by a risk function). On this account, we group $p(F_j)s_j$ together to give this weighting. Thus, we assume that this weighting has a particular form: it is obtained from a probability function $p$ and a risk function $r$ to give $p(F_j)s_j$; this weighting then attaches to $u_j$ to give $(p(F_j)s_j)u_j$.
On the second account, probabilities do provide the weightings for utility, as in the EU rule of combination, but utilities attach ultimately to act-outcome pairs $(x_i, f)$. On this account, we group $s_ju_j$ together to give this utility; this utility is then weighted by $p(F_j)$ to give $p(F_j)(s_ju_j)$. That is, we say that an agent's utility function is defined on a new outcome space: it is not defined on a set of outcomes $\mathcal{X}$, but on a particular subset of $\mathcal{X} \times \mathcal{A}$, which we will call $\mathcal{X}^*$. $\mathcal{X}^*$ is the set of outcome-act pairs $(x_i, f)$ such that $x_i$ is a possible outcome of $f$: that is, $\mathcal{X}^* = \{(x, f) : \exists s \in \mathcal{S}(f(s) = x)\}$. Now, just as the first account assumed that the weightings of the utilities had a certain form---namely, it is generated by a risk function and probability function in a certain way---so this account assumes something about the form of the new utility function $u^*$ on $\mathcal{X}^*$: we assume that a certain relation holds between the utility that $u^*$ assigns to outcome-act pairs in which the act is the constant act over the outcome and the utility $u^*$ to outcome-act pairs in which this is not the case. We assume that the following holds:$$u^*(x, f) = s_ju^*(x, \overline{x})$$If a utility function on $\mathcal{X}^*$ satisfies this property, we say that it encodes attitudes to risk relative to risk function $r$. Thus, on this account an agent evaluates an act as follows:
She begins with a risk function $r$ and a probability function $p$.
She then assigns utilities to all constant outcome-act pairs $(x, \overline{x})$, defining $u^*$ on $\overline{\mathcal{X}}^* = \{(x, \overline{x}) : x \in \mathcal{X}\} \subseteq \mathcal{X}^*$.
Finally, she extends $u^*$ to cover all outcome-act pairs in $\mathcal{X}^*$ in the unique way required in order to make $u^*$ a utility function that encodes attitudes to risk relative to $r$. That is, she obtains $u^*(x, f)$ by weighting $u^*(x, \overline{x})$ in a certain way that is determined by the agent's probability function and her attitudes to risk.
Let's see this in action in our example act $h$; we'll consider $h$ from the point of view of two risk functions, $r_2(x) = x^2$ and $r_{0.5}(x) = \sqrt{x}$. Recall: $r_2$ is a risk-averse risk function; $r_{0.5}$ is risk-seeking. We begin by assigning utility to all constant outcome-act pairs $(x, \overline{x})$:
Then we do the same trick as above and amalgamate the outcome-act pairs with the same utility: thus, again, $F_1$ is the event in which the act gives outcome-act pair $(x_1, h)$, $F_2$ is the event in which it gives $(x_2, h)$ or $(x_3, h)$, and $F_3$ the event in which it gives $(x_4, h)$. Next, we assign utilities to $(x_1, h)$, $(x_2, h)$, $(x_3, h)$, and $(x_4, h)$ in such a way as to make $u^*$ encode attitudes to risk relative to the risk function $r$.
Let's start by considering the utility of $(x_1, h)$, the lowest outcome of $h$. Suppose our risk function is $r_2$; then $u^*(x_1, h) :=$ $$\frac{r_2(p(F_1 \vee F_2 \vee F_3)) - r_2(p(F_2 \vee F_3))}{p(F_1 \vee F_2 \vee F_3) - p(F_2 \vee F_3)}u^*(x_1, \overline{x_1}) = \frac{r_2(1) - r_2(0.7)}{1-0.7}u^*(x_1, \overline{x_1}) = 1.7u^*(x_1, \overline{x_1})$$And now suppose our risk function is $r_{0.5}$; then $u^*(x_1, h) :=$ $$\frac{r_{0.5}(p(F_1 \vee F_2 \vee F_3)) - r_{0.5}(p(F_2 \vee F_3))}{p(F_1 \vee F_2 \vee F_3) - p(F_2 \vee F_3)}u^*(x_1, \overline{x_1}) = \frac{r_{0.5}(1) - r_{0.5}(0.7)}{1-0.7}u^*(x_1, \overline{x_1}) \approx 0.54u^*(x_1, \overline{x_1})$$Thus, the risk-averse agent---that is, the agent with risk function $r_2$---values this lowest outcome $x_1$ as the result of $h$ more than she values the same outcome as the result of a certain gift of $x_1$, whereas the risk-seeking agent---with risk function $r_{0.5}$---values it less. And this is true in general: if $r(x) < x$ for all $x$, the lowest outcome as a result of $h$ will be assigned greater utility than the same outcome as a result of the constant act on that outcome; if $r(x) > x$ for all $x$, it will be assigned less.
Next, let us consider the utility of $(x_4, h)$, the highest outcome of $h$. Suppose her risk function is $r_2$; then$$u^*(x_4, h) := \frac{r_2(p(F_3))}{p(F_3)}u^*(x_4, \overline{x_4}) = \frac{r_2(0.4)}{0.4}u^*(x_4, \overline{x_4}) = 0.4u^*(x_4, \overline{x_4})$$
And now suppose her risk function is $r_{0.5}$; then$$u^*(x_4, h) := \frac{r_{0.5}(p(F_3))}{p(F_3)}u^*(x_4, \overline{x_4}) = \frac{r_{0.5}(0.4)}{0.4}u^*(x_4, \overline{x_4}) \approx 1.58u^*(x_4, \overline{x_4})$$Thus, the risk-averse agent---that is, the agent with risk function $r_2$---values this highest outcome $x_4$ as the result of $h$ less than she values the same outcome as the result of a certain gift of $x_4$, whereas the risk-seeking agent---with risk function $r_{0.5}$---values it more. And, again, this is true in general: if $r(x) < x$ for all $x$, the highest outcome as a result of $h$ will be assigned less utility than the same outcome as a result of the constant act on that outcome; if $r(x) > x$ for all $x$, it will be assigned more.
This seems right. The risk-averse agent wants the highest utility, but also cares about how sure she was to obtain it. Thus, if she obtains $x_1$ from $h$, she knows she was guaranteed to obtain at least this much utility from $h$ or from $\overline{x_1}$ (since $x_1$ is the lowest possible outcome of each act). But she also knows that $h$ gave her some chance of getting more utility. So she values $(x_1, h)$ more than $(x_1, \overline{x_1})$. But if she obtains $x_4$ from $h$, she knows she was pretty lucky to get this much utility, while she knows that she would have been guaranteed that much if she had obtained $x_4$ from $\overline{x_4}$. So she values $(x_4, h)$ less than $(x_4, \overline{x_4})$. And similarly, but in reverse, for the risk-seeking agent.
Finally, let's consider the utilities of $(x_2, h)$ and $(x_3, h)$, the middle outcomes of $h$. They will have the same value, so we need only consider the utility of $(x_2, h)$. Suppose her risk function is $r_2$; then $u^*(x_2, h) :=$ $$\frac{r_2(p(F_2 \vee F_3)) - r_2(p(F_3))}{p(F_2 \vee F_3) - p(F_3)}u^*(x_2, \overline{x_2}) = \frac{r_2(0.7) - r_2(0.4)}{0.7-0.4}u^*(x_2, \overline{x_2}) = 1.1u^*(x_2, \overline{x_2})$$Thus, again, the agent with risk function $r_2$ assigns higher utility to obtaining $x_2$ as a result of $h$ than to obtaining $x_2$ as the result of $\overline{x_2}$. But this is not generally true of risk-averse agents. Consider, for instance, a more risk-averse agent, who has a risk function $r_3(x) := x^3$. Then $u^*(x_2, h) :=$ $$\frac{r_3(p(F_2 \vee F_3)) - r_3(p(F_3))}{p(F_2 \vee F_3) - p(F_3)}u^*(x_2, \overline{x_2}) = \frac{r_3(0.7) - r_3(0.4)}{0.7-0.4}u^*(x_2, \overline{x_2}) = 0.93u^*(x_2, \overline{x_2})$$Again, this seems right. As we said above, the risk-averse agent wants the highest utility, but she also cares about how sure she was to obtain it. The less risk-averse agent---whose risk function is $r_2$---is sufficiently sure that $h$ would obtain for her at least the utility of $x_2$ and possibly more that she assigns higher value to getting $x_2$ as a result of $h$ than to getting it as a result of $\overline{x_2}$. The more risk-averse agent---whose risk function is $r_3$---is not sufficiently sure. And reversed versions of these points can be made for risk-seeking agents with risk functions $r_{0.5}$ and $r_{0.333}$, for instance. Thus, we can see why it makes sense to demand of an agent that her utility function $u^*$ on $\mathcal{X}^*$ encodes attitudes to risk relative to a risk function in the sense that was made precise above.
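For completeness, here is another small Python sketch (again mine, not from the post) that computes the weights $s_j$ for the example act $h$, with $p(F_1) = 0.3$, $p(F_2) = 0.3$, and $p(F_3) = 0.4$ as above. It reproduces the factors 1.7, 1.1, and 0.4 for $r_2$, the 0.93 for $r_3$, and the 0.54 and roughly 1.58 for $r_{0.5}$:

```python
# Weights s_j for the example act h (events ordered from lowest to highest utility).
def risk_weights(p, r):
    k = len(p)
    tail = [sum(p[j:]) for j in range(k)]  # probability of getting utility at least u_j
    s = [(r(tail[j]) - r(tail[j + 1])) / (tail[j] - tail[j + 1]) for j in range(k - 1)]
    s.append(r(tail[-1]) / tail[-1])
    return s

p = [0.3, 0.3, 0.4]
for name, r in [("r_2", lambda x: x ** 2),
                ("r_3", lambda x: x ** 3),
                ("r_0.5", lambda x: x ** 0.5)]:
    print(name, [round(w, 2) for w in risk_weights(p, r)])

# r_2   -> [1.7, 1.1, 0.4]     risk-averse: lowest outcome up-weighted, highest down-weighted
# r_3   -> [2.19, 0.93, 0.16]  more risk-averse still
# r_0.5 -> [0.54, 0.68, 1.58]  risk-seeking: the reverse pattern
```

By construction, the expected utility computed from the redescribed utilities $u^*(x_j, h) = s_ju^*(x_j, \overline{x_j})$ is just $\sum_j p(F_j)s_ju^*(x_j, \overline{x_j})$, which is the risk-weighted expected utility of $h$ computed from the original utilities, as in the earlier sketch.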
Just as we saw that Savage's representation theorem is agnostic between EU and EU$^2$, we can now see that Buchak's representation theorem is agnostic between a version of REU in which utilities attach to elements of $\mathcal{X}$, and a version of EU in which utilities attach to elements of $\mathcal{X}^*$.
Theorem 1 (Buchak) If $\succeq$ satisfies the Buchak axioms, there is a unique probability function $p$, a unique risk function $r$, and a unique-up-to-affine-transformation utility function $u$ on $\mathcal{X}$ such that $\succeq$ is determined by $r$, $p$, and $u$ in line with the REU rule of combination.
And we have the following straightforward corollary, just as we had in the Savage case above:
Theorem 2 If $\succeq$ satisfies the Buchak axioms, there is a unique probability function $p$ and unique-up-to-affine$^*$-transformation utility function $u^*$ on $\mathcal{X}^*$ that encodes attitudes to risk relative to a risk function such that $\succeq$ is determined by $p$ and $u^*$ in line with the EU rule of combination (where $u^*$ is unique-up-to-affine$^*$-transformation if $u^*|_{\overline{\mathcal{X}}^*}$ is unique-up-to-affine-transformation).
Thus, by redescribing the set of outcomes to which our agent assigns utilities, we can see how her preferences in fact line up with her estimates of the utility of her acts, as required by the de Finetti-inspired argument given in the previous section.
What's wrong with redescription?
Buchak considers the sort of redescription strategy of which this is an instance and raises what amount to two objections (Chapter 4, Buchak 2014). (She raises a further objection against versions of the redescription strategy that attempt to identify certain outcome-act pairs to give a more coarse-grained outcome space; but these do not affect my proposal.)
The problem of proliferation
One potential problem that arises when one moves from assigning utilities to $\mathcal{X}$ to assigning them to $\mathcal{X}^*$ is that an element in the new outcome space is never the outcome of more than one act: $(x, f)$ is a possible outcome of $f$ but not of $g \neq f$. Thus, this outcome never appears in the expected utility (or indeed risk-weighted expected utility) calculation of more than one act. The result is that very few constraints are placed on the utilities that must be assigned to these new outcomes and the probabilities that must be assigned to the propositions in order to recover a given preference ordering on the acts via the EU (or REU) rule of combination. Suppose $\succeq$ is a preference ordering on $\mathcal{A}$. Then, for each $f \in \mathcal{A}$, pick a real number $r_f$ such that $f \succeq g$ iff $r_f \geq r_g$. Now there are many ways to do this, and they are not all affine transformations of one another---indeed, any strictly increasing $\tau : \mathbb{R} \rightarrow \mathbb{R}$ will take one such assignment to another. Now pick any probability function $p$ on $\mathcal{F}$. Now, given an act $f = \{x_1, E_1; \ldots; x_n, E_n\}$, the only constraint on the values $u^*(x_1, f)$, $\ldots$, $u^*(x_n, f)$ is that $\sum_i p(E_i)u^*(x_i, f) = r_f$. And this of course permits many different values. (In general, for $r \in \mathbb{R}$, $0 \leq \alpha_1, \ldots, \alpha_n$ with $\sum_i \alpha_i = 1$, there are many sequences $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$ such that $\sum_i \lambda_i \alpha_i = r$.) Buchak dubs this phenomenon belief and desire proliferation (p. 140, Buchak 2014).
Why is this a problem? There are a number of reasons to worry about belief and desire proliferation. There is the epistemological worry that, if utilities and probabilities are as loosely constrained as this, it is not possible to use an agent's observed behaviour to predict her unobserved behaviour. Divining her preferences between two acts will teach us nothing about the utilities she assigns to the outcomes of any other acts since those outcomes are unique to those acts. Also, those who wish to use representation theorems for the purpose of radical interpretation will be concerned by the complete failure of the uniqueness of the rationalisation of preferences that such a decision theory provides.
Neither of these objections seems fatal to me. But in any case, the version of the redescription strategy presented here avoids them altogether. The reason is that I placed constraints on the sort of utility function $u^*$ an agent can have over $\mathcal{X}^*$: I demanded that $u^*$ encode attitudes to risk; that is, $u^*(x, f)$ is defined in terms of $u^*(x, \overline{x})$ in a particular way. And we saw in Theorem 2 above that, for any agent whose preferences satisfy the Buchak axioms, there is a unique probability function $p$ and a unique utility function $u^*$ on $\mathcal{X}^*$ that encodes attitudes to risk relative to some risk function such that together $p$ and $u^*$ generate the agent's preferences in accordance with the EU rule of combination.
Ultimate ends and the locus of utility
Buchak's second objection initially seems more worrying (pp. 137-8, Buchak 2014). A theme running through Risk and Rationality is that decision theory is the formalisation of instrumental or means-end reasoning. One consequence of this is that an account of decision theory that analyses an agent as engaged in something other than means-end reasoning is thereby excluded.
Buchak objects to the redescription strategy on these grounds. According to Buchak, to understand an agent as engaged in means-end reasoning, one must carefully distinguish the means and the ends: in Buchak's framework, the means are the acts and the ends are the outcomes. One must then assign utilities to the ends only. Of course, in terms of these utilities and the agent's probabilities and possibly other representations of internal attitude such as the risk function, one can then assign value or utility to the means. But the important point is that this value or utility that attaches to the means is assigned on the basis of the assignment of utility to the ultimate ends. Thus, while there is a sense in which we assign a value or utility to means---i.e. acts---in expected utility theory, this assignment must depend ultimately on the utility we attach to ends---i.e. outcomes.
Thus, a first pass at Buchak's second complaint against the redescription strategy is this: the redescription strategy assigns utilities to something other than ends---it assigns utilities to outcome-act pairs, and these are fusions of means and ends. Thus, an agent analysed in accordance with the redescription strategy is not understood as engaged in means-end reasoning.
However, this seems problematic in two ways. Whether they constitute ultimate ends or not, there are at least two reasons why an agent must assign utilities to outcome-act pairs rather than outcomes on their own. That is, there are two reasons why at least this part of the redescription strategy---namely, the move from $\mathcal{X}$ to $\mathcal{X}^*$---is necessary irrespective of the need to accommodate risk in expected utility theory.
Firstly, utilities must attach to the true outcomes of an act. But these true outcomes aren't the sort of thing we've been calling an outcome here. When I choose Safe over Risky and receive $£50$, the outcome of that act is not merely $£50$; it is $£50$ as the result of Safe. Thus, the true outcomes of an act are in fact the elements of $\mathcal{X}^*$---they are what we have been calling the outcome-act pairs.
Of course, at this point, Buchak might accept that utilities attach to outcome-act pairs, but insist that it is nonetheless a requirement of rationality that an agent assign the same utility to two outcome-act pairs with the same outcome component; that is, $u^*(x, f) = u^*(x, g)$; that is, while utilities attach to fusions of means and ends, they must be a function only of the ends. But the second reason for attaching utilities to outcome-act pairs tells against this claim in general. The reason is this: As Bernard Williams urges, it is neither irrational nor even immoral to assign higher utility to a person's death as a result of something other than my agency than to that same person's death as a result of my agency (Williams 1973). This, one might hold, is what explains our hesitation when we are asked to choose between killing one person or letting twelve people including that person be shot by a firing squad: I assign higher utility to the death of a prisoner at the hands of the firing squad than to the death of that prisoner at my hands. Thus, it is permissible in at least some situations to care about the act that gives rise to the outcome and let one's utility in an outcome-act pair be a function also of the act.
Nonetheless, this is not definitive. After all, Buchak could reply that this is peculiar to acts that have morally relevant consequences. Acts such as those in the Allais paradox do not have morally relevant consequences; but the redescription strategy still requires us to make utilities depend on acts as well as outcomes in those cases. Thus, for non-moral acts $f$ and $g$, Buchak might say, it is a requirement of rationality that $u^*(x, f) = u^*(x, g)$, even if it is not such a requirement for moral cases. And this would be enough to scupper the redescription strategy.
However, it is not clear why the moral and non-moral cases should differ in this way. Consider the following decision problem: I must choose whether to shoot an individual or not; I know that, if I do not shoot him, someone else will. I strictly prefer not shooting him to shooting him. My reasoning might be reconstructed as follows: I begin by assigning a certain utility to this person's death as the result of something other than my agency---natural causes, for instance, or murder by a third party. Then, to give my utility for his death at my hand, I weight this original utility in a certain way, reducing it on the basis of the action that gave rise to the death. Thus, the badness of the outcome-act pair (X's death, My agency) is calculated by starting with the utility of another outcome-act pair with the same outcome component---namely, (X's death, Not my agency)---and then weighting that utility based on the act component. We might call (X's death, Not my agency) the reference pair attached to the outcome X's death. The idea is that the utility we assign to the reference pair attached to an outcome comes closest to what we might think of as the utility that attaches solely to the outcome; the reference pair attached to an outcome $x$ is the outcome-act pair $(x, f)$ for which the act $f$ contributes least to the utility of the pair.
Now this is exactly analogous to what the redescription strategy proposes as an analysis of risk-sensitive behaviour. In that case, when one wishes to calculate the utility of an outcome-act pair $(x, f)$, one begins with the utility one attaches to $(x, \overline{x})$. Then one weights that utility in a certain way that depends on the riskiness of the act. This gives the utility of $(x, f)$. Thus, if we take $(x, \overline{x})$ to be the reference pair attached to the outcome $x$, then this is analogous to the moral case above. In both cases, we can recover something close to the notion of utility for ultimate ends or pure outcomes (i.e. elements of $\mathcal{X}$): the utility of the pure outcome $x$---to the extent that such a utility can be meaningfully said to exist---is $u^*(x, \overline{x})$, the utility of the reference pair attached to $x$. That seems right. Strictly speaking, there is little sense to asking an agent for the utility they assign to a particular person's death; one must specify whether or not the death is the result of that agent's agency. But we often do give a utility to that sort of outcome; and when we do, I submit, we give the utility of the reference pair. Similarly, we often assign a utility to receiving $£50$, even though the request makes little sense without specifying the act that gives rise to that pure outcome: again, we give the utility of $£50$ for sure, that is, the utility of $(£50, \overline{£50})$.
Understood in this way, the analysis of a decision given by the redescription strategy still portrays the agent as engaged in means-end reasoning. Of course, there are no pure ultimate ends to which we assign utilities. But there is something that plays that role, namely, reference pairs. An agent's utility for an outcome-act pair $(x, f)$ is calculated in terms of her utility for the relevant reference pair, namely, $(x, \overline{x})$; and the agent's value for an act $f$ is calculated in terms of her utilities for each outcome-act pair $(x, f)$ where $x$ is a possible outcome of $f$. Thus, though the value of an act on this account is not ultimately grounded in the utilities of pure, ultimate outcomes of that act, it is grounded in the closest thing that makes sense, namely, the utilities of the reference pairs attached to the pure, ultimate outcomes of the act.
Buchak proposes a novel decision theory. It is formulated in terms of an agent's probability function, utility function, and risk function. It permits a great many more preference orderings than orthodox expected utility theory. On Buchak's theory the utility that is assigned to an act is not the expectation of the utility of its outcome; rather it is the risk-weighted expectation. But the argument of the second post in this series suggested that estimates should be expectations and the utility of an act should be the estimate of the utility of its outcome. In this post, I have tried to reconcile the preferences that Buchak endorses with the conclusion of de Finetti's argument. To do this, I redescribed the outcome space so that utilities were attached ultimately to outcome-act pairs rather than outcomes themselves. This allowed me to capture precisely the preferences that Buchak permits, whilst letting the utility of an act be the expectation of the utility it will produce. The redescription strategy raises some questions: Does it prevent us from using decision theory for certain epistemological purposes? Does it fail to portray agents as engaged in means-end reasoning? In the final section of this post, I tried to answer these questions.
* I'm reliably informed by a native speaker of American English that "tremendous" is rarely used on that side of the Atlantic to mean "extremely good"; rather, it is used to mean "extremely large". So, to clarify: Buchak's book is extremely good, but not extremely large. Divided by a common language indeed.
Facilitating safety evaluation in maternal immunization trials: a retrospective cohort study to assess pregnancy outcomes and events of interest in low-risk pregnancies in England
Megan Riley1,
Dimitra Lambrelli2,
Sophie Graham2,
Ouzama Henry1,
Andrea Sutherland1,3,
Alexander Schmidt1,4,
Nicola Sawalhi-Leckenby2,
Robert Donaldson2 &
Sonia K. Stoszek1,3
Maternal characteristics like medical history and health-related risk factors can influence the incidence of pregnancy outcomes and pregnancy-related events of interest (EIs). Data on the incidence of these endpoints in low-risk pregnant women are needed for appropriate external safety comparisons in maternal immunization trials. To address this need, this study estimated the incidence proportions of pregnancy outcomes and pregnancy-related EIs in different pregnancy cohorts (including low-risk pregnancies) in England, contained in the Clinical Practice Research Datalink (CPRD) Pregnancy Register linked to Hospital Episode Statistics (HES) between 2005 and 2017.
The incidence proportions of 7 pregnancy outcomes and 15 EIs were calculated for: (1) all pregnancies (AP) represented in the CPRD Pregnancy Register linked to HES (AP cohort; N = 298 155), (2) all pregnancies with a gestational age (GA) ≥ 24 weeks (AP24+ cohort; N = 208 328), and (3) low-risk pregnancies (LR cohort; N = 137 932) with a GA ≥ 24 weeks and no diagnosis of predefined high-risk medical conditions until 24 weeks GA.
Miscarriage was the most common adverse pregnancy outcome in the AP cohort (1 379.5 per 10 000 pregnancies) but could not be assessed in the other cohorts because these only included pregnancies with a GA ≥ 24 weeks, and miscarriages with GA ≥ 24 weeks were reclassified as stillbirths. Preterm delivery (< 37 weeks GA) was the most common adverse pregnancy outcome in the AP24+ and LR cohorts (742.9 and 680.0 per 10 000 pregnancies, respectively). Focusing on the cohorts with a GA ≥ 24 weeks, the most common pregnancy-related EIs in the AP24+ and LR cohorts were fetal/perinatal distress or asphyxia (1 824.3 and 1 833.0 per 10 000 pregnancies), vaginal/intrauterine hemorrhage (799.2 and 729.0 per 10 000 pregnancies), and labor protraction/arrest disorders (752.4 and 774.5 per 10 000 pregnancies).
This study generated incidence proportions of pregnancy outcomes and pregnancy-related EIs from the CPRD for different pregnancy cohorts, including low-risk pregnancies. The reported incidence proportions of pregnancy outcomes and pregnancy-related EIs are largely consistent with external estimates. These results may facilitate the interpretation of safety data from maternal immunization trials and the safety monitoring of maternal vaccines. They may also be of interest for any intervention studied in populations of pregnant women.
Maternal immunization has the potential to reduce the burden of infectious diseases in infants via the transplacental transfer of protective maternal antibodies, which persist after birth and help protect infants from infection in their first months of life [1, 2]. Maternal immunization may also provide additional benefits by preventing infectious diseases in pregnant women, potentially reducing adverse pregnancy and infant outcomes associated with maternal infections [3, 4].
Currently, immunization to protect against influenza, tetanus, and pertussis is recommended during pregnancy by the World Health Organization [5,6,7], and many individual countries including the United States (US) and the United Kingdom (UK) recommend that pregnant women receive influenza and pertussis vaccinations [8, 9]. In addition to licensed vaccines that are recommended during pregnancy, maternal vaccine candidates are being developed for the prevention of infections in mothers and their offspring, including vaccines against respiratory syncytial virus (RSV) and group B streptococcus (GBS) infections [10,11,12]. Group B streptococcus is a leading cause of neonatal sepsis and meningitis, with the highest incidence during the first 3 months of life [13, 14], and RSV causes respiratory tract infections that may be severe in infants and young children, with the highest hospitalization rate in infants < 1 year old [15, 16]. In the long term, these infants are more likely to suffer from recurrent respiratory symptoms and asthma [13].
Vaccines routinely recommended during pregnancy (e.g., inactivated influenza and tetanus-reduced-antigen-content diphtheria-acellular pertussis vaccines) were originally licensed based on data generated in non-pregnant populations. By contrast, maternal vaccine candidates against RSV and GBS aim to demonstrate safety in vaccinated pregnant women and their offspring, and efficacy (or immunogenicity as proxy) in the infants for their primary indication [2, 10]. The pregnancy-specific vaccine development approach requires the conduct of large-scale maternal immunization trials during clinical development. Prior to conducting such trials, it is critical to understand the background rates of pregnancy outcomes and pregnancy-related events of interest (EIs) in specific populations to facilitate the interpretation of these outcomes and EIs after maternal vaccination [10].
Previous studies have demonstrated that certain maternal characteristics, such as prior medical history and health-related risk factors, are associated with adverse pregnancy outcomes (e.g., stillbirth and preterm delivery) and pregnancy-related EIs (e.g., gestational diabetes and hypertension) [17,18,19,20,21,22]. Past studies have also demonstrated an increased risk of adverse pregnancy outcomes and pregnancy-related EIs in women from low socioeconomic backgrounds relative to those from high socioeconomic backgrounds [23,24,25]. Data describing the incidence of pregnancy outcomes and EIs in women with low-risk pregnancies (i.e., pregnancies without high-risk conditions expected to increase the risk of pregnancy complications) approaching the end of the second trimester (e.g., as of 24 weeks gestational age [GA]) are limited but needed as external reference in maternal immunization trial safety comparisons [26,27,28]. In addition, data are lacking to quantify pregnancy outcomes and pregnancy-related EIs in all pregnant women once they reach 24 weeks GA. We addressed this knowledge gap by conducting a retrospective, observational cohort study using the UK Clinical Practice Research Datalink (CPRD) with data linked to the Pregnancy Register and Hospital Episode Statistics (HES). The Pregnancy Register was created by an algorithm that identifies all pregnancies (and details on timing and outcomes) among women aged 11–49 years in CPRD GOLD, one of CPRD's primary care databases [29, 30].
The objective of this study was to estimate the incidence proportions of pregnancy outcomes and pregnancy-related EIs in three cohorts of pregnant women identified in the CPRD Pregnancy Register linked to HES: (1) all pregnancies, (2) all pregnancies with a GA ≥ 24 weeks, and (3) low-risk pregnancies with a GA ≥ 24 weeks. The study also examined adverse outcomes in liveborn infants from women in the different pregnancy cohorts with Mother-Baby Link. These data are published in an accompanying paper [31]. A plain language summary is provided in Fig. 1.
The protocol of this retrospective observational cohort study was approved by the Independent Scientific Advisory Committee (ISAC) for research involving CPRD data (protocol no. 18_144RA) and has been made available to the journal reviewers.
At the time of data extraction (September 2018), CPRD GOLD contained longitudinal primary care data from 745 real-world clinical practices in the UK, with 269 currently contributing practices, 40% of which were located in England. CPRD GOLD includes over 15 million patient lives, with over 2 million registered and active patients (covering 3.5% of the UK population). CPRD primary care data are representative of the UK population with respect to age, gender, and ethnicity [32]. This study used data from the Pregnancy Register linked to HES, Office for National Statistics (ONS) mortality data, and the Index of Multiple Deprivation (IMD).
The Pregnancy Register uses a validated algorithm that identifies pregnancy episodes in CPRD GOLD. The algorithm uses all available data to identify the timing (start, end, and trimester dates), outcome, and other associated details of each pregnancy episode. As each pregnancy episode is included in the Pregnancy Register as a separate event, more than one pregnancy per woman may be included in the Pregnancy Register over time [29, 33]. A previously published internal and external validation study showed that the algorithm had 91% sensitivity for identifying and dating hospital deliveries and 77% sensitivity for hospital-based early pregnancy losses. For miscarriages, the rates were comparable to external sources, while for terminations and live births, lower rates were observed in the Pregnancy Register. Further validation studies are ongoing [29]. Data linkage to HES provides diagnostic secondary care records, including inpatient and outpatient records, for England only [30] (thus restricting the analysis to pregnancies in England which were linkable to HES). ONS mortality data provide information on the date and cause of all deaths recorded in England and Wales [30]. The IMD is an area-based measure of relative deprivation that ranks small areas in England on the patient level as a proxy for socioeconomic status. Data are provided in the form of quintiles of deprivation, from 1 (least deprived) to 5 (most deprived) [30].
The study included pregnancies in the CPRD Pregnancy Register with linkage to HES and a pregnancy end date between 1 January 2005 and 31 December 2017. To increase outcome ascertainment, a 90-day follow-up period after the pregnancy end date was required (unless the woman died before the end of this period). Therefore, pregnancies with an end date up until 2 October 2017 were included in the study cohorts. In addition, continuous active registration starting from at least 365 days before the start of pregnancy was required to assess for high-risk factors at baseline, which were used to establish the Low-Risk (LR) cohort. Figure 2 provides a visualization of each phase within the study period.
Overview of the study period. GA, gestational age. *A minimum of 90 days of active registration after the pregnancy end date was required for women to be enrolled except if the woman died during this 90-day period
To generate a range of background rates for each endpoint, three study cohorts were designed to reflect the populations that might be enrolled in maternal immunization clinical trials, depending on the strictness of the inclusion/exclusion criteria and the timing of vaccination.
The All Pregnancies (AP) cohort included all pregnancies recorded in the CPRD Pregnancy Register between 1 January 2005 and 31 December 2017 with linkage to HES, ≥ 365 days of continuous active registration prior to the pregnancy start date, ≥ 90 days of active registration following the pregnancy end date (unless the woman died before the end of this period), acceptable data quality (i.e., whether the patient met certain quality standards based on a valid age and gender, recording of events and registration status [32]), and a maternal age ≥ 18 to ≤ 45 years on the pregnancy end date. Pregnancy episodes associated with multiple births (e.g., twins, triplets) and with an unknown outcome were excluded as this study was designed to reflect the population expected to be enrolled in maternal immunization trials (Additional file 1). For live births with a GA < 20 weeks or > 44 weeks, the GA was recategorized to missing.
The All Pregnancies ≥ 24 weeks GA cohort (AP24+ cohort) was a subgroup of the AP cohort, including pregnancies with a GA ≥ 24 0/7 weeks. This subgroup excluded all women with a recorded GA < 24 0/7 weeks (calculated using the variable in the Pregnancy Register: "gestdays" < 168 days) and served as a GA-based descriptive comparator group for the LR cohort. The GA cut-off of 24 weeks was selected as it is the same as the one chosen in previous GBS maternal immunization trials [26,27,28] and falls within the timeframe of recommended maternal pertussis immunization in several countries [34].
The LR cohort included pregnancies from the AP24+ cohort without diagnosis of select high-risk medical conditions or procedures in the woman's medical history (including all available medical history prior to start of pregnancy through 24 0/7 weeks GA). See Additional files 2 and 3 for additional information on the eligibility criteria of the LR cohort, including the codes used to identify the exclusion criteria. The high-risk medical conditions and procedures determined as exclusion criteria for the LR cohort were selected based on potential exclusion criteria for maternal immunization trials.
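As a purely illustrative sketch of the cohort construction described above (not the study's actual code; every column name below except the Pregnancy Register variable "gestdays" is a hypothetical placeholder), the selection logic could look roughly as follows:

```python
import pandas as pd

def build_cohorts(preg: pd.DataFrame) -> dict:
    """Sketch of the AP, AP24+, and LR cohorts; `preg` holds one row per pregnancy
    episode from the Pregnancy Register linked to HES. Column names other than
    `gestdays` are hypothetical placeholders, not actual CPRD/HES field names."""
    in_window = preg["pregnancy_end_date"].between("2005-01-01", "2017-10-02")
    baseline_ok = preg["days_registered_before_start"] >= 365
    followup_ok = (preg["days_registered_after_end"] >= 90) | preg["died_within_90_days"]
    age_ok = preg["maternal_age_at_end"].between(18, 45)
    singleton_known_outcome = ~preg["multiple_birth"] & preg["outcome"].notna()

    ap = preg[in_window & baseline_ok & followup_ok & age_ok
              & singleton_known_outcome & preg["acceptable_data_quality"]]
    ap24 = ap[ap["gestdays"] >= 168]                # GA >= 24 0/7 weeks (24 * 7 = 168 days)
    lr = ap24[~ap24["high_risk_before_24_weeks"]]   # no predefined high-risk condition/procedure
    return {"AP": ap, "AP24+": ap24, "LR": lr}
```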
Additional cohorts were defined with linkage to the Mother-Baby Link to assess adverse infant outcomes, as described in the accompanying paper [31].
Study endpoints and variables
The selection of study endpoints was guided by the standardized case definitions established by the Brighton Collaboration and Global Alignment of Immunization Safety Assessment (GAIA) project for use in maternal immunization trials. The aim of these standardized case definitions is to achieve global alignment in the case definitions of safety outcomes in clinical trials enrolling pregnant women. This harmonization will enable comparison of safety data between and among maternal immunization trials [35, 36]. To ensure the broad applicability of study results, the case definitions of pregnancy outcomes and pregnancy-related EIs were manually aligned with those provided by the Brighton Collaboration and GAIA wherever possible. However, the exact application of GAIA definitions was challenging because laboratory results, procedure results, and medication prescribed during a hospital stay are underreported in the CPRD and linked databases. Furthermore, GAIA case definitions were not available for all study endpoints. Therefore, diagnostic coding was used (Read codes in CPRD GOLD and International Classification of Diseases, 10th Revision [ICD-10] codes in HES). Additional files 4 and 5 show each endpoint with the corresponding GAIA definition and diagnostic codes.
Pregnancy outcomes
Table 1 lists the pregnancy outcomes assessed in the study (live birth and adverse pregnancy outcomes), as recorded in the Pregnancy Register between the pregnancy start and end dates. Of note, miscarriages with a GA > 24 weeks were reclassified as stillbirths. The identification algorithms and codes are listed in Additional files 4 and 5.
Table 1 Pregnancy outcomes and pregnancy-related events of interest
Pregnancy-related EIs
Table 1 provides the list of pregnancy-related EIs assessed in the study along with the associated timeframe for which each was assessed. All pregnancy-related EIs, with the exception of maternal death, were identified based on Read codes in CPRD or ICD-10 codes in HES (Additional files 4 and 5). Maternal death was identified based on the date of death in CPRD (Additional file 6) or ONS (Additional files 4 and 5). The date from ONS was used if conflicting information was reported.
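For illustration only, the code-list-based identification of events could be sketched as below; the code values shown are placeholders rather than the study's actual Read/ICD-10 code lists (those are provided in Additional files 4 and 5), and for simplicity a single assessment window is used, whereas in the study each endpoint has its own window (Table 1):

```python
# Hypothetical sketch of code-list-based endpoint identification; the code values
# are placeholders, not the study's actual Read or ICD-10 code lists.
EI_CODELISTS = {
    "gestational_diabetes": {"read": {"READ_GDM_1"}, "icd10": {"ICD10_GDM_1"}},
    "pre_eclampsia":        {"read": {"READ_PET_1"}, "icd10": {"ICD10_PET_1"}},
}

def flag_events(records, window_start, window_end):
    """records: iterable of (date, coding_system, code) tuples for one pregnancy,
    pooled from CPRD (Read codes) and HES (ICD-10 codes). Returns the set of
    events of interest with at least one qualifying code in the window."""
    found = set()
    for date, system, code in records:
        if not (window_start <= date <= window_end):
            continue
        for endpoint, codelist in EI_CODELISTS.items():
            if code in codelist.get(system, set()):
                found.add(endpoint)
    return found
```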
The following variables were assessed in the study: contraception use, smoking status and alcohol intake in the 365 days before the pregnancy start date (data not shown); and maternal age at pregnancy start, calendar year at pregnancy start, number of pregnancies in the study period (data not shown), ethnicity, quintile of deprivation in IMD, and pregnancy number (data not shown) (Additional files 7 and 8). These variables were selected as being potential risk factors for the evaluated pregnancy outcomes and pregnancy-related EIs.
Analyses were conducted using SAS software version 9.4 (SAS Institute Inc., Cary, NC, US). No hypothesis testing was performed in this descriptive study. Potential differences between groups were based on non-overlapping 95% confidence intervals (CIs). Feasibility counts during protocol development indicated that the sample size obtained from the databases would provide sufficient precision for the descriptive purpose of the study. Standard data management practices were performed on the databases (i.e., the initial cohort selection process, subsequent revisions of the selection process and statistical analyses were reviewed by the Data Analyst, the Quality Control Analyst and the Principal Investigator).
Descriptive analyses of demographic characteristics of all pregnancy cohorts were conducted, including number and proportion for categorical variables, and mean, standard deviation, median, interquartile range (IQR), and minimum and maximum values for continuous variables. Within each cohort, the incidence proportion of each study endpoint was calculated as follows:
$$\frac{\mathrm{Number\ of\ new\ cases\ of\ study\ outcomes\ or\ EI\ in\ the\ period\ of\ interest}}{\mathrm{Number\ of\ pregnancies\ identified\ in\ CPRD\ in\ the\ period\ of\ interest}}$$
The incidence proportions and 95% CIs of the study endpoints were calculated for every 10 000 pregnancies. Due to the study design and use of the Pregnancy Register as a data source, women were permitted to contribute more than one sequential pregnancy to the dataset over time. To account for clustering in the data due to the non-independent nature of sequential pregnancies included in the dataset for the same woman, the 95% CIs of incidence proportions were estimated via a generalized estimating equation model [37]. Missing values in the data were identified but not replaced, under the assumption that they were missing at random. To maintain confidentiality and individual data anonymization, data were provided only if at least five cases were observed for a given stratum or subgroup. Each study endpoint was presented for the entire study period. Exploratory analyses to stratify each study endpoint by calendar year of pregnancy start date, maternal age at start of pregnancy (18–24, 25–29, 30–34, 35–39, and 40–45 years of age), ethnicity (white, Asian, black, mixed, other, and unknown), and IMD quintile (1 [least deprived]–5 [most deprived]) were also conducted.
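As an illustration of the calculation described above (not the study's actual analysis code: the data frame and column names are hypothetical, and an intercept-only logistic GEE with an exchangeable working correlation is one plausible implementation of the clustering adjustment, not necessarily the exact specification used), the incidence proportion per 10 000 pregnancies with a cluster-adjusted 95% CI could be computed as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def incidence_per_10000(df: pd.DataFrame, event_col: str, mother_col: str = "mother_id"):
    """Incidence proportion per 10 000 pregnancies with a 95% CI from an
    intercept-only logistic GEE, clustering sequential pregnancies of the same
    woman. `df` has one row per pregnancy; column names are hypothetical."""
    y = df[event_col].astype(float)   # 1 if the endpoint occurred, else 0
    x = np.ones((len(df), 1))         # intercept-only model
    model = sm.GEE(y, x, groups=df[mother_col],
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable())
    res = model.fit()
    inv_logit = lambda t: 1.0 / (1.0 + np.exp(-t))
    est = inv_logit(np.asarray(res.params)[0])
    lo, hi = inv_logit(np.asarray(res.conf_int())[0])
    return 10000 * est, 10000 * lo, 10000 * hi

# Hypothetical usage: point, lower, upper = incidence_per_10000(pregnancies, "preterm_delivery")
```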
Sample selection and cohort description
We identified 1 757 557 pregnancies across the study period, of which 1 062 405 (60.4%) were linked to HES. Once selection criteria were applied, 298 155 pregnancies were ultimately included in the AP cohort, of which 208 328 (69.9%) had a recorded GA ≥ 24 weeks and were included in the AP24+ cohort (Fig. 3). Of the pregnancies in the AP24+ cohort, 137 932 (66.2%) were included in the LR cohort. Figure 3 provides the disposition of subjects within cohorts, and Fig. 4 provides an overview of the pregnancies excluded from the LR cohort by individual exclusion criteria.
Cohort selection flow chart. AP, All Pregnancies; AP24+, All Pregnancies with gestational age ≥24 weeks; LR, Low-Risk pregnancies; CPRD, Clinical Practice Research Datalink; HES, Hospital Episode Statistics; N, number of pregnancies in the corresponding group/category
High-risk medical conditions or procedures in medical history* leading to exclusion from LR cohort. AP, All Pregnancies; AP24+, All Pregnancies with gestational age ≥ 24 weeks; LR, Low-Risk pregnancies; N, number of pregnancies in the corresponding group/category; CMV, cytomegalovirus; COPD, chronic obstructive pulmonary disorder; HBV, hepatitis B virus; HCV, hepatitis C virus; HIV, human immunodeficiency virus. *All available medical history prior to start of pregnancy through 24 0/7 weeks gestational age (see Additional file 2 for algorithms and assessment periods)
The median duration of pregnancy in the AP cohort was shorter with a wider IQR compared to the AP24+ and LR cohorts (Table 2). The median age of women at the start of pregnancy was 30 years for all three cohorts (Table 2). By age category, the highest proportion of women were 30–34 years of age at the start of pregnancy (around 30% for all cohorts, Table 2). Most women were white, and women within each cohort were evenly distributed across the five IMD quintiles.
Table 2 Demographics and baseline characteristics by study cohort
Between 2005 and 2017, the number and proportion of pregnancies identified generally decreased by calendar year of pregnancy start date across all cohorts, particularly from 2013 onward (Table 2).
Live birth was the most common pregnancy outcome across all cohorts (Table 3). In the AP cohort, which included pregnancies of any GA, 7 197.3 per 10 000 pregnancies resulted in live births. In the AP24+ and LR cohorts, which only included pregnancies with a GA ≥ 24 weeks, 9 944.7 and 9 949.4 pregnancies per 10 000 resulted in live births, respectively. Preterm delivery occurred less frequently in the AP cohort (534.3 per 10 000 pregnancies, Table 3) than in the AP24+ and LR cohorts; the incidence proportion of preterm delivery was higher in the AP24+ cohort (742.9 per 10 000 pregnancies) than the LR cohort (680.0 per 10 000 pregnancies, Table 3).
Table 3 Incidence proportions of pregnancy outcomes by study cohort for the entire study period
Stillbirth was relatively rare within all cohorts at ≤ 50.0 stillbirths per 10 000 pregnancies (Table 3). Miscarriage was the most common adverse pregnancy outcome in the AP cohort (1 379.5 per 10 000 pregnancies). It could not be assessed in the AP24+ and LR cohorts because in our study, miscarriages with a GA > 24 weeks were reclassified as stillbirths (Table 3). Likewise, the pregnancy outcomes of miscarriage or termination (composite endpoint) and ectopic pregnancy (which is also expected to occur prior to 24 weeks GA) could not be assessed in the AP24+ and LR cohorts. For termination, a very low incidence proportion was observed for the AP24+ and LR cohorts (5.3 and 4.4 per 10 000 pregnancies, respectively) relative to the AP cohort (522.9 per 10 000 pregnancies, Table 3).
Pregnancy-related events of interest
Across all cohorts, the most common pregnancy-related EIs were fetal/perinatal distress or asphyxia (1 318.7, 1 824.3, and 1 833.0 per 10 000 pregnancies in the AP, AP24+ , and LR cohorts, respectively), followed by vaginal or intrauterine hemorrhage (697.4, 799.2, and 729.0 per 10 000 pregnancies) and labor protraction/arrest disorders (541.6, 752.4, and 774.5 per 10 000 pregnancies) (Table 4).
Table 4 Incidence proportions of pregnancy-related events of interest by study cohort for the entire study period
The incidence proportions of pregnancy-related EIs were lower in the LR cohort than the AP24+ cohort for 10 out of the 15 EIs examined: vaginal or intrauterine hemorrhage, pre-eclampsia, pregnancy-related hypertension, liver or biliary disease, premature/preterm labor, oligohydramnios, polyhydramnios, intrauterine growth restriction/poor fetal growth, gestational diabetes, and preterm premature rupture of membranes (based on non-overlapping CIs, Table 4). The incidence proportions of maternal sepsis, eclampsia, labor protraction/arrest disorders, maternal death, and fetal/perinatal distress or asphyxia were similar in the AP24+ and LR cohorts (overlapping CIs, Table 4).
Exploratory stratification of study endpoints by select variables
The incidence proportions of pregnancy outcomes and most pregnancy-related EIs remained relatively constant by calendar year of pregnancy start date across all cohorts (Additional file 9, Tables S9.1–S9.25). However, an increase was observed for some, including maternal sepsis, gestational diabetes, and intrauterine growth restriction/poor fetal growth (Additional file 9, Tables S9.8, S9.19, S9.18). The incidence proportion of gestational diabetes increased approximately four-fold in each cohort between 2005 and 2016, while that of maternal sepsis remained constant until 2012 and then increased between three- and seven-fold in the three cohorts between 2012 and 2016 (Fig. 5 and Additional file 9, Tables S9.19 and S9.8). The incidence proportion of intrauterine growth restriction/poor fetal growth increased about two-fold in each cohort between 2005 and 2016 (Additional file 9, Table S9.18).
Incidence proportions of maternal sepsis (A) and gestational diabetes (B) by year of pregnancy start date. AP, All Pregnancies; AP24+ , All Pregnancies with gestational age ≥ 24 weeks; CI, Confidence interval; LR, Low-Risk pregnancies. *Because the study start date was 1 January 2005, pregnancies reported as starting in 2004 include only those which began in the last 9 months of 2004 (if full term, for example). °Pregnancies with a start date of 2017 were not included in this figure because the number was extremely low (and therefore incidence proportions less robust). Pregnancies reported as starting in 2017 included only those which began in the first month of 2017 (if full term, for example) because the study period end date was 31 December 2017 (with pregnancy end date up until 2 October 2017, as there was a requirement for at least a 90-day follow up after the pregnancy end date)
Across all cohorts, the incidence proportions of pregnancy outcomes and pregnancy-related EIs generally varied by maternal age, ethnicity, and IMD quintile; however, observed patterns of risk were complex and non-uniform. For example, the incidence proportions of several pregnancy outcomes (e.g., stillbirth, preterm delivery) and pregnancy-related EIs (e.g., gestational diabetes, polyhydramnios) were highest amongst pregnancies with advanced maternal age, non-white race and higher socioeconomic deprivation levels (Additional file 9, Tables S9.2, S9.7, S9.19, and S9.17). By contrast, the incidence proportions of other pregnancy-related EIs (e.g., vaginal or intrauterine hemorrhage, labor protraction/arrest disorders, and intrauterine growth restriction/poor fetal growth) were lowest among pregnancies with advanced maternal age (Additional file 9, Tables S9.9, S9.15 and S9.18). Additionally, the incidence proportion of pregnancy-related hypertension was lowest in pregnancies where the women were the most deprived (Additional file 9, Table S9.12).
This descriptive, retrospective cohort study based on CPRD and linked data showed that the incidence proportions of pregnancy outcomes and pregnancy-related EIs represented in the CPRD varied between a cohort including all pregnancies, a cohort including all pregnancies with a GA ≥ 24 weeks, and a cohort including only low-risk pregnancies with a GA ≥ 24 weeks. This demonstrates the importance of accounting for GA and maternal risk profile when establishing background rates for a population of interest.
Because (by definition) the AP24+ and LR cohorts only included pregnancies with a GA of at least 24 weeks, the median duration of pregnancy was 7 days shorter with a much wider IQR in the AP cohort than the AP24+ and LR cohorts. The impact of the GA restriction was reflected in the observed rates of pregnancy outcomes. For instance, the incidence proportions of pregnancies resulting in live birth and preterm delivery (outcomes normally occurring after 24 weeks GA) were notably lower in the AP cohort than the AP24+ or LR cohorts. By contrast, the incidence proportion of termination was higher in the AP cohort than the AP24+ and LR cohorts as this outcome is expected to occur early in pregnancy (i.e., prior to 24 weeks GA). For the same reason, ectopic pregnancies could not be assessed in the AP24+ and LR cohorts. Neither could miscarriages and miscarriages or terminations (composite endpoint) because miscarriages with a GA > 24 weeks were reclassified as stillbirths in our study. When focusing on the AP24+ and LR cohorts, the incidence proportions of live birth, stillbirth, and termination were similar between cohorts. However, the incidence proportion of preterm delivery was lower in the LR cohort than the AP24+ cohort, potentially as a result of the exclusion of pregnant women with known risk factors for preterm delivery (e.g., hypertension [18]).
Due to the inclusion criterion of ≥ 24 weeks GA in the AP24+ and LR cohorts, which corresponds to a likely timing of enrollment in a maternal immunization trial [26,27,28], the AP24+ and LR cohorts are the most relevant for understanding the background rates of pregnancy-related EIs that might be expected in maternal immunization trials. For 10 of the EIs examined, lower incidence proportions were reported in the LR cohort relative to the AP24+ cohort (based on non-overlapping CIs). For the 5 remaining EIs examined, the incidence proportions in the LR cohort were similar to those in the AP24+ cohort. These results suggest that maternal risk profile (as defined by the presence of certain medical conditions and/or procedures in a woman's available medical history and up to 24 0/7 weeks GA in the current study) influences the likelihood of developing certain pregnancy-related EIs more strongly than others. For some EIs, the lower incidence proportions may be explained by these being included as exclusion criteria for the LR cohort (e.g., gestational hypertension).
Although the requirement for pregnancies in all cohorts to have a linkage to HES restricted the study population to England only, the rates of pregnancy outcomes reported in this study are largely consistent with available reports from independent sources for England and Wales, supporting the external validity and generalizability of the results for these areas. Extrapolation to other high-income areas should be done with caution as population dynamics may vary. For example, the ONS reported annual rates of stillbirth in England and Wales decreasing from 54 per 10 000 births in 2005 to 42 per 10 000 births in 2017 [38]. In the current study, 42.7 stillbirths per 10 000 pregnancies were reported in the AP cohort in 2005 and 39.6/10 000 in 2016. The UK National Health Service estimated that 1 in 8 pregnancies (12.5%) ends in miscarriage [39]; in the current study, 1 379.5 miscarriages per 10 000 pregnancies (13.8%) were reported for the AP cohort over the entire study period. Similarly, the prematurity rate in England and Wales was 7.3 per 100 live births in 2012 [40]; for the same year, 609.5 premature deliveries per 10 000 pregnancies were reported in the AP cohort in the current study. The ONS reported that 22.7% of conceptions among women resident in England and Wales in 2017 led to a legal abortion [41]. This is four times higher than the termination rate observed in the AP cohort of the current study (522.9/10 000 pregnancies; 5.2%). An underestimation of termination rates was also observed by Minassian et al. in their external validation of the Pregnancy Register [29].
There was a decrease in the number of pregnancies identified in CPRD by calendar year over the study period. This reflects a decrease in the number of English general practices contributing data to CPRD GOLD over time as well as a decline in the fertility rate in England and Wales in recent years (from 1.94 in 2012 to 1.76 in 2017) [38, 41, 42].
An increase in the incidence proportions of maternal sepsis and gestational diabetes over time was observed in the current study, which is consistent with reports for both maternal sepsis in the US [43] and gestational diabetes [44, 45]. For maternal sepsis, which increased sharply from 2012 onwards, this was likely driven by a combination of true changes in incidence, coding changes (Read code "A3C..00: Sepsis" was introduced in 2012), changes in screening and testing practices, and an increased clinical awareness of signs and symptoms [46]. For gestational diabetes, this increase may also have been driven by revised diagnostic criteria and an increased clinician awareness following the publication of the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study, which was conducted during the study period [47]. The current study also showed an increase in the incidence proportion of intrauterine growth restriction/poor fetal growth over time, which may in part be explained by changes in screening and diagnosis guidelines, e.g., an update on the management for the small for GA fetus in the Royal College of Obstetricians and Gynecologists guidelines and the publication of the Perinatal Institute's "Growth Assessment Protocol" in 2013 [48,49,50]. The observed increase in these three endpoints highlights the importance of understanding changes in epidemiology and clinical practices over time when conducting retrospective studies with a long study period (e.g., 2005–2017 in the current study) or when selecting historical controls in real-world studies.
A key strength of this study is the use of the CPRD Pregnancy Register as the primary data source. As one of the largest and best-established primary care databases for research, the CPRD and the available linked datasets provide a rich and generalizable source of data on antenatal care, postnatal care, and pregnancy outcomes for England. Consistently, in the current study, the distribution of women across the five IMD quintiles was similar to the female population of England aged 18–45 years [51]. The recently validated CPRD Pregnancy Register leverages all available pregnancy data to identify pregnancy episodes. It has been demonstrated to closely agree with external hospitalization data in terms of the completeness and timing of pregnancy outcomes. However, some pregnancy outcomes such as termination and live birth appear to be underestimated in the Pregnancy Register as compared to data from the Department of Health and Social Care and ONS, respectively [29].
Another strength of this study is the application of the standardized case definitions established by the Brighton Collaboration and GAIA project for use in maternal immunization trials [35, 36]. These definitions were used to guide the selection and determination of study endpoints. Although the exact application of clinical case definitions was at times difficult within the context of this database study, with diagnoses recorded under the Read and ICD-10 systems, the incorporation of the GAIA guidance and philosophy contributes to the broad applicability and interest of the study results. This study may also help optimize the design of future studies (e.g., maternal immunization studies) by providing background rates of certain pregnancy outcomes and pregnancy-related EIs.
The major limitation of this study is its descriptive nature, which limits the strength of the conclusions that can be extracted from the analysis, particularly for the exploratory stratification of study endpoints by maternal age, ethnicity, and IMD. Demographic and temporal changes can substantially impact wider applicability of the present data to other populations. The vaccination history of the mother may also influence some outcomes. As this was not assessed, the potential impact could not be determined. In addition, the exclusion of a large proportion of pregnancies as a result of the required ≥ 365-day baseline period may have introduced selection bias. Cohort selection is also a limitation of this study. The LR cohort was selected to represent pregnant women likely to be enrolled in maternal immunization trials. However, coding limitations inherent to database studies (e.g., past medical conditions may be included as current diagnoses) may have led to erroneous exclusions from the LR cohort. On the other hand, past medical conditions or behavioral risk factors may have been omitted, thereby including high-risk pregnancies in the LR cohort. Another limitation is the possible presence of coding errors in the source data. Although the impact of coding errors is expected to be minimal based on prior CPRD validation studies for different disease states [52,53,54], they could have influenced incidence proportions. Additionally, Read and ICD-10 codes were used to identify the study outcomes and could have led to over- or underreporting of outcomes. Nevertheless, the study contributes to the evidence that maternal characteristics, including medical history and health-related risk factors, influence pregnancy outcomes and pregnancy-related EIs.
Before conducting maternal immunization trials, it is essential to understand the background incidence proportions of pregnancy outcomes and pregnancy-related EIs in specific populations to evaluate and reliably interpret and monitor the safety of maternal vaccine candidates. This real-world analysis, using English primary and secondary care data that are largely representative of the general population, addressed this knowledge gap by generating the incidence proportions of a comprehensive list of pregnancy outcomes and pregnancy-related EIs in all and low-risk pregnancies represented in the CPRD Pregnancy Register. The results of this study demonstrate the importance of considering both the GA of a pregnancy episode and maternal risk factors when establishing background rates for a population of interest. These data may facilitate the interpretation of safety data from maternal immunization trials and the safety monitoring of maternal vaccines. In addition, these data can be of interest for any intervention studied in populations of pregnant women.
Study documents can be requested for further research from www.clinicalstudydatarequest.com. The CPRD and linked data used in this study cannot be shared directly with others due to contractual agreements. CPRD data may be requested from [email protected].
AP:
All Pregnancies
AP24+ :
All Pregnancies with gestational age ≥ 24 weeks
CPRD:
Clinical Practice Research Datalink
EI:
Event of interest
GA:
Gestational age
GAIA:
Global Alignment of Immunization Safety Assessment
GBS:
Group B streptococcus
HES:
Hospital Episode Statistics
ICD-10:
International Classification of Diseases, 10th Revision
IQR:
Interquartile range
ISAC:
Independent Scientific Advisory Committee
IMD:
Index of Multiple Deprivation
LR:
Low-Risk
ONS:
Office for National Statistics
RSV:
Respiratory syncytial virus
Marshall H, McMillan M, Andrews RM, Macartney K, Edwards K. Vaccines in pregnancy: The dual benefit for pregnant women and infants. Hum Vaccin Immunother. 2016;12(4):848–56.
Vojtek I, Dieussaert I, Doherty TM, Franck V, Hanssens L, Miller J, et al. Maternal immunization: where are we now and how to move forward? Ann Med. 2018;50(3):193–208.
Omer SB, Goodman D, Steinhoff MC, Rochat R, Klugman KP, Stoll BJ, et al. Maternal influenza immunization and reduced likelihood of prematurity and small for gestational age births: a retrospective cohort study. PLoS Med. 2011;8(5): e1000441.
Swamy GK, Heine RP. Vaccinations for pregnant women. Obstet Gynecol. 2015;125(1):212–26.
World Health Organization. Vaccines against influenza WHO position paper - November 2012. Wkly Epidemiol Rec. 2012;87(47):461–76.
World Health Organization. Pertussis vaccines: WHO position paper - September 2015. Wkly Epidemiol Rec. 2015;90(35):433–58.
World Health Organization. Tetanus vaccines: WHO position paper – February 2017. Wkly Epidemiol Rec. 2017;92(6):53–76.
National Health Service. NHS vaccinations and when to have them. https://www.nhs.uk/conditions/vaccinations/nhs-vaccinations-and-when-to-have-them/. Accessed 9 June 2021.
Center for Disease Control and Prevention. Vaccines During and After Pregnancy. https://www.cdc.gov/vaccines/pregnancy/vacc-during-after.html. Accessed 9 June 2021.
Munoz FM. Current challenges and achievements in maternal immunization research. Front Immunol. 2018;9:436.
Omer SB. Maternal immunization. N Engl J Med. 2017;376(13):1256–67.
Engmann C, Fleming JA, Khan S, Innis BL, Smith JM, Hombach J, et al. Closer and closer? Maternal immunization: current promise, future horizons. J Perinatol. 2020;40(6):844–57.
Heath PT, Culley FJ, Jones CE, Kampmann B, Le Doare K, Nunes MC, et al. Group B streptococcus and respiratory syncytial virus immunisation during pregnancy: a landscape analysis. Lancet Infect Dis. 2017;17(7):e223–34.
Madrid L, Seale AC, Kohli-Lynch M, Edmond KM, Lawn JE, Heath PT, et al. Infant group B streptococcal disease incidence and serotypes worldwide: Systematic review and meta-analyses. Clin Infect Dis. 2017;65(suppl_2):S160-72.
Reeves RM, Hardelid P, Gilbert R, Warburton F, Ellis J, Pebody RG. Estimating the burden of respiratory syncytial virus (RSV) on respiratory hospital admissions in children less than five years of age in England, 2007–2012. Influenza Other Respir Viruses. 2017;11(2):122–9.
Shi T, McAllister DA, O'Brien KL, Simoes EAF, Madhi SA, Gessner BD, et al. Global, regional, and national disease burden estimates of acute lower respiratory infections due to respiratory syncytial virus in young children in 2015: a systematic review and modelling study. Lancet. 2017;390(10098):946–58.
Alexopoulos AS, Blair R, Peters AL. Management of preexisting diabetes in pregnancy: A review. JAMA. 2019;321(18):1811–9.
Bramham K, Parnell B, Nelson-Piercy C, Seed PT, Poston L, Chappell LC. Chronic hypertension and pregnancy outcomes: systematic review and meta-analysis. BMJ. 2014;348: g2301.
Fuchs F, Monet B, Ducruet T, Chaillet N, Audibert F. Effect of maternal age on the risk of preterm birth: A large cohort study. PLoS ONE. 2018;13(1): e0191002.
Kenny LC, Lavender T, McNamee R, O'Neill SM, Mills T, Khashan AS. Advanced maternal age and adverse pregnancy outcome: evidence from a large contemporary cohort. PLoS ONE. 2013;8(2): e56583.
Lean SC, Derricott H, Jones RL, Heazell AEP. Advanced maternal age and adverse pregnancy outcomes: A systematic review and meta-analysis. PLoS ONE. 2017;12(10): e0186287.
Leon MG, Moussa HN, Longo M, Pedroza C, Haidar ZA, Mendez-Figueroa H, et al. Rate of gestational diabetes mellitus and pregnancy outcomes in patients with chronic hypertension. Am J Perinatol. 2016;33(8):745–50.
Lelong A, Jiroff L, Blanquet M, Mourgues C, Leymarie MC, Gerbaud L, et al. Is individual social deprivation associated with adverse perinatal outcomes? Results of a French multicentre cross-sectional survey. J Prev Med Hyg. 2015;56(2):E95–101.
Smith LK, Manktelow BN, Draper ES, Springett A, Field DJ. Nature of socioeconomic inequalities in neonatal mortality: population based study. BMJ. 2010;341: c6654.
Vos AA, Posthumus AG, Bonsel GJ, Steegers EA, Denktas S. Deprived neighborhoods and adverse perinatal outcome: a systematic review and meta-analysis. Acta Obstet Gynecol Scand. 2014;93(8):727–40.
Donders GG, Halperin SA, Devlieger R, Baker S, Forte P, Wittke F, et al. Maternal immunization with an investigational trivalent group B streptococcal vaccine: A randomized controlled trial. Obstet Gynecol. 2016;127(2):213–21.
Heyderman RS, Madhi SA, French N, Cutland C, Ngwira B, Kayambo D, et al. Group B streptococcus vaccination in pregnant women with or without HIV in Africa: a non-randomised phase 2, open-label, multicentre trial. Lancet Infect Dis. 2016;16(5):546–55.
Swamy GK, Metz TD, Edwards KM, Soper DE, Beigi RH, Campbell JD, et al. Safety and immunogenicity of an investigational maternal trivalent group B streptococcus vaccine in pregnant women and their infants: Results from a randomized placebo-controlled phase II trial. Vaccine. 2020;38(44):6930–40.
Minassian C, Williams R, Meeraus WH, Smeeth L, Campbell OMR, Thomas SL. Methods to generate and validate a Pregnancy Register in the UK Clinical Practice Research Datalink primary care database. Pharmacoepidemiol Drug Saf. 2019;28(7):923–33.
Clinical Practice Research Datalink (CPRD). CPRD linked data. https://www.cprd.com/linked-data. Accessed 9 June 2021.
Riley M, Lambrelli D, Graham S, Henry O, Sutherland A, Schmidt A, et al. Facilitating safety evaluation in maternal immunization trials: A retrospective cohort study to assess adverse infant outcomes following low-risk pregnancies in England. BMC Pregnancy Childbirth. 2021 (Submitted for publication).
Herrett E, Gallagher AM, Bhaskaran K, Forbes H, Mathur R, van Staa T, et al. Data resource profile: Clinical Practice Research Datalink (CPRD). Int J Epidemiol. 2015;44(3):827–36.
Clinical Practice Research Datalink (CPRD). Pregnancy Register and Mother-baby link. https://cprd.com/linked-data#Mother-baby%20link%20and%20Pregnancy%20Register. Accessed 9 June 2021.
Kandeil W, van den Ende C, Bunge EM, Jenkins VA, Ceregido MA, Guignard A. A systematic review of the burden of pertussis disease in infants and the effectiveness of maternal immunization against pertussis. Expert Rev Vaccines. 2020;19(7):621–38.
Bonhoeffer J, Kochhar S, Hirschfeld S, Heath PT, Jones CE, Bauwens J, et al. Global alignment of immunization safety assessment in pregnancy - The GAIA project. Vaccine. 2016;34(49):5993–7.
Jones CE, Munoz FM, Kochhar S, Vergnano S, Cutland CL, Steinhoff M, et al. Guidance for the collection of case report form variables to assess safety in clinical trials of vaccines in pregnancy. Vaccine. 2016;34(49):6007–14.
Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73(1):13–22.
Office for National Statistics. Births in England and Wales: 2017. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/livebirths/bulletins/birthsummarytablesenglandandwales/2017. Accessed 9 June 2021.
National Health Service. Miscarriage. https://www.nhs.uk/conditions/miscarriage/. Accessed 9 June 2021.
National Institute for Health and Care Excellence. Preterm labour and birth: NICE guideline [NG25]. https://www.nice.org.uk/guidance/ng25/resources/preterm-labour-and-birth-pdf-1837333576645. Accessed 9 June 2021.
Office for National Statistics. Conception and fertility rates. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/conceptionandfertilityrates. Accessed 9 June 2021.
Clinical Practice Research Datalink (CPRD). https://cprd.com/home. Accessed 9 June 2021.
Kendle AM, Salemi JL, Tanner JP, Louis JM. Delivery-associated sepsis: trends in prevalence and mortality. Am J Obstet Gynecol. 2019;220(4):391.e1–16.
Coton SJ, Nazareth I, Petersen I. A cohort study of trends in the prevalence of pregestational diabetes in pregnancy recorded in UK general practice between 1995 and 2012. BMJ Open. 2016;6(1): e009494.
Farrar D, Simmonds M, Griffin S, Duarte A, Lawlor DA, Sculpher M, et al. The identification and treatment of women with hyperglycaemia in pregnancy: an analysis of individual participant data, systematic reviews, meta-analyses and an economic evaluation. Health Technol Assess. 2016;20(86):1–348.
National Health Service England. Improving outcomes for patients with sepsis. https://www.england.nhs.uk/wp-content/uploads/2015/08/Sepsis-Action-Plan-23.12.15-v1.pdf. Accessed 9 June 2021.
Coustan DR, Lowe LP, Metzger BE, Dyer AR. The Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study: paving the way for new diagnostic criteria for gestational diabetes mellitus. Am J Obstet Gynecol. 2010;202(6):654.e1-6.
Williams M, Turner S, Butler E, Gardosi J. Fetal growth surveillance - Current guidelines, practices and challenges. Ultrasound. 2018;26(2):69–79.
Clifford S, Giddings S, Southam M, Williams M, Gardosi J. The Growth Assessment Protocol: a national programme to improve patient safety in maternity care. Midwifery Digest. 2013;23(4):516–23.
Royal College of Obstetricians and Gynaecologists. Small-for-Gestational-Age Fetus, Investigation and Management (Green-top Guideline No. 31). https://www.rcog.org.uk/en/guidelines-research-services/guidelines/gtg31/. Accessed 9 June 2021.
Office for National Statistics. Populations by sex, age group and Index of Multiple Deprivation (IMD) quintile, England, 2001 to 2017. https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/adhocs/009283populationsbysexagegroupandimdquintileengland2001to2017. Accessed 9 June 2021.
Nissen F, Morales DR, Mullerova H, Smeeth L, Douglas IJ, Quint JK. Validation of asthma recording in the Clinical Practice Research Datalink (CPRD). BMJ Open. 2017;7(8): e017474.
Quint JK, Müllerova H, DiSantostefano RL, Forbes H, Eaton S, Hurst JR, et al. Validation of chronic obstructive pulmonary disease recording in the Clinical Practice Research Datalink (CPRD-GOLD). BMJ Open. 2014;4(7): e005540.
Arana A, Margulis AV, Varas-Lorenzo C, Bui CL, Gilsenan A, McQuay LJ, et al. Validation of cardiovascular outcomes and risk factors in the Clinical Practice Research Datalink in the United Kingdom. Pharmacoepidemiol Drug Saf. 2021;30(2):237–47.
The authors are grateful to Amanda Leach, Marianne Cunnington, and Keele Wurst for epidemiological input and review of the study design; Veronique Bianco, Dominique Rosillon, and Emad Yanni for reviewing the Statistical Analysis Plan, protocol and other study materials, and Padmaja Patnaik for developing the protocol and Statistical Analysis Plan. The authors also wish to thank Huajun Wang for statistical review, Gael Dos Santos for epidemiological input and help with ensuring data integrity, Nadia Tullio and Vishvesh Shende for providing safety input, Janine Linden for project writing support, and Brooke Hanscom and Wendolyn Lopez for operational support. They also thank Modis for editorial assistance and manuscript coordination, on behalf of GSK: Natalie Denef and Tina Van den Meersche provided writing support and Camille Turlure coordinated manuscript development.
This study was funded by GlaxoSmithKline Biologicals S.A., which was involved in the study design, all stages of the study conduct, collection of data and analysis, and writing of the manuscript.
GSK, 14200 Shady Grove Rd, Rockville, MD, 20850, Washington, USA
Megan Riley, Ouzama Henry, Andrea Sutherland, Alexander Schmidt & Sonia K. Stoszek
Evidera, 201 Talgarth Rd, Hammersmith, London, W6 8BJ, UK
Dimitra Lambrelli, Sophie Graham, Nicola Sawalhi-Leckenby & Robert Donaldson
Moderna, Cambridge, MA, USA
Andrea Sutherland & Sonia K. Stoszek
Bill & Melinda Gates Medical Research Institute, Cambridge, MA, USA
Alexander Schmidt
Megan Riley
Dimitra Lambrelli
Sophie Graham
Ouzama Henry
Andrea Sutherland
Nicola Sawalhi-Leckenby
Robert Donaldson
Sonia K. Stoszek
Involved in conception or design: OH, ASc, ASu, SS, MR, SG, DL, NSL. Participated in the collection or generation of the study/project data: ASc, SS, MR, SG, DL, NSL. Contributed materials/analysis/reagents/tools: ASc, SS, MR, SG, DL, NSL. Performed the study/project: SS, SG, DL, NSL. Involved in analysis and interpretation of data: OH, ASc, ASu, SS, MR, SG, RD, DL, NSL. Drafted the paper: MR. Reviewed and edited the paper: all authors. The authors read and approved the final manuscript.
Correspondence to Megan Riley.
CPRD obtained ethics approval to receive and supply patient data for public health research (https://www.cprd.com/safeguarding-patient-data). The protocol of this retrospective observational cohort study was approved by the Independent Scientific Advisory Committee (ISAC) for research involving CPRD data (protocol no. 18_144RA) and has been made available to the journal reviewers. All methods were carried out in accordance with relevant guidelines and regulations. As per CPRD process (https://www.cprd.com/safeguarding-patient-data), no information that can identify a patient is ever sent to CPRD; CPRD never receives any patient identifiers from a GP practice such as patient name, address, NHS number, full date of birth or medical notes; and because a patient cannot be identified from data a GP practice sends to CPRD, ISAC does not require the GP practice to seek a patient's consent to share data with CPRD. This study is based on data from the CPRD obtained under license from the UK Medicines and Healthcare products Regulatory Agency. The data are provided by patients and collected by the National Health Service as part of their care and support. The interpretation and conclusions contained in this report are those of the authors alone.
MR and OH are employees of the GSK group of companies (GSK). ASc, ASu, and SS were GSK employees at the time of the study. OH, ASc, and SS own GSK stocks or shares. DL, NSL, RD, and SG are salaried employees of Evidera, the consultancy company hired by GSK to perform this work. The authors declare no other financial or non-financial interests.
Exclusion criteria for the all pregnancies cohort.
Identification algorithm and assessment period for low-risk cohort exclusion criteria.
Codes used to identify exclusion criteria for the low-risk cohort.
Endpoints with GAIA definitions and the feasibility of applying these using CPRD data.
Codes used to identify pregnancy outcomes and pregnancy-related events of interest.
Maternal death.
Variable definitions and measures.
Codes used to define the variables.
Incidence proportions of pregnancy outcomes and pregnancy-related events of interest per 10 000 pregnancies by study cohort and select variables.
Riley, M., Lambrelli, D., Graham, S. et al. Facilitating safety evaluation in maternal immunization trials: a retrospective cohort study to assess pregnancy outcomes and events of interest in low-risk pregnancies in England. BMC Pregnancy Childbirth 22, 461 (2022). https://doi.org/10.1186/s12884-022-04769-x
Maternal vaccination
Low-risk pregnancy
Background rate
Maternal immunization trial
Preterm delivery | CommonCrawl |
Call for Competition Entries
GECCO 2021 will have a number of competitions ranging from different types of optimization problems to games and industrial problems. If you are interested in a particular competition, please follow the links to their respective web pages (see list below).
In addition to the competitions listed on this page, GECCO also hosts the Humies Award for human-competitive results produced by genetic and evolutionary computation.
Information for authors of "2-page Competition Abstracts"
The submission system is still closed for Competition Abstracts
After logging into GECCO's submission site https://ssl.linklings.net/conferences/gecco/, click on "Make a new submission", then select "Competition Entry", and then select the respective competition.
Note that not all competitions offer this (if in doubt, please check with the respective competition organisers). The submission is (in general) voluntary, except when explicitly made mandatory by a competition.
List of Competitions
"Continuous" Interaction Testing
Xi Jiang
Bound Constrained Single Objective Numerical Optimization
Ponnuthurai Suganthan
Competition on Niching Methods for Multimodal Optimization
Assistant Prof. Mike Preuss
Dr. Michael G. Epitropakis
Xiaodong Li
Jonathan Fieldsend
Competition on the optimal camera placement problem (OCP) and the unicost set covering problem (USCP)
Mathieu Brévilliers
Julien Lepagnot
Lhassane Idoumghar
Dota 2 1-on-1 Shadow Fiend Laning Competition
Malcolm Heywood
Alexandru Ianta
Dynamic Stacking Optimization in Uncertain Environments
Andreas Beham
Sebastian Leitner
Johannes Karder
Bernhard Werth
Evolutionary Computation in the Energy Domain: Smart Grid Applications
Fernando Lezama
Joao Soares
Bruno Canizes
Zita Vale
Ruben Romero
Game Benchmark Competition
Vanessa Volz
Tea Tušar
Boris Naujoks
Minecraft Open-Endedness Competition
Djordje Grbic
Rasmus Berg Palm
Elias Najarro
Claire Glanois
Sebastian Risi
Open Optimization Competition 2021: Competition and Benchmarking of Sampling-Based Optimization Algorithms
Carola Doerr
Olivier Teytaud
Jérémy Rapin
Thomas Bäck
Optimization of a simulation model for a capacity and resource planning task for hospitals under special consideration of the COVID-19 pandemic
Margarita Rebolledo
Frederik Rehbach
Sowmya Chandrasekaran
Thomas Bartz-Beielstein
Real-World Multi-Objective Optimization Competition
The AbstractSwarm Multi-Agent Logistics Competition
Daan Apeldoorn
Alexander Dockhorn
Lars Hadidi
Torsten Panholzer
Interaction testing allows for determining a fault in a complex engineered system where a fault is the result of a "small" number of factors interacting. Typically, the object of study is a "covering array", which is an array of symbols (a given number of rows and columns) that encodes the tests to-be-performed: the rows are the tests, and the columns correspond to the factors/components of the system. The property of the array is that all interactions of size at most a given number (the "strength") appear in the array at least once.
One main question to ask is: for a given system, when does a covering array exist? The most common goal in interaction testing is minimizing the number of rows/tests while maintaining the "coverage" property. It can be shown that as the number of factors increases, the minimum number of rows must increase.
Showing that a covering array exists is easy, but proving that one does not exist is very difficult, and mathematicians often resort to case analyses to prove that an array cannot exist.
This competition is focused on a "continuous" generalization of covering arrays, which appears to allow (1) a more fine-grained approach to understanding how "covering" an array is, and (2) possibly a better explanation for why certain arrays do not exist.
Let $N, t, k, v$ be positive integers such that $k \ge t$ (to avoid trivial cases).
A \textit{covering array} $CA_\lambda(N; t, k, v)$ is a 2-dimensional array with $N$ rows, $k$ columns, and every entry is one value from $\{1, \cdots, v\}$.
Additionally, for every choice of $t$ distinct columns $c_1, \cdots, c_t$, and every choice of $t$ values $v_1, \cdots, v_t$ from the set $\{1, \cdots, v\}$, there exists some row $R$ such that $R_{c_j} = v_j$ for all $1 \le j \le t$.
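As an illustration of this coverage property (a sketch for exposition, not competition code), the following Python function checks whether a candidate array A with entries in $\{1, \cdots, v\}$ is a covering array of strength $t$:

```python
# Illustrative check of the covering-array property: every combination of t
# columns must exhibit all v**t possible t-tuples of values in at least one row.
from itertools import combinations

def is_covering_array(A, t, v):
    k = len(A[0])
    for cols in combinations(range(k), t):                  # each choice of t distinct columns
        seen = {tuple(row[c] for c in cols) for row in A}   # value t-tuples appearing in the rows
        if len(seen) < v ** t:                              # some interaction is never covered
            return False
    return True
```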
Additionally, let $\varepsilon$ be a nonnegative real number, and $f : \mathbb{R}^t \times \{1, \cdots, v\}^t \to \mathbb{R}_{\ge 0}$ be a function that takes a $t$-tuple $(v_1, \cdots, v_t)$ of real numbers, and a $t$-tuple of \textit{integers} from the set $\{1, \cdots, v\}$, and outputs a nonnegative real number.
A \textit{continuous covering array} is a 2-dimensional array with $N$ rows and $k$ columns.
However, here each entry is a \textit{real number}.
In general, the array has a property that depends on $f$, and the property holds if a numerical calculation is less than the parameter $\varepsilon$.
There are two types of continuous covering arrays that we will consider: \textit{max}, and \textit{sum}.
For every choice of $t$ distinct columns $C = (c_1, \cdots, c_t)$, and every choice of $t$ values $V = (v_1, \cdots, v_t)$, let $\rho(C,V)$ be:
$$\rho(C,V) = \min_{1 \le R \le N} f(R_C, V).$$
In other words, $\rho(C,V)$ is the minimum $f$ value given the columns of $C$, across all rows.
For the purposes of this competition, we will use the Euclidean distance for $f$:
$$f(X, Y) = \sqrt{\sum_{i=1}^{t} (X_i - Y_i)^2}$$
where $X = (X_1, \cdots, X_t), Y = (Y_1, \cdots, Y_t)$.
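A direct Python translation of $\rho(C,V)$ with this Euclidean $f$ might look as follows (illustrative sketch; A is the real-valued array, cols the chosen columns, and vals the value tuple $V$):

```python
# rho(C, V): the minimum Euclidean distance f between the value tuple V and a
# row of the real-valued array A restricted to the columns in C.
import math

def rho(A, cols, vals):
    def f(x, y):
        return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return min(f([row[c] for c in cols], vals) for row in A)
```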
We define a "max" continuous covering array. Define the value $M$ as follows:
$$M = \max_{C, V} \rho(C, V).$$
In English, $M$ is the maximum distance between any interaction and its corresponding $\rho$ value.
We say that the array is a \textit{max covering array} if $M < \varepsilon$.
Note that $\varepsilon = 0$ if and only if a ``standard'' covering array exists with $N$ rows, $k$ columns, $v$ values, and strength $t$.
We define a "sum" continuous covering array. Define $S$ as follows:
Then we say that the array is a \textit{sum covering array} if $S < \varepsilon$.
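Building on the rho sketch above, the fragment below computes both scores; note that expressing $S$ as the sum of $\rho(C,V)$ over all column/value combinations is our assumed reading of the "sum" variant:

```python
# M: the largest rho(C, V) over all t-column choices C and all value tuples V
# in {1,...,v}^t.  S is taken here as the sum of the same quantities (assumed
# reading of the "sum" variant).  The array is a max (resp. sum) covering
# array when M (resp. S) is below epsilon.  Uses rho() from the sketch above.
from itertools import combinations, product

def max_and_sum_scores(A, t, v):
    k = len(A[0])
    scores = [rho(A, cols, vals)
              for cols in combinations(range(k), t)
              for vals in product(range(1, v + 1), repeat=t)]
    return max(scores), sum(scores)
```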
Finally, we provide the test instances for this competition. For each, we describe whether or not the "standard" covering array exists or doesn't (or is not known to), and if it does exist, whether or not it is unique up to isomorphism (permutation of rows, columns, or values within a column). For appropriately large epsilon values, the "continuous" array exists, however.
1. $N = 16, t=2, k=5, v=4$ (exists, unique)
2. $N=6, t=2, k=10, v=2$ (exists, unique)
3. $N=15, t=2, k=20, v=3$ (exists, not unique)
5. $N=127, t=3, k=21, v=4$ (exists, not unique)
7. $N=3, t=2, k=3, v=2$ (does not exist)
8. $N=14, t=3, k=12, v=2$ (does not exist)
9. $N=38, t=3, k=7, v=3$ (does not exist)
10. $N=126, t=3, k=7, v=5$ (does not exist)
11. $N=100, t=2, k=5, v=10$ (unknown to exist)
12. $N=230, t=3, k=5, v=6$ (unknown to exist)
13. $N=60, t=5, k=13, v=2$ (unknown to exist)
14. $N=180, t=3, k=10, v=5$ (unknown to exist)
March 15, 2021: Deadline to register for the competition and submission of abstract for publication in the GECCO companion.
April 10, 2021: Deadline for submitting abstracts and solutions.
May 1, 2021: Notification of acceptance for GECCO companion.
May 10, 2021: Deadline for general registration for the competition.
June 5, 2021: Deadline for general submission of solutions and approaches.
July 10-14, 2021: GECCO 2021 conference, and announcement of winners and results.
Official webpage:
Xi (Chase) Jiang is an undergraduate student in Computer Science and Quantitative Economics at Colgate University. Jiang is conducting research in network efficiency, correctness, and verification, such as heterogeneous SDN routing protocol implementation interoperability. He also shares research interests in software testing and IoT security and privacy. Jiang is a member of the Colgate Coders club and also a recipient of the Colgate Dean's Award with Distinction for Academic Excellence. He will be graduating in May and is currently applying for Ph.D. positions in related fields.
In this competition, participants will test their single objective numerical optimization algorithms on selected 10D and 30D problems with/without a number of transformations such as shifting, rotation, shearing, etc.
https://www3.ntu.edu.sg/home/epnsugan/index_files/cec-benchmarking.htm
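For intuition only (this is not the official benchmark code), a transformed instance can be viewed as a base function composed with a shift and a rotation, as in the following sketch where the shift vector and rotation matrix are random placeholders:

```python
# Illustrative sketch of a shifted and rotated test function; the shift vector
# and the random orthogonal matrix below are placeholders, not the official
# benchmark data.
import numpy as np

def make_shifted_rotated_sphere(dim, seed=0):
    rng = np.random.default_rng(seed)
    shift = rng.uniform(-80.0, 80.0, size=dim)               # placeholder shift
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))     # random rotation matrix
    def f(x):
        z = Q @ (np.asarray(x, dtype=float) - shift)         # shift, then rotate
        return float(np.sum(z ** 2))                         # base function: sphere
    return f

f10 = make_shifted_rotated_sphere(10)    # e.g., a 10D transformed instance
```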
Ponnuthurai Nagaratnam Suganthan finished schooling at Union College and subsequently received the B.A degree, Postgraduate Certificate and M.A degree in Electrical and Information Engineering from the University of Cambridge, UK in 1990, 1992 and 1994, respectively. He received an honorary doctorate (i.e. Doctor Honoris Causa) in 2020 from University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney in 1995–96 and a lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland in 1996–99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013-2018) and an associate editor of the IEEE Trans on Cybernetics (2012 - 2018). He is an associate editor of Applied Soft Computing (Elsevier, 2018- ), Neurocomputing (Elsevier, 2018- ), IEEE Trans on Evolutionary Computation (2005 - ), Information Sciences (Elsevier, 2009 - ), Pattern Recognition (Elsevier, 2001 - ) and IEEE Trans. on SMC: Systems (2020 - ). He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010 - ), an SCI Indexed Elsevier Journal. His research interests include swarm and evolutionary algorithms, pattern recognition, forecasting, randomized neural networks, deep learning and applications of swarm, evolutionary & machine learning algorithms. He was selected as one of the highly cited researchers by Thomson Reuters every year from 2015 to 2020 in computer science. He served as the General Chair of the IEEE SSCI 2013. He is an IEEE CIS distinguished lecturer (DLP) in 2018-2021.
The aim of the competition is to provide a common platform that encourages fair and easy comparisons across different niching algorithms. The competition allows participants to run their own niching algorithms on 20 benchmark multimodal functions with different characteristics and levels of difficulty. Researchers are welcome to evaluate their niching algorithms using this benchmark suite, and report the results by submitting a paper to the main tracks of GECCO (i.e., submitting via the online submission system of GECCO), or to hand in a GECCO short paper on their competition entry. The description of the benchmark suite, evaluation procedures, and established baselines can be found in the following technical report:
X. Li, A. Engelbrecht, and M.G. Epitropakis, ``Benchmark Functions for CEC'2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization'', Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013.
https://titan.csit.rmit.edu.au/~e46507/cec13-niching/competition/cec2013-niching-benchmark-tech-report.pdf
Mike Preuss is Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Muenster, Germany), and before with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests rest on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective optimization, and on computational intelligence and machine learning methods for computer games, especially in procedural content generation (PCG) and realtime strategy games (RTS).
Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher on Multi-modal Optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society and the ACM SIGEVO.
Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and Ph.D. degree in information science from University of Otago, Dunedin, New Zealand, respectively. He is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization (niching), and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member of IEEE CIS Task Force on Swarm Intelligence, a Vice-chair of IEEE CIS Task Force of Multi-Modal Optimization, and a former Chair of IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL'08, a Program Co-Chair for AI'09, a Program Co-Chair for IEEE CEC'2012, a General Chair for ACALCI'2017 and AI'17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".
Jonathan Fieldsend is Professor of Computational Intelligence at the University of Exeter. He has a degree in Economics from Durham University, a Masters in Computational Intelligence from the University of Plymouth and a PhD in Computer Science from the University of Exeter. He has over 100 peer-reviewed publications in the evolutionary computation and machine learning domains, and has been working on uncertain problems in evolutionary computation since 2005. This strand of work has mainly been in multi-objective data-driven problems, although more recently has undertaken work on expensive uni-objective robust problems. He is a vice-chair of the IEEE Computational Intelligence Society (CIS) Task Force on Data-Driven Evolutionary Optimisation of Expensive Problems, and sits on the IEEE CIS Task Force on Multi-modal Optimisation and the IEEE CIS Task Force on Evolutionary Many-Objective Optimisation. Alongside EAPwU, he has been a co-organiser of the VizGEC and SAEOpt workshops at GECCO for a number of years.
The use of camera networks is now common to perform various surveillance tasks. These networks can be implemented together with intelligent systems that analyze video footage, for instance, to detect events of interest, or to identify and track objects or persons. According to (7), whatever the operational needs are, the quality of service depends on the way in which the cameras are deployed in the area to be monitored (in terms of position and orientation angles). Moreover, due to the prohibitive cost of setting or modifying such a camera network, it is required to provide a priori a configuration that minimizes the number of cameras in addition to meeting the operational needs. In this context, the optimal camera placement problem (OCP) is of critical importance, and can be generically formulated as follows. Given various constraints, usually related to coverage or image quality, and an objective to optimise (typically, the cost), how can the set of positions and orientations which best (optimally) meets the requirements be determined?
More specifically, in this competition, the objective will be to determine camera locations and orientations which ensure complete coverage of the area while minimizing the cost of the infrastructure. To this aim, a discrete approach is considered here: the surveillance area is reduced to a set of three-dimensional sample points to be covered, and camera configurations are sampled into so-called candidates, each with a given set of position and orientation coordinates. A candidate can have several samples within range, and a sample can be seen by several candidates. Now, the OCP comes down to selecting the smallest subset of candidates which covers all the samples.
According to (5), the OCP is structurally identical to the unicost set covering problem (USCP), which is one of Karp's well-known NP-hard problems (3). The USCP can be stated as follows: given a set of elements I (rows) to be covered, and a collection of sets J (columns) such that the union of all sets in J is I, find the smallest subset C of J such that the union of all sets in C is I. In other words, identify the smallest subset of J which covers I. As pointed out in (5), many papers dealing with the OCP use this relationship implicitly, but few works done on the USCP have been applied or adapted to the OCP, and vice versa. In very recent years, however, approaches from the USCP literature have been successfully applied in the OCP context on both academic (1,2,6) and real-world (4,6) problem instances. These works suggest that bridges can be built between these two bodies of literature to improve the results obtained so far on both USCP and OCP problems.
The main goal of this competition is to encourage innovative research works in this direction, by proposing to solve OCP problem instances stated as USCP.
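To make the USCP formulation concrete, a simple greedy baseline (shown below as an illustrative sketch, not a competitive solver) repeatedly picks the candidate that covers the most still-uncovered samples:

```python
# Greedy baseline for the unicost set covering problem: universe is the set of
# elements I (sample points), columns maps each candidate j in J (a camera
# position/orientation) to the set of elements it covers.
def greedy_uscp(universe, columns):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(columns, key=lambda j: len(columns[j] & uncovered))
        if not columns[best] & uncovered:
            raise ValueError("infeasible: some element is covered by no candidate")
        chosen.append(best)
        uncovered -= columns[best]
    return chosen
```

Greedy selection carries the classical logarithmic approximation guarantee for set cover, which is why it is commonly used as a baseline rather than as a competitive method.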
(1) Brévilliers M., Lepagnot J., Kritter J., and Idoumghar L. Parallel preprocessing for the optimal camera placement problem. International Journal of Modeling and Optimization, 8(1):33 – 40, 2018.
(2) Brévilliers M., Lepagnot J., Idoumghar L., Rebai M., and Kritter J. Hybrid differential evolution algorithms for the optimal camera placement problem. Journal of Systems and Information Technology, 20(4):446 – 467, 2018.
(3) Richard M. Karp. Reducibility among Combinatorial Problems, pages 85–103. Springer US, Boston, MA, 1972.
(4) J. Kritter, M. Brévilliers, J. Lepagnot, and L. Idoumghar. On the real-world applicability of state-of-the-art algorithms for the optimal camera placement problem. In 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), pages 1103–1108, April 2019.
(5) Julien Kritter, Mathieu Brévilliers, Julien Lepagnot, and Lhassane Idoumghar. On the optimal placement of cameras for surveillance and the underlying set cover problem. Applied Soft Computing, 74:133 – 153, 2019.
(6) Weibo Lin, Fuda Ma, Zhouxing Su, Qingyun Zhang, Chumin Li, and Zhipeng Lü. Weighting-based parallel local search for optimal camera placement and unicost set covering. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO '20, pages 3–4, New York, NY, USA, 2020. Association for Computing Machinery.
(7) Junbin Liu, Sridha Sridharan, and Clinton Fookes. Recent advances in camera planning for large area surveillance: A comprehensive review. ACM Comput. Surv., 49(1):6:1–6:37, May 2016.
Mathieu Brévilliers received in 2008 his PhD degree in computer science from the University of Haute-Alsace (UHA), Mulhouse, France. He spent one year at the Grenoble Intitute of Technology (Grenoble INP, France) as temporary lecturer and researcher, and then has been hired by the UHA in 2009 as Associate Professor. Since 2014, he is part of the optimization team of the IRIMAS research institute. His main research interests include hybrid metaheuristics and their applications, massively parallel and distributed algorithms, and machine learning techniques.
Julien Lepagnot received his PhD in computer science in 2011 from University Paris 12, France. Since 2012, he is an associate professor in computer science at University of Haute-Alsace, France, in which he belongs to the OMEGA team of the IRIMAS research institute in computer science, mathematics, automation and signal. His main research interests include hybrid metaheuristics and their applications, machine learning and dynamic optimization.
Lhassane Idoumghar received in 2012 his accreditation to supervise research from University of Haute-Alsace, Mulhouse, France. Since 2015, he is Full Professor with University of Haute-Alsace and he is now head of the IRIMAS research institute in computer science, mathematics, automation and signal. His research activities include dynamic optimization, single/multiobjective optimization, uncertain optimization by hybrid metaheuristics, distributed and massively parallel algorithms.
The Dota 2 game represents an example of a multiplayer online battle arena video game. The underlying goal of the game is to control the behaviour/strategy for a 'hero' character. A hero possesses certain abilities, thus resulting in different performance tradeoffs. Moreover, the hero acts with a team of 'creeps' who have predefined behaviours, which can be influenced by the interaction between their hero and the opposing team. In short, the hero operates collaboratively with its own creeps and defensive structures (called towers) to defeat the opponent team (kill the opponent hero twice, or destroy their tower). In addition, there is an underlying economy in which developments in the game influence the amount of wealth received by each team. As a team's wealth increases, the hero's abilities improve.
This competition will assume the 1-on-1 mid lane configuration of Dota 2 using the Shadow Fiend hero. Such a configuration still includes many of the properties that have turned the game into an 'e-sport', but without the computational overhead of solving the task for all heroes under multi-lane settings. Specific properties that make the 1-on-1 game challenging include: 1) the need to navigate a partially observable world under ego-centric sensor information, 2) state information that is high-dimensional, but subject to variation through the 'fog-of-war', 3) high-dimensional action space that is both discrete, continuous valued and context specific, 4) learning hero policies that act collectively with creeps, 5) supporting real-time decision making at frame-rate, and 6) the underlying physics of the game vary with the times of day and introduce stochastic states.
Participants will create a Dota 2 Shadow Fiend hero agent based on a preset API provided by the organizers. The competition entrants will be required to engage in a 1v1 match against the built-in Shadow Fiend hero AI, where the winner is determined by the number of matches won. Evaluation will be performed against the top three levels of the built-in hero over multiple games.
https://web.cs.dal.ca/~dota2/
Robert Smith is a PhD candidate at the Faculty of Computer Science at Dalhousie University, Canada. He has published on the topic of competitive and co-operative coevolution of reinforcement learning agents for solving Rubic's Cube configurations, and navigation under partially observable environments such as VizDoom and Dota 2. He is the ACM student chapter representative at Dalhousie University.
Malcolm Heywood is a Professor of Computer Science at Dalhousie University, Canada. He has a particular interest in scaling up the tasks that evolutionary computation can potentially be applied to. His current research is attempting to coevolve behaviours capable of demonstrating general game AI and multi-task learning under video game environments. Dr. Heywood is a member of the editorial board for Genetic Programming and Evolvable Machines (Springer). He was a track co-chair for the GECCO GP track in 2014 and a co-chair for European Conference on Genetic Programming in 2015 and 2016. He received the Humies Silver medial with Stephen Kelly in 2018 for evolving human (and deep learning) competitive solutions to the Arcade Learning Environment at a fraction of the computational cost.
Alexandru Ianta is currently a Master's student at the Faculty of Computer Science at Dalhousie University, Canada. He has constructed both back end and front end code for this competition.
Stacking problems are central to multiple billion-dollar industries. The container shipping industry needs to stack millions of containers every year. In the steel industry the stacking of steel slabs, blooms, and coils needs to be carried out efficiently, affecting the quality of the final product. The immediate availability of data – thanks to the continuing digitalization of industrial production processes – makes the optimization of stacking problems in highly dynamic environments feasible.
This competition extends the 2020 Dynamic Stacking competition with a second track.
The first track is similar to the 2020 competition whereby a dynamic environment is provided that represents a simplified stacking scenario. Blocks arrive continuously at a fixed arrival location from which they have to be removed swiftly. If the arrival location is full, the arrival of additional blocks is not possible. To avoid such a state, there is a range of buffer stacks that may be used to store blocks. Each block has a due date before which it should be delivered to the customer. However, blocks may leave the system only when they become ready, i.e., some time after their arrival. To deliver a block it must be put on the handover stack – which must contain only a single block at any given time. There is a single crane that may move blocks from arrival to buffer, between buffers, and from buffer to handover. The optimization must control this crane in that it reacts to changes with a sequence of moves that are to be carried out. The control does not have all information about the world. A range of performance indicators will be used to determine the winner.
The second track represents another stacking scenario that is derived from real-world settings. It features two cranes and two different handovers. The cranes have a capacity larger than one, which represents an additional challenge for the solver. The solver may just provide the moves and let the cranes sort out the order in which these are performed (though not optimally), or the solver may optimize both the moves and the assignment and schedule of the cranes. In this scenario, the critical part is not the arrival stack but the handover stacks: the downstream process must not run empty.
The dynamic environments are implemented in the form of a real-time simulation which provides the necessary change events. The simulation runs in a separate process and publishes its world state and change events via message queuing (ZeroMQ), and also listens for crane orders. Thus, control algorithms may be implemented as standalone applications using a wide range of programming languages. Exchanged messages are encoded using protocol buffers – again, libraries are available for a large range of programming languages. As in the 2020 competition, a website will be used that participants can use to create experiments and test their solvers.
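As a rough illustration of this architecture, a solver can be structured as a small event loop that subscribes to the simulation's published state and pushes crane orders back. The sketch below is purely illustrative: the endpoint addresses, socket types, and protocol-buffer message classes (WorldState, CraneMoves, plan_moves) are placeholders, not the competition's actual interface.

```python
# Illustrative solver skeleton (not the official API): endpoints, socket types
# and message classes are assumptions; the real ones come from the competition
# package and its .proto definitions.
import zmq

def run_solver(state_endpoint="tcp://localhost:5555",
               order_endpoint="tcp://localhost:5556"):
    ctx = zmq.Context()

    sub = ctx.socket(zmq.SUB)            # world state / change events from the simulation
    sub.connect(state_endpoint)
    sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything

    out = ctx.socket(zmq.PUSH)           # crane orders back to the simulation
    out.connect(order_endpoint)

    while True:
        raw = sub.recv()                 # blocks until the next published event
        # world = WorldState.FromString(raw)    # hypothetical generated protobuf class
        # moves = plan_moves(world)             # the participant's control/optimization logic
        # out.send(moves.SerializeToString())
```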
Andreas Beham received his MSc in computer science in 2007 and his PhD in engineering sciences in 2019, both from Johannes Kepler University Linz, Austria. He works as a senior researcher at the R&D facility at the University of Applied Sciences Upper Austria, Hagenberg Campus, and leads several funded research projects. Dr. Beham is co-architect of the open source software environment HeuristicLab and a member of the Heuristic and Evolutionary Algorithms Laboratory (HEAL) research group led by Dr. Affenzeller. He has published more than 80 documents indexed by SCOPUS and has applied evolutionary algorithms, metaheuristics, mathematical optimization, data analysis, and simulation-based optimization in industrial research projects. His research interests include dynamic optimization problems, algorithm selection, and simulation-based optimization and innovization approaches in practically relevant projects.
Stefan Wagner received his MSc in computer science in 2004 and his PhD in technical sciences in 2009, both from Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as associate professor for software project engineering and since 2009 as full professor for complex software systems at the Campus Hagenberg of the University of Applied Sciences Upper Austria. From 2011 to 2018 he was also CEO of the FH OÖ IT GmbH, which is the IT service provider of the University of Applied Sciences Upper Austria. Dr. Wagner is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is project manager and head architect of the open-source optimization environment HeuristicLab. He works as project manager and key researcher in several R&D projects on production and logistics optimization and his research interests are in the area of combinatorial optimization, evolutionary algorithms, computational intelligence, and parallel and distributed computing.
Sebastian Leitner (né Raggl) received his MSc in bioinformatics in 2014 from the University of Applied Sciences Upper Austria. He is currently pursuing his PhD at the Johannes Kepler University Linz, Austria. Since 2015 he has been a member of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL), where he is working on several industrial research projects. He has focused on stacking problems in the steel industry, through which he has acquired substantial experience both in the application domain and in the scientific state of the art.
Johannes Karder received his master's degree in software engineering in 2014 from the University of Applied Sciences Upper Austria and is a research associate in the Heuristic and Evolutionary Algorithms Laboratory at the Research Center Hagenberg. His research interests include algorithm theory and development, simulation-based optimization and optimization networks. He is a member of the HeuristicLab architects team. He is currently pursuing his PhD in technical sciences at the Johannes Kepler University, Linz, where he conducts research on the topic of dynamic optimization problems.
Bernhard Werth received his MSc in computer science in 2016 from Johannes Kepler University Linz, Austria. He works as a researcher at the R&D facility at University of Applied Sciences Upper Austria, Hagenberg Campus. Mr Werth is a contributor to the open source software environment HeuristicLab and a member of the Heuristic and Evolutionary Algorithms Laboratory (HEAL) research group led by Dr. Affenzeller. He has authored and co-authored several papers concerning evolutionary algorithms, fitness landscape analysis, surrogate-assisted optimization and data quality monitoring.
Following the success of the previous editions (CEC, GECCO, WCCI), we are launching a more challenging competition at major conferences in the field of computational intelligence. This GECCO 2021 competition proposes two testbeds in the energy domain:
Testbed 1) Bi-level optimization of end-users' bidding strategies in local energy markets (LM). This testbed is constructed under the same framework as the past competitions (therefore, former competitors can adapt their algorithms to this new testbed), representing a complex bi-level problem in which competitive agents in the upper level try to maximize their profits, modifying and depending on the price determined in the lower-level problem (i.e., the clearing price in the LM), thus resulting in a strong interdependence of their decisions.
Testbed 2) Flexibility management of home appliances to support DSO requests. A model for aggregators' flexibility provision in distribution networks is proposed that takes advantage of load flexibility resources, allowing the rescheduling of shifting/real-time home appliances to meet a request from a distribution system operator (DSO). The problem can be modeled as a Mixed-Integer Non-Linear Programming (MINLP) problem in which the aggregator strives to match a flexibility request from the DSO/BRP, paying a remuneration to the households participating in the DR program according to their preferences and the modification of their baseline profile.
Note: Both testbeds are developed to run under the same framework of past competitions.
Competition goals:
The GECCO 2021 competition on "Evolutionary Computation in the Energy Domain: Smart Grid Applications" has the purpose of bringing together and testing the most advanced Computational Intelligence (CI) techniques applied to energy domain problems, namely the optimal bidding of energy aggregators in local markets and the flexibility management of home appliances to support DSO requests. The competition provides a coherent framework where participants and practitioners of CI can test their algorithms to solve two real-world optimization problems in the energy domain. The participants have the opportunity to evaluate whether their algorithms can rank well in each independent problem; given the validity of the "no free lunch" theorem, this contest is a unique opportunity to explore the applicability of the developed approaches in real-world problems beyond the typical benchmark and standardized CI problems.
http://www.gecad.isep.ipp.pt/ERM-Competitions (will be online if accepted)
Fernando Lezama received the Ph.D. in ICT from the ITESM, Mexico, in 2014. Since 2017, he has been a researcher at GECAD, Polytechnic of Porto, where he contributes to the application of computational intelligence (CI) in the energy domain. Dr. Lezama has been part of the National System of Researchers of Mexico since 2016, is Chair of the IEEE CIS TF 3 on CI in the Energy Domain, and has been involved in the organization of special sessions, workshops, and competitions (at IEEE WCCI, IEEE CEC and ACM GECCO) to promote the use of CI to solve complex problems in the energy domain.
João Soares attained his Ph.D. degree in Electrical and Computer Engineering from UTAD University in early 2017. He currently conducts research at GECAD – Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development, in the School of Engineering of the Polytechnic of Porto and has been an invited visiting professor at Ecole Centrale de Lille - L2EP since 2019. He has contributed to more than 120 publications in the field of power systems and computational intelligence. His works have been cited over 2500 times (H-index 26 in google scholar). He has participated in several national and international research funded projects and since 2018 he is involved in the coordination of two FCT projects concerning energy resource management in smart grids (PTDC/EEI-EEE/28983/2017) and smart buildings (PTDC/EEI-EEE/29070/2017). Currently, he is vice-chair of IEEE Computational Intelligence Society (CIS) Taskforce 3 and has been contributing to bridging the gap between computational intelligence and power system engineers, namely with numerous efforts, in particular with dedicated algorithm competitions in international specialized venues since 2017.
Bruno Canizes received the Ph.D. degree in Computer Engineering in the field of Smart Power Networks from the University of Salamanca (USAL) - Spain in 2019. Presently, He is a Researcher at GECAD - Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development of ISEP/IPP. His research interests include distribution network operation and reconfiguration, smart grids, smart cities, electric mobility, distributed energy resources management, power systems reliability, future power systems, optimization, electricity markets and intelligent house management systems.
Zita Vale (Senior Member, IEEE) received the Ph.D. degree in electrical and computer engineering from the University of Porto, Porto, Portugal, in 1993. She is currently a Professor with the Polytechnic Institute of Porto, Porto. Her research interests focus on artificial intelligence applications, smart grids, electricity markets, demand response, electric vehicles, and renewable energy sources.
Ruben Romero (Senior Member, IEEE) received the B.Sc. and P.E. degrees from the National University of Engineering, Lima, Peru, in 1978 and 1984, respectively, and the M.Sc. and Ph.D. degrees from the University of Campinas, Campinas, Brazil, in 1990 and 1993, respectively. He is currently a Professor of electrical engineering with São Paulo State University, Ilha Solteira, Brazil. His research interests include methods for the optimization, planning, and control of electrical power systems, applications of artificial intelligence in power systems, and operations research.
The Game Benchmark for Evolutionary Algorithms (GBEA) is a collection of single- and multi-objective optimisation tasks that occur in applications to games research. We are proposing a competition with multiple tracks that addresses several different research questions featuring continuous and integer search spaces. The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration.
The task is to find solutions of sufficient quality (as specified by a target value) as quickly as possible. The competition is available in a single- and bi-objective version for two different applications, thus resulting in 4 different tracks. Details will be available on our website.
Participants will be able to submit short algorithm descriptions as 2-page contributions to the GECCO Companion. The deadline for submissions will be in April 2021.
== Why Games? ==
Games are a very interesting topic that motivates a lot of research, and they have repeatedly been suggested as testbeds for AI algorithms. Key features of games are controllability, safety and repeatability, but also the ability to simulate properties of real-world problems such as measurement noise, uncertainty and the existence of multiple objectives.
The motivation for the competition setup is as follows. If an algorithm for generating content (such as a Mario level or a Top Trumps deck) is integrated in a game, the goal is usually to provide replay value by varying the content. In this context, it is not necessary to find the single solution that optimises the designer's objectives, but instead, it is important that a "good enough" solution can be found as fast as possible. "Good enough" in this case can be defined in relation to the values achieved by a baseline algorithm. Additionally, ideally, the same algorithm can find solutions across different objectives.
So which game levels can you find? Let us find out! Submit your best optimisation algorithms!
http://www.gm.fh-koeln.de/~naujoks/gbea/
Vanessa Volz is an AI researcher at modl.ai (Copenhagen, Denmark), with focus in computational intelligence in games. She received her PhD in 2019 from TU Dortmund University, Germany, for her work on surrogate-assisted evolutionary algorithms applied to game optimisation. She holds B.Sc. degrees in Information Systems and in Computer Science from WWU Münster, Germany. She received an M.Sc. with distinction in Advanced Computing: Machine Learning, Data Mining and High Performance Computing from University of Bristol, UK, in 2014. Her current research focus is on employing surrogate-assisted evolutionary algorithms to obtain balance and robustness in systems with interacting human and artificial agents, especially in the context of games.
Tea Tusar is a research fellow at the Department of Intelligent Systems of the Jozef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jozef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She has completed a one-year postdoctoral fellowship at Inria Lille in France where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for singleobjective and multiobjective optimization with emphasis on visualizing and benchmarking their results and applying them to real-world problems.
Boris Naujoks is a professor for Applied Mathematics at TH Köln - Cologne University of Applied Sciences (CUAS). He joined CUAS directly after he received his PhD from Dortmund Technical University in 2011. During his time in Dortmund, Boris worked as a research assistant in different projects and gained industrial experience working for different SMEs. Meanwhile, he enjoys the combination of teaching mathematics as well as computer science and exploring EC and CI techniques at the Campus Gummersbach of CUAS. He focuses on multiobjective (evolutionary) optimization, in particular hypervolume based algorithms, and the (industrial) applicability of the explored methods.
The purpose of this first contest on open-endedness is to highlight the progress in algorithms that can create novel and increasingly complex artefacts. While most experiments in open-ended evolution have so far focused on simple toy domains, we believe Minecraft - with its almost unlimited possibilities - is the perfect environment to study and compare such approaches. While other popular Minecraft competitions, like MineRL, have an agent-centric focus, in this competition the goal is to directly evolve Minecraft builds.
As part of this competition, we introduce the Minecraft Mechanical Creations Environment (MMCE) API. The MMCE is implemented as a mod for Minecraft that allows clients to manipulate blocks in a running Minecraft server programmatically through an API. The framework is specifically developed to facilitate experiments in artificial evolution. The competition framework also supports the recently added "redstone" circuit components in Minecraft, which allowed players to build amazing functional structures, such as bridge builders, battle robots, or even complete CPUs. Can an open-ended algorithm running in Minecraft discover similarly complex artefacts automatically?
In contrast to the Minecraft Settlement Generation Challenge, this competition is more about - but not exclusively focused on - the evolution of mechanical/functional artefacts.
Postdoc at IT University of Copenhagen, researching safety in reinforcement learning, self-driving vehicles and artificial life. Did his PhD at the University of Geneva in Switzerland specializing in bioinformatics, genetic algorithms and machine learning applied to studies in evolutionary biology.
Postdoc at IT University of Copenhagen, researching unsupervised learning of object oriented world models. Did his PhD at the Technical University of Denmark in end-to-end document understanding.
Research Assistant at the Robotics, Evolution, and Art Lab (REAL), IT University of Copenhagen. Working on meta-learning, evolutionary computation and open-endedness.
Postdoc in deep reinforcement learning at Shanghai Jiao Tong- University of Michigan Joint Institute, notably around neural-logic, multi-agents, and neuro-evolution. Previous research in pure mathematics, with a PhD from University Paris 6, in number theory around the algebraic structure of motivic periods.
Professor at the IT University of Copenhagen where he co-directs the Robotics, Evolution and Art Lab (REAL). He is currently the principal investigator of a Sapere Aude: DFF Starting Grant (Innate: Adaptive Machines for Industrial Automation). He has won several international scientific awards, including multiple best paper awards, the Distinguished Young Investigator in Artificial Life 2018 award, a Google Faculty Research Award in 2019, and an Amazon Research Award in 2020. More information: sebastianrisi.com
In order to promote research in black-box optimization, we organize a competition around Nevergrad (https://facebookresearch.github.io/nevergrad/index.html ) and IOHprofiler (https://iohprofiler.github.io/ ).
The competition has two tracks:
Track 1: Performance-Oriented Track: Contributors submit an optimization algorithm as a pull request in Nevergrad as detailed below. Several subtracks ("benchmark suites") are available, covering a broad range of black-box optimization scenarios, from discrete over mixed-integer to continuous optimization, from "artificial" academic functions to real-world problems, and from one-shot settings over sequential optimization to parallel settings.
Track 2: General Benchmarking Practices: contributions are made by pull request to Nevergrad or IOHprofiler and are accompanied by a "paper style" documentation. This track invites contributions to all aspects of benchmarking black-box optimization algorithms, e.g., by suggesting new benchmark problems, performance measures, or statistics, by extending or improving the functionalities of the benchmarking environment, etc.
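For readers unfamiliar with the platform, the snippet below shows Nevergrad's basic optimization interface on a toy function. It is a generic usage sketch for orientation only, not a competition submission (submissions are made as pull requests as described above); the toy objective and budget are arbitrary choices.

```python
# Minimal Nevergrad usage sketch on a toy continuous problem (not a submission).
import nevergrad as ng

def sphere(x):
    return float(sum(xi ** 2 for xi in x))

param = ng.p.Array(shape=(10,))                                  # 10-D continuous search space
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=200)
recommendation = optimizer.minimize(sphere)
print(recommendation.value)                                      # best point found
```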
(will be provided upon acceptance, here is the one from last year: https://facebookresearch.github.io/nevergrad/opencompetition2020.html )
Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Carola's main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other sampling-based optimization heuristics. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is associate editor of ACM Transactions on Evolutionary Learning and Optimization, editorial board member of the Evolutionary Computation journal, and advisory board member of the Springer Natural Computing Book Series. She was program chair of PPSN 2020, FOGA 2019, and the theory tracks of GECCO 2015 and 2017. Carola was guest editor of two special issues in Algorithmica. She is also vice chair of the EU-funded COST action 15140 "Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)". She is a founding and coordinating member of the Benchmarking Network (https://sites.google.com/view/benchmarking-network/), an initiative created to consolidate and to stimulate activities on benchmarking sampling-based optimization heuristics. She has organized several workshops on benchmarking and was co-organizer of the Open Optimization Competition 2020 (https://facebookresearch.github.io/nevergrad/opencompetition2020.html).
Olivier Teytaud is research scientist at Facebook. He has been working in numerical optimization in many real-world contexts - scheduling in power systems, in water management, hyperparameter optimization for computer vision and natural language processing, parameter optimization in reinforcement learning. He is currently maintainer of the open source derivative free optimization platform of Facebook AI Research (https://github.com/facebookresearch/nevergrad), containing various flavors of evolution strategies, Bayesian optimization, sequential quadratic programming, Cobyla, Nelder-Mead, differential evolution, particle swarm optimization, and a platform of testbeds including games, reinforcement learning, hyperparameter tuning and real-world engineering problems.
Jérémy Rapin is a research engineer at Facebook. He has been working on signal processing, optimization and deep learning, mostly in the domain of medical imaging. His current focus is on developing nevergrad, an open- source derivative-free optimization platform.
Thomas Bäck is professor of Computer Science at the Leiden Institute of Advanced Computer Science, Leiden University, Netherlands, since 2002. He received his PhD in Computer Science from Dortmund University, Germany, in 1994, and was leader of the Center for Applied Systems Analysis at the Informatik Centrum Dortmund until 2000. Until 2009, Thomas was also CTO of NuTech Solutions, Inc. (Charlotte, NC), where he gained ample experience in solving real-world problems in optimization and data analytics, by working with global enterprises in the automotive and other industry sectors. Thomas received the IEEE Computational Intelligence Society (CIS) Evolutionary Computation Pioneer Award for his contributions in synthesizing evolutionary computation (2015), was elected as a fellow of the International Society of Genetic and Evolutionary Computation (ISGEC) for fundamental contributions to the field (2003), and received the best dissertation award from the "Gesellschaft für Informatik" in 1995. Thomas has more than 300 publications, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing.
Similar to the many previous competitions, the team of the Institute of Data Science, Engineering, and Analytics at TH Cologne (IDE+A) hosts the 'Industrial Challenge' at GECCO 2021.
This year's industrial challenge is posed in cooperation with an IDE+A partner from the health industry and with Bartz & Bartz GmbH.
Simulation models are valuable tools for resource usage estimation and capacity planning. Your goal is to determine improved simulation model parameters for a capacity and resource planning task for hospitals. The simulator, babsim.hospital, explicitly covers difficulties for hospitals caused by the COVID-19 pandemic. The simulator can handle many aspects of resource planning in hospitals:
- various resources such as ICU beds, ventilators, personal protection equipment, staff, pharmaceuticals
- several cohorts (based on age, health status, etc.).
The task represents an instance of an expensive, high-dimensional computer simulation-based optimization problem and provides an easy evaluation interface that will be used for the setup of our challenge. The simulation will be executed through an interface and hosted on one of our servers (similar to our last year's challenge).
The task is to find an optimal parameter configuration for the babsim.hospital simulator with a very limited budget of objective function evaluations. The best-found objective function value counts. There will be multiple versions of the babsim.hospital simulations, with slightly differing optimization goals, so that algorithms can be developed and tested before they are submitted for the final evaluation in the challenge.
The participants will be free to apply one or multiple optimization algorithms of their choice.
Thus, we enable each participant to apply his/her algorithms to a real problem from the health industry, without the software setup or licensing that would usually be required when working on such problems.
Frederik is a Ph.D. student at the Institute of Data Science, Engineering, and Analytics at the CUAS (Cologne University of Applied Sciences). He earned a bachelor's degree in Electronics as well as a master's degree in Automation & IT, and his research now focuses on the parallel application of surrogate model-based optimization.
M. Eng. Sowmya Chandrasekaran is a research associate at the Institute for Data Science, Engineering and Analytics, TH Köln, Germany. Her research interests include: Computational Intelligence, Multivariate Anomaly Detection in Sensor Data, Performance Analysis Frameworks, Data Mining/Machine Learning, Process Mining, Deep Learning and Internet of Things.
Academic Background: Ph.D. (Dr. rer. nat.), TU Dortmund University, 2005, Computer Science.
Professional Experience: Shareholder, Bartz & Bartz GmbH, Germany, 2014 – Present; Speaker, Research Center Computational Intelligence plus, Germany, 2012 – Present; Professor, Applied Mathematics, TH Köln, Germany, 2006 – Present.
Professional Interest: Computational Intelligence; Simulation; Optimization; Statistical Analysis; Applied Mathematics.
ACM Activities: Organizer of the GECCO Industrial Challenge, SIGEVO, 2011 – Present; Event Chair, Evolutionary Computation in Practice Track, SIGEVO, 2008 – Present; Tutorials Evolutionary Computation in Practice, SIGEVO, 2005 – 2013; GECCO Program Committee Member, Session Chair, SIGEVO, 2004 – Present.
Membership and Offices in Related Organizations: Program Chair, International Conference Parallel Problem Solving from Nature, Jozef Stefan Institute, Slovenia, 2014; Program Chair, International Workshop on Hybrid Metaheuristics, TU Dortmund University, 2006; Member, Special Interest Group Computational Intelligence, VDI/VDE-Gesellschaft für Mess- und Automatisierungstechnik, 2008 – Present.
Awards Received: Innovation Partner, State of North Rhine-Westphalia, Germany, 2013; One of the top 20 researchers in applied science by the Ministry of Innovation, Science and Research of the State of North Rhine-Westphalia, 2017.
In this competition, participants will solve a collection of real-world multi-objective optimization problems using their algorithms. The results can be submitted as a two-page competition paper. Detailed results can be placed online and submitted to
This competition aims to motivate work in the broad field of logistics. We have prepared a benchmarking framework which allows the development of multi-agent swarms to process a variety of test environments. These can be extremely diverse, highly dynamic and variable in size. The ultimate goal of this competition is to foster comparability of multi-agent systems in logistics-related problems (e.g., in hospital logistics). Many such problems have good accessibility and are easy to comprehend, but hard to solve. Problems of different difficulty have been designed to make the framework interesting for educational purposes. However, finding efficient solutions for different a priori unknown test environments remains a challenging task for practitioners and researchers alike.
Following these ideas, in the AbstractSwarm Multi-Agent Logistics Competition, participants must develop agents that are able to cooperatively solve different a priori unknown logistics problems. A logistics problem is given as a graph containing agents and stations. An agent can interact with the graph (1) by deciding which station to visit next, (2) by communicating with other agents, and (3) by retrieving a reward for its previous decision. While simulating a scenario, a timetable in the form of a Gantt chart is created according to the decisions of all agents. Submissions will be ranked according to the total idle time of all agents in several different a priori unknown problem scenarios, in conjunction with the number of iterations needed to arrive at the solution.
https://abstractswarm.gitlab.io/abstractswarm_competition/
Daan Apeldoorn primarily works for the Institute for Medical Biostatistics, Epidemiology and Informatics (IMBEI) in the Medical Informatics department at the University Medical Centre of the Johannes Gutenberg University Mainz, Germany, and additionally for the Z Quadrat GmbH in Mainz. His research focuses on the extraction and exploitation of knowledge bases in the context of learning agents. He is also active in the field of multi-agent systems with application in (hospital) logistics. In the past, he worked as a scientific staff member for the TU Dortmund University and the University of Koblenz-Landau.
Alexander Dockhorn is a post-doctoral research associate at the Queen Mary University of London. He received his PhD at the Otto von Guericke University in Magdeburg in 2020. His current research focuses on state and action abstraction methods for competitive strategy game AI. He is active member of the IEEE in which he serves as the chair of the IEEE CIS Competitions Sub-Committee and recently joined as a member of the Games Technical Committee (GTC). Since 2017, he is organizing the Hearthstone AI competition to foster comparability of AI agents in card games.
Lars Hadidi is a theoretical physicist working as a research fellow at the Institute for Medical Biostatistics, Epidemiology and Informatics (IMBEI) in the Medical Informatics department at the University Medical Centre of the Johannes Gutenberg University Mainz, Germany. He is currently focusing on neural networks which directly operate on graph structured data, their methods and applications.
Torsten Panholzer is managing director of the division Medical Informatics at the Institute for Medical Biostatistics, Epidemiology and Informatics (IMBEI) at the University Medical Centre Mainz, Germany. He studied natural sciences and graduated as PhD at the Johannes Gutenberg University Mainz. His research focus is on system and data integration, identity management and artificial intelligence.
A Relaxation of Üresin and Dubois' Asynchronous Fixed-Point Theory in Agda
Journal of Automated Reasoning > Issue 5/2020
Matthew L. Daggitt, Ran Zmigrod, Timothy G. Griffin
1.1 A Theory of Asynchronous Iterative Algorithms
Let \(S\) be a set. Iterative algorithms attempt to find a fixed point \(x^*\in S\) for a function \(\mathbf{F }: S\rightarrow S\) by repeatedly applying the function to some initial starting point \(x \in S\). The state after k such iterations, \(\sigma ^k(x)\), is defined as follows:
$$\begin{aligned} \sigma^k(x) \triangleq {\left\{ \begin{array}{ll} x &\quad \text {if } k = 0 \\ \mathbf{F}(\sigma^{k-1}(x)) &\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
The algorithm terminates when it reaches an iteration \(k^*\) such that \({\sigma ^{k^*+1}(x) = \sigma ^{k^*}(x)}\) and so \(x^*= \sigma ^{k^*}(x)\) is the desired fixed point.
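As a concrete (non-Agda) illustration of this definition, the following Python sketch iterates a toy operator \(\mathbf{F}\) — a min-plus shortest-path update on a small weighted graph — until a fixed point is reached. The operator and graph are our own choice, picked only because the iteration provably stops after finitely many steps.

```python
# Synchronous iteration sigma^k(x): repeatedly apply F until F(x) == x.
# Toy operator: min-plus relaxation of shortest-path distances from node 0.
INF = float("inf")
W = {0: {1: 3, 2: 7}, 1: {2: 1}, 2: {}}          # edge weights, W[j][i] = weight of edge j -> i
N = len(W)

def F(x):
    """One synchronous update of every component of the state."""
    return tuple(min([x[i]] + [x[j] + W[j].get(i, INF) for j in range(N)])
                 for i in range(N))

def iterate_sync(x):
    """Return sigma^{k*}(x), the fixed point reached by repeated application of F."""
    while True:
        nxt = F(x)
        if nxt == x:
            return x
        x = nxt

print(iterate_sync((0, INF, INF)))               # -> (0, 3, 4)
```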
Many iterative algorithms can be performed in parallel. Assume that the state space \(S\) and the function \(\mathbf{F }\) are decomposable into n parts:
$$\begin{aligned} S = S_1 \times S_2 \times \cdots \times S_n \qquad \mathbf{F} = (\mathbf{F}_1, \mathbf{F}_2, \ldots , \mathbf{F}_n) \end{aligned}$$
where \(\mathbf{F }_i : S\rightarrow S_i\) takes in a state and calculates the ith component of the new state. It is now possible to assign the computation of each \(\mathbf{F }_i\) to a separate processor. The processors may be part of a single computer with shared memory or distributed across many networked computers. We would prefer our model to be agnostic to this choice, and so this paper will simply refer to the processors as nodes. Each node i continues to apply \(\mathbf{F }_i\) locally and propagate its updated state to the other nodes who incorporate it into their own computations. We will refer to an asynchronous implementation of this scheme as \(\delta \). A rigorous mathematical definition of \(\delta \) will be presented in Sect. 2.2.
If the nodes' applications of \(\mathbf{F }_i\) are synchronised then the parallel computation \(\delta \) will be identical to \(\sigma \). However in many cases enforcing synchronisation may not be practical or even possible. For example in distributed routing, the overhead of synchronisation on a continental scale would be prohibitive to the operation of the protocol. However, when updates are performed asynchronously, the behaviour of \(\delta \) depends on the exact sequence of node activations and the timings of update messages between nodes. Furthermore, \(\delta \) may enter states unreachable by \(\sigma \) and hence \(\delta \) may not converge even when \(\sigma \) is guaranteed to do so. This motivates the question: what properties of \(\mathbf{F }\) are required to guarantee that the asynchronous computation \(\delta \) always converges to a unique fixed point?
Depending on the properties of the state space \(S\) and the function \(\mathbf{F }\), there are multiple answers to this question—see the survey paper by Frommer and Szyld [ 12 ]. For example many of the approaches discussed in [ 12 ] rely on the rich structure of vector spaces over continuous domains. Üresin and Dubois [ 20 ] were the first to develop a theory that applied to both discrete and continuous domains. They prove that if \(\mathbf{F }\) is an asynchronously contracting operator (ACO), then \(\delta \) will always converge to a unique fixed point. Their model makes only very weak assumptions about inter-node communication and allows messages to be delayed, lost, duplicated and reordered. Henceforth we will refer to Üresin and Dubois [ 20 ] as UD.
Proving that \(\mathbf{F }\) is an ACO is dramatically simpler than directly reasoning about the asynchronous behaviour of \(\delta \). However, in many cases it remains non-trivial and so UD also derive several alternative conditions that are easier to prove in special cases and that imply the ACO conditions. For example, they provide sufficient conditions when \(S\) is partially ordered and \(\mathbf{F }\) is order preserving.
Many applications of these results can be found in the literature including in routing [ 6 , 9 ], programming language design [ 10 ], peer-to-peer protocols [ 16 ] and numerical simulation [ 7 ].
1.2 Contributions
This paper makes several contributions to UD's existing theory. The original intention was to formalise the work of UD in the proof assistant Agda [ 4 ] as part of a larger project to develop formal proofs of correctness for distributed routing protocols [ 8 ]. The proofs in UD are mathematically rigorous in the traditional sense, but their definitions are somewhat informal and they occasionally claim the existence of objects without providing an explicit construction. Given this and the breadth of fields these results have been applied to, in our opinion a formal verification of the results is a useful exercise.
During the process of formalisation, we discovered various relaxations of the theory. This includes: (i) enlarging the set of schedules over which it is possible to prove \(\delta \) converges, (ii) relaxing the ACO conditions, and (iii) generalising the model to include fully distributed algorithms rather than just shared-memory models. Furthermore, it was found that two of UD's auxiliary sufficient conditions were incorrect, and we demonstrate a counter-example: an iteration which satisfies the conditions yet does not converge to a unique fixed point. Finally, we also formalise (and relax) a recently proposed alternative sufficient condition based on metric spaces by Gurney [ 13 ].
We have made the resulting library publicly available [ 1 ]. Its modular design should make it easy to apply the results to specific algorithms without understanding the technical details and we hope that it will be of use to others who are interested in developing formal proofs of correctness for asynchronous iterative algorithms. In this paper we have also included key definitions and proofs from the library alongside the standard mathematics. We do not provide an introduction to Agda, but have tried to mirror the mathematical notation as closely as possible to ensure that it is readable. Interested readers may find several excellent introductions to Agda online [ 3 ]. Any Agda types that are used but not explicitly referenced can be found in version 0.17 of the Agda standard library [ 2 ].
There have been efforts to formalise other asynchronous models and algorithms such as real-time systems [ 18 ] and distributed languages [ 14 , 15 ]. However, as far as we know our work is the first attempt to formalize the results of UD.
This paper is a revised version of our ITP 2018 conference paper [ 23 ]. In particular, the following contributions are new: (i) showing that it is possible to enlarge the set of schedules that the iteration converges over, (ii) relaxing the ACO conditions. As a result, the main proof has been simplified sufficiently to include its Agda formalisation within this paper.
This section introduces UD's model for asynchronous iterations. There are three main components: (i) the schedule describing the sequence of node activations and the timings of the messages between the nodes, (ii) the asynchronous state function and (iii) what it means for the asynchronous iteration to converge. We explicitly note where our definitions diverge from that of UD and justify why the changes are desirable.
2.1 Schedules
Schedules determine the non-deterministic behaviour of the asynchronous environment in which the iteration takes place; they describe when nodes update their values and the timings of the messages to other nodes. Let \(V\) be the finite set of nodes participating in the asynchronous process. Time \(T\) is assumed to be a discrete, linearly ordered set (i.e. \({\mathbb {N}}\)).
Definition 1
A schedule is a pair of functions:
The activation function \(\alpha : T\rightarrow {\mathcal {P}}(V)\).
The data flow function \(\beta : T\rightarrow V\rightarrow V\rightarrow T\).
where \(\beta \) satisfies:
(S1) \(\forall t, i, j : \beta (t+1,i,j) \leqslant t\)
We formalise schedules in Agda as a dependent record. The number of nodes in the computation is passed as a parameter and the nodes themselves are represented by the type Fin n, the type of finite sets with n elements.
It would be possible to implicitly capture by changing the return type of to instead of . However, this would require converting the result of to type almost every time it wanted to be used. The simplification of the definition of is therefore not worth complicating the resulting proofs.
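Outside the Agda development, the same definition can be mirrored by a small illustrative sketch: a schedule is just a pair of functions, and the causality condition (S1) becomes a finite-horizon check. This is our own illustration, not the paper's dependent record; the fully synchronous example schedule at the end is an arbitrary but simple instance.

```python
# Illustrative (non-Agda) representation of a schedule (alpha, beta) over n nodes.
# Causality (S1): data used at time t+1 must have been sent at some time <= t.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Schedule:
    n: int
    alpha: Callable[[int], Set[int]]        # alpha(t): the set of nodes active at time t
    beta: Callable[[int, int, int], int]    # beta(t, i, j): send time of the data from j
                                            # that node i holds at time t

    def satisfies_S1(self, horizon: int) -> bool:
        """Check (S1) up to a finite horizon (the real condition quantifies over all t)."""
        return all(self.beta(t + 1, i, j) <= t
                   for t in range(horizon)
                   for i in range(self.n)
                   for j in range(self.n))

# A fully synchronous schedule: every node activates at every step and always
# sees every other node's state from the previous step.
sync = Schedule(n=3,
                alpha=lambda t: set(range(3)),
                beta=lambda t, i, j: max(t - 1, 0))
assert sync.satisfies_S1(horizon=10)
```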
Generalisation 1
In the original paper UD propose a model where all nodes communicate via shared memory, and so their definition of \(\beta \) takes only a single node i. However, in distributed processes (e.g. internet routing) nodes communicate in a pairwise fashion. We have therefore augmented our definition of \(\beta \) to take two nodes, a source and destination. UD's original definition can be recovered by providing a data flow function \(\beta \) that is constant in its third argument.
UD's definition of a schedule also has two additional liveness assumptions:
(S2) \(\forall t, i\,\,\,\, : \exists t' : t < t'\wedge i\in \alpha (t')\)
(S3) \(\forall t, i,j : \exists t' : \forall t'' : t' < t''\Rightarrow \beta (t'', i , j)\ne t\)
Assumption (S2) states that every node will always activate again at some point in the future and (S3) states that every message is only used for a finite amount of time. In practice they represent the assumption that every node and every link between pairs of nodes continue to function indefinitely.
Why have these assumptions been dropped from our definition of a schedule? We argue that unlike causality, (S2) and (S3) are not fundamental properties of a schedule but merely one possible set of constraints defining what it means for the schedule to be "well behaved". Any useful notion of the asynchronous iteration converging will require it to do so in a finite amount of time, yet (S2) and (S3) require the schedule to be well behaved for an infinite amount of time. This is hopefully an indication that (S2) and (S3) are unnecessarily strong assumptions. This is discussed further in Sect. 2.3 where we will incorporate relaxed versions of (S2) and (S3) into our definition of convergence, and in Sect. 3.1 we will show that there exist schedules which do not satisfy (S2) and (S3) and yet still allow the asynchronous iteration to converge.
Although not explicitly listed in their definition of a schedule, UD assume that all nodes activate at time 0, i.e. \({\alpha (0)=V}\). Such synchronisation is difficult to achieve in a distributed context and fortunately this assumption turns out to be unnecessary.
We should explicitly highlight that one of the advantages of UD's theory is that there is no requirement for the data flow function to be monotonic, i.e. that messages arrive in the order they were sent:
$$\begin{aligned} \forall t,t': t \leqslant t' \Rightarrow \beta (t, i, j) \leqslant \beta (t', i, j) \end{aligned}$$
Although this assumption is natural in many settings, it does not hold for example if the nodes are communicating over a network and different messages take different routes through the network. Figure 1 demonstrates this and other artefacts of asynchronous communication that can be captured by \(\beta \).
Fig. 1  Behaviour of the data flow function \(\beta \). Messages from node j to node i may be reordered, lost or even duplicated. The only constraint is that every message must arrive after it was sent
2.2 Asynchronous State Function
We now define the asynchronous state function \(\delta \). We formalise the state space \(S = S_1 \times \cdots \times S_n\) in Agda using a ( )-indexed from the Agda standard library, i.e. n sets each equipped with some suitable notion of equality.
Given a function \(\mathbf{F }\) and a schedule \((\alpha ,\ \beta )\) the asynchronous state function is defined as:
$$\begin{aligned} \delta^t_i(x) = {\left\{ \begin{array}{ll} x_i &\quad \text {if } t = 0\\ \delta^{t-1}_i(x) &\quad \text {else if } i \notin \alpha(t) \\ \mathbf{F}_i(\delta^{\beta(t,i,1)}_1(x), \delta^{\beta(t,i,2)}_2(x), \ldots , \delta^{\beta(t,i,n)}_n(x)) &\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
where \(\delta ^t_i(x)\) is the state of node i at time t when the iteration starts from state x.
Initially node i adopts \(x_i\), the ith component of the initial state. At a subsequent point in time, if node i is inactive then it simply carries over its state from the previous time step. However, if node i is in the set of active nodes then it applies \(\mathbf{F}_i\) to the state of the computation from node i's local perspective, e.g. the term \(\delta ^{\beta (t,i,1)}_1(x)\) is the contents of the most recent message node i received from node 1 at time t. This definition is formalised in Agda as follows:
Those unfamiliar with Agda may wonder why the additional argument is necessary. Agda requires that every program terminates, and its termination checker ensures this by verifying that the function's arguments get structurally smaller with each recursive call. While we can see that \(\delta \) will terminate, in the case where i activates at time t, the time index of the recursive call \(\beta (t+1,i,j)\) is only smaller than \(t+1\) because of , and so a naive implementation fails to pass the Agda termination checker. The type helps the termination checker see that the function terminates by providing an argument that always becomes structurally smaller with each recursive call. The argument for the second case has the type: . Therefore in order to generate the next one must prove the time really does strictly decrease. For the second recursive case this is proved by , where proves \(\beta (t+1,i,j) \leqslant t\) and is a proof that if \(x \leqslant y\) then \(x + 1 \leqslant y + 1\) and hence that \(\beta (t+1,i,j)+1 \leqslant t+1 \Leftrightarrow \beta (t+1,i,j) < t+1\). This additional complexity can be hidden from the users of the library by defining a second function of the expected type:
by using the proof which shows the natural numbers are well-founded with respect to < and which has type
Note that our revised definition of a schedule contains only what is necessary to define the asynchronous state function \(\delta \) and nothing more. This provides circumstantial evidence that the decision to remove assumptions (S2) and (S3) from the definition was a reasonable one, as they are extraneous when defining the core iteration.
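To complement the Agda discussion above, the following self-contained Python sketch simulates \(\delta\) directly from its mathematical definition, memoising states by time. It is an illustration of the model only (not the paper's formalisation); the min-plus component operator and the round-robin schedule are our own toy choices.

```python
# Simulate the asynchronous state function delta for a given schedule (alpha, beta).
from functools import lru_cache

INF = float("inf")
W = {0: {1: 3, 2: 7}, 1: {2: 1}, 2: {}}                   # toy min-plus operator again
N = len(W)

def F_i(i, x):
    """Component function F_i applied to a full state vector x."""
    return min([x[i]] + [x[j] + W[j].get(i, INF) for j in range(N)])

def make_delta(alpha, beta, x0):
    @lru_cache(maxsize=None)
    def delta(t, i):
        if t == 0:
            return x0[i]                                   # initial state
        if i not in alpha(t):
            return delta(t - 1, i)                         # inactive: carry state over
        # active: apply F_i to the most recent data received from every node
        return F_i(i, tuple(delta(beta(t, i, j), j) for j in range(N)))
    return delta

# A round-robin asynchronous schedule: one node activates per step and always
# uses data sent one step earlier.
alpha = lambda t: {t % N}
beta = lambda t, i, j: max(t - 1, 0)
delta = make_delta(alpha, beta, (0, INF, INF))
print([delta(9, i) for i in range(N)])                     # -> [0, 3, 4]
```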
2.3 Correctness
Before exploring UD's conditions for the asynchronous iteration \(\delta \) to behave correctly, we must first establish what "behave correctly" means. An intuitive and informal definition might be as follows:
The asynchronous iteration, \(\delta \), behaves correctly if for a given starting state x and all well-behaved schedules \((\alpha , \beta )\) there exists a time after which the iteration will have converged to the fixed point \(x^*\).
What is a "well-behaved" schedule? As in many cases, it is initially easier to describe when a schedule is not well-behaved. For example, if a node i never activates then the iteration cannot be expected to converge to a fixed point. Equally, if node i never succeeds in sending a message to node j then a fixed point is unlikely to be reached.
UD incorporated their notion of well-behavedness into the definition of the schedule itself in the form of assumptions (S2) and (S3). These state that nodes continue to activate indefinitely and links will never fail entirely. As discussed previously in Sect. 2.1, this guarantees that the schedule is well-behaved forever. However, intuitively they are unnecessarily strong assumptions as both the definition of correctness above and the definition used by UD require that \(\delta \) converges in a finite amount of time.
We now explore how to relax (S2) and (S3), so that the schedule is only required to be well-behaved for a finite amount of time. Unfortunately this is not as simple as requiring "every node must activate at least n times" and "every pair of nodes must exchange at least m messages", because the interleaving of node activations and message arrivals is important. For example node 1 activating n times followed by node 2 activating n times is unlikely to allow convergence as there is no opportunity for feedback from node 2 to node 1.
The right notion of a suitable interleaving of messages and activations turns out to be that of a pseudoperiodic schedule, as used by UD in their proof of convergence.
A schedule is infinitely pseudoperiodic if there exist functions \({\varphi : {\mathbb {N}} \rightarrow T}\) and \({\tau : V\rightarrow {\mathbb {N}} \rightarrow T}\) such that:
(P1) \(\varphi (0) = 0\)
(P2) \(\forall i, k : \varphi (k) < \tau _i(k) \leqslant \varphi (k+1)\)
(P3) \(\forall i, k : i \in \alpha (\tau _i(k))\)
(P4) \(\forall t, i , j, k : \varphi (k+1) < t \Rightarrow \tau _i(k) \leqslant \beta (t,i,j)\)
Note that UD refer to such schedules simply as pseudoperiodic. We have renamed it infinitely pseudoperiodic for reasons that will hopefully become apparent as we unpick the components of the definition.
First of all we define a period of time as a pair of times:
We do not include the proof that \(t_1 \leqslant t_2\), as it turns out that it will always be inferrable from the context, and hence including the proof in the record only leads to duplication.
Assumption (P1) for an infinitely pseudoperiodic schedule simply says that the initial time of interest is time 0. This turns out to be unnecessary and any starting point will do and hence we leave it unformalised. Assumptions (P2) and (P3) guarantee that every node activates at least once between times \(\varphi (k)\) and \(\varphi (k+1)\). We will call such a period an activation period.
A period of time \([t_1 , t_2]\) is an activation period for node i if i activates at least once during that time period.
Assumption (P4) says that any message that arrives after \(\varphi (k+1)\) must have been sent after \(\tau _i(k)\), i.e. \(\varphi (k+1)\) is long enough in the future that all messages sent before node i activated have either arrived or been lost. This motivates the following definition:
A period of time \([t_1,t_2]\) is an expiry period for node i if every message that arrives at i after \(t_2\) was sent after time \(t_1\).
UD call the period of time between \(\varphi (k)\) and \(\varphi (k+1)\) a pseudocycle. In such a period of time every node activates and subsequently all the messages sent before its activation time expire. The sequence \(\varphi (k)\) therefore forms an infinite sequence of pseudocycles.
We argue that a pseudocycle is more naturally defined the other way round, i.e. all messages sent to node i before the start of the pseudoperiod should expire and then the node should activate. As shown in Fig. 2 and proved in Sect. 3.1, this alteration aligns the end of the pseudocycle with the moment the iteration converges. This has the consequence of simplifying the definition of what it means for the asynchronous iteration to converge in a finite time as well as the subsequent proofs that the iteration converges.
Fig. 2  A sequence of activation periods and expiry periods. Section 3.1 will show that convergence occurs at the end of an activation period. One consequence of UD's definition of pseudocycle is that convergence occurs after "n and a half" pseudocycles. By redefining a pseudocycle it is possible to align the end of a pseudocycle with the end of the activation period. The realignment requires an additional expiry period at the start of the sequence, however this is fulfilled by the trivial expiry period at time 0 of 0 length. Realigning the pseudocycles in this way consequently simplifies both the definition and the proof of convergence
A period of time [ s, e] is a pseudocycle if there exists a time t such that [ s, t] is an expiry period and [ t, e] is an activation period.
The notion of a pseudocycle is related to the iteration converging, as during a pseudocycle the asynchronous iteration will make at least as much progress as that of a single synchronous iteration. This will be shown rigorously in Sect. 3.1.
A period of time is a multi-pseudocycle of order k if it contains k disjoint pseudocycles.
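These period notions translate into straightforward finite-horizon checks over a concrete schedule. The sketch below is our own illustration of the definitions; the exact interval endpoints and the per-node reading of a pseudocycle are simplifying assumptions and may differ slightly from the Agda formalisation.

```python
# Finite-horizon checks of the period definitions for a schedule (alpha, beta) over n nodes.
def is_activation_period(alpha, i, t1, t2):
    """Node i activates at least once in the period (t1, t2]."""
    return any(i in alpha(t) for t in range(t1 + 1, t2 + 1))

def is_expiry_period(beta, n, i, t1, t2, horizon):
    """Every message node i uses after t2 was sent after t1 (checked up to horizon)."""
    return all(beta(t, i, j) > t1
               for t in range(t2 + 1, horizon + 1)
               for j in range(n))

def is_pseudocycle(alpha, beta, n, s, e, horizon):
    """[s, e] is a pseudocycle: for every node, an expiry period followed by an activation."""
    return all(any(is_expiry_period(beta, n, i, s, t, horizon) and
                   is_activation_period(alpha, i, t, e)
                   for t in range(s, e + 1))
               for i in range(n))
```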
We define a schedule to be k-pseudoperiodic if it contains k pseudocycles. UD show that a schedule satisfies (S2) and (S3) if and only if the schedule is \(\infty \)-pseudoperiodic. Therefore UD's definition of convergence implicitly assumes that all schedules are \(\infty \)-pseudoperiodic. By removing (S2) and (S3) from the definition of a schedule we can relax our definition to say that the schedule only needs to be \(k^*\)-pseudoperiodic for some finite \(k^*\). Our definition of what it means for \(\delta \) to converge therefore runs as follows:
The iteration converges over a set of states \(X_0\) if there exist a state \(x^*\) and a number \(k^*\) such that for all starting states \(x \in X_0\) and all schedules, if the schedule is \(k^*\)-pseudoperiodic in some time period [s, e], then for any time \(t \geqslant e\) we have \(\delta ^t(x) = x^*\).
Note that prior to this, the definitions of \(\delta \) and pseudocycles etc. have been implicitly parameterised by some schedule \(\psi \) (omitted in the Agda via module parameters). As the definition of quantifies over all schedules, this dependency must now be made explicit. Another point that Agda forces us to make explicit, and which is perhaps not immediately obvious in the mathematical definition above due to the overloading of \(\in \), is that when we write \(x \in X_0\) we really mean \(\forall i : x_i \in (X_0)_i\). The latter is represented in the Agda code by the indexed membership relation .
What are the practical advantages of this new definition of convergence?
It is strictly weaker than UD's definition, as it allows a strictly larger set of schedules to be used. For example, consider the following schedule:
$$\begin{aligned} \alpha (t) = {\left\{ \begin{array}{ll} V &{}\quad \text {if }t \leqslant k^*\\ \varnothing &{}\quad \text {otherwise} \end{array}\right. } \quad \beta (t,i,j) = {\left\{ \begin{array}{ll} t - 1 &{} \quad \text {if }t \leqslant k^*\\ k^*&{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
which is synchronous until time \(k^*\) after which all nodes and links cease to activate, is \(k^*\)-pseudoperiodic but not \(\infty \)-pseudoperiodic.
It allows one to reason about the rate of convergence. If you know that \(\delta \) converges according to UD's definition, you still have no knowledge of how fast it converges. The new definition bounds the number of pseudocycles required. Prior work has been done on calculating the distribution of pseudocycles [5, 21] when the activation function and data flow functions are modelled by various probability distributions. Together with the definition of convergence above, this would allow users to generate a probabilistic upper bound on the convergence time.
The next section discusses under what conditions \(\delta \) can be proved to fulfil this definition of convergence.
3 Convergence
This section discusses sufficient conditions for the asynchronous iteration \(\delta \) to converge. The most important feature of the conditions is that they require properties of the function \(\mathbf{F }\) rather than the iteration \(\delta \). This means that the full asynchronous algorithm, \(\delta \), can be proved correct without having to directly reason about unreliable communication between nodes or the exponential number of possible interleavings of messages and activations.
The section is split into three parts. In the first we discuss the original ACO conditions proposed by UD; we show that they can be relaxed and then prove that they imply our new, stronger notion of convergence defined in Sect. 2.3. In the second part we show that two further sufficient conditions proposed by UD are in fact insufficient to guarantee that \(\delta \) converges to a unique fixed point, and we provide a counter-example. In the final part we formalise and relax the alternative ultrametric conditions proposed by Gurney [13], together with his proof that they reduce to the ACO conditions.
3.1 ACO Conditions
UD define a class of functions called Asynchronously Contracting Operators (ACO). They then prove that if the function \(\mathbf{F }\) is an ACO, then \(\delta \) will converge to a unique fixed point for all possible \(\infty \)-pseudoperiodic schedules.
An operator \(\mathbf{F }\) is an asynchronously contracting operator (ACO) iff there exists a sequence of sets \(D(k) = D_1(k) \times D_2(k) \times \cdots \times D_{n}(k)\) for \(k \in {\mathbb {N}}\) such that
(A1) \(\forall x : x \in D(0) \Rightarrow \mathbf{F }(x) \in D(0)\)
(A2) \(\forall k, x : x \in D(k) \Rightarrow \mathbf{F }(x)\in D(k+1)\)
(A3) \(\exists k^*, x^*: \forall k : k^*\leqslant k \Rightarrow D(k)=\{ x^*\}\)
The ACO conditions state that the space S can be divided into a series of boxes \(D(k)\). Every application of \(\mathbf{F }\) moves the state into the next box, and eventually a box containing only a single element is reached. Intuitively, the reason why it is possible to show that these conditions guarantee asynchronous convergence, instead of just synchronous convergence, is that each box is required to be decomposable over each of the n nodes. Therefore, \(\mathbf{F }\) remains contracting even if every node has not activated the same number of times. The definition of an ACO is formalised in Agda as follows:
The variable is necessary to keep track of the universe level in which the family of sets resides. The sets themselves are implemented as an indexed family of predicates. The code can be read as "there exist two objects \(k^*\) and \(x^*\) such that".
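As a concrete toy instance of these conditions (illustrative only, and independent of the Agda development), consider a single node with state space {0,...,8}, the operator F(x) = x // 2 and the boxes D(k) = {0,...,8 >> k}; the brute-force Python check below confirms (A1)-(A3) with x* = 0 and k* = 4.

F = lambda x: x // 2
D = lambda k: set(range((8 >> k) + 1))   # D(0) = {0..8}, D(1) = {0..4}, ..., D(4) = {0}

assert all(F(x) in D(0) for x in D(0))                            # (A1)
assert all(F(x) in D(k + 1) for k in range(10) for x in D(k))     # (A2)
assert all(D(k) == {0} for k in range(4, 10))                     # (A3): k* = 4, x* = 0
print("toy operator satisfies (A1)-(A3)")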
Highlighting our change to the definition of the ACO conditions. Whereas UD's definition required each set to be contained within the previous set (left), our definition does not make this assumption (right). Note that this figure is a simplification, as each set D( k) is decomposable into \(D_1(k) \times \cdots \times D_n(k)\) and so in reality the diagram should be n-dimensional
The definition of an ACO in UD has the stronger assumption:
(A1*) \(\forall k : D(k+1) \subset D(k)\)
whilst other related work in the literature [ 12 ] use:
(A1**) \(\forall k : D(k+1) \subseteq D(k)\)
As shown in Fig. 3, assumption (A1*/A1**) implies that the sets \(D(k)\) are nested. For any particular \(D\), the assumption (A1) is strictly weaker than (A1*/A1**), as (A1*/A1**) + (A2) implies (A1) but (A1) + (A2) does not imply (A1*/A1**). However, in general the two definitions of an ACO are equivalent, because if the function \(\mathbf{F }\) satisfies our definition of an ACO then the set of boxes defined by
$$\begin{aligned} C_i(0)&= D_i(0) \\ C_i(k+1)&= C_i(k) \cap D_i(k+1) \end{aligned}$$
satisfies UD's definition of an ACO; see our Agda library for a proof of this. We argue that the relaxation is nonetheless useful, as in practice (A1) is significantly easier to prove than (A1*/A1**) for users of the theorems.
(Theorem 1 in UD ) If \(\mathbf{F }\) is an ACO then \(\delta \) converges over \(D(0)\).
Assume that \(\mathbf{F }\) is an ACO, and consider an arbitrary schedule \((\alpha , \beta )\) and starting state \(x \in D(0)\). We initially describe some additional definitions in order to help structure the Agda proofs and increase their readability. The first definition is what it means for the current state of the iteration to be in \(D(k)\):
We define the messages sent to node i to be in box k at time t if every message that arrives at node i after t is in box k.
Finally, the computation is in box k at time t if for every node i its messages are in box \(k - 1\) and its state is in box k.
The proof is then split into three main steps:
Step 1 The computation is always in \(D(0)\). It is relatively easy to prove that the state is always in \(D(0)\) by induction over the time t. The initial state x was assumed to be in \(D(0)\), and assumption (A1) ensures that the ith component remains in \(D_i(0)\) whenever node i activates.
As the state is always in \(D(0)\) then it is a trivial consequence that all messages must always be in \(D(0)\).
Therefore the computation is always in \(D(0)\).
Step 2 Once the computation has entered \(D(k)\) then it remains in \(D(k)\). Suppose the state of node i is in \(D(k)\) and the messages to i are in \(D(k-1)\) at time s then we will show that the state remains in \(D(k)\) for any later time e.
The proof proceeds by induction over e and k. If \(e = 0\) then \(s = e\) as \(s \leqslant e\), and so the proof holds trivially. If \(k = 0\) then we already know that the state is always in \(D(0)\) by Step 1. For \(k + 1\) and \(e + 1\), if \(s = e + 1\) then again the proof holds trivially. Otherwise \(s < e + 1\), since anything else would contradict the assumption that time \(e + 1\) is after time s. If i is inactive at time \(e+1\) then the result holds by the inductive hypothesis. Otherwise, if i is active at time \(e + 1\), then assumption (A2) ensures that the result of applying \(\mathbf{F }\) to node i's current view of the global state is in \(D(k+1)\), as we know from our initial assumption that all messages arriving at i are in \(D(k)\).
The corresponding lemma for messages is easy to prove as the definition requires that all future messages that arrive after time s at node i will be in box k and hence as \(s \leqslant e\) so are all messages that arrive after time e.
It is then possible to prove the corresponding lemma for the entire computation.
Step 3 After a pseudocycle the computation will advance from \(D(k)\) to \(D(k+1)\). Suppose all messages to node i are in \(D(k)\) at time s and i activates at some point after s and before time e; then the state of node i is in \(D(k+1)\) at e. This can be shown by induction over e. We know that \(e \ne 0\) as node i must activate strictly after s. If i activates at time \(e + 1\) then the state of node i after applying \(\mathbf{F }\) must be in \(D(k+1)\) by (A2), as all the messages it receives are in \(D(k)\). If i does not activate at time \(e+1\) then [ s, e] must still be an activation period for i, and so it is possible to apply the inductive hypothesis to prove that node i is in \(D(k+1)\) at time e and therefore also at \(e+1\).
The analogous proof for messages runs as follows. If the computation is in \(D(k)\) at time s and [ s, e] is an expiry period then we show that all messages i receives after time e must also be in \(D(k)\). As [ s, e] is an expiry period then any message i receives after time e must have been sent after time s, and, as we know the computation is in \(D(k)\) at time s, then the state of the computation must have been in \(D(k)\) at every time after s by Step 2.
Using these lemmas, it is now possible to show that during a pseudocycle the whole computation advances from \(D(k)\) to \(D(k+1)\). More concretely, after the expiry period messages to node i have moved from \(D(k-1)\) to \(D(k)\). After the subsequent activation period the state of node i has moved from \(D(k)\) to \(D(k+1)\) whilst the messages remain in \(D(k)\).
It is therefore a trivial proof by induction to show that after n pseudocycles then the computation will have advanced from \(D(k)\) to \(D(k + n)\).
This notation, in which a proof of A is combined with a proof that A implies B to yield a proof of B, is an attempt to emulate standard mathematical reasoning within the Agda proof.
Finally, the main result may be proved as follows. Initially the computation is in \(D(0)\). Subsequently at time e after the end of the \(k^*\) pseudocycles the computation must be in \(D(k^*)\). This implies that at any subsequent time t then the state of the computation must be in \(D(k^*)\) and hence that \(\delta ^t(x) \in D(k^*)\). Therefore \(\delta ^t(x) = x^*\) by (A3) .
3.2 Synchronous and Finite Conditions
Even after relaxing (A1*) to (A1), the ACO sets \(D(k)\) are not always intuitive or simple to construct. UD recognised this and provided several alternative sufficient conditions which are applicable in special cases. They then claim that these new conditions are sufficient for convergence by showing that they imply that \(\mathbf{F }\) is an ACO.
The first set of sufficient conditions applies when there exists a partial order \(\leqslant _i\) over each \(S_i\). These are then lifted to form the order \(\leqslant \) over \(S\), where \(x \leqslant y\) means \(\forall i : x_i \leqslant _i y_i\). UD then make the following claim, where \(\sigma \) is the synchronous state function:
(Proposition 3 in UD) The asynchronous iteration \(\delta \) converges over some set \({D= D_1 \times D_2 \cdots \times D_n}\) if:
\(\forall x : x \in D\Rightarrow \mathbf{F }(x) \in D\)
\(\forall x, y : x, y \in D\wedge x \leqslant y \implies \mathbf{F }(x) \leqslant \mathbf{F }(y)\)
\(\forall x, t : x \in D\Rightarrow \ \sigma ^{t+1}(x) \leqslant \sigma ^t(x)\)
\(\forall x : x \in D\Rightarrow \sigma \) converges starting at x
UD attempt to prove this by first showing a reduction from these conditions to an ACO and then applying Theorem 1 to obtain the required result. However, this claim is not true. While the asynchronous iteration does converge from every starting state in \(D\) under these assumptions, it does not necessarily converge to the same fixed point. The flaw in the original proof is that UD tacitly assume that the set \(D(0)\) for the ACO they construct is the same as the original \(D\) specified in the conditions above. However, the only elements that are provably in the ACO's \(D(0)\) are those of the set \(\{\sigma ^k(x) \mid k \in {\mathbb {N}}\}\). We now present a counter-example to the claim.
Counterexample 1
Consider the degenerate asynchronous environment in which \(|V| = 1\) and let \(\mathbf{F }\) be the identity function (i.e. \(\mathbf{F }(a) = a\)). Let \(D = \{ x , y \}\) where the only relationships in the partial order are \(x \leqslant x\) and \(y \leqslant y\). Clearly (i), (ii), (iii) and (iv) all trivially hold as \(\mathbf{F }\) is the identity function. However x and y are both fixed points, and which fixed point is reached depends on whether the iteration starts at x or y. Hence Claim 1 cannot be true.
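The counterexample can also be rendered as a tiny executable sketch (ours, not part of the paper's Agda development): with F the identity on D = {x, y}, every iteration is stationary, so the fixed point reached depends entirely on the starting state.

F = lambda state: state                # the identity operator of Counterexample 1

def iterate(start, steps=10):
    state = start
    for _ in range(steps):
        state = F(state)               # every activation just reapplies F
    return state

print(iterate("x"), iterate("y"))      # -> x y : two distinct fixed points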
It is possible to "fix" this claim by strengthening (iv) to "the synchronous iteration always converges to the same fixed point". Additionally, it turns out that the reduction to an ACO requires \(D\) to be non-empty, so that there exists an initial state from which to begin iterating. The modified conditions are formalised in Agda as follows:
If \(\mathbf{F }\) obeys the synchronous conditions above, \(\mathbf{F }\) is an ACO.
The sequence of sets D( k) required by the definition of an ACO are defined as follows, where \(x_0\) is some initial state in \(D\):
$$\begin{aligned} D_i(k)=\{ x \mid x \in D_i(0) \wedge x^*_i \leqslant _i x \leqslant _i \sigma ^k_i(x_0) \} \end{aligned}$$
For a full proof that these sets satisfy the ACO conditions please consult our Agda library [ 1 ]. \(\square \)
One point of interest is that the sets defined above depend on the initial state \(x_0\), and that \(D(0)\) as defined only contains the synchronous iterates from \(x_0\); hence D(0) is only a subset of \(D\). It is still possible to show convergence for all states in \(D\) by constructing an ACO for each initial element and proving that each \(D(k^*)\) contains the same final element (which is possible due to the modification that \(\sigma \) now converges to a unique fixed point). However, even with our updated assumptions we have been unable to construct a single "unified" ACO for which \(D(0) = D\). Whether or not it is possible to do so is an interesting open question.
3.3 Finite Conditions
UD also provide a set of sufficient conditions that are applicable for convergence over a finite set of values. Like Claim 1, they require that \(S\) is equipped with some indexed order.
(Proposition 4 in UD) The asynchronous iteration \(\delta \) converges over \(D\) if:
\(D\) is finite
\(\forall x : x \in D\Rightarrow \mathbf{F }(x) \leqslant x\)
\(\forall x, y : x , y \in D\wedge x \leqslant y \implies \mathbf{F }(x)\leqslant \mathbf{F }(y)\)
UD's attempted proof for Claim 2 is a reduction to the conditions for Claim 1. Therefore like Claim 1, the conditions guarantee convergence but not to a unique solution. Similarly, the counterexample for Claim 1 is also a counterexample for Claim 2.
Unlike Claim 1, we do not have a proposed strengthening of Claim 2 which guarantees the uniqueness of the fixed point. This is because the finiteness of \(D\) does not help to ensure the uniqueness of the fixed point. Instead much stronger assumptions would be required to guarantee uniqueness (for example the existence of a metric space over the computation as discussed in the next section) and any such stronger conditions have the tendency to make the finiteness assumption redundant.
3.4 AMCO Conditions
Many classical convergence results for synchronous iterations rely on the notion of distance, and in suitable metric spaces the iteration can be proved to converge by showing that every application of the operator \(\mathbf{F }\) moves the state closer (in non-negligible steps) to some fixed point \(x^*\). There already exist several results of this type for asynchronous iterations. El Tarazi [ 11 ] shows that \(\delta \) converges if each \(S_i\) is a normed linear space and there exists a fixed point \(x^*\) and a \(\gamma \in (0, 1]\) such that for all \(x \in S\):
$$\begin{aligned} || \mathbf{F }(x) - x^*|| \leqslant \gamma || x - x^*|| \end{aligned}$$
However, this is a strong requirement as the use of the norm implies the existence of an additive operator over \(S\) and in many applications such an operator may not exist.
Recently Gurney [ 13 ] proposed a new, more general set of conditions based on ultrametrics [ 19 ]. Gurney does not name these conditions himself but for convenience we will call an \(\mathbf{F }\) that satisfies them an asynchronously metrically contracting operator (AMCO). He then proves that \(\mathbf{F }\) being an AMCO is equivalent to \(\mathbf{F }\) being an ACO. We are primarily concerned with applying the results to prove correctness and so we only formalise the forwards direction of the proof here. Before doing so, we define some terminology for different types of metrics.
Definition 10
A quasi-semi-metric is a distance function \(d : S \rightarrow S \rightarrow {\mathbb {N}}\) such that:
\(d(x,y) = 0 \Leftrightarrow x = y\)
An ultrametric is a quasi-semi-metric \(d : S \rightarrow S \rightarrow {\mathbb {N}}\) that additionally satisfies:
\(d(x,y) = d(y,x)\)
\(d(x,z) \leqslant \max (d(x,y), d(y,z))\)
It should be noted that in these definitions the image of d is \({\mathbb {N}}\) rather than \(\mathbb {R^+}\). This is important as it later allows us to prove that the distance between consecutive states in the iteration must reduce to 0 by the well-foundedness of the natural numbers over <.
An operator \(\mathbf{F }\) is an asynchronously metrically contracting operator (AMCO) if for every node i there exists a distance function \(d_i\) and:
(M1) S is non-empty
(M2) \(d_i\) is a quasi-semi-metric
(M3) \(\exists \ d_i^{\max } : \forall x, y : d_i(x,y) \leqslant d_i^{\max }\)
(M4) \(\forall x : x \ne \mathbf{F }(x) \implies d(x, \mathbf{F }(x)) > d(\mathbf{F }(x), \mathbf{F }(\mathbf{F }(x)))\)
(M5) \(\forall x^*, x : (\mathbf{F }(x^*) = x^*) \wedge (x \ne x^*) \implies d(x^*, x) > d(x^*, \mathbf{F }(x))\)
where \(d(x,y) = \max _{i \in V} d_i(x_i,y_i)\)
Assumption (M1) is not listed in Gurney [13] but was found to be required during formalisation, as proofs in Agda are constructive. It should be noted that if (M1) does not hold (i.e. there are no states) then convergence is trivial to prove. Assumption (M2) says that two states are equal if and only if they occupy the same point in space. Assumption (M3) says that there exists a maximum distance between pairs of states. Assumption (M4) says that the distance between consecutive iterates must strictly decrease, and assumption (M5) says that, for any fixed point \(x^*\), applying \(\mathbf{F }\) must move any other state strictly closer to that fixed point. These conditions are formalised in Agda as follows:
Gurney's definition of an AMCO makes the stronger assumption:
(M2*) \(d_i\) is an ultrametric
Users of the new modified conditions therefore no longer need to prove that their distance functions are symmetric or that they obey the max triangle inequality. This relaxation is a direct consequence of Generalisation 4 and reinforces our argument that Generalisation 4 truly is a relaxation.
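For intuition, here is a small brute-force check (an illustrative toy of our own, not taken from [13]) that a concrete operator satisfies (M1)-(M5): a single node with S = {0,...,7}, F(x) = x // 2, and d(x, y) defined as 0 when x = y and otherwise as the bit length of max(x, y).

S = range(8)
F = lambda x: x // 2
d = lambda x, y: 0 if x == y else max(x, y).bit_length()

assert len(list(S)) > 0                                              # (M1)
assert all((d(x, y) == 0) == (x == y) for x in S for y in S)         # (M2)
d_max = max(d(x, y) for x in S for y in S)                           # (M3)
assert all(d(x, F(x)) > d(F(x), F(F(x))) for x in S if x != F(x))    # (M4)
fixed = [x for x in S if F(x) == x]                                  # only x* = 0
assert all(d(xs, x) > d(xs, F(x))
           for xs in fixed for x in S if x != xs)                    # (M5)
print("toy operator satisfies (M1)-(M5); d_max =", d_max)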
(Lemma 6 in [ 13 ]) If \(\mathbf{F }\) is an AMCO then \(\mathbf{F }\) is an ACO.
Let x be an element of \(S\), which exists by (M1). The fixed point \(x^*\) can then be found by repeatedly applying (M4) to form the chain:
$$\begin{aligned} d(x,\mathbf{F }(x))> d(\mathbf{F }(x),\mathbf{F }^2(x))> d(\mathbf{F }^2(x),\mathbf{F }^3(x)) > \cdots \end{aligned}$$
This is a decreasing chain in \({\mathbb {N}}\), and so there must exist a time k at which (M4) can no longer be applied. Hence \(\mathbf{F }^k(x) = \mathbf{F }^{k+1}(x)\), and so \(x^*= \mathbf{F }^k(x)\) is our desired fixed point. Let \(k^*= \max _{i \in V} d_i^{\max }\), which exists by (M3). The required sets D( k) are then defined as follows:
$$\begin{aligned} D_i(k) = \{ x_i \in S_i \mid d_i(x_i,x^*_i) \leqslant \max (k^*- k, 0) \} \end{aligned}$$
Due to space constraints we will not prove here that the sets \(D(k)\) fulfil the required ACO properties. Interested readers may find the full proofs in our Agda library [ 1 ]. \(\square \)
4 The Library
A library containing all of these proofs, as well as several others, is publicly available online [1]. It is arranged in such a way as to hide the implementation details of the theorems from users. For example, among the most useful definitions contained in the main interface file for users are the following:
The first of these is simply a wrapped version of the function \(\mathbf{F }\) that ensures it is decomposable in the correct way. The same file also exports the definitions of the various conditions, which allows users to easily pick their conditions and theorem as desired.
5.1 Achievements
In this paper we have formalised the asynchronous fixed-point theory of Üresin and Dubois in Agda. Along the way we have proposed various relaxations by:
extending the model to incorporate iterations in a fully distributed environment as well as the original shared memory model.
showing how the ACO conditions can be tweaked to reduce the burden of proof on users of the theory.
reworking the theory to allow users to prove that the iteration still converges even when the schedule is only well behaved for a finite rather than an infinite length of time.
We have also described how an accordingly relaxed version of UD's main theorem was successfully formalised. However our efforts to formalise Propositions 3 and 4 as stated in UD's paper revealed that they are false. We hope that this finding alone justifies the formalisation process. We have proposed a fix for Proposition 3 but have been unable to come up with a similar practical alteration for Proposition 4. Finally, we have also relaxed and formalised the set of AMCO conditions based on the work by Gurney.
Our formalisation efforts have resulted in a library of proofs for general asynchronous iterations. The library is publicly available [ 1 ] and we hope that it will be a valuable resource for those wanting to formally verify the correctness of a wide range of asynchronous iterations. We ourselves have used the library to verify a proof about the largest possible set of distributed vector-based routing protocols that are always guaranteed to converge over any network [ 8 ].
5.2 Further Work
We are primarily interested in proving correctness and therefore we have only formalised that \(\mathbf{F }\) being an ACO is sufficient to guarantee that the asynchronous iteration converges. However, Üresin and Dubois also show that if \(\delta \) converges then \(\mathbf{F }\) is necessarily an ACO whenever the state space S is finite. The accompanying proof is significantly more complex and technical than the forwards direction, and so it would be an interesting extension to our formalisation.
Additionally it would be instructive to see if other related work such as [ 17 , 21 , 22 ], using different models, could be integrated into our formalisation.
Matthew Daggitt is supported by an Engineering and Physical Sciences Research Council Doctoral Training grant.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References

1. Agda routing library. https://github.com/MatthewDaggitt/agda-routing/tree/jar2019. Accessed 09 Mar 2019
2. Agda standard library, version 0.17. https://github.com/agda/agda-stdlib. Accessed 20 Oct 2018
3. Agda tutorials (2019). https://agda.readthedocs.io/en/latest/getting-started/tutorial-list.html. Accessed 06 Feb 2019
4. Bove, A., Dybjer, P., Norell, U.: A brief overview of Agda, a functional language with dependent types. In: Wenzel, M., Nipkow, T. (eds.) Theorem Proving in Higher Order Logics, pp. 73–78. Springer, Berlin (2009)
5. Casanova, H., Thomason, M.G., Dongarra, J.J.: Stochastic performance prediction for iterative algorithms in distributed environments. J. Parallel Distrib. Comput. 58(1), 68–91 (1999)
6. Chau, C.K.: Policy-based routing with non-strict preferences. SIGCOMM Comput. Commun. Rev. 36(4), 387–398 (2006)
7. Chau, M.: Algorithmes parallèles asynchrones pour la simulation numérique. Ph.D. thesis, Institut National Polytechnique de Toulouse (2005)
8. Daggitt, M.L., Gurney, A.J.T., Griffin, T.G.: Asynchronous convergence of policy-rich distributed Bellman-Ford routing protocols. In: SIGCOMM Proceedings, ACM (2018)
9. Ducourthial, B., Tixeuil, S.: Self-stabilization with path algebra. Theor. Comput. Sci. 293(1), 219–236 (2003)
10. Edwards, S.A., Lee, E.A.: The semantics and execution of a synchronous block-diagram language. Sci. Comput. Program. 48(1), 21–42 (2003)
11. El Tarazi, M.N.: Some convergence results for asynchronous algorithms. Numer. Math. 39(3), 325–340 (1982)
12. Frommer, A., Szyld, D.B.: On asynchronous iterations. J. Comput. Appl. Math. 123(1), 201–216 (2000)
13. Gurney, A.J.T.: Asynchronous iterations in ultrametric spaces. Technical report (2017). arXiv:1701.07434
14. Henrio, L., Kammüller, F.: Functional active objects: typing and formalisation. Electron. Notes Theor. Comput. Sci. 255, 83–101 (2009)
15. Henrio, L., Khan, M.U.: Asynchronous components with futures: semantics and proofs in Isabelle/HOL. Electron. Notes Theor. Comput. Sci. 264(1), 35–53 (2010)
16. Ko, S.Y., Gupta, I., Jo, Y.: A new class of nature-inspired algorithms for self-adaptive peer-to-peer computing. ACM Trans. Auton. Adapt. Syst. 3(3), 11:1–11:34 (2008)
17. Lee, H., Welch, J.L.: Applications of probabilistic quorums to iterative algorithms. In: Proceedings 21st International Conference on Distributed Computing Systems, pp. 21–28 (2001)
18. Meseguer, J., Ölveczky, P.C.: Formalization and correctness of the PALS architectural pattern for distributed real-time systems. In: International Conference on Formal Engineering Methods, pp. 303–320 (2010)
19. Schörner, E.: Ultrametric fixed point theorems and applications. Valuat. Theory Appl. 2, 353–359 (2003)
20. Üresin, A., Dubois, M.: Parallel asynchronous algorithms for discrete data. J. ACM 37(3), 588–606 (1990)
21. Üresin, A., Dubois, M.: Effects of asynchronism on the convergence rate of iterative algorithms. J. Parallel Distrib. Comput. 34(1), 66–81 (1996)
22. Wei, J.: Parallel asynchronous iterations of least fixed points. Parallel Comput. 19(8), 887–895 (1993)
23. Zmigrod, R., Daggitt, M.L., Griffin, T.G.: An Agda formalization of Üresin and Dubois' asynchronous fixed-point theory. In: International Conference on Interactive Theorem Proving, pp. 623–639. Springer (2018)
Matthew L. Daggitt
Ran Zmigrod
Timothy G. Griffin
https://doi.org/10.1007/s10817-019-09536-w
Journal of Automated Reasoning
Commuting patterns: the flow and jump model and supporting data
Levente Varga1,
Géza Tóth2 &
Zoltán Néda ORCID: orcid.org/0000-0002-7319-51231
A simple model, named the flow and jump model (FJM), is used for describing commuter fluxes at different distances. The model is based on a master equation which allows a local net probability flow and non-local jumps. FJM is in principle a one-parameter model; however, it is found that by fixing this parameter we obtain a parameter-free model, similar to the radiation model. We find that FJM offers an improved description of commuting data from the USA, Italy and Hungary. For a special choice of the model parameter, FJM reduces to the radiation model.
Commuter mobility patterns are the focus of many recent studies. By its nature the problem belongs to the research fields of human geography, sociology and economics. Nowadays, however, researchers from many other fields have become interested in the topic. The interest in such studies can be explained by the fact that many large electronic datasets have become available to researchers, allowing both the assumptions and the main results of the models to be tested. As a special case of human mobility, statisticians and data scientists are interested in universal patterns that govern commuter fluxes at different spatial scales. Physicists and mathematicians are interested in simple models capable of explaining the observed patterns. A detailed review of the state of the art in the field of human mobility is given in the recent review article of Barbosa et al. [1].
Models for commuter fluxes, motivated phenomenologically by simple socio-economic or probabilistic arguments, were proposed as early as 1940 by Stouffer (the intervening opportunities model) [2] and in 1960 by Block and Marschak (the random utility model) [3]. Analogies with classical physics phenomena were exploited by the very popular gravity and generalized potential models [4].
From a modeling perspective, a great leap in the understanding of human mobility patterns was the radiation model introduced by Simini et al. [5]. In contrast with earlier models that were argued phenomenologically, the radiation model starts from a basic socio-economic optimization assumption and derives a simple and compact formula for commuter fluxes. Relative to the earlier models, the compact result derived in [5] also has the advantage of being parameter free. When compared with real commuting and population data, however, the model contains an undetermined proportionality constant connecting the population and the number of available jobs. In this sense one can argue that this model has one free parameter. Other models of similar complexity, also built on realistic assumptions, are the population-weighted opportunities model (PWO) [6] and a novel version of it in which memory effects are also considered [7]. Recently a new, parameter-free model was introduced by Liu and Yan [8]. Their basic assumption is that individuals select destination locations that present higher opportunity benefits than those at the origin and than the intervening opportunities between the origin and destination.
The radiation model was generalized to continuous population distributions [9] and was also made more realistic by allowing a selective job acceptance by the individuals. This new model, the radiation model with selection, offered an improved fit for the commuting data in the USA. A further improvement of the simple radiation model was obtained by also taking into account the travel cost involved in commuting (see for example [10]). Along the same lines, the travel cost optimized radiation model introduced by us recently [11] offered an improved description of the commuting fluxes in Hungary. The main drawback of all these later generalizations of the radiation model is that the parameter-free beauty of the radiation model is lost.
Here we offer a further generalization of the original radiation model and demonstrate its advantages relative to the earlier models using large-scale population density and commuter flux data from the USA, Italy and Hungary. The appealing aspect of this generalization is that our model is again a parameter-free model, since the only fitting parameter is fixed to a universal value.
Modelling framework
The gravity model (GM) is probably the best-known approach to describing commuter fluxes between cities empirically [12]. It is based on a phenomenological analogy with gravity, assuming that the interaction between two regions or cities is inversely proportional to the distance raised to a positive power and directly proportional to the sizes of the two regions/cities. Contrary to what is usually believed, GM is not only a simple analogy; there are also theoretical arguments in its favour. The oldest one is probably the one using the maximal entropy hypothesis [13, 14]. Other successful attempts are based on the principle of utility maximization in economics. Both deterministic [15, 16] and random utility theories [17] were considered.
In the most general form, the number of commuters \(f_{i}(j)\) between cities i and j is written as:
$$ f_{i}(j)=F(W_{i}) \frac{(W_{j})^{\alpha}}{(r_{i,j})^{\beta}}. $$
We denoted here by \(W_{i}\) the population of settlement i and by \(r_{i,j}\) the distance between settlements i and j. \(F(x)\) is an arbitrary monotonically increasing kernel function, and α and β are fitting exponents. From the \(f_{i}(j)\) data one can also compute the probability \(P^{i}_{>}(W)_{\mathrm{GM}}\) that a worker living in location i commutes to a location outside of a disk containing a population W and centred at their home:
$$ P^{i}_{>}(W)_{\mathrm{GM}}=1-\frac{\sum_{j\ne i}^{(w_{i}[j]< W)} f_{i}(j)}{\sum_{j} f_{i}(j)}=1-\frac{\sum_{j\ne i}^{(w_{i}[j]< W)} \frac{(W_{j})^{\alpha }}{(r_{i,j})^{\beta}}}{\sum_{j} \frac{(W_{j})^{\alpha}}{(r_{i,j})^{\beta}}}, $$
which is independent of the \(F(x)\) kernel function. We denoted by \(w_{i}[j]\) the total population inside a disk centred at location i and reaching location j. The probability \(P_{>}(W)_{\mathrm{GM}}\) that commuters travel to work over a distance at which they pass a disk with population W is then:
$$ P_{>}(W)_{\mathrm{GM}}= \bigl\langle P^{i}_{>}(W)_{\mathrm{GM}} \bigr\rangle _{i} . $$
In this sense the GM is a two-parameter model, and one has to determine the best values of the α and β exponents.
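Since equation (2) is independent of the kernel F(x), it is straightforward to evaluate numerically; the short sketch below (our own notation and array layout) computes \(P^{i}_{>}(W)_{\mathrm{GM}}\) for a single source settlement i.

import numpy as np

def gm_P_gt_i(W_j, r_ij, w_disk, W, alpha, beta):
    # W_j, r_ij: populations of and distances to the other settlements j
    # w_disk: disk populations w_i[j]; W: threshold; alpha, beta: GM exponents
    weights = W_j ** alpha / r_ij ** beta   # relative fluxes f_i(j); F(W_i) cancels
    inside = w_disk < W                     # settlements j with w_i[j] < W
    return 1.0 - weights[inside].sum() / weights.sum()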
The original radiation model (RM) [5] is based on the simple assumption that jobseekers optimize their income by accepting the closest job offer with a better salary than the one available at their current address. Assuming a cumulative distribution function \(p_{\le}(z)\) for the incomes in the studied society, the probability \(P_{>}(z|n)\) that a person with income z refuses the closest n jobs is:
$$ P_{>}(z|n)= \bigl[p_{\le}(z) \bigr]^{n}. $$
By using the probability density function for incomes, the probability of not accepting the closest n jobs, \(P_{>}(n)\), can be calculated as:
$$\begin{aligned} P_{>}(n)&= \int_{0}^{\infty}P_{>}(z|n) p(z) \,dz= \int_{0}^{\infty }P_{>}(z|n) \frac{\partial p_{\le}(z)}{\partial z} \,dz \\ & = \int_{0}^{1} \bigl[p_{\le}(z) \bigr]^{n} \,dp_{\le}(z)=\frac{1}{n+1}. \end{aligned}$$
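The 1/(n+1) result is independent of the particular income distribution, which is easy to confirm numerically; the sketch below uses an arbitrarily chosen lognormal income law purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
z = rng.lognormal(size=trials)                 # current incomes
offers = rng.lognormal(size=(trials, n))       # incomes of the n closest jobs
refused = (offers <= z[:, None]).all(axis=1)   # none of the n offers is better
print(refused.mean(), 1 / (n + 1))             # both approximately 0.1667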
Accepting now the hypothesis that the number of job openings in a territory is proportional to its population W (\(n=\mu W\)), the radiation model predicts the probability that a person commutes to a location outside of a disk centred on their current location and containing a population W:
$$ P_{>}(W)_{\mathrm{RM}}=\frac{1}{\mu W+1}. $$
It is interesting to note that the hypothesis \(n\propto W\) can be verified on real-life data using job advertisement and population data. This is done in the Results section.
Assuming that jobseekers are willing to accept jobs (or are aware of them) only with a probability q, the simple argument presented above can be generalized [9] (the radiation model with selection, RMwS), leading to a result with two fitting parameters (\(q,\mu\)):
$$ P_{>}(W)_{\mathrm{RMwS}}=\frac{1-(1-q)^{\mu W+1}}{(\mu W+1) q}. $$
The travel cost optimized radiation model (TCORM) takes into account the fact that travel costs are distance dependent, so in addition to the transited jobs the travel distance r has to be considered when applying the arguments of the radiation model. Assuming an exponential kernel for the income distribution and repeating the arguments of the original radiation model [11], one arrives again at a result with two fitting parameters:
$$ P_{>}(W)_{\mathrm{TCORM}}=\frac{1+\lambda\sqrt{W}}{\mu W+1}. $$
The λ fitting parameter incorporates the value of μ, a proportionality constant between the travelled distance and the cost of travel, and a third constant governing the shape of the assumed exponential-type income distribution [11].
Here we introduce yet another model, offering a one-parameter alternative to the simple RM model. Our alternative, dynamical approach is based on a simple master equation for the probability density \(\rho (n,t)=-dP_{>}(n,t)/dn\) and reproduces the results of the RM model as a special case. We name the model the Flow and Jump Model (FJM). Following the assumptions of the recently introduced growth and reset type models (for a review please consult [18]), we now assume an inverse process: a backward probability flow supplemented by a jump process from the origin to any state with a given n value. The discrete version of the process is depicted in Fig. 1. The continuous master equation has the form:
$$ \frac{d \rho(n,t)}{dt}= \frac{\partial(\eta(n) \rho(n,t))}{\partial n}+ \bigl[\gamma(n)\rho(n,t) \bigr]\rho(0,t). $$
The above master equation describes a process in which there is a local net probability density flow from each state towards the \(n=0\) state and a jump probability from the origin \((n=0)\) to any state n. For the state-dependent \(\eta(n)\) and \(\gamma(n)\) rates we consider simple kernels which make sense for the commuting process. The transitions \(0\rightarrow n\), governed by the \(\gamma(n)\rho(n,t)\) rates, describe the probability that workers choose a commuting job. \(\gamma(n)\) should decrease with distance (or, correspondingly, with n), and the proportionality with \(\rho(n,t)\) expresses that where there are already many commuters there should also be many good jobs, making such destinations attractive to commuters.
Dynamics of the FJM model. Sketch of the dynamics that leads, in the continuum limit, to the master equation (9)
For more details about such dynamical equations, their stability and their stationarity, please consult [18]. As shown in [18], the stationary solution (\(d\rho _{s}(n,t)/dt=0\)) of (9) is:
$$ \rho_{s}(n)=\frac{\eta(0) \rho_{s}(0)}{\eta(n)}e^{-\int_{0}^{n} \frac{\gamma (x)\rho_{s}(0)}{\eta(x)}\,dx}. $$
The \(\rho_{s}(0)\) value is obtained from the normalization condition:
$$ \int_{0}^{\infty}\rho_{s}(n) \,dn=1. $$
For the \(\eta(n)\) and \(\gamma(n)\) rates we now consider the simplest kernels which make sense for the commuting process. For \(\gamma(n)\rho _{s}(0)\), the simplest choice that also avoids a divergence at \(n=0\) is an inverse proportionality:
$$ \gamma(n)\rho_{s}(0)=\frac{C}{n+1}. $$
C is a constant which also fixes the time unit in the dynamical equation (9). The backward flow characterizes the tendency of commuters to search for appropriate jobs that are closer to their living places, accepting with higher probability jobs that bring them closer to home. This net flow is described by the \(\eta(n)\) term. The simplest choice that leads to a final equilibrium distribution is:
$$ \eta(n)=\eta=\mathrm{Const}. $$
For the above \(\gamma(n)\) and \(\eta(n)\) kernels (equations (12) and (13), respectively), and assuming \(a=C/\eta>1\), the solution (10) becomes
$$ \rho_{s}(n)=(a-1) (n+1)^{-a}, $$
which is a scaling Tsallis–Pareto (or Lomax) type distribution [19]. This probability density leads to the \(P_{>}(n,t)\) probability:
$$ P_{>}(n,t)= \int_{n}^{\infty} \rho_{s}(x) \,dx=(1+n)^{(1-a)}. $$
With the assumption \(n(r)=\mu W(r)\) we get a slightly modified expectation for \(P_{>}(W)\)
$$ P_{>}(W)_{\mathrm{FJM}}=\frac{1}{(\mu W+1)^{(a-1)}}. $$
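As a sanity check (ours, using sympy; not part of the original derivation), one can verify symbolically that the density (14) is stationary under equation (9) with the kernels (12) and (13), and that it reproduces equations (15) and (16); for the integrals we fix a = 7/4, the universal value adopted below.

import sympy as sp

n, m, eta = sp.symbols('n m eta', positive=True)
a = sp.Symbol('a', positive=True)
C = a * eta                                  # a = C / eta
rho = (a - 1) * (n + 1) ** (-a)              # candidate stationary density, Eq. (14)

flow = sp.diff(eta * rho, n)                 # backward-flow term of Eq. (9)
jump = (C / (n + 1)) * rho                   # jump term gamma(n) * rho_s(0) * rho
print(sp.simplify(flow + jump))              # -> 0, hence d rho_s/dt = 0

a74 = sp.Rational(7, 4)
rho74 = rho.subs(a, a74)
print(sp.integrate(rho74, (n, 0, sp.oo)))                            # -> 1 (normalisation)
print(sp.simplify(sp.integrate(rho74.subs(n, m), (m, n, sp.oo))))    # -> (n + 1)**(-3/4), Eq. (15)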
In the following we demonstrate on real commuting data that the FJM model with the universal choice \(a=7/4\) offers a much improved fit to the data. For the specific case \(a=2\) one recovers the original radiation model. In principle the model has two parameters; however, if we accept the universality of a, it becomes, like RM, a one-parameter model.
Data source and format
For the USA we processed a complete commuter and population database. We analyzed estimated population census data between 2006 and 2010 [20], using \(Q = 73\text{,}803\) settlements (nodes) (white circles in Fig. 2) and \(4\text{,}156\text{,}426\) commuter routes (edges) (blue lines between white circles in Fig. 2). We use the same dataset as the one used in [21], where the authors attempted a region-like geographic division of the USA based on commuting patterns. For studying the geographical population distribution we used a database from the years 2006 to 2010 giving the estimated population of the continental USA divided into \(11\text{,}078\text{,}286\) cells of 1 km2 area [22]. We now detail the three data subsets that were constructed by us and serve as the input for our calculations:
Settlements and data processing method in the commuting network of the USA. Disks of different radius \(d(i,j)\), starting from a given settlement i and reaching the other settlements j, are constructed. The population \(w_{i}[j]\) inside these disks and the number of commuters \(f_{i}(j)\) starting from settlement i and traveling to settlement j are recorded
a. Settlement data, where the settlement code and the latitude and longitude are given. In the case of the USA, the total number of settlements is \(Q = 73\text{,}803\). These geographical locations are the sources and targets for commuting. The data is in the format given below:
set. code lat lon
1 32.4771763256 −86.4901731173
2 32.474292121 −86.4733798888
3 32.4754563613 −86.460168641
b. Commuting data, containing the sources and targets of \(4\text{,}156\text{,}426\) directed travels to work. The data has the following structure: the first and second columns contain the source and target settlement codes and the third column gives the number of commuters. Below we illustrate the format of this data:
source set. target set. num. of com.
9719 9719 20,950
29,719 29,719 540
c. Population distribution data. The original dataset contains \(11\text{,}078\text{,}286\) square-like cells of 1 km2 area, each with its population and the latitude and longitude of its middle point. In order to speed up our calculations we spatially renormalized this data to a coarser resolution with cells of 4 km2 area. This is done by collapsing the data of four neighbouring cells and averaging their latitudinal and longitudinal coordinates (a minimal sketch of this step is given after the sample below). As a result we ended up with \(1\text{,}230\text{,}920\) cells containing a total population \(W=308\text{,}745\text{,}231\). The data we have worked with has the following structure:
pop. num. lat lon
18.0 51.8642065666 −176.664361722
30.0 51.8621521667 −176.6534376
9.0 51.8700427111 −176.644826767
7.0 51.8785901 −176.629460933
219.0 51.8383112778 −176.512803256
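A minimal sketch of the 2 x 2 coarse-graining step described above is given here; it assumes each 1 km2 cell carries integer grid indices (row, col) alongside its population and centre coordinates, whereas the released data lists only lat/lon, so deriving such indices is an extra preprocessing step not shown.

from collections import defaultdict

def coarse_grain(cells):
    # cells: iterable of (row, col, population, lat, lon) tuples on the 1 km grid
    blocks = defaultdict(list)
    for row, col, pop, lat, lon in cells:
        blocks[(row // 2, col // 2)].append((pop, lat, lon))
    merged = []
    for members in blocks.values():
        pops, lats, lons = zip(*members)
        # total population of the 4 km^2 block, unweighted average of the centres
        merged.append((sum(pops), sum(lats) / len(lats), sum(lons) / len(lons)))
    return merged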
From the above three datasets one can compute the \(P_{>}(W)\) dependence. For the USA we used yet another dataset to test the linear proportionality between the number of job openings and the total population of a geographical region. For this we obtained the number of listed jobs for each state of the continental USA using the site [23]. On the day we processed the data (12.02.2018) we found a total of \(2\text{,}596\text{,}391\) jobs. The population of the states was obtained using the estimates between 2006 and 2011, available on the Internet [22].
Apart from the large-scale data available for the USA, we used two smaller datasets for Hungary and Italy. These two additional datasets contain the same three data subsets: settlement data, commuting data and population distribution data. The population distribution data was used in its original form, with cells of 1 km2 area.
For Hungary we used the same commuting data as in [11]. The commuting data covers \(Q = 3176\) settlements and contains \(81\text{,}664\) commuter routes [24], and the spatial distribution of the population covers the \(W = 9\text{,}972\text{,}000\) total inhabitants [25], as measured in the 2011 population census.
The data for Italy contains \(Q = 8093\) settlements, \(556\text{,}120\) commuter routes and it is from the Italian population census from 2011 [26]. The total population \(W = 55\text{,}605\text{,}065\) is mapped in cells of 1 km2 area [27].
During the data processing we select the settlements i one by one as sources for commuting and construct the disks with radius \(d(i,j)\) reaching the target settlements j. This is illustrated schematically in Fig. 2. We count the total population \(w_{i}[j]\) inside each disk and record the number of commuters \(f_{i}(j)\) starting from settlement i and traveling to settlement j.
Having the data \(d(i,j)\), \(f_{i}(j)\), and \(w_{i}[j]\) for all the settlement pairs \((i,j)\) we compute the experimental \(P_{>}(W)\) probabilities.
The number of commuters that have their residence in settlement i is denoted by \(N_{i}\):
$$ N_{i} = \sum_{j = 1}^{Q} f_{i}(j). $$
We order the settlements according to their distance from i. Let \(h_{i}^{[k]}\) be the index of the settlement that is the kth one in this ordering (for example, \(h_{i}^{[1]}\) is the index of the settlement closest to settlement i and \(h_{i}^{[2]}\) is the index of the settlement second closest to i). We denote by \(s(i,w)\) the smallest number of settlements for which the population inside a disk centred at i becomes larger than (or equal to) w.
Mathematically:
$$ s(i,w) \in\{1,2,3,\ldots,Q\} $$
and satisfies for any \(w \le W\):
$$ w_{i} \bigl[h_{i}^{[s(i,w)-1]} \bigr] < w \le w_{i} \bigl[h_{i}^{[s(i,w)]} \bigr]. $$
(We record here that Q denotes the total number of settlements and W is the total population in the studied territory.) If no such number exists, then we will consider \(s(i,w) = Q\).
The probability that commuters from i travel past a disk with population W inside it can be written as:
$$ P_{>}^{i}(W) = \frac{1}{N_{i}} \Biggl[N_{i} - \sum _{k = 1}^{s(i,W)} f_{i} \bigl(h_{i}^{[k]} \bigr) \Biggr] . $$
Due to the discrete nature of the settlements this has a step-like structure for a given commuting source i, as it is illustrated on Fig. 3 for a given town in USA.
Step-like form of the \(P_{>}^{i}(W)\) probability for commuters starting from a given city, i. The red marked part shows one probability jump between two consecutive settlements
After obtaining these probabilities for all settlements, we construct the desired \(P_{>}(W)\) probability by averaging over all settlements i:
$$ P_{>}(W) = \frac{1}{Q} \sum_{i = 1}^{Q} P_{>}^{i}(W). $$
As expected, averaging will result in a smoother curve, showing the experimental trend for the probability that is in our focus.
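The whole procedure described above can be condensed into a short sketch; the version below is a simplification of our own in which the disk population \(w_{i}[j]\) is approximated by summing settlement populations (the paper uses the finer population grid cells instead) and distances are plain Euclidean distances on already-projected coordinates.

import numpy as np

def empirical_P_gt(coords, pop, flows, W_grid):
    # coords: (Q, 2) settlement coordinates; pop: (Q,) settlement populations
    # flows[i]: dict mapping target settlement j to the commuter number f_i(j)
    # W_grid: population thresholds W at which P_>(W) is evaluated
    Q = len(coords)
    P = np.zeros((Q, len(W_grid)))
    for i in range(Q):
        dist = np.linalg.norm(coords - coords[i], axis=1)        # d(i, j)
        order = np.argsort(dist)                                 # settlements by distance from i
        w_disk = np.cumsum(pop[order])                           # w_i[j]: population inside the disk
        f = np.array([flows[i].get(int(j), 0) for j in order], dtype=float)
        N_i = f.sum()                                            # commuters living in i
        if N_i == 0:
            continue
        absorbed = np.cumsum(f)                                  # commuters stopping inside the disk
        s = np.clip(np.searchsorted(w_disk, W_grid), 0, Q - 1)   # s(i, W)
        P[i] = 1.0 - absorbed[s] / N_i                           # P^i_>(W)
    return P.mean(axis=0)                                        # average over all sources i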
Results and discussions
We first show the experimentally obtained \(P_{>}(W)\) probability for the large USA dataset in comparison with the best-fit results obtained from the GM model (3), the original RM model (6), the RMwS model (7), the TCORM model (8) and our novel FJM model (16). In the FJM model we fixed the \(a=7/4\) parameter for all studied datasets, so in principle the only free parameter of this model is μ. Boundary effects become important for large W values (the disks centred on the settlements become largely incomplete because they extend beyond the borders of the continental USA). To minimize these effects we considered the data only up to \(W_{\mathrm{max}}= 1\text{,}000\text{,}000\). Also, to eliminate very short commuting routes (where commuting is questionable) we imposed a lower threshold of \(W_{\mathrm{min}}=1000\). Fitting was realized on the \([W_{\mathrm{min}},W_{\mathrm{max}}]\) interval using the nonlinear fitting features of the Wolfram Mathematica® software. For the GM model, equation (3) does not lead to a compact functional form, so fitting was realized by considering a progressive mesh method for various α and β values in the intervals \(\alpha\in[-1.0,2.5]\) and \(\beta\in[-1.0,2.5]\). The best-fit parameters and the goodness of the fits (\(R^{2}\) correlation coefficient) are summarized in Table 1.
Table 1 Fitting parameters for the USA commuting data, considering the models given by equations (6), (7), (8), (3) and (16). The \(R^{2}\) values and fitted curves suggest that the FJM and GM models perform better than the simple RM, RMwS, or TCORM models. For the FJM model \(a=7/4\) was fixed
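As an indication of how such a fit could be reproduced outside Mathematica, the sketch below fits the one-parameter FJM form of equation (16) (with a fixed to 7/4) using scipy; the (W, P) points are synthetic placeholders standing in for the empirical curve restricted to \([W_{\mathrm{min}},W_{\mathrm{max}}]\).

import numpy as np
from scipy.optimize import curve_fit

A = 7 / 4
def P_fjm(W, mu):
    return (mu * W + 1.0) ** (1.0 - A)     # Eq. (16) with a = 7/4

W = np.logspace(3, 6, 200)                 # 10^3 ... 10^6, as in the USA fit window
rng = np.random.default_rng(1)
P_emp = P_fjm(W, 5e-5) * np.exp(0.02 * rng.normal(size=W.size))   # synthetic placeholder data

(mu_fit,), _ = curve_fit(P_fjm, W, P_emp, p0=[1e-4])
R2 = 1.0 - np.var(P_emp - P_fjm(W, mu_fit)) / np.var(P_emp)
print(f"mu = {mu_fit:.3g}, R^2 = {R2:.4f}")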
The statistics are in favour of the FJM and GM models. The best fits drawn in Fig. 4 suggest the same conclusion visually. The fact that FJM outperforms the approximation given by the RM model for \(a=7/4\) is not surprising, since it has one more parameter: a. We will show, however, that one can fix this parameter and still obtain an excellent fit on the other datasets as well (Italy and Hungary). The clear improvement in fitting the data relative to the RMwS and TCORM models is however a great leap forward, since these models offer a two-parameter fit. It is important to notice that GM also offers a good fit. This is again a two-parameter fit, but we will show in the following, on other datasets, that one cannot fix any of these parameters and retain a fit quality comparable with FJM.
Comparison between the models prediction with experimental data for USA. Brown circles represent the calculated \(P_{>}(W)\) values for USA using the census data between 2006 and 2010. These results are visually compared with the best fits obtained for the RM model (Eq. (6)), GM model (Eq. (3)) RMwS model (Eq. (7)), TCORM model (Eq. (8)) and FJM model (Eq. (16)). Logarithmic scales are used in order to better illustrate deviations from experiments at all scales. The best fit parameters are given in Table 1
For the sake of completeness we also verify for the USA our hypothesis that the number of job openings in a geographical region is linearly proportional to the population. In Fig. 5 we plot the total number of job openings for the different states as a function of the population of the state. The straight-line trend confirms our hypothesis.
The total number of job openings for different states as a function of the population of the state
The FJM model with \(a=7/4\) works well also for the commuting data processed for Hungary and Italy. The goodness of the fits and the best-fit parameters are shown in Table 2. The \(P_{>}(W)\) curves and the best fits offered by the models are plotted in Fig. 6.
Visual comparison between the FJM model prediction and experimental data for three countries (USA, Italy, Hungary). The faint lines composed of circles show the \(P_{>}(W)\) experimental data and the simple dark colored lines are the best fits with the FJM model prediction (Eq. (16)). We fixed \(a=7/4\) and the best fit μ values are given in Table 2
Table 2 Fitting parameters and goodness of the fits shown in Fig. 6, considering the functional form given by equation (16) and fixing \(a=7/4\)
We show the same results also for the GM model. The obtained best-fit parameters are given in Table 3 and the results are plotted in Fig. 7. We learn from here that GM offers a good description also for Hungary, but it fails for the Italian data, and this suggests that one cannot consider universal values for α and β such that all datasets are reasonably well fitted. The negative value obtained for α is rather strange and suggests again that the GM model is seemingly not appropriate for fitting the Italian commuting data.
Visual comparison between the GM model prediction and experimental data for three countries (USA, Italy, Hungary). The faint lines composed of circles show the \(P_{>}(W)\) experimental data and the simple dark colored lines are the best fits with the GM model prediction (Eq. (3)). The best fit parameters are given in Table 3
Table 3 Fitting parameters and goodness of the fits shown in Fig. 7, considering the functional form given by the GM model (Eq. (3))
In order to describe the statistics of commuter fluxes at different distances we introduced the FJM model based on a mean-field type dynamical approach. The model takes into account indirectly that commuting to larger distances is costly and less probable. Relative to the classical models it offers an improved fit for commuter fluxes in USA, Hungary and Italy. The probability that commuters are traveling for their jobs over a population W is compactly given by equation (16). The model is a two-parameter one, although we have shown that one parameter can be fixed, so that all studied datasets are reasonable well explained. In such sense the model becomes similarly with the RM model a one-parameter one, and improves the RM model in a considerable manner.
In order to comment on the results obtained for USA, Italy and Hungary we review from Table 2 the best fit parameter μ obtained with the FJM model. The parameter μ characterises both the availability of jobs per population and the attractiveness of these jobs to jobseekers. A higher value of μ suggests that there are many jobs relative to the population, jobseekers are aware of them and consider them for a potential commuting. A smaller μ value suggests that the number of available jobs per population is smaller and jobseekers are very selective for commuting. The obtained fitting parameter for μ are in good agreement with the given heuristic justifications and confirms the known social and economic profile of USA, Italy and Hungary. Commuting is more common in USA relative to Europe and there are more available commuting jobs per population. Related to the value of the a exponent in equation (16), one can also draw some interesting conclusions. The difference from the original radiation model (where we have \(a=2\)) suggests an already known issue, i.e. commuters are selective, not all available jobs are acceptable for them and travel cost has to be taken into account in accepting a commuting job [7–11]. Due to this the \(C/\eta\) value is smaller than the one for a simple salary optimization mechanism where the commuters accept the closest job that improves their salary at home (assumption of RM). This can be done either by lowering the C constant or by increasing the value of η, or changing both of them. The seemingly universal value of \(a=7/4\) remains however a puzzle motivating further studies.
In conclusion, we believe that the FJM model proposed in the present study rests on simple and reasonable assumptions, and the studied experimental data support its predictions.
GM:
Gravity Model
RM:
Radiation Model
RMwS:
Radiation Model with Selection
TCORM:
Travel Cost Optimized Radiation Model
FJM:
Flow and Jump Model
Barbosa-Filho H, Barthélemy M, Ghoshal G, James RC, Lenormand M, Louail T, Menezes R, Ramasco JJ, Simini F, Tomasini M (2018) Human mobility: Models and Applications. Physics Reports. https://doi.org/10.1016/j.physrep.2018.01.001
Stouffer SA (1940) Intervening opportunities: a theory relating mobility and distance. Am Sociol Rev 5:845–867
Block H, Marschak J (1960) Random orderings and stochastic theories of responses. In: Contributions to probability and statistics, vol 2 pp 97–132
Lukermann F, Porter PW (1960) Gravity and potential models in economic geography. Ann Assoc Am Geogr 50:493–504
Simini F, González MC, Maritan A, Barabási AL (2012) A universal model for mobility and migration patterns. Nature 484:96–100
Yan XY, Zhao C, Fan Y, Di Z, Wang WX (2014) Universal predictability of mobility patterns in cities. J R Soc Interface 11:20140834
Yan XY, Wang WX, Gao ZY, Lai YC (2017) Universal model of individual and population mobility on diverse spatial scales. Nat Commun 8:1639
Liu E, Yan XY (2018) New parameter-free mobility model. Preprint. arXiv:1808.06363
Simini F, Maritan A, Néda Z (2013) Human mobility in a continuum approach. PLoS ONE 8(3):e60069
Ren Y, Ercsey-Ravasz M, Wang P, Gonzales MC, Toroczkai Z (2014) Predicting commuter flows in spatial networks using a radiation model based on temporal ranges. Nat Commun 5:5347
Varga L, Tóth G, Néda Z (2017) An improved radiation model and its applicability for understanding commuting patterns in Hungary. Reg Statist 6(2):27–38
Stefanouli M, Polyzos S (2017) Gravity vs radiation model: two approaches on commuting in Greece. Transp Res Proc 24:65–72
Wilson AG (1967) A statistical theory of spatial distribution models. Transp Res 1:253–269
Hua CI, Porell F (1979) A critical review of the development of the gravity model. Int Reg Sci Rev 4(2):97–126
Sheppard ES (1978) Theoretical underpinnings of the gravity hypothesis. Geogr Anal 10(4):386–402
Niedercorn JH, Bechdolt BV (1969) An economic derivation of the "gravity law" of spatial interaction. J Regional Sci 9(2):273–282
Domencich T, McFadden DL (2015) Urban travel demand: a behavioral analysis. North-Holland, Amsterdam
Biró TS, Néda Z (2018) Unidirectional random growth with resetting. Physica A 499:355–361
Thurner S, Kyriakopoulos F, Tallis C (2007) Unified model for network dynamics exhibiting nonextensive statistics. Phys Rev E 76:036111
CTPP 2006–2010 Census Tract Flows, Commuting data, American Community Survey. https://www.fhwa.dot.gov/planning/census_issues/ctpp/data_products/2006-2010_tract_flows/
Dash Nelson G, Rae A (2016) An economic geography of the United States: from commutes to megaregions. PLoS ONE 11(11):e0166083
2006–2010 Population distribution, American Community Survey. https://www.census.gov/geo/maps-data/data/tiger-data.html
2018 USA job openings accessed at 10.02.2018. https://www.indeed.com/
2011 Census Tract Flow, Commuting data, Hungary. http://www.ksh.hu
2011 Population distribution, Hungary. http://ec.europa.eu/eurostat/cache/GISCO/geodatafiles/GEOSTAT-grid-POP-1K-2011-V2-0-1.zip
2011 Census Tract Flow, Commuting data, Italy. http://www.istat.it/storage/cartografia/matrici_pendolarismo/matrici_pendolarismo_2011.zip
2011 Population distribution, Italy. http://ec.europa.eu/eurostat/cache/GISCO/geodatafiles/GEOSTAT-grid-POP-1K-2011-V2-0-1.zip
The data used are available on the Internet by following the links from the "Data source and format" and "Data processing" sections. The processed data, in the format indicated in the "Data source and format" section, are available in the Figshare repository, doi:10.6084/m9.figshare.6151130, URL: https://figshare.com/s/b86965bb06ce018f52bf. In this repository one will find the figs_data.zip file containing the data used for plotting the figures, and the Hungary.zip, Italy.zip and USA.zip files containing the processed data for Hungary, Italy and the USA, respectively. The fig2.gdf file is a GUESS Graph Data Format file, which is editable in a simple text editor.
Z.N. is a professor of theoretical physics, working in the area of interdisciplinary applications of statistical physics. He uses both analytical and computational models to understand complex phenomena from physics, economics, biology and sociology. G.T. is a senior investigator at the Hungarian Central Statistical Office. He is a specialist in geographical data collection and in handling and processing large geographical datasets. L.V. is a PhD student in computational physics with a strong background in computer science and informatics.
Work supported by the Romanian Research Council UEFISCDI, Romania through grant Nr: PN-III-P4-PCE-2016-0363.
Department of Physics, Babeş-Bolyai University, Cluj-Napoca, Romania
Levente Varga & Zoltán Néda
Hungarian Central Statistical Office, Budapest, Hungary
Géza Tóth
Levente Varga
Zoltán Néda
ZN designed the study, elaborated the FJM model and wrote the first version of the manuscript. LV analyzed the data and drew the figures. GT collected the data, interpreted them and put them in the desired form. All authors worked on the final version of the manuscript. All authors read and approved the final manuscript.
Correspondence to Zoltán Néda.
Varga, L., Tóth, G. & Néda, Z. Commuting patterns: the flow and jump model and supporting data. EPJ Data Sci. 7, 37 (2018). https://doi.org/10.1140/epjds/s13688-018-0167-3
Human mobility models
Commuters data
Try these 3 questions to test your level of financial literacy.
Question 272 NPV
Suppose you had $100 in a savings account and the interest rate was 2% per year.
After 5 years, how much do you think you would have in the account if you left the money to grow?
More than $102, exactly $102, or less than $102?
Question 278 inflation, real and nominal returns and cash flows
Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year.
After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account?
Question 279 diversification
Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund."
Try these 15 questions which test your knowledge of debt and equity pricing:
Question 1 NPV
Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate.
Ignore credit risk. Remember:
### V_0 = \frac{V_t}{(1+r_\text{eff})^t} ###
Will you accept or reject Jan's deal?
Question 2 NPV, Annuity
Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate.
Ignore credit risk.
Will you accept or reject Katya's deal?
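As a worked check of the two NPV questions above (my own arithmetic, not the site's official answers), both deals can be valued directly with the discounting formula given in Question 1:

```python
# Question 1: pay $100 now, receive $120 in one year, r = 10% pa effective.
r = 0.10
pv_jan = 120 / (1 + r)                      # = 109.09
print(pv_jan - 100)                         # NPV = +9.09 > 0  -> accepting adds value

# Question 2: pay $50 now, receive $10 at the end of each of years 1 to 5.
annuity_factor = (1 - (1 + r) ** -5) / r    # = 3.7908
pv_katya = 10 * annuity_factor              # = 37.91
print(pv_katya - 50)                        # NPV = -12.09 < 0 -> declining is better
```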
Question 3 DDM, income and capital returns
The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula: ### P_0 = \frac{ C_1 }{ r - g } ###
What is ##g##? The value ##g## is the long term expected:
(a) Income return of the stock.
(b) Capital return of the stock.
(c) Total return of the stock.
(d) Dividend yield of the stock.
(e) Price-earnings ratio of the stock.
Question 4 DDM
For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa.
Would you like to buy Carla's share or politely decline?
For a price of $6, Carlos will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa.
Would you like to buy his share or politely decline?
For a price of $102, Andrea will sell you a share which just paid a dividend of $10 yesterday, and is expected to pay dividends every year forever, growing at a rate of 5% pa.
So the next dividend will be ##10(1+0.05)^1=$10.50## in one year from now, and the year after it will be ##10(1+0.05)^2=11.025## and so on.
The required return of the stock is 15% pa.
Would you like to buy the share or politely decline?
For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa.
So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=110.25## and so on.
For a price of $10.20 each, Renee will sell you 100 shares. Each share is expected to pay dividends in perpetuity, growing at a rate of 5% pa. The next dividend is one year away (t=1) and is expected to be $1 per share.
Would you like to buy the shares or politely decline?
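A worked check (my own arithmetic, not the site's official answers) of the perpetuity-style share questions above, using the DDM formula ##P_0 = C_1/(r-g)##. Camille's and Renee's questions are omitted because their required returns are not stated in the text shown here.

```python
def ddm_price(C1, r, g=0.0):
    """Gordon growth / perpetuity price P0 = C1 / (r - g)."""
    return C1 / (r - g)

print(ddm_price(1.00, 0.10))             # Carla:  10.00 < $13 ask   -> decline
print(ddm_price(1.00, 0.10))             # Carlos: 10.00 > $6 ask    -> buy
print(ddm_price(10.50, 0.15, g=0.05))    # Andrea: 105.00 > $102 ask -> buy
```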
Question 9 DDM, NPV
For a price of $129, Joanne will sell you a share which is expected to pay a $30 dividend in one year, and a $10 dividend every year after that forever. So the stock's dividends will be $30 at t=1, $10 at t=2, $10 at t=3, and $10 forever onwards.
Question 10 DDM
For a price of $95, Sherylanne will sell you a share which is expected to pay its first dividend of $10 in 7 years (t=7), and will continue to pay the same $10 dividend every year after that forever.
Question 11 bond pricing
For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa.
Would you like to buy her bond or politely decline?
For a price of $100, Carol will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 12% pa.
For a price of $100, Rad will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa.
Would you like to buy the bond or politely decline?
For a price of $100, Andrea will sell you a 2 year bond paying annual coupons of 10% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa.
For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa.
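A worked check of the bond questions above (my own arithmetic, not the site's official answers). It assumes the quoted coupon and yield rates are annualised rates that compound with the coupon frequency, which is the usual convention for semi-annual-coupon bond questions; if the site intends a different convention the numbers will differ slightly.

```python
def bond_price(face, coupon_pa, yield_pa, years, freq=2):
    """Price a fixed-coupon bond; rates are per annum, compounding freq times a year."""
    c = face * coupon_pa / freq           # coupon paid each period
    y = yield_pa / freq                   # yield per period
    n = round(years * freq)               # number of coupon periods
    return c * (1 - (1 + y) ** -n) / y + face * (1 + y) ** -n

print(bond_price(100, 0.10, 0.08, 2))          # Vera:   ~103.63 > $100 ask -> buy
print(bond_price(100, 0.16, 0.12, 5))          # Carol:  ~114.72 > $100 ask -> buy
print(bond_price(100, 0.10, 0.06, 2, freq=1))  # Andrea: ~107.33 > $100 ask -> buy
print(bond_price(100, 0.08, 0.08, 10))         # Nicole: 100.00  > $95 ask  -> buy
```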
Try these interesting but more difficult questions:
Question 105 NPV, risk, market efficiency
A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced.
What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value (##V_0##), not the value in one year (##V_1##).
(a) $10
(b) $3
(c) $2.8037
(d) $2.7273
(e) $0
Question 125 option, speculation, market efficiency
The US government recently announced that subsidies for fresh milk producers will be gradually phased out over the next year. Newspapers say that there are expectations of a 40% increase in the spot price of fresh milk over the next year.
Option prices on fresh milk trading on the Chicago Mercantile Exchange (CME) reflect expectations of this 40% increase in spot prices over the next year. Similarly to the rest of the market, you believe that prices will rise by 40% over the next year.
What option trades are likely to be profitable, or to be more specific, result in a positive Net Present Value (NPV)?
Assume that:
Only the spot price is expected to increase and there is no change in expected volatility or other variables that affect option prices.
No taxes, transaction costs, information asymmetry, bid-ask spreads or other market frictions.
(a) Buy one year call options on fresh milk.
(b) Buy one year put options on fresh milk.
(c) Sell one year call options on fresh milk.
(d) All of the above option trades are likely to have a positive NPV.
(e) None of the above option trades are likely to have a positive NPV.
Question 223 CFFA, interest tax shield
Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant?
(a) An increase in net capital spending.
(b) A decrease in the cash flow to creditors.
(c) An increase in interest expense.
(d) An increase in net working capital.
(e) A decrease in dividends paid.
Question 270 real estate, DDM, effective rate conversion
You own an apartment which you rent out as an investment property.
What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation?
You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment.
The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year.
So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months.
Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on.
The required return of the apartment is 8.732% pa, given as an effective annual rate.
Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments.
(a) $415,149.4048
(b) $438,252.6707
(c) $441,067.7356
(d) $444,155.5276
(e) $455,263.0844
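For Question 270 above, a quick numerical check (my own working, not the site's official answer) is to value each year of rent as twelve payments in advance at the equivalent monthly rate, and then treat the successive yearly blocks as a perpetuity growing at 3% pa:

```python
# Worked check for Question 270 (my own working, not the official answer).
r_annual = 0.08732
g = 0.03
rent = 2000.0

r_month = (1 + r_annual) ** (1 / 12) - 1          # ~0.70% effective per month
# value, at the start of a year, of 12 monthly payments made in advance
year_block = rent * (1 - (1 + r_month) ** -12) / r_month * (1 + r_month)
# growing perpetuity of yearly blocks, the first block starting now (t=0)
price = year_block * (1 + r_annual) / (r_annual - g)
print(f"{price:,.2f}")    # ~438,252.7 -> consistent with answer (b)
```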
doi: 10.3934/dcdsb.2021276
Pullback attractors via quasi-stability for non-autonomous lattice dynamical systems
Radosław Czaja
Institute of Mathematics, University of Silesia in Katowice, Bankowa 14, 40-007 Katowice, Poland
In memory of María José Garrido Atienza
Received February 2021 Revised October 2021 Early access November 2021
In this paper we study the long-time behavior of first-order non-autonomous lattice dynamical systems in the space of square summable double-sided sequences, using the cooperation between the discretized diffusion operator and the discretized reaction term. We obtain the existence of a pullback global attractor and construct a pullback exponential attractor by applying the introduced notion of quasi-stability of the corresponding evolution process.
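To fix ideas, a typical first-order lattice system of the kind covered by such results reads as follows; this explicit form is illustrative, drawn from the standard lattice dynamical systems literature, and is not claimed to be the exact system treated in this paper:

$$ \dot{u}_i(t) = \nu\,\big(u_{i-1}(t) - 2u_i(t) + u_{i+1}(t)\big) - \lambda u_i(t) + f_i\big(t, u_i(t)\big), \qquad i \in \mathbb{Z}, $$

with the solution $u(t) = (u_i(t))_{i\in\mathbb{Z}}$ sought in the square summable space $\ell^2$. The first term is the discretized diffusion (Laplace) operator and $f_i$ is the discretized reaction term whose cooperation the abstract refers to.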
Keywords: Lattice dynamics, pullback global attractor, pullback exponential attractor, fractal dimension, quasi-stability.
Mathematics Subject Classification: Primary: 37L30; Secondary: 37L60, 34D45.
Citation: Radosław Czaja. Pullback attractors via quasi-stability for non-autonomous lattice dynamical systems. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021276
Entropically engineered formation of fivefold and icosahedral twinned clusters of colloidal shapes
Sangmin Lee ORCID: orcid.org/0000-0002-1145-4708 &
Sharon C. Glotzer ORCID: orcid.org/0000-0002-7197-0085
Colloids
Fivefold and icosahedral symmetries induced by multiply twinned crystal structures have been studied extensively for their role in influencing the shape of synthetic nanoparticles, and solution chemistry or geometric confinement are widely considered to be essential. Here we report the purely entropy-driven formation of fivefold and icosahedral twinned clusters of particles in molecular simulation without geometric confinement or chemistry. Hard truncated tetrahedra self-assemble into cubic or hexagonal diamond colloidal crystals depending on the amount of edge and vertex truncation. By engineering particle shape to achieve a negligible entropy difference between the two diamond phases, we show that the formation of the multiply twinned clusters is easily induced. The twinned clusters are entropically stabilized within a dense fluid by a strong fluid-crystal interfacial tension arising from strong entropic bonding. Our findings provide a strategy for engineering twinning behavior in colloidal systems with and without explicit bonding elements between particles.
Twinning arises from a type of grain boundary called a twin boundary, where two separate crystals sharing the same lattice plane intergrow with a certain symmetry. Fivefold and icosahedral symmetries, which are generally known to be incompatible with long-range order, can be induced in relatively large atomic crystal clusters (from several nanometers to a few microns) via multiple twinning1,2,3,4. For instance, materials that form face-centered cubic (fcc)2,3,5,6,7 or cubic diamond8,9 crystals can form twin boundaries toward the (111) or its equivalent lattice directions, promoting the formation of a \(\sim 70^\circ\) twin angle that can produce multiply twinned structures with fivefold or icosahedral symmetry. Twinning has been widely used to obtain interesting properties in synthetic nanomaterials, such as enhancing the mechanical properties of nanowires10 and increasing oxidation resistance11, and to synthesize noble metal nanostructures with decahedral or icosahedral shape3,12, which are useful as catalysts13. Many studies have been conducted to control the growth mechanism12,14,15 and stability of multiply twinned structures, e.g., tailoring solution chemistry5,6,16,17,18 and using multicomponent materials6,11. However, the multiplicity of variables in multiply twinned crystals makes it difficult to know which variables are responsible for, or have the most influence on, the formation of fivefold and icosahedral twins.
Compared to most systems, hard particle systems are regarded as relatively simple because the phase behavior is driven solely by entropy maximization19,20. The diversity and complexity of entropy-driven phase behavior, however, are enormously rich. The self-assembly of complex crystals and quasicrystals20,21, as well as complex assembly pathways such as multistep crystallization, have been reported in hard particle systems22. The spontaneous formation of icosahedral twins has been observed in hard sphere systems both in experiments and in computer simulations, but only under spherical confinement4,7,15,23, which produces an artificial "surface tension" to overcome the internal strain within the cluster arising from local particle packing. That, and the negligible free energy difference between fcc and hexagonal close-packed (hcp), two competing crystal structures of hard spheres24, conspire to produce twins. However, the formation and stabilization of fivefold and icosahedral twinned clusters without geometrical confinement, or in any other hard particle systems, have yet to be reported. We hypothesized that artificial surface tension provided by spherical confinement could be achieved naturally, without any confinement, through the judicious choice of particle shape.
Here, we show the purely entropy-driven, confinement-free assembly of fivefold and icosahedral twinned clusters from multiply twinned diamond crystals in equilibrium with a dense fluid phase. Monte Carlo (MC) simulations show that hard truncated tetrahedra (TTs) self-assemble into either cubic or hexagonal diamond crystals depending on the amount of edge and vertex truncation (Fig. 1). We tuned the TT shape to have a negligible free energy difference between the two diamond crystal phases, and found the formation of twin boundaries is easily induced in a fluid phase (Fig. 2). This finding demonstrates that the stability of twin boundaries in a fluid can be controlled by particle shape design, even entropically. Using this strategy, we induce the formation of fivefold and icosahedral twins through seed-assisted growth of a single-crystalline seed of cubic diamond. Since the formation of twin boundaries is easily induced toward multiple directions during crystal growth, multiply twinned structures are easily formed. We show that, through an error-and-repair process, a multiply twinned structure with defects transforms into a fivefold twinned cluster in a dense fluid phase (Fig. 3). We also show the formation of an icosahedral twin in a dense fluid upon further growth of the fivefold twin. The icosahedral twin of hard TTs is entropically stabilized within the fluid without spherical confinement (Fig. 3), unlike the icosahedral cluster of hard spheres that rapidly destabilizes and falls apart (Fig. 4). We show that icosahedral clusters of hard TTs have twice the fluid-solid interfacial free-energy (or entropy) compared to icosahedral clusters of hard spheres as a natural consequence of stronger entropic bonding25 in the former system. Our study isolates the essential role of surface tension in stabilizing the inherently strained icosahedral twinned cluster in a dense fluid, and suggests approaches for engineering the surface tension.
Fig. 1: Twinned crystals of truncated tetrahedra.
a Two types of pairwise contacts (left) and the local environment of particles in cubic diamond (upper right) and hexagonal diamond (bottom right). b, c Unit cells of cubic diamond (b, left and middle) and hexagonal diamond (c, left and middle) crystals. The (111) plane of cubic diamond (b, right) and the (0001) plane of hexagonal diamond (c, right) have the same structure. d–j Twinned structures of TTs. Red particles are \(\mathrm{S}_4\mathrm{E}_0\), blue are \(\mathrm{S}_3\mathrm{E}_1\), green are \(\mathrm{S}_2\mathrm{E}_2\) and purple are \(\mathrm{S}_1\mathrm{E}_3\). d A twin boundary is formed when the (111) plane of cubic diamond (magenta arrow) and the (0001) plane of hexagonal diamond (cyan arrow) meet. e Five-fold twinned structure of TTs. The angle between twin boundaries is ~72°. f–j Structure of icosahedral twinned crystal. The bond orientational order diagram (f, upper right) and diffraction pattern (f, bottom right) along the 10-fold symmetry axis. The ico-twin crystal consists of 20 cubic diamond clusters with a tetrahedron Wulff shape (red in f, g), 30 twin planes between the tetrahedral clusters (blue in f, h), and 12 columns with 5-fold symmetry (green in f, i). j The center of the ico-twin is a dodecahedron super-cluster of 100 TTs.
Fig. 2: Stability control of diamond crystals via particle shape design.
a Per-particle Helmholtz free-energy (\(F/Nk_BT\)) plot, in units of \(k_BT\), of cubic (red) and hexagonal (blue) diamond crystals of hard TTs as a function of vertex and edge truncation parameters at constant particle volume fraction (\(\phi=0.62\)). b Phase diagram of hard TTs in the shape space determined by the free-energy calculation. c Free-energy difference between cubic and hexagonal diamond (\(\Delta F=F_{\mathrm{H}}-F_{\mathrm{C}}\)) in shape space. d When the TT is designed \((a=1.20,\,c=2.16)\) to have negligible \(\Delta F \sim +0.007\), the initial cubic diamond (left) single-crystalline cluster forms multiple stacking faults (right) in fluid, suggesting that the free energy loss from the twin boundaries is small. Each TT is represented by a tiny sphere at the TT center of mass. Red and blue spheres represent cubic and hexagonal diamonds, respectively. e When the TT is designed (\(a=1.22,\,c=2.16\)) to have \(\Delta F \sim -0.028\), the initial cubic diamond cluster (left) completely transforms into a single-crystalline hexagonal diamond cluster (right) in a relatively short simulation time. f The change of the number ratio of particles in cubic diamond over time for three different \(\Delta F=+0.138, +0.007\) and \(-0.028\) (black X markers in c). Orange and cyan circles indicate each snapshot in d and e, respectively. g Local volume fraction distribution plots calculated from the last snapshot from d (upper) and e (lower). Two peaks in the distribution show that the system is in coexistence between fluid and solid.
Fig. 3: Growth process of fivefold and icosahedral twinned crystals from seed in fluid.
Seed-assisted growth of twinned crystals of hard TTs (\(a=1.20,\,c=2.16\)) at coexistence between crystal and fluid (\(\phi=0.58\)). a–h Simulation snapshots showing the error-and-repair mechanism of the five-fold twinned crystal. Each TT is represented by a tiny sphere at the TT center of mass. a When a spherical seed of cubic diamond crystal (\(N=500\)) grows, b twin boundaries are formed along (111) or its equivalent plane directions. Because the growth of each direction is independent, c, d an error can occur when stacking sequences of two growth directions mismatch. e The growing direction with the error is re-melted toward its opposite direction f until the boundary with its adjacent plane matches. Once the boundary matches, g, h the five-fold twinned crystal grows fully, forming a five-fold twinned crystal with a truncated pentagonal dipyramid (PD) crystal shape. The final crystal is fully surrounded by fluid and stabilized. i–l Simulation snapshots of icosahedral twinned crystal formation from a PD seed. All particles are represented by spheres showing centers of mass. m The icosahedral twinned crystal exposes (111) surfaces of cubic diamond crystal when stabilized in fluid. n Local volume fraction distribution plot of the system at coexistence shows a bimodal shape at \(\phi=0.57\) and \(0.64\). o The change of pressure over time for the ico-twin crystal-forming system. Pressure decreases during the growth of the ico-twin crystal and is constant (\(P^{*} \sim 12.6\)) after the growth, indicating that the ico-twin crystal is stable in fluid. The red dashed lines indicate the simulation time when the snapshots in i–l are taken. The inset snapshots and diffraction patterns show that the ico-twin crystal structure surrounded by fluid is maintained during pressure stabilization.
Fig. 4: Fluid-solid interfacial energy calculation.
a Icosahedral twinned crystal of FCC of hard spheres quickly destabilizes in fluid (\( < {10}^{6}\) MC steps). b (111) surfaces of a cubic diamond crystal of hard TTs (left) and an FCC crystal of hard spheres (right), and their side views (middle). Simulation setup of the capillary fluctuation method for calculating the fluid-solid interfacial stiffness \(\widetilde{\gamma }\) of (c) cubic diamond and (d) FCC. e–g The change of interfacial profiles of the (111) direction of (e) cubic diamond and (g) FCC. f–h Fluid-solid interfacial free energy \(\gamma\) of (f) cubic diamond and (h) FCC. For all three lattice directions, the hard TT system has more than twice the value of \(\gamma\) of the hard sphere system, indicating its stronger fluid-solid surface tension.
Cubic diamond is different from hexagonal diamond (or lonsdaleite) in the conformation of local tetrahedral bonds26. Each atom has four tetrahedral bonds with nearest neighbors, and every atom in cubic diamond has four staggered bonds, while every atom in hexagonal diamond has three staggered bonds and one eclipsed bond. For a hard particle with a tetrahedron shape, the two types of chemical bond conformations can be mapped to the staggered and eclipsed entropic bond conformations associated with face-to-face contacts. (Fig. 1a). An appropriate degree of tip truncation of hard tetrahedra allows them to exclusively form staggered contacts, resulting in the self-assembly of the cubic diamond phase27,28 (Fig. 1b). However, the self-assembly of the hexagonal diamond phase, where the two bond conformation types are mixed, has not been reported in hard particle systems. We hypothesized that, through judicious truncation of a regular tetrahedron, we can find an appropriate shape that introduces both staggered and eclipsed contacts and favors hexagonal diamond (Fig. 1c), and, intermediate between the cubic diamond-forming and hexagonal diamond forming shapes, one that produces both diamond structures with negligible difference in entropy. Controlling the relative stability of cubic and hexagonal diamond will give us a route to control twinning and stacking behaviors of diamond crystals (Fig. 1b–d). Cubic and hexagonal diamond share an equivalent plane along the \((111)\) and \((0001)\) directions (Fig. 1b, c); fcc and hcp share the same equivalent plane. If the entropy difference between the two crystals can be engineered to be sufficiently small, the crystal will be prone to form twin boundaries.
The twinning of cubic diamond occurs towards \((111)\) and its equivalent planes such as \((\bar{1}11)\), \((1\bar{1}1)\) and \((11\bar{1})\); thus when twinning occurs in multiple directions, two twin planes meet with a specific angle, \({{{\cos }}}^{-1}(1/3)=70.5^\circ\), which is close to \(72^\circ\), the same angle that promotes the formation of multiply twinned structures with fivefold symmetry2 (Fig. 1e). The structure of the multiply twinned crystals made by TTs is easily analyzed by classifying particles based on the number of staggered and eclipsed contacts (Supplementary Figs. 1, 2), following the notation \({{{{{{\rm{S}}}}}}}_{n}{{{{{{\rm{E}}}}}}}_{m}\), where \(n+m=4\). For instance, the cubic diamond crystal consists of \({{{{{{\rm{S}}}}}}}_{4}{{{{{{\rm{E}}}}}}}_{0}\) particles (Fig. 1a, b), and the hexagonal diamond crystal consists of \({{{{{{\rm{S}}}}}}}_{3}{{{{{{\rm{E}}}}}}}_{1}\) particles (Fig. 1a, c). In a fivefold twin, five cubic diamond clusters with a tetrahedron shape made by \({{{{{{\rm{S}}}}}}}_{4}{{{{{{\rm{E}}}}}}}_{0}\) particles together form a pentagonal bipyramid super-cluster with five twin planes of \({{{{{{\rm{S}}}}}}}_{3}{{{{{{\rm{E}}}}}}}_{1}\) particles between the faces of the tetrahedron clusters (Fig. 1e). At the point where the five twin planes meet, \({{{{{{\rm{S}}}}}}}_{2}{{{{{{\rm{E}}}}}}}_{2}\) particles form a column where small pentagonal bipyramids (\(N=5\)) are linearly stacked. An icosahedral twin (Fig. 1f) comprises twenty cubic diamond clusters (\({{{{{{\rm{S}}}}}}}_{4}{{{{{{\rm{E}}}}}}}_{0}\) particles), each with a tetrahedron shape (Fig. 1g). Thirty twin planes (\({{{{{{\rm{S}}}}}}}_{3}{{{{{{\rm{E}}}}}}}_{1}\)) exist between the faces of the tetrahedron clusters (Fig. 1h), and there twelve fivefold columns made by \({{{{{{\rm{S}}}}}}}_{2}{{{{{{\rm{E}}}}}}}_{2}\) particles where five tetrahedron clusters meet (Fig. 1i). At the center, there is a dodecahedron super-cluster \((N=100)\) comprising three different shells with icosahedral symmetry: 60 \({{{{{{\rm{S}}}}}}}_{2}{{{{{{\rm{E}}}}}}}_{2}\) particles comprising the outer shell form a rhombicosidodecahedron, 20 \({{{{{{\rm{S}}}}}}}_{4}{{{{{{\rm{E}}}}}}}_{0}\) particles comprising the intermediate shell form a dodecahedron and 20 \({{{{{{\rm{S}}}}}}}_{1}{{{{{{\rm{E}}}}}}}_{3}\) particles comprising the smallest shell form a dodecahedron. This hierarchical structure is equivalent to the super-cluster of water \({({{{{{{\rm{H}}}}}}}_{2}{{{{{\rm{O}}}}}})}_{100}\) when connecting oxygen atoms29.
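The \(\mathrm{S}_n\mathrm{E}_m\) labels used above can be assigned by a simple per-particle count of contact types. The sketch below is not the authors' analysis code; it assumes the face-to-face contacts have already been detected and classified as staggered or eclipsed by some upstream geometric step, and only shows the bookkeeping.

```python
# Minimal sketch (assumed data layout, not the authors' code): given
# per-contact labels, tag each particle S_nE_m by counting its staggered (S)
# and eclipsed (E) face-to-face contacts.
from collections import defaultdict

def classify_SnEm(contacts, n_particles):
    """contacts: iterable of (particle_i, particle_j, kind) with kind in {"S", "E"}."""
    counts = defaultdict(lambda: {"S": 0, "E": 0})
    for i, j, kind in contacts:
        counts[i][kind] += 1
        counts[j][kind] += 1
    labels = {}
    for p in range(n_particles):
        n_s, n_e = counts[p]["S"], counts[p]["E"]
        labels[p] = f"S{n_s}E{n_e}"   # "S4E0" cubic diamond, "S3E1" hexagonal / twin plane,
    return labels                      # "S2E2" fivefold columns, etc.
```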
A competition between staggered and eclipsed contacts arises from many-body interactions responsible for the entropic forces producing the effective attraction between neighboring particles, and preferences to form a certain type of face-to-face TT contact, which are determined by the strength of this attraction, can be controlled by varying the amount of edge and vertex truncation28. We know that regular tetrahedra prefer to form a dodecagonal quasicrystal21 in which every particle has eclipsed alignments (\(\mathrm{S}_0\mathrm{E}_4\)). Truncation introduces additional facets, weakening the strength of the entropic bond between two primary eclipsed faces, and thereby weakening the preference for eclipsed alignment. The entropic bonding between primary eclipsed faces is weakened with increasing truncation because less and less free volume is gained by maintaining this alignment (though this alignment still maximizes free volume and thus system entropy). Beyond a certain amount of truncation, however, the system has more entropy (more free volume) if the neighbors align in a staggered way, and thus the staggered configuration becomes preferred over the eclipsed configuration (\(\mathrm{S}_4\mathrm{E}_0\))27. Motivated by this, we searched the shape space28 between \(\mathrm{S}_0\mathrm{E}_4\) and \(\mathrm{S}_4\mathrm{E}_0\) to find a shape intermediate between the two that favors the hexagonal diamond phase (\(\mathrm{S}_3\mathrm{E}_1\)). To quantify the relative thermodynamic stability of cubic and hexagonal diamond as a function of TT shape, we constructed free energy surfaces in TT shape space varying the edge (\(1.14\le a\le 1.30\)) and vertex (\(2.10\le c\le 2.30\)) truncation at constant volume fraction (Fig. 2a), using the Frenkel–Ladd free energy calculation method24,30 ("Methods"). We found a region where hexagonal diamond is thermodynamically more stable than cubic diamond, as shown in the phase diagram generated from the bottom view of the free energy landscape (Fig. 2b). We also confirmed that those shapes self-assemble into hexagonal diamond (Supplementary Fig. 3).
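Given free-energy surfaces like those in Fig. 2a, selecting a shape with negligible \(\Delta F\) reduces to a grid search. The sketch below assumes the two surfaces were computed elsewhere (for example by the Frenkel–Ladd route mentioned above) and saved on a common \((a, c)\) grid; the file names, grid resolution and array layout are assumptions, not the authors' setup.

```python
# Minimal sketch (assumed inputs): locate the shape parameters (a, c) where
# the cubic/hexagonal diamond free-energy difference |dF| = |F_H - F_C| is smallest.
import numpy as np

a_vals = np.linspace(1.14, 1.30, 17)      # edge-truncation grid (range per Fig. 2)
c_vals = np.linspace(2.10, 2.30, 21)      # vertex-truncation grid
F_C = np.load("F_cubic.npy")              # shape (17, 21); hypothetical files
F_H = np.load("F_hex.npy")

dF = F_H - F_C
i, j = np.unravel_index(np.argmin(np.abs(dF)), dF.shape)
print(f"smallest |dF| = {abs(dF[i, j]):.3f} kT at a = {a_vals[i]:.2f}, c = {c_vals[j]:.2f}")
```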
To assess the stability of the twin boundaries depending on the TT shape, we chose three systems with different relative stabilities between the two diamond phases, \(\Delta F={F}_{H}-{F}_{C}=0.138,0.007,\,{{{{{\rm{and}}}}}}-0.028\) (Fig. 2c). For each system, we prepared a cubic diamond cluster fully surrounded by a fluid phase at coexistence and equilibrated the system ("Methods"). We first distinguished particles in fluid and crystal based on the local volume fraction ("Methods"), which shows two peaks in distribution plots that indicate the two coexisting phases (Fig. 2g). Then, we colored particles in the crystal phase based on their face-to-face contact type \(({{{{{{\rm{S}}}}}}}_{n}{{{{{{\rm{E}}}}}}}_{m})\) and analyzed the crystal structures. When the free energy difference is large enough \(\left(\Delta F \sim+0.138\right)\), the cubic diamond cluster is stable without any notable changes in the crystal structure (Fig. 2f). However, when \(\Delta F\) is negligible (\(\Delta F \sim 0.007\)), the initial cubic diamond cluster repeatedly developed stacking faults (Fig. 2d) during the simulation, rather than stabilizing either diamond phase. This shows that the formation of twin boundaries between the two diamond phases is easily induced when the thermodynamic stability of the two crystals is comparable. On the other hand, when we further increased the stability of hexagonal diamond (\(\Delta F=-\!\!0.028\)), a complete phase transformation into a single-crystalline hexagonal diamond was observed (Fig. 2e). The phase transition behavior of the three systems was quantitatively compared by the change of the number ratio of \({{{{{{\rm{S}}}}}}}_{4}{{{{{{\rm{E}}}}}}}_{0}\) particles in a solid phase (\({N}_{{{{{{\rm{cubic}}}}}}}/{N}_{{{{{{\rm{solid}}}}}}}\)) (Fig. 2f). We confirmed a more gradual decrease of the \({N}_{{{{{{\rm{cubic}}}}}}}/{N}_{{{{{{\rm{solid}}}}}}}\) in the \(\Delta F=0.007\) system compared to that of the \(\Delta F=-\!\!0.028\) system, due to frequent formation of stacking faults.
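A local volume fraction can be estimated in several ways; the sketch below uses a crude fixed-radius neighbour count as a stand-in for the (likely Voronoi-based) measure behind Fig. 2g and Fig. 3n, so the cutoff and the threshold separating the two peaks are illustrative only.

```python
# Minimal sketch (simplified stand-in, not the authors' method): estimate a
# local volume fraction per particle from the neighbour count within a fixed
# cutoff, then threshold the bimodal distribution to split fluid from solid.
import numpy as np
from scipy.spatial import cKDTree

def local_phi(points, particle_volume, cutoff, box_length):
    """points must be wrapped into [0, box_length) for the periodic kd-tree."""
    tree = cKDTree(points, boxsize=box_length)
    # neighbour count within the cutoff sphere (includes the particle itself)
    n_in_sphere = np.array([len(tree.query_ball_point(p, cutoff)) for p in points])
    sphere_volume = 4.0 / 3.0 * np.pi * cutoff ** 3
    return n_in_sphere * particle_volume / sphere_volume

# phi = local_phi(positions, v0, cutoff=2.5, box_length=L)
# solid_mask = phi > 0.60   # threshold read off between the two distribution peaks
```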
Based on these findings, we hypothesized that the formation of a multiply twinned structure might be easily induced during the growth of the cubic diamond when \(\Delta F\) is negligible because of the easy formation of twin boundaries in multiple directions, which kinetically promotes the formation of ~\(72^\circ\) twin angles. To confirm the hypothesis, we conducted seed-assisted growth simulations ("Methods") with a system \((a=1.20,\,{c}=2.16)\) where \(\Delta F\) is negligible \((\Delta F=0.007)\). We placed a small spherical seed of cubic diamond crystal (\(N=500\)) in a fluid and allowed the seed to grow at the coexistence volume fraction \(\phi=0.58\) where the resulting crystal cluster is fully surrounded by a fluid phase in equilibrium. In the early stage of growth, we observed the formation of twin boundaries in multiple directions (Fig. 3a, b). Through subsequent multiple twinning events, a fivefold center was formed (Fig. 3c). Although the fivefold centers form a new local structure, \({{{{{{\rm{S}}}}}}}_{2}{{{{{{\rm{E}}}}}}}_{2}\) (Supplementary Fig. 2)—a pentagonal column —it has a similar local structure to that of hexagonal diamond, resulting in minimal entropy loss. Note that an error or defect can easily occur during the growth process because the formation of twin boundaries independently in multiple directions can cause a mismatch in the stacking sequence between two adjacent directions (Fig. 3d). However, interestingly, we observed a self-repair process during growth, where a growing direction with stacking errors re-melts until the boundary with its adjacent plane matches (Fig. 3e, f). Once the boundary matches, the fivefold twin crystal fully grows and is stabilized in fluid with a truncated pentagonal dipyramid shape (Fig. 3g, h). This error-and-repair mechanism of fivefold twinned crystals occurs via particle migration on the crystal surface, differently from the error-and-repair mechanism recently reported, in which particle rearrangement occurs throughout the fivefold twinned crystal during growth14, and differently from that reported recently for a dodecagonal quasicrystal of hard tetrahedra31. The observation of this new mechanism demonstrates that there exist multiple types of error-and-repair mechanisms for the formation of fivefold twinned clusters as well as for entropically stabilized colloidal crystals.
Next, we investigated the formation of an icosahedral twinned cluster within a fluid phase. Experiments have shown that an icosahedral twinned cluster can be formed from additional multiple twinning from a decahedral cluster12,32. Simulations have shown it can be formed in a purely entropy-driven system provided there is artificial confinement of the cluster. Our simulation results show that this is possible also in a purely entropy-driven system, but without confinement. We conducted a growth simulation of a seed with a fivefold twinned structure \((N=1020)\) in a system where \(\Delta F\) is negligible (\(\Delta F \sim 0.007\) at \(a=1.20,\,{c}=2.16\)) ("Methods"), and where the solid cluster is fully surrounded by fluid throughout the simulation. We observed the additional twinning from the surface of the seed, which eventually grew into an icosahedral twin (Fig. 3i–l), a process reminiscent of a simultaneous successive twinning of a palladium icosahedral cluster32. The fluid-solid coexistence of our ico-twin was confirmed by the local volume fraction distribution (Fig. 3n) showing two peaks. We classified particles into three types: fluid, interface and crystal. After removing particles at the interface, we confirmed that the icosahedral cluster exposes twenty (111) planes of the cubic diamond crystal toward the fluid (Fig. 3m). This implies that the interfacial entropy difference between fluid and the (111) plane plays an important role in the stability of the icosahedral cluster17.
The stability of the icosahedral cluster in the fluid can be checked by tracking the change in reduced system pressure (\({P}^{*}=P{v}_{0}/{k}_{B}T\)) during a simulation run (Fig. 3o). In the early stage (\(\le 50\times {10}^{5}\) MC steps), we observed a rapid decrease in pressure due to the growth of the icosahedral cluster. After the growth (\( > 50\times {10}^{5}\) MC steps), the pressure stabilized at a constant value (\({P}^{*} \sim 12.6\)) with small fluctuations, indicating that the ico-twin structure had stabilized in the fluid. From the simulation snapshots and diffraction patterns at \(100\times {10}^{5}\) and \(175\times {10}^{5}\) MC steps, we confirmed that the icosahedral symmetry of the solid cluster was maintained (Fig. 3o, insets).
Structurally, an icosahedral twin has internal and surface strain33, thus an internal or external effect to compensate for the strain energy is required to stabilize the cluster. For instance, we know that hard sphere systems require spherical confinement to entropically stabilize an icosahedral cluster4,7,23, otherwise, the icosahedral cluster rapidly destabilizes (Fig. 4a). This indicates that the fluid-solid interfacial tension of hard spheres is not strong enough to overcome the entropy loss from the strain within the icosahedral cluster. On the other hand, the icosahedral cluster of hard TTs is stable in a fluid without geometric confinement, suggesting that the fluid-solid interfacial tension is enhanced by the shape of the particle. Figure 4b shows the crystal structure of the (111) plane in the clusters of hard TTs and hard spheres, which is the largest surface of the icosahedral cluster exposed to the fluid. The (111) surface of the hard TT crystal is flat because the faces of the TTs are exposed toward the plane. On the other hand, the (111) surface of the hard sphere crystal is rough due to the spherical shape of the particles, implying that the roughness of the surface can affect the surface tension of the crystal.
To quantify the difference in the surface effect, we calculated the fluid-solid interfacial free energy of hard TTs and hard spheres using the capillary fluctuation method34 ("Methods"). We exposed a specific lattice plane of a crystal toward a fluid in a thin slab (\(z\)-axis in Fig. 4c, d) and monitored the fluctuation of the interface in equilibrium. The interfacial profile normalized by the circumscribed radius of the particle, \((h-{h}_{0})/{r}_{{{{{{\rm{circ}}}}}}}\), shows that the fluctuations of the (111) plane of the hard sphere system are larger than that of the hard truncated tetrahedra system (Fig. 4e and Supplementary Movies 1, 2). From 200 samples of the interfacial profiles obtained in equilibrium, we calculated the interfacial stiffness (\(\widetilde{\gamma }\)) of the crystal planes (Supplementary Fig. 6 and "Methods") and measured the fluid-solid interfacial free energy \((\gamma )\) of several major crystal orientations (Supplementary Table 2). The interfacial stiffness of the (111) plane of the fcc crystal of hard spheres, \({\widetilde{\gamma }}_{{{{{{\rm{HS}}}}}\_}(111)}=0.684{k}_{B}T{\sigma }^{-2}\), is \(56\%\) lower than that of the cubic diamond crystal of hard truncated tetrahedra, \({\widetilde{\gamma }}_{{{{{{\rm{TT}}}}}\_}(111)}=1.547{k}_{B}T{\sigma }^{-2}\) ("Methods"). In addition, we confirmed that the fluid-solid interfacial free energy difference (surface tension) of the (111) plane of the hard TT crystal, \({\gamma }_{{{{{{\rm{TT}}}}}\_}(111)}=1.347{k}_{B}T{\sigma }^{-2}\), is \(\sim 2.4\) times higher than that of the hard sphere crystal, \({\gamma }_{{{{{{\rm{HS}}}}}\_}(111)}=0.560{k}_{B}T{\sigma }^{-2}\) (Supplementary Table 2). This indicates that a strong fluid-solid interfacial tension of the hard TT system stabilizes the icosahedral twinned cluster in fluid without geometric confinement.
The dependence of the fluid-solid interfacial tension on particle shape is indirectly indicated by the change in local volume fraction distributions arising from TT shape change (Supplementary Fig. 5). The volume fraction of fluid and crystal at fluid-solid coexistence depends on the fluid-solid surface tension; thus, the local volume fraction change due to TT truncation suggests a change in fluid-solid interfacial energy from the TT shape change. To confirm this hypothesis, we compared three different hard particle systems with very different local volume fraction distributions at fluid-solid coexistence (Supplementary Fig. 7): (1) Hard triangular bipyramids (TBP) forming clathrate type-1 crystal22, (2) hard TTs with \(a=1.20\), \(c=2.16\) forming cubic diamond, and (3) hard spheres (HS) forming FCC crystal. The three systems show different local volume fraction difference between crystal and fluid (\({\phi }_{{{{{{\rm{Crystal}}}}}}}-{\phi }_{{{{{{\rm{Fluid}}}}}}}=\triangle \phi\)): \(\triangle {\phi }_{{{{{{\rm{TBP}}}}}}}=0.116\), \(\triangle {\phi }_{{{{{{\rm{TT}}}}}}}=0.077\) and \(\triangle {\phi }_{{{{{{\rm{HS}}}}}}}=0.05\) (Supplementary Fig. 7). Next, we conducted capillary fluctuation simulations for the three systems to calculate the interfacial stiffness of each system. We observed that the TBPs with the largest local volume fraction difference (\(\triangle \phi=0.116\)) show the smallest fluctuations of the interface, resulting in the strongest interfacial stiffness (\(\widetilde{\gamma } \sim 7.21{k}_{B}T{\sigma }^{-2}\)) (Supplementary Fig. 7b, c). On the other hand, the HSs with the smallest local volume fraction difference (\(\triangle \phi=0.05\)) show the largest fluctuations of the interface, resulting in the weakest interfacial stiffness (\(\widetilde{\gamma } \sim 0.74{k}_{B}T{\sigma }^{-2}\)) (Supplementary Fig. 7h, i). This comparison provides clear evidence that particle shape determines the volume fraction of fluid and solid at coexistence, which in turn determines the interfacial energy.
In summary, we studied the entropy-driven assembly and stabilization of fivefold and icosahedral twinned clusters in a one-component fluid of hard truncated tetrahedra (TT). We showed that the thermodynamic stability of the cubic and hexagonal diamond phases can be entropically controlled by designing the TT shape. If the particle shape is designed to have a negligible free energy difference between the two diamond crystals, the formation of twin boundaries is easily induced. This strategy can be used to induce the formation of fivefold and icosahedral twinned clusters in fluid via seed-assisted growth. We found that the formation of a fivefold cluster follows an error-and-repair mechanism that removes mismatches of the stacking sequence of adjacent twin planes through particle rearrangement at the cluster surface. We showed the formation of an icosahedral cluster by additional growth of twinned structures from a fivefold twinned seed. We showed that the icosahedral cluster of TTs can be entropically stabilized in a fluid, which is not possible for hard spheres without confinement. The capillary fluctuation method showed that a hard TT system has a much higher fluid-solid interfacial free energy (here, entropy) difference than that of a hard sphere system, directly implicating fluid-solid interfacial tension in stabilizing the TT ico-twin.
Our findings provide a quantitative understanding of the formation and stabilization of fivefold and icosahedral twinned clusters in hard particle systems. Importantly, we showed that the twinning behavior and the interfacial free energy difference may be entropically engineered by particle shape design. Experimentally, colloidal tetrahedral particles are synthesizable26,35,36,37,38 with tunable shape37. Non-complementary DNA could be used to make non-interacting particles that might closely approximate those studied here. Fivefold and icosahedral twinned clusters should be also attainable with attractive TT particle shapes, provided the explicit attraction favors face-to-face alignment. Such interparticle attraction could be realized through organic ligands36,38 via van der Waals forces or self-complementary DNA26, both of which are commonly used to drive nanoparticle assembly.
Particle geometry
The truncated tetrahedron (TT) shape is a member of the spherical triangle invariant 323 family, with truncation parameters \(\left(a,b,c\right)\) according to previous convention39. In this study, \(b=1.0\) for all systems, and \(a\) and \(c\) vary over the ranges \(1.14\le a\le 1.30\) and \(2.10\le c\le 2.30\).
Identification of staggered and eclipsed pairs
Each TT in a crystal phase has four nearest neighbors that form face-to-face contacts. We identified the type of each face-to-face contact (staggered or eclipsed) following an identification protocol similar to that described in ref. 28. Briefly, for each face of the particle we assigned three vectors, defined from the face center to each tip of the face (Supplementary Fig. 1a). Then, for each pair contact (\(i,j\)), we can define three vectors for particle \(i\) and three other vectors for particle \(j\) (Supplementary Fig. 1b, c). We calculated the angle for each of the nine combinations of the \(i\) and \(j\) vectors and took the minimum angle (\(\theta\)). Accumulating \(\theta\) over every pair contact in the entire system gives an angle distribution; for instance, Supplementary Fig. 1d was calculated from the hexagonal diamond crystal. The angle distribution is clearly bimodal, and the heights of the peaks at \({\theta }_{1}\) and \({\theta }_{2}\) are in roughly a 1:3 ratio, indicating that the angles around \({\theta }_{1}\) and \({\theta }_{2}\) come from eclipsed pairs and staggered pairs, respectively. This allows us to select ranges of the pair contact angle that identify eclipsed pairs (\({\theta }_{1}\pm 10^\circ\)) and staggered pairs (\({\theta }_{2}\pm 10^\circ\)).
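The minimum-angle criterion itself is compact enough to sketch in a few lines of Python. The snippet below is illustrative only and is not the analysis code used for the paper; the function names and the use of the ±10° tolerance as a keyword argument are our own choices.

```python
import numpy as np

def min_pair_angle(vecs_i, vecs_j):
    """Minimum angle (degrees) over the 3 x 3 vector pairs of one face-face contact."""
    vi = vecs_i / np.linalg.norm(vecs_i, axis=1, keepdims=True)
    vj = vecs_j / np.linalg.norm(vecs_j, axis=1, keepdims=True)
    cosines = np.clip(vi @ vj.T, -1.0, 1.0)          # 3 x 3 matrix of cos(theta)
    return np.degrees(np.arccos(cosines)).min()

def classify_contact(theta, theta1, theta2, tol=10.0):
    """Label a contact as eclipsed (near theta1) or staggered (near theta2)."""
    if abs(theta - theta1) <= tol:
        return "eclipsed"
    if abs(theta - theta2) <= tol:
        return "staggered"
    return "unclassified"
```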
Monte Carlo simulation
Simulations were performed with the hard particle Monte Carlo (HPMC) method implemented in the HOOMD-blue simulation package40,41, which is available at https://github.com/glotzerlab/hoomd-blue. The system size for the self-assembly simulations (Fig. 2) is \(N={{{{\mathrm{2000}}}}}\), and all simulations were performed with periodic boundary conditions. The translational and rotational move sizes were chosen to give an acceptance ratio of about \(20\%\) in every simulation. The self-assembly simulations were initialized from a dilute isotropic fluid \(\phi =N{v}_{0}/V < 0.01\) and compressed until the desired thermodynamic condition (\(\phi\) or reduced pressure \({P}^{*}=P{v}_{0}/{k}_{B}T\)) was reached. Here, \({v}_{0}\) and \(V\) are the volume of a particle and the volume of the simulation box, respectively. The unit length of the simulation is defined by \(\sigma\), and the particle volume of a TT is set to \({v}_{0}=1.0{\sigma }^{3}\) regardless of its truncation amount. After initialization, each run was continued in the isochoric (NVT) ensemble at constant particle volume fraction \(\phi\) until equilibration was reached. For most systems, crystallization occurs within \(1.0\times {10}^{8}\) Monte Carlo steps.
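For orientation, a minimal NVT hard-particle MC run of this kind can be set up as sketched below. This is not the production script: it assumes a HOOMD-blue version 3-style API, a pre-compressed initial configuration stored in a placeholder file init.gsd, and placeholder vertices (the real TT vertices follow the truncation convention of ref. 39).

```python
import hoomd

# Placeholder vertices (a regular tetrahedron); replace with the truncated
# tetrahedron for the chosen (a, b, c), scaled so that v0 = 1.0 sigma^3.
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

sim = hoomd.Simulation(device=hoomd.device.CPU(), seed=42)
sim.create_state_from_gsd(filename="init.gsd")   # fluid already at the target phi

mc = hoomd.hpmc.integrate.ConvexPolyhedron()
mc.shape["A"] = dict(vertices=verts)
mc.d["A"] = 0.05   # translational move size, tuned for ~20% acceptance
mc.a["A"] = 0.05   # rotational move size, tuned for ~20% acceptance
sim.operations.integrator = mc

sim.run(1_000_000)
accepted, rejected = mc.translate_moves
print("translate acceptance:", accepted / max(accepted + rejected, 1))
```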
Stability of diamond crystals in a dense fluid
We checked the stability of a cubic diamond crystal in a dense fluid for three different systems: (1) \(a=1.16,\,{c}=2.16\) at \(\phi=0.57\), (2) \(a=1.20,\,{c}=2.16\) at \(\phi=0.575\) (Fig. 2d), and (3) \(a=1.24,\,{c}=2.16\) at \(\phi=0.58\) (Fig. 2e). All three systems are in a fluid-solid coexistence state, where a single-crystalline cluster is fully surrounded by a dense fluid phase. System size is \(N={{{{\mathrm{20,000}}}}}\) for all three systems. For the MC simulation, we first placed a spherical cubic diamond single-crystalline cluster (\(N \sim {{{{\mathrm{8000}}}}}\)) at the center of a large simulation box \((\phi=0.01)\) and put other particles around the solid cluster without overlaps. Then, holding the particles of the crystal phase fixed (no movements), we rapidly compressed the system to the target particle volume fraction. Finally, we released the particles of the crystal and equilibrated the whole system.
Free-energy calculation
The reduced Helmholtz free energy per particle \(F/N{k}_{B}T\) of the cubic and hexagonal diamond phases was calculated using the Frenkel–Ladd method24,30 at a constant volume. The details of the calculation method can be found in ref. 30. Briefly, we first constructed an Einstein lattice (i.e., reference state) of each phase, for which the free energy is analytically solvable. Then, we applied strong translational and rotational harmonic potentials between the hard TTs and the Einstein lattice to tether each particle to its reference lattice site, which makes the hard TT system a reference state. Then we gradually released the harmonic potential until the strength of the potential became nearly zero, yielding the real system without correlation to the reference state. This process allows us to set up a continuous and reversible path between the reference state and a real state. Thus, by integrating the potential energy along the path, we can calculate the free energy (or equivalently, entropy) difference between the reference state and the real system. Because the free energy of the reference crystal is analytically solvable, we can calculate the absolute free energy of the real crystal.
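The integration step at the core of this construction can be sketched as below. This is not the authors' code: it assumes the simulations have already produced the ensemble average of the tethering (spring) energy at a ladder of coupling strengths λ, and it omits the analytic Einstein-crystal term and the finite-size and center-of-mass corrections handled in ref. 30.

```python
import numpy as np

def frenkel_ladd_excess(lambdas, mean_spring_energy):
    """F(real) - F(Einstein reference) along the reversible path, in units of kB*T.

    lambdas            : ascending coupling strengths from 0 to lambda_max
    mean_spring_energy : <U_spring> sampled at each lambda
    Path: U(lambda) = U_hard + lambda * U_spring, so dF/dlambda = <U_spring>_lambda,
    and the real (untethered) system is the lambda = 0 end point.
    """
    lam = np.asarray(lambdas, dtype=float)
    u = np.asarray(mean_spring_energy, dtype=float)
    return -np.trapz(u, lam)   # F(0) - F(lambda_max) = -integral of <U_spring> d(lambda)
```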
Using this method, we calculated the free energy of each phase with different shape parameters \((a,{c})\) to construct a free energy surface (Fig. 2a) in the shape space: \(1.14\le a\le 1.30\) with 0.02 interval and \(2.10\le c\le 2.30\) with 0.02 interval, for a total of 99 systems with different shape parameters \((a,{c})\). The system size is \(N=512\) for the cubic diamond and \(N=992\) for the hexagonal diamond. For each phase, we calculated the free energy of the 99 systems at \(\phi=0.64\), \(0.66\), \(0.68\), \(0.70\) and \(0.72\), and for each \(\phi\), the free energy surface was constructed by interpolating the 99 data points (Supplementary Fig. 4). The free energy of the crystal at \(\phi=0.62\) (Fig. 2a) was estimated by extrapolating the free energy surface plots of the five particle volume fractions studied.
Seed-assisted growth MC simulation
We conducted seed-assisted growth simulation at a fluid-solid coexistence state (\(\phi=0.58\) for \(a=1.20,\,{c}=2.16\) system). For the growth of the fivefold twinned cluster, we used a spherical cluster of a cubic diamond crystal (\(N=500\)) as a seed, and the total system size including the seed is \(N={{{{\mathrm{20,000}}}}}\). At a very dilute condition (\(\phi=0.01\)), we placed the seed at the center of simulation box and placed other particles around the seed without overlap. Fixing the seed particles in place, we compressed the simulation box to the target particle volume fraction. Once it reached the target particle volume fraction, we released the seed particles and equilibrated the system. For the growth of the icosahedral twinned cluster, we used a fivefold twinned cluster (\(N \sim {{{{\mathrm{1000}}}}}\)) as a seed, and the total system size including the seed is \(N={{{{\mathrm{20,000}}}}}\). The compression and equilibration protocol is the same in both cases.
Local volume fraction calculation
Seed-assisted growth simulations for fivefold and icosahedral twinned clusters were performed at coexistence between a fluid phase and a crystal phase, where the solid clusters are fully surrounded by fluid. To distinguish the two phases, we calculated the local particle volume fraction \({\phi }_{{{{{{\rm{loc}}}}}}}\), which is defined as the particle volume fraction around a particle within a certain radius, \({r}_{{{{{{{\rm{cut}}}}}}}}\). The value of \({r}_{{{{{{{\rm{cut}}}}}}}}\) in this study was taken to be 3 times the distance from a particle center to its nearest neighbor particle center, in order to properly average the local environment. TTs are identified as belonging to the fluid region if \({\phi }_{{{{{{\rm{loc}}}}}}}\le 0.60\) and to the crystal region if \({\phi }_{{{{{{\rm{loc}}}}}}} > 0.60\). We utilized the freud python library for this calculation42.
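A plain-numpy version of this classifier might look as follows (the production analysis used freud42); the orthorhombic minimum-image handling and the function names are our own simplifications.

```python
import numpy as np

def local_volume_fraction(positions, box, v0, r_cut):
    """phi_loc for each particle: occupied volume inside a sphere of radius r_cut."""
    pos = np.asarray(positions, dtype=float)
    box = np.asarray(box, dtype=float)            # orthorhombic box lengths (Lx, Ly, Lz)
    shell_volume = 4.0 / 3.0 * np.pi * r_cut**3
    phi_loc = np.empty(len(pos))
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        neighbors = np.count_nonzero(r < r_cut)   # includes particle i itself
        phi_loc[i] = neighbors * v0 / shell_volume
    return phi_loc

def classify(phi_loc, threshold=0.60):
    return np.where(phi_loc > threshold, "crystal", "fluid")
```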
Stability check of an icosahedral twinned cluster of hard spheres
We performed MC simulations of a hard sphere system to check the stability of an icosahedral twinned cluster of hard spheres at fluid-solid coexistence (\(\phi=0.515\)). We first constructed an icosahedral cluster (\(N=5971\)) with an ideal structure, following the method described in ref. 4. At a very dilute condition (\(\phi=0.01\)), we placed the ico-twin cluster at the center of simulation box and placed other hard spheres around the cluster without overlap. Temporarily "freezing" the ico-twin particles in place, we compressed the simulation box to the target particle volume fraction. Once it reached the target particle volume fraction, we released the ico-twin cluster particles and equilibrated the system. To check if the ico-twin cluster is stable in the coexistence phase, we monitored the bond-order diagram and the morphology of the ico-twin cluster during the simulation. We observed the icosahedral structure rapidly destabilize within \(5\times {10}^{5}\) MC steps (Fig. 4a).
Capillary fluctuation method
The fluid-solid interfacial free energy of hard TT (\(a=1.20,\,{c}=2.16\)) and hard sphere systems was calculated by the capillary fluctuation method34. We prepared systems in a fluid-solid coexistence state (\(\phi=0.582\) for hard TTs and \(\phi=0.515\) for hard spheres) containing a crystal, where the crystal of each system exposes different orientations toward a fluid phase: \((111)[\bar{1}10]\), \((100)[001]\), \((100)[\bar{1}10]\) and \((110)[001]\). Here, the crystal orientation is defined as \(({ijk})[{lmn}]\), where \(({ijk})\) is a crystal plane toward the fluid phase (\(z\)-direction in Fig. 4c, d) and \([{lmn}]\) is a crystal plane perpendicular to the fluid phase along the short direction (\(y\)-direction in Fig. 4c, d). The system size is \(N \sim {{{{\mathrm{40,000}}}}}\) for every system, and the shortest direction of the simulation box contains 2–3 unit cells (\(y\)-direction). At coexistence, we distinguished fluid and solid phases using the local volume fraction \({\phi }_{{{{{{\rm{loc}}}}}}}\) for the hard TT system (\({\phi }_{{{{{{\rm{loc}}}}}}} > 0.6\) for crystal phase) and the average bond-orientational order parameter42,43 \({\bar{q}}_{6}\) for the hard sphere system (\({\bar{q}}_{6} > 0.3\) for crystal phase).
For the calculation of the fluid-solid interfacial free energy, we followed the calculation process described in ref. 34. Briefly, we obtained the interfacial profile \(h(x)\) as a function of MC steps (Fig. 4e–g), which is then normalized by the circumscribed sphere radius of a particle. The values of \(h(x)\) were measured at a discrete set of points \({x}_{n}=n\Delta\), where \(n=1,\ldots,{N}\) and \(\Delta={{L}}/{{N}}\), where L is the length of crystal along \(x\)-direction. The Fourier modes \({h}_{q}\) are defined as \({h}_{q}=\frac{1}{N}{\sum }_{n=1}^{N}h({x}_{n}){{{{{{\rm{e}}}}}}}^{{{{{{{\rm{i}}}}}}q}{x}_{n}}\), where wave number \(q=\frac{2\pi k}{L}\) with \(k=1,\ldots,{N}\). From the equipartition theorem, the size of the capillary fluctuation modes is related to the interfacial stiffness \(\widetilde{\gamma }\) as follows44:
$$\left\langle {\left|{h}_{q}\right|}^{2}\right\rangle={k}_{B}T/A\widetilde{\gamma }{q}^{2}$$
where \(A\) is the area of the fluid-crystal interface. The \(\widetilde{\gamma }\) for different crystal orientations were calculated and averaged for various \(q\) and \(\varDelta x\) (Supplementary Fig. 6). The interfacial stiffness is related to the interfacial free energy \(\gamma\) by the formula:
$$\widetilde{\gamma }\left(\theta \right)=\gamma \left(\theta \right)+\frac{{{{{{{\rm{d}}}}}}}^{2}\gamma }{{{{{{\rm{d}}}}}}{\theta }^{2}}$$
where \(\theta\) is the angle between the average orientation of the interface and the local normal of the interface. Based on this relationship, the interfacial free energy can be obtained by comparing the following two equations:
$$\gamma \left({{{{{\boldsymbol{n}}}}}}\right)/{\gamma }_{0}=1+{A}_{1}{\epsilon }_{1}+{A}_{2}{\epsilon }_{2}$$
$$\widetilde{\gamma }\left({{{{{\boldsymbol{n}}}}}}\right)/{\gamma }_{0}=1+{B}_{1}{\epsilon }_{1}+{B}_{2}{\epsilon }_{2}$$
where \({{{{{\boldsymbol{n}}}}}}\) is the unit vector normal to the interfacial plane. The values of \({A}_{1}\), \({A}_{2}\), \({B}_{1}\) and \({B}_{2}\) for each interface direction is listed in Supplementary Table 1. Then, the interfacial free energy \(\gamma\) for each crystal plane was obtained using the coefficients (Supplementary Table 1) and the interfacial stiffness (Supplementary Fig. 6). The calculation results are listed in Supplementary Table 2.
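As a schematic of how the numbers in Supplementary Table 2 are obtained from the sampled profiles, the stiffness can be extracted with a discrete Fourier transform and a one-parameter fit of the equipartition relation above. The snippet is illustrative rather than the production analysis; the array layout and the number of fitted long-wavelength modes are assumptions.

```python
import numpy as np

def interfacial_stiffness(profiles, L, A, kBT=1.0, n_modes=8):
    """Fit <|h_q|^2> = kB*T / (A * stiffness * q^2) to the lowest Fourier modes.

    profiles : (n_samples, N) array of equilibrium interface heights h(x_n)
    L, A     : interface length along x and interface area
    """
    h = np.asarray(profiles, dtype=float)
    h = h - h.mean(axis=1, keepdims=True)          # remove the mean height h0
    N = h.shape[1]
    hq = np.fft.fft(h, axis=1) / N                 # h_q = (1/N) sum_n h(x_n) exp(i q x_n)
    k = np.arange(1, n_modes + 1)
    q = 2.0 * np.pi * k / L
    mean_sq = np.mean(np.abs(hq[:, 1:n_modes + 1])**2, axis=0)
    slope = np.sum(mean_sq / q**2) / np.sum(1.0 / q**4)   # least-squares fit of C/q^2
    return kBT / (A * slope)
```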
All data needed to evaluate the conclusions in this manuscript are present in the main text or the supplementary materials. Additional data are available upon request to the corresponding author.
Source code for HOOMD-Blue is available at https://github.com/glotzerlab/hoomd-blue.
Mackay, A. L. A dense non-crystallographic packing of equal spheres. Acta Crystallogr. 15, 916–918 (1962).
Hofmeister, H. Forty years study of fivefold twinned structures in small particles and thin films. Cryst. Res. Technol. 33, 3–25 (1998).
Buffat, P.-A., Flüeli, M., Spycher, R., Stadelmann, P. & Borel, J.-P. Crystallographic structure of small gold particles studied by high-resolution electron microscopy. Faraday Discuss. 92, 173–187 (1991).
Wang, J. et al. Magic number colloidal clusters as minimum free energy structures. Nat. Commun. 9, 5259 (2018).
Iijima, S. & Ichihashi, T. Structural instability of ultrafine particles of metals. Phys. Rev. Lett. 56, 616–619 (1986).
Langille, M. R., Zhang, J. & Mirkin, C. A. Plasmon-mediated synthesis of heterometallic nanorods and icosahedra. Angew. Chem. Int. Ed. 50, 3543–3547 (2011).
de Nijs, B. et al. Entropy-driven formation of large icosahedral colloidal clusters by spherical confinement. Nat. Mater. 14, 56–60 (2015).
Kobayashi, K. & Hogan, L. M. Fivefold twinned silicon crystals grown in an A1–16 wt.% Si melt. Philos. Mag. A 40, 399–407 (1979).
Shevchenko, V. Y. & Madison, A. E. Icosahedral diamond. Glas. Phys. Chem. 32, 118–121 (2006).
Wu, J. Y., Nagao, S., He, J. Y. & Zhang, Z. L. Role of five-fold twin boundary on the enhanced mechanical properties of fcc Fe nanowires. Nano Lett. 11, 5264–5273 (2011).
Wang, R. M. et al. Layer resolved structural relaxation at the surface of magnetic FePt icosahedral nanoparticles. Phys. Rev. Lett. 100, 017205 (2008).
Langille, M. R., Zhang, J., Personick, M. L., Li, S. & Mirkin, C. A. Stepwise evolution of spherical seeds into 20-fold twinned icosahedra. Science 337, 954–957 (2012).
He, D. S. et al. Ultrathin icosahedral Pt-enriched nanocage with excellent oxygen reduction reaction activity. J. Am. Chem. Soc. 138, 1494–1497 (2016).
Song, M. et al. Oriented attachment induces fivefold twins by forming and decomposing high-energy grain boundaries. Science 367, 40–45 (2020).
Chen, Y. et al. Morphology selection kinetics of crystallization in a sphere. Nat. Phys. 17, 121–127 (2021).
Seo, D. et al. Shape adjustment between multiply twinned and single-crystalline polyhedral gold nanocrystals: Decahedra, icosahedra, and truncated tetrahedra. J. Phys. Chem. C 112, 2469–2475 (2008).
Patala, S., Marks, L. D. & Olvera de la Cruz, M. Thermodynamic analysis of multiply twinned particles: Surface stress effects. J. Phys. Chem. Lett. 4, 3089–3094 (2013).
Xiong, Y., McLellan, J. M., Yin, Y. & Xia, Y. Synthesis of palladium icosahedra with twinned structure by blocking oxidative etching with citric acid or citrate ions. Angew. Chem. Int. Ed. 46, 790–794 (2007).
Frenkel, D. Order through entropy. Nat. Mater. 14, 9–12 (2015).
Damasceno, P. F., Engel, M. & Glotzer, S. C. Predictive self-assembly of polyhedra into complex structures. Science 337, 453–457 (2012).
Haji-Akbari, A. et al. Disordered, quasicrystalline and crystalline phases of densely packed tetrahedra. Nature 462, 773–777 (2009).
Lee, S., Teich, E. G., Engel, M. & Glotzer, S. C. Entropic colloidal crystallization pathways via fluid–fluid transitions and multidimensional prenucleation motifs. Proc. Natl Acad. Sci. USA 116, 14843–14851 (2019).
Wang, D. et al. Binary icosahedral clusters of hard spheres in spherical confinement. Nat. Phys. 17, 128–134 (2021).
Frenkel, D. & Ladd, A. J. C. New Monte Carlo method to compute the free energy of arbitrary solids. Application to the fcc and hcp phases of hard spheres. J. Chem. Phys. 81, 3188–3193 (1984).
Vo, T. & Glotzer, S. C. A theory of entropic bonding. Proc. Natl Acad. Sci. USA 119, e2116414119 (2022).
He, M. et al. Colloidal diamond. Nature 585, 524–529 (2020).
Damasceno, P. F., Engel, M. & Glotzer, S. C. Crystalline assemblies and densest packings of a family of truncated tetrahedra and the role of directional entropic forces. ACS Nano 6, 609–614 (2012).
Teich, E. G., van Anders, G. & Glotzer, S. C. Identity crisis in alchemical space drives the entropic colloidal glass transition. Nat. Commun. 10, 64 (2019).
Müller, A. et al. Changeable pore sizes allowing effective and specific recognition by a molybdenum-oxide based "nanosponge": En route to sphere-surface and nanoporous-cluster chemistry. Angew. Chem. Int. Ed. 41, 3604–3609 (2002).
Haji-Akbari, A., Engel, M. & Glotzer, S. C. Phase diagram of hard tetrahedra. J. Chem. Phys. 135, 194101 (2011).
Je, K., Lee, S., Teich, E. G., Engel, M. & Glotzer, S. C. Entropic formation of a thermodynamically stable colloidal quasicrystal with negligible phason strain. Proc. Natl Acad. Sci. 118, e2011799118 (2021).
Pelz, P. M. et al. Simultaneous successive twinning captured by atomic electron tomography. ACS Nano 16, 588–596 (2022).
Howie, A. & Marks, L. D. Elastic strains and the energy balance for multiply twinned particles. Philos. Mag. A 49, 95–109 (1984).
Davidchack, R. L., Morris, J. R. & Laird, B. B. The anisotropic hard-sphere crystal-melt interfacial free energy from fluctuations. J. Chem. Phys. 125, 094710 (2006).
Liu, L. et al. Shape control of CdSe nanocrystals with zinc blende structure. J. Am. Chem. Soc. 131, 16423–16429 (2009).
Boles, M. A. & Talapin, D. V. Self-assembly of tetrahedral CdSe nanocrystals: Effective "patchiness" via anisotropic steric interaction. J. Am. Chem. Soc. 136, 5868–5871 (2014).
Gong, Z., Hueckel, T., Yi, G.-R. & Sacanna, S. Patchy particles made by colloidal fusion. Nature 550, 234–238 (2017).
Nagaoka, Y. et al. Superstructures generated from truncated tetrahedral quantum dots. Nature 561, 378–382 (2018).
Chen, E. R., Klotsa, D., Engel, M., Damasceno, P. F. & Glotzer, S. C. Complexity in surfaces of densest packings for families of polyhedra. Phys. Rev. X 4, 011024 (2014).
Anderson, J. A., Eric Irrgang, M. & Glotzer, S. C. Scalable Metropolis Monte Carlo for simulation of hard shapes. Comput. Phys. Commun. 204, 21–30 (2016).
Anderson, J. A., Glaser, J. & Glotzer, S. C. HOOMD-blue: A Python package for high-performance molecular dynamics and hard particle Monte Carlo simulations. Comput. Mater. Sci. 173, 109363 (2020).
Ramasubramani, V. et al. freud: A software suite for high throughput analysis of particle simulation data. Comput. Phys. Commun. 254, 107275 (2020).
Steinhardt, P. J., Nelson, D. R. & Ronchetti, M. Bond-orientational order in liquids and glasses. Phys. Rev. B 28, 784–805 (1983).
Karma, A. Fluctuations in solidification. Phys. Rev. E 48, 3441–3458 (1993).
Towns, J. et al. XSEDE: Accelerating scientific discovery. Comput. Sci. Eng. 16, 62–74 (2014).
This work was supported as part of the Center for Bio-Inspired Energy Science, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # DE-SC0000989. This work used the Extreme Science and Engineering Discovery Environment (XSEDE)45, which is supported by National Science Foundation grant number ACI-1053575; XSEDE award DMR 140129. Computational resources and services also supported by Advanced Research Computing at the University of Michigan, Ann Arbor.
Sangmin Lee
Present address: Department of Biochemistry, University of Washington, Seattle, WA, USA
Department of Chemical Engineering, University of Michigan, Ann Arbor, MI, USA
Sangmin Lee & Sharon C. Glotzer
Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
Sharon C. Glotzer
S.C.G. directed the research. S.L. designed the study and performed all simulations. Both authors contributed to data analysis and manuscript preparation.
Correspondence to Sharon C. Glotzer.
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Lee, S., Glotzer, S.C. Entropically engineered formation of fivefold and icosahedral twinned clusters of colloidal shapes. Nat Commun 13, 7362 (2022). https://doi.org/10.1038/s41467-022-34891-5
Congruence modulo n is an equivalence relation on the integers. It partitions the integers into n equivalence classes. The equivalence classes are called the residues modulo n. Any member of a residue is a representative.
Arithmetic can be defined on residues.
Addition, Additive Inverse, and Multiplication
Sometimes we can perform operations on representatives of the residues using customary integer operations and the result will be a representative of the residue we would have got if we performed the corresponding operations on the residue classes.
This is true for addition and multiplication and we can use this to show that addition and multiplication on residues are both associative and commutative. They also observe the distributive law.
The additive identity is the residue class containing zero. The additive inverse of a residue exists and can be found by computing the additive inverse of a representative.
In summary, the residues modulo n are a commutative ring with identity and addition, negation, and multiplication can be calculated by doing the corresponding operations on representatives.
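A small Python check of these facts (the modulus 12 and the particular representatives are arbitrary choices):

```python
n = 12
a, a2 = 7, 7 + 3 * n       # two representatives of the same residue class
b = 5

assert (a + b) % n == (a2 + b) % n     # addition is well defined on residues
assert (a * b) % n == (a2 * b) % n     # multiplication is well defined on residues
assert (-a) % n == (n - a % n) % n     # additive inverse via a representative
print((a + b) % n, (a * b) % n, (-a) % n)   # 0 11 5
```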
Exponentiation
Exponentiation can be calculated using repeated multiplication, so the preceding section shows us that we can compute exponentiation on a residue by using a representative. Note that the base is a residue, but the exponent is an integer.
There is a shortcut for computing exponents due to Euler when a and n are relatively prime:
$$ \begin{align} a^{\phi(n)} \equiv 1 \;(\text{mod}\;n) \end{align} $$
The congruence allows us to compute any exponent with fewer than φ(n) multiplications.
The function φ is called Euler's totient, and it counts the positive integers less than or equal to n which are relatively prime to n. An expression for the totient is
$$ \begin{align} \phi(n) = n \prod_{p | n} \frac{p - 1}{p} \end{align} $$
where the product is over all primes that divide n. When n is itself prime, Euler's Theorem reduces to Fermat's Little Theorem:
$$ \begin{align} a^{p-1} \equiv 1 \;(\text{mod}\; p) \end{align} $$
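A Python sketch of the totient, computed from the prime factorization as in the product formula above, and of the exponent reduction it allows; pow(base, exp, mod) is Python's built-in modular exponentiation. The numbers in the example are arbitrary.

```python
from math import gcd

def totient(n):
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p      # multiply result by (p - 1)/p
        p += 1
    if m > 1:
        result -= result // m
    return result

n, a, k = 35, 2, 1003
assert gcd(a, n) == 1
assert pow(a, k, n) == pow(a, k % totient(n), n)   # exponent reduced mod phi(n) = 24
```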
Multiplicative Inverse and Division
Division on the integers is often not defined. An integer a is divisible by b if there is another integer m such that:
$$ \begin{align} a = mb \end{align} $$
Multiplicative inverses for most integers don't exist. The exceptions are 1 and -1, which are their own inverses.
For residues, the situation is better. A nonzero residue a modulo n has a multiplicative inverse if and only if a and n are relatively prime. The extended Euclidean algorithm will find x and y such that ax + ny = 1, and thus x is the multiplicative inverse of a.
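A Python version of the extended Euclidean algorithm and the inverse it yields:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError("a and n are not relatively prime")
    return x % n

assert (3 * mod_inverse(3, 10)) % 10 == 1    # the inverse of 3 modulo 10 is 7
```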
If the modulus n is prime, then all nonzero residues have multiplicative inverses and the residues are a field.
If n is not prime, the residues are not an integral domain and cancellation of a nonzero factor is not always possible. If d is the greatest common divisor of a and n, then for all z and z' we have this result:
$$ \begin{align} az \equiv az' \;(\text{mod}\; n) \iff z \equiv z' \;\left(\text{mod}\; \frac{n}{d}\right) \end{align} $$
If a and n are relatively prime, which means d is 1, we can cancel a from both sides of the equation.
Square Roots
Non-zero complex numbers have two square roots in the complex numbers.
Positive real numbers have two square roots in the real numbers and negative real numbers have none.
Positive integers have two square roots in the integers if they are perfect squares, otherwise they have none.
The single non-zero residue modulo 2 has one square root.
Non-zero residues modulo an odd prime p (p > 2) have either two square roots or none. In the former case, the residue is said to be a quadratic residue and in the latter case a quadratic nonresidue. Determining whether a residue is a quadratic residue is complicated. The following notation, called the Legendre symbol, is used when p is an odd prime:
$$ \begin{align} \left( \frac{a}{p} \right) = \begin{cases} \;\; 1 \;\;\; a \; \text{is a quadratic residue} \\ \;\; 0 \;\;\; p \mid a \\ -1 \;\;\; a \; \text{is a quadratic nonresidue} \end{cases} \end{align} $$
The following always hold:
$$ \begin{align} \left( \frac{1}{p} \right) = 1 \\ \left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}} \\ \left( \frac{2}{p} \right) = (-1)^{\frac{p^2 - 1}{8}} \\ \left( \frac{a}{p} \right) \left( \frac{b}{p} \right) = \left( \frac{ab}{p} \right) \end{align} $$
When a and p are relatively prime then
$$ \begin{align} \left(\frac{a^2}{p} \right) = 1 \end{align} $$
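One standard way to evaluate the Legendre symbol in code is Euler's criterion, (a/p) ≡ a^((p-1)/2) (mod p) for an odd prime p; the criterion is not stated above but is classical.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

assert legendre(4, 7) == 1      # 4 = 2^2 is a quadratic residue mod 7
assert legendre(3, 5) == -1     # matches the reciprocity examples below
```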
The Jacobi symbol is a generalization of the Legendre symbol where p is replaced by any positive odd integer n. The Kronecker symbol is a further generalization where n can be any non-zero integer.
The quadratic reciprocity law states that for odd primes p and q which are both of the form 4k + 3, exactly one of the following congruences has a solution:
$$ \begin{align} x^2 \equiv q \;(\text{mod}\;p) \end{align} $$
$$ \begin{align} x^2 \equiv p \;(\text{mod}\;q) \end{align} $$
Moreover, if q and p are odd primes not both of the form 4k + 3, then both congruences are solvable or both congruences are not.
How the quadratic reciprocity law is used:
$$ \begin{align} \left(\frac{3}{5}\right) = \left(\frac{5}{3}\right) = \left(\frac{2}{3}\right) = (-1)^{\frac{3^2 - 1}{8}} = -1 \end{align} $$
$$ \begin{align} \left(\frac{3}{7}\right) = -\left(\frac{7}{3}\right) = -\left(\frac{1}{3}\right) = -1 \end{align} $$
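The reciprocity law, together with the rules for (-1/p) and (2/p), also gives the usual iterative algorithm for the Jacobi symbol (and hence the Legendre symbol when the lower argument is prime):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0; equals the Legendre symbol for prime n."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                 # factor out 2s using the rule for (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                       # reciprocity: sign flips if both are 3 mod 4
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

assert jacobi(3, 5) == -1 and jacobi(3, 7) == -1    # agrees with the examples above
```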
If a is a quadratic residue of an odd prime p congruent to 3 modulo 4, then its square roots are
$$ \begin{align} x \equiv \pm a^{(p+1)/4} \;(\text{mod}\;p) \end{align} $$
If a is a quadratic residue of an odd prime p congruent to 5 modulo 8, then its square roots are given by one of
$$ \begin{align} x \equiv \begin{cases} \pm a^{(p+3)/8} \;(\text{mod}\;p) \\ \pm a^{(p+3)/8}\, 2^{(p-1)/4} \;(\text{mod}\;p) \end{cases} \end{align} $$
There is no known formula when a is a quadratic residue of an odd prime p congruent to 1 modulo 8. The Tonelli-Shanks algorithm offers a method for finding a solution, however.
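A Python sketch combining the p ≡ 3 (mod 4) shortcut above with Tonelli-Shanks for the remaining primes; the nonresidue search and variable names follow the usual textbook presentation.

```python
def mod_sqrt(a, p):
    """One square root of a modulo an odd prime p (the other is p - x),
    or None if a is a quadratic nonresidue."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:          # Euler's criterion: no square root
        return None
    if p % 4 == 3:
        return pow(a, (p + 1) // 4, p)
    q, s = p - 1, 0                           # write p - 1 = q * 2^s with q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:   # find any quadratic nonresidue z
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                        # smallest i with t^(2^i) = 1
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

assert mod_sqrt(2, 7) in (3, 4)       # 3^2 = 9 ≡ 2 (mod 7), p ≡ 3 (mod 4) branch
assert mod_sqrt(5, 41) in (13, 28)    # 41 ≡ 1 (mod 8), Tonelli-Shanks branch
```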
Finding square roots is a special case of finding the roots of a polynomial. If f(x) is a polynomial with integer coefficients, then Hensel's lemma specifies conditions under which we can find roots for f(x) modulo p^(k+m), m ≤ k, when we know a root for f(x) modulo p^k. In particular, suppose that
$$ \begin{align} f(r) \equiv 0 \;(\text{mod}\;p^k) \;\;\;\textrm{and}\;\;\; f'(r) \not\equiv 0 \;(\text{mod}\;p) \end{align} $$
$$ \begin{align} s = r - f(r) \cdot a \;\;\;\textrm{for any}\;a\;\textrm{such that}\;\;\; a \equiv [f'(r)]^{-1} \;(\text{mod}\;p^m) \end{align} $$
Then
$$ \begin{align} f(s) \equiv 0 \;(\text{mod}\;p^{k+m}) \;\;\;\textrm{and}\;\;\; r \equiv s \;(\text{mod}\;p^k) \end{align} $$
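A direct transcription of the lifting step into Python (pow(x, -1, m) for the modular inverse needs Python 3.8 or later); the example lifts a square root of 7 from modulo 3 to modulo 9.

```python
def hensel_lift(f, fprime, r, p, k, m):
    """Lift a root r of f modulo p**k to a root s modulo p**(k+m), m <= k."""
    assert m <= k
    assert f(r) % p**k == 0 and fprime(r) % p != 0
    a = pow(fprime(r), -1, p**m)              # a ≡ [f'(r)]^{-1} (mod p^m)
    s = (r - f(r) * a) % p**(k + m)
    assert f(s) % p**(k + m) == 0 and (s - r) % p**k == 0
    return s

f = lambda x: x**2 - 7
fprime = lambda x: 2 * x
print(hensel_lift(f, fprime, 1, p=3, k=1, m=1))   # 4, and 4**2 - 7 = 9 ≡ 0 (mod 9)
```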
what if f'(r) is zero
general case when n is composite
quadratic residuosity problem
Discrete Logarithm
The discrete log of g base b, where both g and b are residues modulo n, is an integer k such that b^k = g. If n is prime, then the multiplicative group is cyclic; a solution exists for every nonzero g exactly when b is a generator (primitive root), and in general a solution exists precisely when g lies in the subgroup generated by b. A brute force search has run time which is linear in the size of the multiplicative group, or exponential in the number of digits in the size of the multiplicative group. Better algorithms exist, but none are polynomial in the number of digits in the size of the multiplicative group.
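One of those better algorithms is baby-step giant-step, which runs in roughly the square root of the group size (still exponential in the number of digits). The sketch below assumes gcd(b, n) = 1 and uses the Python 3.8+ modular inverse via pow.

```python
from math import isqrt

def discrete_log(b, g, n):
    """Smallest k >= 0 with b**k ≡ g (mod n), or None if no solution exists."""
    m = isqrt(n) + 1
    baby = {}
    e = 1
    for j in range(m):                 # baby steps: remember b^j
        baby.setdefault(e, j)
        e = e * b % n
    giant = pow(b, -m, n)              # b^{-m} (mod n)
    gamma = g % n
    for i in range(m):                 # giant steps: test g * b^{-i*m}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % n
    return None

assert pow(3, discrete_log(3, 13, 17), 17) == 13    # 3 is a primitive root mod 17
```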
Multiple Equations
If we have multiple equations with the same modulus, we can use substitution to find a solution.
If the moduli on the equations are different, the Chinese Remainder Theorem tells us there is a solution under certain conditions. In particular there is a solution a to the following system of equations, provided that the ni are all pairwise relatively prime:
$$ \begin{align} a \equiv a_i \mod n_i \;\;\; (i = 1,\ldots,k) \end{align} $$
Moreover, if a and a' are two solutions, then:
$$ \begin{align} a \equiv a' \mod \prod_{i=1}^k n_i \end{align} $$
The way to find a solution is by repeated application of the extended Euclidean algorithm. Find m1 and m2 such that
$$ \begin{align} m_1 n_1 + m_2 n_2 = 1 \end{align} $$
$$ \begin{align} x = a_1 m_2 n_2 + a_2 m_1 n_1 \end{align} $$
will be a solution to the first two equations. These can be replaced by
$$ \begin{align} x \equiv a_{1,2} \;(\text{mod}\;n_1 n_2) \end{align} $$
to get a system of k - 1 equations.
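A Python sketch of this construction for a list of congruences, folding the two-equation step over the list; pow(x, -1, m) (Python 3.8+) plays the role of the Bezout coefficients found by the extended Euclidean algorithm.

```python
def crt_pair(a1, n1, a2, n2):
    """Combine x ≡ a1 (mod n1) and x ≡ a2 (mod n2) for coprime n1, n2."""
    m2 = pow(n2, -1, n1)        # m2 * n2 ≡ 1 (mod n1)
    m1 = pow(n1, -1, n2)        # m1 * n1 ≡ 1 (mod n2)
    n = n1 * n2
    return (a1 * m2 * n2 + a2 * m1 * n1) % n, n

def crt(pairs):
    """pairs = [(a_1, n_1), ..., (a_k, n_k)] with pairwise coprime moduli."""
    a, n = pairs[0]
    for a_i, n_i in pairs[1:]:
        a, n = crt_pair(a, n, a_i, n_i)
    return a, n

assert crt([(2, 3), (3, 5), (2, 7)]) == (23, 105)   # x ≡ 23 (mod 105)
```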
Clinical analysis of neuropsychiatric systemic lupus erythematosus involving the central nervous system
ZHANG Zhen, XIAO Lan, ZENG Qiu-ming, LI Shu-yu
Chinese Journal of Contemporary Neurology and Neurosurgery , 2013, DOI: 10.3969/j.issn.1672-6731.2013.01.010
Abstract: Background Neuropsychiatric systemic lupus erythematosus (NP-SLE) presents with a wide variety of clinical manifestations, which is often difficult to diagnose with a high mortality. This study aims to investigate the clinical features of NP-SLE involving the central nervous system (CNS) and the differential diagnoses between CNS NP-SLE and intracranial infections. Methods The clinical manifestations, serum immunological features, cerebrospinal fluid (CSF) examinations (including intracranial pressure, leukocyte count, protein, glucose and chloride), CT and (or) MRI and electroencephalogram (EEG) data of 23 NP-SLE patients with CNS involved were retrospectively reviewed. Results Nine patients presented with diffuse manifestations, while 14 patients presented with focal manifestations. Serum analysis showed the positive rates of immunoglobulins anti-nuclear antibody (ANA), anti-double stranded DNA antibody (dsDNA), anti-Sm, anti-ribosmal P protein, anti-SSA and anti-SSB antibodies were 21/22, 7/22, 1/14, 2/14, 9/14 and 3/14 respectively. Patients with decreased serum C3 accounted for 14/20 while patients with decreased serum C4 accounted for 5/20. Besides, patients with increased CSF leukocyte count and microalbumin took up 5/12 and 7/12, while patients with decreased glucose and chloride levels took up 5/12 and 6/12. All 23 patients presented abnormal CT and (or) MRI and 6 patients presented abnormal EEG. Conclusion Serum immunological levels, CT and (or) MRI and EEG examinations contributed to the diagnosis of NP-SLE involving CNS. Although CSF analyses were slightly abnormal, the increase of leukocyte count and average microalbumin was not obvious, and the mean values of glucose and chloride were in the normal range, suggesting that the CSF examinations were helpful for the differential diagnoses from intracranial infections. Glucocorticoids and immunosuppressive drugs were remarkably effective for CNS NP-SLE patients.
Generalized $\Cal{L}$-geodesic and monotonicity of the generalized reduced volume in the Ricci flow
Shu-Yu Hsu
Mathematics , 2006,
Abstract: Suppose $M$ is a complete n-dimensional manifold, $n\ge 2$, with a metric $\bar{g}_{ij}(x,t)$ that evolves by the Ricci flow $\partial_t \bar{g}_{ij}=-2\bar{R}_{ij}$ in $M\times (0,T)$. For any $0<p<1$, $(p_0,t_0)\in M\times (0,T)$, $q\in M$, we define the $\Cal{L}_p$-length between $p_0$ and $q$, $\Cal{L}_p$-geodesic, the generalized reduced distance $l_p$ and the generalized reduced volume $\widetilde{V}_p(\tau)$, $\tau=t_0-t$, corresponding to the $\Cal{L}_p$-geodesic at the point $p_0$ at time $t_0$. Under the condition $\bar{R}_{ij}\ge -c_1\bar{g}_{ij}$ on $M\times (0,t_0)$ for some constant $c_1>0$, we will prove the existence of a $\Cal{L}_p$-geodesic which minimize the $\Cal{L}_p(q,\bar{\tau})$-length between $p_0$ and $q$ for any $\bar{\tau}>0$. This result for the case $p=1/2$ is conjectured and used many times but no proof of it was given in Perelman's papers on Ricci flow. My result is new and answers in affirmative the existence of such $\Cal{L}$-geodesic minimizer for the $L_p(q,\tau)$-length which is crucial to the proof of many results in Perelman's papers on Ricci flow. We also obtain many other properties of the generalized $\Cal{L}_p$-geodesic and generalized reduced volume.
An elementary proof of the convergence of Ricci flow on compact surfaces
Abstract: This paper has been withdrawn by the author for further modification.
Uniform Sobolev inequalities for manifolds evolving by Ricci flow
Abstract: Let M be a compact n-dimensional manifold, $n\ge 2$, with metric g(t) evolving by the Ricci flow $\partial g_{ij}/\partial t=-2R_{ij}$ in (0,T) for some $T\in\Bbb{R}^+\cup\{\infty\}$ with $g(0)=g_0$. Let $\lambda_0(g_0)$ be the first eigenvalue of the operator $-\Delta_{g_0} +\frac{R(g_0)}{4}$ with respect to g_0. We extend a recent result of R. Ye and prove uniform logarithmic Sobolev inequality and uniform Sobolev inequalities along the Ricci flow for any $n\ge 2$ when either $T<\infty$ or $\lambda_0(g_0)>0$. As a consequence we extend Perelman's local $\kappa$-noncollapsing result along the Ricci flow for any $n\ge 2$ in terms of upper bound for the scalar curvature when either $T<\infty$ or $\lambda_0(g_0)>0$.
Some results for the Perelman LYH-type inequality
Abstract: Let $(M,g(t))$, $0\le t\le T$, $\partial M\ne\phi$, be a compact $n$-dimensional manifold, $n\ge 2$, with metric $g(t)$ evolving by the Ricci flow such that the second fundamental form of $\partial M$ with respect to the unit outward normal of $\partial M$ is uniformly bounded below on $\partial M\times [0,T]$. We will prove a global Li-Yau gradient estimate for the solution of the generalized conjugate heat equation on $M\times [0,T]$. We will give another proof of Perelman's Li-Yau-Hamilton type inequality for the fundamental solution of the conjugate heat equation on closed manifolds without using the properties of the reduced distance. We will also prove various gradient estimates for the Dirichlet fundamental solution of the conjugate heat equation.
Maximum principle and convergence of fundamental solutions for the Ricci flow
Abstract: In this paper we will prove a maximum principle for the solutions of linear parabolic equation on complete non-compact manifolds with a time varying metric. We will prove the convergence of the Neumann Green function of the conjugate heat equation for the Ricci flow in $B_k\times (0,T)$ to the minimal fundamental solution of the conjugate heat equation as $k\to\infty$. We will prove the uniqueness of the fundamental solution under some exponential decay assumption on the fundamental solution. We will also give a detail proof of the convergence of the fundamental solutions of the conjugate heat equation for a sequence of pointed Ricci flow $(M_k\times (-\alpha,0],x_k,g_k)$ to the fundamental solution of the limit manifold as $k\to\infty$ which was used without proof by Perelman in his proof of the pseudolocality theorem for Ricci flow.
Simple proofs of some results of Perelman on Ricci flow
Another proof of Ricci flow on incomplete surfaces with bounded above Gauss curvature
Abstract: We give a simple proof of an extension of the existence results of Ricci flow of G.Giesen and P.M.Topping [GiT1],[GiT2], on incomplete surfaces with bounded above Gauss curvature without using the difficult Shi's existence theorem of Ricci flow on complete non-compact surfaces and the pseudolocality theorem of G.Perelman [P1] on Ricci flow. We will also give a simple proof of a special case of the existence theorem of P.M.Topping [T] without using the existence theorem of W.X.Shi [S1].
Singular limit and exact decay rate of a nonlinear elliptic equation
Abstract: For any $n\ge 3$, $0<m<\frac{n-2}{n}$, $\eta>0$, $\beta>0$, $\alpha$, satisfying $\alpha\le\beta(n-2)/m$, we prove the existence of a radially symmetric solution of $\frac{n-1}{m}\Delta v^m+\alpha v +\beta x\cdot\nabla v=0$, $v>0$, in $\R^n$, $v(0)=\eta$, without using the phase plane method. When $0<m<(n-2)/n$, $n\ge 3$, and $\alpha=2\beta/(1-m)>0$, we prove that the radially symmetric solution $v$ of the above elliptic equation satisfies $\lim_{|x|\to\infty}\frac{|x|^2v(x)^{1-m}}{\log |x|} =\frac{2(n-1)(n-2-nm)}{\beta(1-m)}$. In particular when $m=\frac{n-2}{n+2}$, $n\ge 3$, and $\alpha=2\beta/(1-m)>0$, the metric $g_{ij}=v^{\frac{4}{n+2}}dx^2$ is the steady soliton solution of the Yamabe flow on $\R^n$ and we obtain $\lim_{|x|\to\infty}\frac{|x|^2v(x)^{1-m}}{\log |x|}=\frac{(n-1)(n-2)}{\beta}$. When $0<m<(n-2)/n$, $n\ge 3$, and $2\beta/(1-m)>\max (\alpha,0)$, we prove that $\lim_{|x|\to\infty}|x|^{\alpha/\beta}v(x)=A$ for some constant $A>0$. For $\beta>0$ or $\alpha=0$, we prove that the radially symmetric solution $v^{(m)}$ of the above elliptic equation converges uniformly on every compact subset of $\R^n$ to the solution $u$ of the equation $(n-1)\Delta\log u+\alpha u+\beta x\cdot\nabla u=0$, $u>0$, in $\R^n$, $u(0)=\eta$, as $m\to 0$.
A note on compact gradient Yamabe solitons
Abstract: We will give a simple proof that the metric of any compact Yamabe gradient soliton (M,g) is a metric of constant scalar curvature when the dimension of the manifold n>2.
Ar/Cl2 etching of GaAs optomechanical microdisks fabricated with positive electroresist
Rodrigo Benevides,1,2 Michaël Ménard,3 Gustavo S. Wiederhecker,1,2 and Thiago P. Mayer Alegre1,2,*
1Applied Physics Department, Gleb Wataghin Physics Institute, University of Campinas, 13083-859 Campinas, SP, Brazil
2Photonics Research Center, University of Campinas, Campinas 13083-859, SP, Brazil
3Department of Computer Science, Université du Québec à Montréal, Montréal, QC H2X 3Y7, Canada
*Corresponding author: [email protected]
https://doi.org/10.1364/OME.10.000057
Rodrigo Benevides, Michaël Ménard, Gustavo S. Wiederhecker, and Thiago P. Mayer Alegre, "Ar/Cl2 etching of GaAs optomechanical microdisks fabricated with positive electroresist," Opt. Mater. Express 10, 57-67 (2020)
A method to fabricate GaAs microcavities using only a soft mask with an electrolithographic pattern in an inductively coupled plasma etching is presented. A careful characterization of the fabrication process pinpointing the main routes for a smooth device sidewall is discussed. Using the final recipe, optomechanical microdisk resonators are fabricated. The results show very high optical quality factors of Q_opt > 2 × 10^5, among the largest already reported for dry-etching devices. The final devices are also shown to present high mechanical quality factors and an optomechanical vacuum coupling constant of g_0 = 2π × 13.6 kHz enabling self-sustainable mechanical oscillations for an optical input power above 1 mW.
M. A. Zhang, G. S. Wiederhecker, S. Manipatruni, A. Barnard, P. McEuen, and M. Lipson, "Synchronization of Micromechanical Oscillators Using Light," Phys. Rev. Lett. 109(23), 233906 (2012).
G.-J. Qiao, H.-X. Gao, H.-D. Liu, and X. X. Yi, "Quantum synchronization of two mechanical oscillators in coupled optomechanical systems with Kerr nonlinearity," Sci. Rep. 8(1), 15614 (2018).
Fig. 1. Fabrication procedure. Over a MBE-epitaxy grown GaAs/AlGaAs wafer (250 nm/2000 nm)(a), an electroresist layer (500 nm) is spun (b). Electron beam lithography features the resist (c), with transfer to GaAs layer done with ICP-etching (d). The resist is removed (e) and a wet HF-release with post-cleaning is performed, yielding a suspended disk (f).
Fig. 2. Electroresist edge profiles. Different wall slopes are obtained by changing the electron beam exposure dose from a) 45 µC$/$cm$^{2}$, to b) 60 µC$/$cm$^{2}$, to c) 75 µC$/$cm$^{2}$. All images were taken at 30 kV using a secondary electron detector. The resist layer is false-colored red.
Fig. 3. Reflow process. Microdisks fabricated with resist reflowed at different temperatures: a) without reflow, b) $140^{\textrm {o}}$ C, c) $160^{\textrm {o}}$ C and d) $180^{\textrm {o}}$ C for 2 minutes on a hot plate. Etching parameters were gas flow Ar/Cl$_{2}=12/8$ sccm, RF power $=150$ W, ICP power $= 210$ W, and chamber pressure $= 4.5$ mTorr. All images were taken at 20 kV using a secondary electron detector. The top resist layer is false-colored red; the brightness contrast between the top GaAs layer and AlGaAs is intrinsic to the SEM image. The image in (c) is also shown in Fig. 4(b) and Fig. 5(a), for easier comparison.
Fig. 4. Gas flow. Sidewall profiles of etched disks for Ar/Cl$_{2}$ flow of a) $16/4$ sccm, b) 12/8 sccm, c) 8/12 sccm and d) 4/16 sccm. Etching parameters were RF power $=150$ W, ICP power $= 210$ W and chamber pressure $= 4.5$ mTorr, with a $2$ minute resist reflow at $160^{\circ }$ C. The etch durations were 90 s for a) and b), 35 s for c) and $25$ s for d). All images were taken at $20$ kV using a secondary electron detector and false-colored. The image in b) is also shown in Fig. 3(c) and Fig. 5(a), for easier comparison.
Fig. 5. Chamber pressure. SEM images of microdisks etched at chamber pressures of a) $4.5$ mTorr, b) $6.0$ mTorr, c) $7.5$ mTorr and d) $9.0$ mTorr. Changes in sidewall roughness, angle and depth can be observed. These devices were fabricated with Ar/Cl$_2$ flow = $12/8$ sccm, RF power = $150$ W, ICP power = $210$ W and a resist reflow process of $2$ minutes at $160^{\textrm {o}}$ C. The image in a) is also shown in Fig. 3(c) and Fig. 4(b), for easier comparison.
Fig. 6. a) Experimental setup used to characterize optomechanical disks. $\phi$-mod, PD, DAQ, MZ, Acet., FPD and ESA stand for phase modulator, photodetector, analog-digital converter, Mach-Zehnder interferometer, acetylene cell, fast photodetector and electrical spectrum analyzer, respectively. b) Broadband spectrum of a $10\;\mu$m radius disk. The fitted intrinsic quality factor is $Q_{\textrm {opt}}=2.0\times 10^{5}$. c) Microscope image of the fabricated sample; TPL stands for the taper parking lot used to stabilize the tapered fiber position. d) Optical modes of a $10\;\mu$m radius disk, with intrinsic $Q_{\textrm {opt}} = 1.55\times 10^{5}$ and $Q_{\textrm {opt}} = 2.0\times 10^{5}$ respectively. e) Intrinsic optical quality factors for a cavity with (blue bars) and a cavity without resist reflow (orange bars). We see that the optimized recipe shows higher quality factors.
Fig. 7. a) Mechanical mode at $\Omega _{\textrm {m}} = 2\pi \times 370$ MHz observed for a 3.6 µm radius disk, with a quality factor of $Q_{\textrm {mec}} = 760$. A phase modulator calibration tone can be seen at $374$ MHz, yielding an optomechanical coupling rate of $g_0=2\pi \times 13.6$ kHz. The inset shows a Finite Element Simulation (FEM) of normalized displacement profile of the fundamental mechanical breathing mode. b) Self-sustained oscillation of the fundamental mechanical mode is shown with a peak of more than $50$ dB above the noise floor.
Table 1. Etching rate dependence on gas flow.

Ar/Cl$_2$ flow (sccm)   GaAs/AlGaAs rate (nm/s)   Resist rate (nm/s)
16/4                    5.9                       3.7
12/8                    10.2                      5.2
8/12                    22.8                      8.4

Table 2. Etching rate dependence on chamber pressure.

Pressure (mTorr) | CommonCrawl
www.johngiovannis.com
Rekindling a curiosity in maths, physics with a touch of linux
Distance to the horizon
Submitted by johng on 16 March, 2012 - 17:29
Have you ever wondered how far you can see to the horizon from an elevated position? Using simple trigonometry the distance to the horizon along the Earth's surface can be easily determined. Two cases are considered: (i) light travels in a straight line and (ii) light travels along a curved path as a result of atmospheric refraction. In both cases it will be assumed that the Earth is a sphere, resulting in a circular cross section when examining the problem in two dimensions.
Simple case
Consider the case of an observer standing on an elevated position such as a lighthouse viewing platform. In figure 1 below there is a clear and unobstructed view of the sea to the right of the lighthouse. The furthest distance visible from the observer to the horizon (which is represented as the quantity s) can be uniquely determined by the beam of light shown as a dashed line in Figure 1. The straight beam of light meets the horizon on the Earth's surface out to sea at a tangent. Given the height of the lighthouse and the radius of the Earth, the problem is reduced to a case of calculating the arc length s shown in Figure 1.
The geometry can be examined in more detail in Figure 2. In the triangle PHO, the following points are defined:
P - the peak or viewing platform
H - the point marking the horizon. This is the furthest point visible from P on the Earth's surface
O - the center of the earth
In addition the following distances are defined:
s is the arc length along the Earth's surface from the base of the observer at P to the horizon point H
h is the height of the observer above sea level
r is the radius of the Earth
$\theta$ - the angle at the center of the Earth, defined by the arc length s
A tangent to a circle touches the circle at only one point and is at right angles to the radius at the point of contact. PH and OH form a 90 degree angle in figure 2. Therefore PHO is a right-angled triangle with the following quantities:
PO = r + h
OH = r
Using elementary trigonometry in triangle PHO, the cosine of the angle $\theta$ is expressed as:
$$\cos(\theta) = \frac{r}{r+h} \qquad_{...(1)}$$
The arc length is easily calculated as
$$s = r \theta \qquad_{...(2)}$$ where $\theta$ is measured in radians. Substituting the expression for $\theta$ from equation (1) into (2), the following is obtained:
$$s = r \arccos \left( \frac{r}{r+h} \right)\qquad_{...(3)}$$
Factoring out $r$ from the numerator and denominator, the arc length along the Earth's surface can be expressed as the ratio $h/r$.
$$ s = r \arccos \left[ \frac{1}{1+\frac{h}{r}} \right] \qquad_{...(4)}$$
Examining the limits of equation (4) is a useful check to see if the expected results are obtained in two extreme cases.
At h=0, s=0. An observer lying on the ground will not be able to see any distance along the horizon
At an infinite height above the Earth's surface ($h \to \infty$), we would expect to see the full face of the Earth. In equation (4), the expression in arccos approaches zero, and $\arccos(0) = \pi / 2$. Therefore s approaches $\pi r/2$. This is equivalent to a quarter of the circumference of a circle with radius r. Taking into account the symmetry (see figure 2), the total arc length on either side is $\pi r$
For practical applications equation (4) is cumbersome. The expression for $\theta$ in equation (1) can be simplified if one assumes that the height of the observer above sea level is a small fraction of the radius of the Earth, i.e. h/r is a small quantity. Indeed this condition is true for most terrestrial situations, such as calculating the distance to the horizon from a mountain, lighthouse or ship.
In mathematics any sufficiently smooth function can be represented as a series of polynomials known as a Taylor expansion. In the case of $\cos(\theta)$ the Taylor series is:
$$\cos(\theta) = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \frac{\theta^6}{6!} + ...$$
The valid assumption that h/r is small also implies that $\theta$ will be a small angle. As a result the first two terms are sufficient to approximate $\cos(\theta)$. Rewriting equation (1):
$$1 - \frac{\theta^2}{2} = \frac{r}{r+h} = \frac{1}{1+\frac{h}{r}}$$
Multiplying the numerator and denominator of the right side of the equation by $1 - \frac{h}{r}$ the following expression is obtained
$$1 - \frac{\theta^2}{2} = \frac{1 - \frac{h}{r}}{1-\frac{h^2}{r^2}}$$
Taking into account $h / r$ << 1, then $\frac{h^2}{r^2} \approx 0$, therefore
$$\frac{\theta^2}{2} \approx \frac{h}{r}$$
$$\theta \approx \sqrt{\frac{2h}{r}}$$
Substituting $\theta$ into equation (2) we get:
$$s \approx r\sqrt{\frac{2h}{r}} = \sqrt{2hr} \qquad_{.....(5)}$$
The resulting equation will work provided the height (h), the radius of the Earth (r) and the distance to the horizon (s) are all in the same units of length. A conversion of units is required to produce a more practical result that will work in both metric and imperial units of measurement.
Equation (5) can be rewritten where the height is specified in meters and the arc length is expressed in km. Refer to Metric and Imperial Units in the Appendix:
$$s_{km} = 3.57 \sqrt{h_{m}} \qquad_{.....(6)}$$
The imperial unit equivalent is:
$$s_{miles} = 1.23 \sqrt{h_{feet}} \qquad_{.....(7)}$$
The resulting equations are in close agreement with other sources: $3.57 \sqrt{h}$ for metric units and $1.22 \sqrt{h}$ for imperial units - (see reference (2))
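Equations (3), (6) and (7) are easy to check numerically. Below is a minimal sketch in Python; the function names and the use of the standard math module are mine and not part of the original post.

```python
import math

R_EARTH_KM = 6378.0  # radius of the Earth used throughout this post

def horizon_exact_km(h_m):
    """Exact arc length to the horizon, equation (3); height in metres, result in km."""
    h_km = h_m / 1000.0
    return R_EARTH_KM * math.acos(R_EARTH_KM / (R_EARTH_KM + h_km))

def horizon_approx_km(h_m):
    """Small-height approximation, equation (6): s ~ 3.57 * sqrt(h in metres)."""
    return 3.57 * math.sqrt(h_m)

def horizon_approx_miles(h_ft):
    """Imperial version, equation (7): s ~ 1.23 * sqrt(h in feet)."""
    return 1.23 * math.sqrt(h_ft)

for h in (10, 100, 1000):
    print(h, round(horizon_exact_km(h), 1), round(horizon_approx_km(h), 1))
```

For heights from a few metres up to several kilometres the exact and approximate values agree to the number of digits shown, consistent with the comparison tables further below.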
Refraction
In the simple case light travels in a straight line. However light traveling through a medium such as the Earth's atmosphere will deviate from a straight line due to atmospheric refraction. The amount of refraction will depend on a number of factors such as temperature, air pressure and density. In the case of the Earth's atmosphere, the refractive index of air (a measure of how light is refracted) will be a function of height above the Earth's surface. One would expect the effect of refraction to diminish at greater heights as the atmosphere thins out into empty space.
To treat the case of refraction accurately requires a detailed model of the atmosphere, which goes beyond the scope of this article. However a very good approximation of the light path is an arc of a large circle with a radius much larger than the Earth's radius. As a result of the curved light path shown in red (in figure 3), the distance to the horizon ($s_f$) in the presence of an atmosphere is greater than in the simple case (s) for a straight beam of light, shown as the dashed line in the same figure.
The complete geometric configuration is shown in figure 4. The cross section of the Earth is represented as the grey circle. The key points to note are:
B - the base of the viewing platform
X - the point marking the horizon as defined by the refracted beam of light. This is the furthest point visible from P on the Earth's surface
The red circular arc between the points P and X represents the refracted light path. Q is the center of the red circle. Both circles meet at the point X. Therefore a line perpendicular to the tangent at point X will pass through both O and Q.
Triangle POQ is formed with clearly defined quantities to determine the angle POX, denoted as $\alpha$. Once $\alpha$ is determined the arc length BX along the Earth's surface is computed, corresponding to the distance to the horizon for a refracted light beam.
In triangle POQ the following quantities are defined:
PQ = R, the radius of the refracted beam of light
PO = h + r
QO = R - r
Using the cosine rule:
$$R^2 = (R-r)^2 + (r+h)^2 - 2.(R-r).(r+h)\cos(\pi-\alpha) \qquad_{.....(6)}$$
Rearranging equation (6) and using the identity $\cos(\pi-\alpha) = - \cos(\alpha)$:
$$\cos(\alpha) = \frac{2Rr-2r^2-2hr-h^2}{2(R-r)(r+h)} \qquad_{.....(7)}$$
$$s_f = r . \arccos \left[ \frac{2Rr-2r^2-2hr-h^2}{2(R-r)(r+h)} \right] \qquad_{.....(8)}$$
After considerable manipulation and taking into account the valid approximations that $h / r$ << 1, $\frac{h^2}{r^2} \approx 0$ and $\alpha$ is a small angle so that $\cos(\alpha) \approx 1 - \frac{\alpha^2}{2}$ it can be shown that:
$$\alpha \approx \sqrt{\frac{2h}{r(1-r/R)}} \qquad_{.....(9)} $$
A detailed analysis on obtaining equation (9) can be found in the Refraction section of the Appendix. The circular arc length BX can be easily computed once the angle $\alpha$ is determined:
$$s_f = r \alpha = \sqrt{\frac{2hr}{1-r/R}} \qquad_{.....(10)}$$
The result is equivalent to the simple case where light travels in a straight line for a "fictitious" planet with a radius of $r/(1-r/R)$. A quick examination of the limit of equation (10) for $R \to \infty$ corresponds to a straight line path where $s_f \to s = \sqrt{2hr}$
The radius of the light beam is dependent on atmospheric conditions such as temperature, altitude and air pressure. A detailed analysis beyond the scope of this article would be required to determine the value of R. A common value used for correcting surveyor's data for over a century is r/R=0.13 (i.e. R/r=7.69) - ref (1). Another estimate for R/r is 7 based on a simple method by Young - ref (2).
Taking the first estimate of r/R = 0.13 equation (10) becomes:
$$\begin{align} s_f &= \sqrt{2.2989 hr} \\ &= 1.07 \sqrt{2hr} \qquad_{.....(11)} \\ &= 1.07 s\end{align}$$
The result indicates that one can see 7% further as a result of atmospheric refraction compared to the simple case where no atmosphere is taken into consideration. As in the simple case a conversion of units is required to produce a useful formula for both metric and imperial units. Refer to Metric and Imperial Units (Appendix) for the conversion:
$$s_{f (km)} = 3.83 \sqrt{h_m} \qquad_{.....(12)}$$
$$s_{f (miles)} = 1.33\sqrt{h_{feet}} \qquad_{.....(13)}$$
Using a similar comparison as in the simple case, the resulting equations are in close agreement with other sources: $3.86 \sqrt{h}$ for metric units and $1.32 \sqrt{h}$ for imperial units - (see reference (2)).
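As a rough numerical check of equations (8) and (12), the exact refracted arc length can be evaluated directly. This is a sketch assuming the same r/R = 0.13 as above; the variable and function names are mine.

```python
import math

r = 6.378e6        # Earth radius in metres
R = r / 0.13       # radius of the circular light path, from r/R = 0.13

def horizon_refracted_exact_km(h_m):
    """Equation (8): arc length with refraction; height in metres, result in km."""
    num = 2*R*r - 2*r**2 - 2*h_m*r - h_m**2
    den = 2*(R - r)*(r + h_m)
    return r * math.acos(num / den) / 1000.0

def horizon_refracted_approx_km(h_m):
    """Equation (12): s_f ~ 3.83 * sqrt(h in metres)."""
    return 3.83 * math.sqrt(h_m)

# for an observer 100 m above sea level both expressions give about 38.3 km
print(horizon_refracted_exact_km(100.0), horizon_refracted_approx_km(100.0))
```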
The resulting equations from the analysis of the simple and refracted light cases above lead to the generation of a number of practical tables and graphs. This allows us to get a quantitative picture of what distances to expect for various structures or viewing platforms above the Earth's surface.
Equation (12) can now be used to generate real data for a number of landmarks and viewing platforms, including moving aircraft and orbiting structures.

Structure or viewing platform                           height (m)   distance to horizon (km)
The Cape Byron Lighthouse (Byron Bay, NSW, Australia)      118           42
Eureka Tower (Melbourne, Australia)                        297           66
Petronas Twin Towers (Kuala Lumpur, Malaysia)              458           82
Burj Khalifa (Dubai, United Arab Emirates)                 829          110
Mount Kilimanjaro (Tanzania)                              5895          294
Mount Everest (Nepal)                                     8848          360
cruising altitude of a 747                               12000          420
International Space Station                             350000         2266
Table 1 assumes the horizon out to sea is visible without any obstructions, which may not be the case for some structures listed, e.g. Mount Everest. The table simply illustrates the expected distance to the horizon at the given height above sea level.
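The distances in this table can be reproduced with a few lines of Python using equation (12). The landmark heights are those listed above; the dictionary and the output formatting are mine.

```python
import math

landmarks_m = {
    "Cape Byron Lighthouse": 118,
    "Eureka Tower": 297,
    "Petronas Twin Towers": 458,
    "Burj Khalifa": 829,
    "Mount Kilimanjaro": 5895,
    "Mount Everest": 8848,
    "747 cruising altitude": 12000,
    "International Space Station": 350000,
}

for name, h in landmarks_m.items():
    # equation (12): distance to the horizon (km), including refraction
    print(f"{name:30s} {h:7d} m   {3.83 * math.sqrt(h):5.0f} km")
```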
The second table below compares the results of a number of equations derived from the theory:
determine the distance to the horizon for both metric and imperial units
compare equation (12) with the exact formulae (eq 8 and 3) and the simple approximation (eq 5)
Distance to horizon

Distance to        Height above sea level   Distance to     Exact value                Simple case             Simple case
horizon (miles)    (feet / metres)          horizon (km)    (refracted light - eq 8)   approximation (eq 5)    no approximations (eq 3)
4.2                10                       12.1            12.1                       11.3                    11.3
13.3               100                      38.3            38.3                       35.7                    35.7
42.1               1000                     121             121                        113                     113
133                10000                    383             383                        357                     357
421                100000                   1211            1204                       1129                    1122
To determine the distance to the horizon in metric units:
Look up the height above sea level in the central column of the table (e.g. 100 meters)
Read the value to the right in km. In the example highlighted in red, the distance to the horizon for an observer 100 m above the Earth's surface is 38.3km
To determine the distance to the horizon in imperial units:
Look up the height above sea level in the central column of the table (e.g. 50 feet)
Read the value to the left in miles. In the example highlighted in blue, the distance to the horizon for an observer 50 feet above the Earth's surface is 9.4 miles
Note: The table above is a combination of two tables. The lookup table for the imperial units consists of columns 1 and 2. The lookup table for the metric units consists of columns 2 and 3, i.e. 4.2 miles does not equal 12.1 km
The last three columns of the table are to compare the metric values with (i) the exact value as determined from equation 8, (ii) the simple case approximation from equation (5) and (iii) the exact value for the simple case as determined from equation (3). In all cases it can be seen that the values differ no more than 7% up to a height of 100 km above sea level. The result given in equation (11) is sufficient for most practical purposes.
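The last four columns of the table can be recomputed directly from equations (12), (8), (6) and (3). A short sketch (heights in metres; the helper functions and names are mine):

```python
import math

r_km = 6378.0
R_over_r = 1.0 / 0.13            # refraction model used in this post, r/R = 0.13

def s_simple_exact(h_m):         # equation (3)
    return r_km * math.acos(1.0 / (1.0 + h_m / 1000.0 / r_km))

def s_refracted_exact(h_m):      # equation (8)
    r, R = r_km * 1000.0, r_km * 1000.0 * R_over_r
    num = 2*R*r - 2*r**2 - 2*h_m*r - h_m**2
    den = 2*(R - r)*(r + h_m)
    return r * math.acos(num / den) / 1000.0

for h in (10, 100, 1000, 10000, 100000):
    print(h, round(3.83*math.sqrt(h), 1), round(s_refracted_exact(h), 1),
          round(3.57*math.sqrt(h), 1), round(s_simple_exact(h), 1))
```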
Log-Log Graph
The tabulated values can be represented as a log-log graph. It can be easily shown using elementary high school maths that if one takes the logarithm of both sides of equations (5) and (6), a graph of the relationship between $s$ and $h$ will be a straight line with a slope of 0.5.
One advantage of using a log graph is that a large range of values can be represented in increasing or decreasing orders of magnitude along the horizontal axis. However there is a loss of accuracy due to the limitation of the number of grid lines in the log-log graph.
For consistency and clarity the red line corresponds to metric units and the blue line for imperial. Two graphs have been combined into one (as is the case of the table above).
At 1000 meters above sea-level the distance to the horizon is approximately 120 km (using the red line to read the value)
At 10 feet above sea-level the distance to the horizon is about 4 miles (using the blue line to read the value)
In both cases the examples are consistent with the tabulated values.
Comparison of formulas
In the simple case, the relationship between the exact formula (equation 3) and the approximation (equation 6), as shown in Table 2, can be better visualised with a log-log graph. It can be clearly seen in figure 5 that both formulas are in close agreement out to a height of 1000 km ($10^6$ m) above sea level.
The exact calculation (marked as the solid black line) clearly shows the asymptotic limit where s approaches $\pi r /2 \approx $ 10018 km (as already discussed in the limits to equation 3). The cyan coloured straight line depicts equation 6. Based on the comparisons in table 2 and figure 5, we can be confident that $3.57\sqrt{h}$ is a good approximation for determining the distance to the horizon out to 1000km above the Earth's surface.
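A log-log plot such as Figure 5 can be regenerated with a short script. This is a sketch assuming numpy and matplotlib are available; the styling choices are mine.

```python
import numpy as np
import matplotlib.pyplot as plt

r_km = 6378.0
h_m = np.logspace(0, 8, 400)                             # 1 m up to 10^8 m above sea level
h_km = h_m / 1000.0
s_exact = r_km * np.arccos(1.0 / (1.0 + h_km / r_km))    # equation (3)
s_approx = 3.57 * np.sqrt(h_m)                           # equation (6)

plt.loglog(h_m, s_exact, "k-", label="exact, eq. (3)")
plt.loglog(h_m, s_approx, "--", label="approximation, eq. (6)")
plt.xlabel("height above sea level (m)")
plt.ylabel("distance to the horizon (km)")
plt.legend()
plt.show()
```

The exact curve flattens towards the asymptote $\pi r/2 \approx 10018$ km while the approximation keeps its slope of 0.5, exactly as described above.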
The case for refracted light as shown in Figure 6 also depicts a similar comparison. Both the exact formula (equation 8) and the approximation (equation 12) are in close agreement out to 1000 km above sea level.
In the case of refracted light in Figure 6 the dark cyan line depicts the approximation $3.83\sqrt{h}$ and the solid black line is the exact calculation (equation 8). The grey curve denotes the fraction (num/den) of the arccos argument of equation 8 which is defined in the range -1 to 1. The results are summarised below:
The exact calculation (eq 8) and the approximation (eq 12) are in precise agreement out to $h \approx 1000$ km.
The solid black line (eq 8) and the grey curve are not defined for values greater than a height of $10^8$ m (100,000 km) above sea level
Let A denote the argument of arccos of equation 8:
$$ A = \frac{num}{den} = \frac{2Rr-2r^2-2hr-h^2}{2(R-r)(r+h)} \qquad_{.....(14)}$$
$\arccos(x)$ is defined for values of x between -1 and 1. Therefore the right hand side of equation 14 will also have a range between -1 and 1. One can solve for h where A=-1, corresponding to $\alpha = 180^{\circ}$. A quadratic equation is solved to get $h=2(R-r)$. Taking the ratio r/R=0.13 (i.e. R/r=7.69) - ref (1), the maximum possible value of h in equations 8 and 14 is:
$$h_{max} = 13.38r = 8.533 \times 10^7 m$$
Figure 7 illustrates the behavior of refracted light for various scenarios. An interesting result of the analysis of the limits of equation 8 is that at $h=h_{max}$ and $\alpha=180^{\circ}$ one can see a point on the opposite side of the Earth. In reality this effect is not observed, taking into account that: (i) the atmosphere only extends out to a small fraction of the radius of the Earth and (ii) the approximation that refracted light will follow an arc of a circle is not likely to be valid for an extended atmosphere.
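The value of $h_{max}$ can be verified numerically by evaluating the arccos argument A of equation (14) at $h = 2(R-r)$. A small sketch, using the same r/R = 0.13 as above (the function name is mine):

```python
import math

r = 6.378e6          # Earth radius in metres
R = r / 0.13         # r/R = 0.13 as above

def A(h):
    """Argument of arccos in equations (8) and (14)."""
    return (2*R*r - 2*r**2 - 2*h*r - h**2) / (2*(R - r)*(r + h))

h_max = 2 * (R - r)
print(h_max, h_max / r, A(h_max))   # ~8.5e7 m, ~13.4 Earth radii, A(h_max) = -1
```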
Distance between objects above horizon
Imagine the following scenario. A ship out to sea is heading towards the coastline as shown in figure 8. There is a critical light path where the lighthouse will first become visible to an observer on the ship. What is the distance between the lighthouse and the ship? The problem is simply a case of applying the "distance to the horizon" problem for both the lighthouse and the ship using the results obtained in the previous sections. The scenario can apply to any pair of elevated objects and not just a lighthouse and a ship.
The simplest case is to assume there is no atmospheric refraction, where light travels in a straight line. Using equation 6 one can calculate $s_1$ and $s_2$ and add the distances to obtain the total distance between the lighthouse and the ship:
$$\begin{align} s_1 &= 3.57 \sqrt{h_1} \\ s_2 &= 3.57 \sqrt{h_2} \end{align}$$
$$d = s_1 + s_2 = 3.57 \sqrt{h_1 + h_2} \qquad_{.....(14)}$$
The case for atmospheric refraction is shown in figure 9. Even though the refracted beam of light in red follows an arc of a circle, the same principle applies as for the simple case above. Using equation 12 one can calculate $s_1$ and $s_2$ for refracted light:
$$d_f = 3.83 \sqrt{h_1 + h_2} \qquad_{.....(15)}$$
Comparing figures 8 and 9 it can be seen that the distance between the lighthouse and the ship is increased as a result of atmospheric refraction. A simple comparison between equations 14 and 15 indicates that the increase is 7%, which is consistent with the analysis of the distance-to-the-horizon problem for refracted light.
A table can be generated to show the critical distance between two objects as a function of $h_1$ and $h_2$ in the Earth's atmosphere using equation 15.

Critical visible distance between two objects above the horizon (km), $s_1 + s_2$

h2 (m) \ h1 (m)     0      5     10     25     50     75    100    250    500   1000
0                 0.0    8.6   12.1   19.1   27.1   33.2   38.3   60.6   85.6  121.1
5                 8.6   12.1   14.8   21.0   28.4   34.3   39.2   61.2   86.1  121.4
10               12.1   14.8   17.1   22.7   29.7   35.3   40.2   61.8   86.5  121.7
The table is asymmetrical, where $h_1$ spans a larger range than $h_2$, to be consistent with $h_1$ denoting the height of a tower, lighthouse or mountain and $h_2$ a significantly shorter object or structure such as a ship.
In the case of $h_1=0$ or $h_2=0$ the results in the table reduce to the distance to the horizon for only one object, consistent with equation 12.
The distance to the horizon of an observer looking from an elevated position above the Earth's surface can be determined using simple trigonometry. A number of assumptions were made: (i) the Earth is a perfect sphere, (ii) light travels in a straight line and (iii) there is a clear and unobstructed view to the horizon. The resulting equation was found:
$$ s = r \arccos \left[ \frac{1}{1+\frac{h}{r}} \right]$$
where r is the radius of the Earth and h is the height above the Earth's surface. Further simplification was possible by considering the case where h is a small fraction of r which would be applicable in most practical situations.
$$ s = 3.57\sqrt{h} $$
A more complicated case was considered taking into account the bending of light as a result of the presence of the Earth's atmosphere. To a first order approximation it was assumed that a refracted beam of light follows an arc of a circle with a radius of R. The distance to the horizon was shown to be:
$$ s_f = r . \arccos \left[ \frac{2Rr-2r^2-2hr-h^2}{2(R-r)(r+h)} \right] $$
After considerable mathematical reduction and additional valid approximations it was shown that:
$$ s_f = 3.83\sqrt{h} $$
In both cases it was interesting to see that the approximate estimates for straight and refracted light are proportional to $\sqrt{h}$. The distance to the horizon is increased by 7% in comparison to the simple case.
The resulting equations were compared with a table and a number of graphs (Table 2, Figures 5 and 6) to show the deviation between the approximations and the exact formulae. In both cases it was shown that the approximations start to deviate only after 1000 km above the Earth's surface.
Finally, the results can be extended to determine the maximum visible distance between two objects looking along the horizon.
$$d_f = 3.83 \sqrt{h_1 + h_2}$$
In conclusion, to determine how far you can see to the horizon in km: (i) calculate the square root of your height in meters above sea level and (ii) multiply the result by 3.83. Alternatively for imperial units: (i) calculate the square root of your height (in feet) and (ii) multiply the result by 1.33 to obtain the distance to the horizon in miles.
References
Dip of the Horizon: http://mintaka.sdsu.edu/GF/explain/atmos_refr/dip.html
Distance to the Horizon: http://mintaka.sdsu.edu/GF/explain/atmos_refr/horizon.html
Horizon: http://en.wikipedia.org/wiki/Horizon
How far away is the horizon?
http://blogs.discovermagazine.com/badastronomy/2009/01/15/how-far-away-i...
How to Calculate the Distance to the Horizon
http://www.wikihow.com/Calculate-the-Distance-to-the-Horizon
Appendix
Metric and Imperial Units
Equation (5), $s=\sqrt{2hr}$ is not useful from a practical point of view. The units of measurement for the height and radius can be converted for imperial and metric units. Two cases are considered:
Given the height in meters how far is it to the horizon in km ?
Given the height in feet how far is it to the horizon in miles ?
Metric Units:
Using units of km for all distances,
$$s_{km} = \sqrt{2h_{km}r_{km}}$$
rewriting the height in terms of meters,
$$h_{km} = h_{m} \times \frac{1}{1000}$$
... and taking the average radius of the Earth to be 6378 km, the distance to the horizon in km is:
$$\begin{align} s_{km} &= \sqrt{h_{m}} \times \sqrt{2/1000 \times 6378} \\ &= 3.57 \sqrt{h_{m}} \end{align}$$
Imperial Units:
Using a similar treatment for metric units, express all distances in miles:
$$s_{miles} = \sqrt{2h_{miles}r_{miles}}$$
using the relation between miles and feet,
$$h_{miles} = h_{feet} \times 0.3048 \times \frac{1}{1000} \times \frac{1}{1.609344}$$
... and taking the average radius of the Earth in miles to be $\frac{6378}{1.609344} \approx$ 3964 miles, the distance to the horizon is:
$$\begin{align}s_{miles} &= \sqrt{2} . \sqrt{h_{feet}} . \sqrt{0.3048 \times \frac{1}{1000} \times \frac{1}{1.609344}} \times \sqrt{\frac{6378}{1.609344}} \\ &= \sqrt{2 \times 0.3048 \times \frac{1}{1000} \times \frac{1}{1.609344} \times \frac{6378}{1.609344}} \times \sqrt{h_{feet}} \\ s_{miles} &= 1.225 \sqrt{h_{feet}} \end{align}$$
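Both conversion factors can be recomputed in one go. A small sketch (the constant names are mine):

```python
import math

r_km = 6378.0            # average Earth radius used in this post (km)
KM_PER_MILE = 1.609344
M_PER_FOOT = 0.3048

# km per sqrt(metre) and miles per sqrt(foot), as derived above
c_metric = math.sqrt(2 * r_km / 1000.0)
c_imperial = math.sqrt(2 * (M_PER_FOOT / 1000.0 / KM_PER_MILE) * (r_km / KM_PER_MILE))
print(round(c_metric, 3), round(c_imperial, 3))   # 3.572 1.225
```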
Refraction (detailed analysis)
Determining the distance to the horizon with refracted light (shown in red in Figure 5) is simply a case of determining the angle POX ($\alpha$). The main assumption is that the refracted light beam (PX) is an arc of a circle with a radius of R with the following properties:
The circle with a radius of R touches the Earth's surface at the point X
A line drawn perpendicular to the common tangent at the point X of both circles (of radius r and R) will pass through O and Q
The problem is reduced to defining the lengths of the sides in triangle POQ in Figure 5 and determining the angle POX ($\alpha$).
In triangle POQ, applying the cosine rule:
$$(PQ)^2 = (QO)^2 + (PO)^2 - 2.(QO).(PO)\cos(\angle POQ)$$
$$ R^2 = (R-r)^2 + (r+h)^2 - 2.(R-r).(r+h)\cos(\pi-\alpha) \qquad_{.....(1a)}$$
$$ \cos(\pi-\alpha) = \frac{(R-r)^2+(r+h)^2-R^2} {2(R-r)(r+h)} \qquad_{.....(2a)}$$
Using the identity $\cos(\pi-\alpha) = - \cos(\alpha)$, equation (2a) becomes:
$$ \begin{align} \cos(\alpha) &= \frac{R^2 - (R-r)^2-(r+h)^2} {2(R-r)(r+h)} \\ &= \frac{2Rr-2r^2-2hr-h^2}{2(R-r)(r+h)} \end{align}$$
Multiply the numerator and denominator by $(r-h)$:
$$ \begin{align} \cos(\alpha) &= \frac{[2Rr-2r^2-2hr-h^2](r-h)}{2(R-r)(r^2-h^2)} \\ &= \frac{r(1-h/r)[2Rr-2r^2-2hr-h^2]}{2r^2(R-r)(1-h^2/r^2)} \\ &= \frac{(1-h/r)[2Rr-2r^2-2hr-h^2]}{2r(R-r)(1-h^2/r^2)} \end{align}$$
In most cases, $h / r$ << 1, therefore $\frac{h^2}{r^2} \approx 0$, simplifying the expression further:
$$ \begin{align} \cos(\alpha) &= \frac{(1-h/r)[2Rr-2r^2-2hr-h^2]}{2r(R-r)} \\ &= \frac{1}{2r^2} \frac{(1-h/r)[2Rr-2r^2-2hr-h^2]}{(R/r-1)} \\ &= \frac{(1-h/r)}{2(R/r-1)} [2(R/r) - 2 - 2(h/r) - h^2/r^2] \\ &=\frac{(1-h/r)}{(R/r-1)}[R/r-h/r-1] \\ &=\frac{1}{(R/r-1)} [ R/r - h/r -1 - Rh/r^2 +h^2/r^2 + h/r ] \\ \cos(\alpha) &= \frac{[ R/r-1-Rh/r^2 ]} {(R/r-1)}\end{align}$$
Multiplying the numerator and denominator by $r/R$ gives:
$$ \begin{align} \cos(\alpha) &= \frac{[(1-r/R)-h/r]}{(1-r/R)} \\ &= 1 - \frac{h}{r}\frac{1}{(1-r/R)} \end{align}$$
For small values of $\alpha$, $\cos(\alpha) \approx 1 - \frac{\alpha^2}{2!}$:
$$ 1 - \frac{\alpha^2}{2} = 1 - \frac{h}{r}\frac{1}{(1-r/R)}$$
$$\alpha = \sqrt{\frac{2h}{r(1-r/R)}}$$
The arc length $s_f$ is:
$$s_f = r \alpha = \sqrt{\frac{2hr}{1-r/R}}$$
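A quick numerical sanity check of this approximation against the cosine-rule expression above (a sketch; the choice of h = 100 m is arbitrary and mine):

```python
import math

r = 6.378e6      # Earth radius in metres
R = r / 0.13     # r/R = 0.13
h = 100.0        # observer height in metres

# exact angle from the cosine rule in triangle POQ, equation (2a)
cos_alpha = (R**2 - (R - r)**2 - (r + h)**2) / (2 * (R - r) * (r + h))
alpha_exact = math.acos(cos_alpha)

# small-angle approximation derived above
alpha_approx = math.sqrt(2 * h / (r * (1 - r / R)))

print(r * alpha_exact, r * alpha_approx)   # both ~3.83e4 m, i.e. ~38.3 km
```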
Copyright © 2012 - John Giovannis. Theme by Kiwi Drupal Themes, based on the Tarski project. | CommonCrawl
Centre for Quantum Geometry of Moduli Spaces
QGM Research Publications
Manton, N. S. & M. Romão, N. (2011). Vortices and Jacobian varieties. Journal of Geometry and Physics, 61(6), 1135-1155. https://doi.org/10.1016/j.geomphys.2011.02.017
Agerholm, T. & Mazorchuk, V. (2011). On selfadjoint functors satisfying polynomial relations. Journal of Algebra, 330(1), 448-567. https://doi.org/10.1016/j.jalgebra.2011.01.004
Agerholm, T. (2011). Simple 2-representations and Classification of Categorifications. Department of Mathematical Sciences, Aarhus University.
Alessandrini, D. & Li, Q. (2018). AdS 3-manifolds and Higgs bundles. Proceedings of the American Mathematical Society, 146(2), 845-860. https://doi.org/10.1090/proc/13586
Alexeev, N., Andersen, J. E., Penner, R. C. & Zograf, P. (2016). Enumeration of chord diagrams on many intervals and their non-orientable analogs. Advances in Mathematics, 289, 1056-1081. https://doi.org/10.1016/j.aim.2015.11.032
Alvarez-Consul, L., Andersen, J. E. & Mundet i Riera , I. (Eds.) (2016). Geometry and Quantization of Moduli Spaces. Birkhauser Verlag Basel. Advanced Courses in Mathematics - CRM Barcelona https://doi.org/10.1007/978-3-319-33578-0
Ambjørn, J. & Chekhov, L. (2014). A matrix model for hypergeometric Hurwitz numbers. Theoretical and Mathematical Physics, 181(3), 1486-1498. https://doi.org/10.1007/s11232-014-0229-z
Andersen, J. E. (2010). Toeplitz Operators and Hitchin's Projectively Flat Connection. In O. Garcia-Prada, J. P. Bourguignon & S. Salamon (Eds.), The many facets of geometry: a tribute to Nigel Hitchin (pp. 177-209). Oxford University Press.
Andersen, J. E. & Berg, C. (2009). Quantum Hilbert matrices and orthogonal polynomials. Journal of Computational and Applied Mathematics, 233(3), 723-729. https://doi.org/10.1016/j.cam.2009.02.040
Andersen, H. H. (2009). Review of Belkale, Prakash: Geometric proof of a conjecture of Fulton. Adv. Math. 216 (2007), no. 1, 346--357. Mathematical Reviews.
Andersen, H. H. (2009). Review of Webster, Ben; Williamson, Geordie A geometric model for Hochschild homology of Soergel bimodules. Geom. Topol. 12 (2008), no. 2, 1243--1263. Mathematical Reviews.
Andersen, H. H. (2009). Review of Sopkina, Ekaterina: Classification of all connected subgroup schemes of a reductive group containing a split maximal torus. J. K-Theory 3 (2009), no. 1, 103--122. Mathematical Reviews.
Andersen, J. E. & Fjelstad, J. (2010). Reducibility of quantum representations of mapping class groups. Letters in Mathematical Physics, 91(3), 215-239. https://doi.org/10.1007/s11005-009-0367-7
Andersen, J. E., Bene, A. & Penner, R. (2009). Groupoid extensions of mapping class representations for bordered surfaces. Topology and Its Applications, 156(17), 2713-2725. https://doi.org/10.1016/j.topol.2009.08.001
Andersen, J. E., Bene, A., Meilhan, J-B. O. T. & Penner, R. (2010). Finite type invariants and fatgraphs. Advances in Mathematics, 225(4), 2117-2161. https://doi.org/10.1016/j.aim.2010.04.008
Andersen, H. H. & Kaneda, M. (2011). Cohomology of Line Bundles on the Flag Variety for Type G_2. arXiv.org.
Andersen, H. H. (2010). Review of: Edidin, Dan; Francisco, Christopher A. "Grassmannians and representations". J. Commut. Algebra 1 (2009), no. 3, 381–392. Mathematical Reviews.
Andersen, H. H. (2010). Review of: Xi, Nanhua "Maximal and primitive elements in baby Verma modules for type B2". Representation theory, 257–271, Contemp. Math., 478, Amer. Math. Soc., Providence, RI, 2009. Mathematical Reviews, 2010h.
Andersen, H. H. (2010). Review of: Sommers, Eric N. "Cohomology of line bundles on the cotangent bundle of a Grassmannian". Proc. Amer. Math. Soc. 137 (2009), no. 10, 3291–3296. Mathematical Reviews, 2010b.
Andersen, H. H. (2010). Review of: Lakshmibai, Venkatramani; Raghavan, Komaranapuram N.; Sankaran, Parameswaran "Wahl's conjecture holds in odd characteristics for symplectic and orthogonal Grassmannians". Cent. Eur. J. Math. 7 (2009), no. 2, 214–223. Mathematical Reviews, 2010g.
Andersen, J. E. & Villemoes, R. (2012). Cohomology of mapping class groups and the abelian moduli space. Quantum Topology, 3(3/4), 359-376. https://doi.org/10.4171/QT/32
Andersen, H. H. (2011). Quotient Categories of Modular Representations. Progress in Mathematics, 284, 1-16. https://doi.org/10.1007/978-0-8176-4697-4_1
Andersen, J. E. (2010). Asymptotics of the Hilbert-Schmidt norm of curve operators in TQFT. Letters in Mathematical Physics, 91(3), 205-214. https://doi.org/10.1007/s11005-009-0368-6
Andersen, J. E., Penner, R., Reidys, C. & Wang, R. (2010). Linear chord diagrams on two intervals. arXiv.org.
Andersen, H. H. & Kaneda, M. (2011). Rigidity of tilting modules. Moscow Mathematical Journal, 11(1), 1-39.
Andersen, J. E. & Blaavand, J. L. (2011). Asymptotics of Toeplitz operators and applications in TQFT. Travaux Mathematiques, University of Luxembourg, 19(Geometry and Quantization. Lectures of the GEOQUANT school), 167-201.
Andersen, H. H. & Mazorchuk, V. (2011). Category $\mathcal O$ for Quantum Groups. Math. Arxiv.
Andersen, J. E. & Fjelstad, J. (2010). On reducibility of mapping class group representations: the SU(N) case. In S. Caenepeel, J. Fuchs, S. Gutt, C. Schweigert, A. Stolin & F. Van Oystaeyen (Eds.), Noncommutative structures in mathematics and physics (pp. 27-45). Brussels: Koninklijke Vlaamse Academie van Belgie voor Wetenschappen en Kunsten (KVAB).
Andersen, J. E., Boden, H. U., Hahn, A. & Himpel, B. (Eds.) (2011). Chern-Simons Gauge Theory: 20 Years After. American Mathematical Society. A M S - I P Studies in Advanced Mathematics, Vol.. 50
Andersen, J. E. & Gammelgaard, N. L. (2011). Hitchin's projectively flat connection, Toeplitz operators and the asymptotic expansion of TQFT curve operators. In D. A. Ellwood & E. Previato (Eds.), Grassmannians, Moduli Spaces and Vector Bundles: Clay Mathematics Institute Workshop Moduli Spaces of Vector Bundles, with a View Towards Coherent Sheaves October 6–11, 2006 Cambridge, Massachusetts (Vol. 14, pp. 1-24). American Mathematical Society. Clay Mathematics Proceedings, Vol.. 14
Andersen, H. H. (2011). Review of Samokhin, A. On the D-affinity of flag varieties in positive characteristic. J. Algebra 324 (2010), no. 6, 1435–1446. Mathematical Reviews, 2011k(14054), MR2671814.
Andersen, H. H. (2011). Review of Zhu, Minxian Regular representations of quantum groups at roots of unity. Int. Math. Res. Not. IMRN 2010, no. 15, 3039–3065. Mathematical Reviews, 2011k(20100), MR2673718.
Andersen, H. H. (2011). Review of Touzé, Antoine Universal classes for algebraic groups. Duke Math. J. 151 (2010), no. 2, 219–249. Mathematical Reviews, 2011e(20067), MR2598377.
Andersen, H. H. & Kaneda, M. (2012). Cohomology of line bundles on the flag variety for type G_2. Journal of Pure and Applied Algebra, 216(7), 1566-1579. https://doi.org/10.1016/j.jpaa.2011.10.013
Andersen, J. E. & Villemoes, R. (2011). Degree one cohomology with twisted coefficients of the mapping class group. In S. Akbulut, T. Önder & D. Auroux (Eds.), Proceedings of the Gökova Geometry-Topology Conference 2010 (pp. 64-78). International Press of Boston.
Andersen, J. E. & Ueno, K. (2012). Modular functors are determined by their genus zero data. Quantum Topology, 3(3/4), 255-291. https://doi.org/10.4171/QT/29
Andersen, J. E. (2012). Hitchin's connection, Toeplitz operators, and symmetry invariant deformation quantization. Quantum Topology, 3(3/4), 293-325. https://doi.org/10.4171/QT/30
Andersen, J. E., Gammelgaard, N. L. & Lauridsen, M. R. (2012). Hitchin's connection in metaplectic quantization. Quantum Topology, 3(3/4), 327-357. https://doi.org/10.4171/QT/31
Andersen, J. E. & Himpel, B. (2012). The Witten-Reshetikhin-Turaev invariants of finite order mapping tori II. Quantum Topology, 3(3/4), 377-421.
Andersen, T. S. (2012). Endomorphism Algebras of Tensor Powers of Modules for Quantum Groups. Aarhus: Centre for Quantum Geometry of Moduli Spaces, Aarhus University.
Andersen, J. E., Huang, F. W., Penner, R. & Reidys, C. (2012). Topology of RNA-RNA interaction structures. Journal of Computational Biology, 19(7), 928-943. https://doi.org/10.1089/cmb.2011.0308
Andersen, J. E., Chekhov, L. O., Penner, R., Reidys, C. M. & Sułkowski , P. (2013). Topological recursion for chord diagrams, RNA complexes, and cells in moduli spaces. Nuclear Physics, Section B, 866(3), 414-443. https://doi.org/10.1016/j.nuclphysb.2012.09.012
Andersen, J. E., Penner, R., Reidys, C. M. & Waterman, M. S. (2013). Topological classification and enumeration of RNA structures by genus. Journal of Mathematical Biology, 67(5), 1261-1278. https://doi.org/10.1007/s00285-012-0594-x
Andersen, H. H. & Stroppel, C. (2012). Fusion Rings for Quantum Groups. (pp. 1-20). Math ArXives (1212.5736).
Andersen, H. H. (2013). The Role of Verma in Representation Theory. In R. Bhatia, C. S. Rajan & A. I. Singh (Eds.), Connected at Infinity II: A Selection of Mathematics by Indians (pp. 172-192). Hindustan Book Agency. Texts and Readings in Mathematics, Vol.. 67
Andersen, H. H. (2012). Review of Wenzl, Hans Quotients of representation rings. (English summary) Represent. Theory 15 (2011), 385–406. Mathematical Reviews, 2012i(22019), MR2801174.
Andersen, H. H. (2012). Review of Igarashi, Mana; Nakashima, Toshiki Geometric crystals on flag varieties and unipotent subgroups of classical groups. J. Geom. Phys. 61 (2011), no. 11, 2267–2284. Mathematical Reviews, MR2827123.
Andersen, H. H. (2012). Review of Franjou, Vincent; van der Kallen, Wilberd Power reductivity over an arbitrary base Doc. Math. 2010, Extra volume: Andrei A. Suslin sixtieth birthday, 171–195. Mathematical Reviews, 2012h(20098), MR2804253.
Andersen, H. H., Lehrer, G. I. & Zhang, R. (2013). Cellularity of certain quantum endomorphism algebras. (1303.0984 ed.) arXiv.org.
Andersen, J. E., Chekhov, L. O., Penner, R. C., Reidys, C. & Sułkowski, P. (2013). Enumeration of RNA complexes via random matrix theory. Biochemical Society. Transactions, 41(2), 652-655. https://doi.org/10.1042/BST20120270
Displaying results 1 to 50 out of 332
Double issue of the international journal Quantum Topology dedicated to QGM
A double issue of Quantum Topology (vol. 3, issue 3/4 (2012)) is dedicated entirely to QGM Center director Jørgen Ellegaard Andersen's work, partly joint with other QGM researchers. Quantum Topology is the top journal within the research field of QGM and has never before published more than one article by the same author in one issue.
Vol. 1, by Centre Distinguished Professor Robert C. Penner
Ny Munkegade 118
E-mail: qgm(at)au.dk | CommonCrawl
September 2016, 36(9): 5201-5221. doi: 10.3934/dcds.2016026
Existence and uniqueness of global weak solutions of the Camassa-Holm equation with a forcing
Shihui Zhu 1,
Department of Mathematics, Sichuan Normal University, Chengdu, Sichuan 610066, China
Received August 2015 Revised January 2016 Published May 2016
In this paper, we study the global well-posedness for the Camassa-Holm (C-H) equation with a forcing in $H^1(\mathbb{R})$ by the characteristic method. Due to the forcing, many important properties used to study the well-posedness of weak solutions, such as conservation laws and integrability, are not inherited from the C-H equation without a forcing. By exploiting the balance law and some new estimates, we prove the existence and uniqueness of global weak solutions for the C-H equation with a forcing in $H^1(\mathbb{R})$.
Keywords: weak solution, uniqueness, Camassa-Holm equation, characteristic method, forcing.
Mathematics Subject Classification: Primary: 35L05, 35D30; Secondary: 76B1.
Citation: Shihui Zhu. Existence and uniqueness of global weak solutions of the Camassa-Holm equation with a forcing. Discrete & Continuous Dynamical Systems, 2016, 36 (9) : 5201-5221. doi: 10.3934/dcds.2016026
Shihui Zhu | CommonCrawl |
Search SpringerLink
On two simple virtual Kirchhoff-Love plate elements for isotropic and anisotropic materials
P. Wriggers, B. Hudobivnik & O. Allix
Computational Mechanics (2021)
The virtual element method makes it possible to revisit the construction of Kirchhoff-Love elements because the \(C^1\)-continuity condition is much easier to handle in the VEM framework than in the traditional finite element methodology. Here we study the two simplest VEM elements suitable for Kirchhoff-Love plates as stated in Brezzi and Marini (Comput Methods Appl Mech Eng 253:455–462, 2013). The formulation contains new ideas and different approaches for the stabilisation needed in a virtual element, including classic and energy stabilisations. An efficient stabilisation is crucial in the case of \(C^1\)-continuous elements because the rank deficiency of the stiffness matrix associated with the projected part of the ansatz function is larger than for \(C^0\)-continuous elements. This paper aims at providing engineering insight into how to construct simple and efficient virtual plate elements for isotropic and anisotropic materials and at comparing different possibilities for the stabilisation. Several examples and convergence studies demonstrate the accuracy of the resulting VEM elements. Finally, the reduction of virtual plate elements to triangular and quadrilateral elements with 3 and 4 nodes, respectively, yields finite-element-like plate elements. It will be shown that these \(C^1\)-continuous elements can be easily incorporated in legacy codes and exhibit an efficiency and accuracy much higher than that of traditional finite elements for thin plates.
The necessity to accurately and efficiently model plates has a long history, based on the fact that plates are installed as structural members in almost every building, in many machines and in airplanes. This led early on to the development of plate elements in most discretisation schemes. In the finite element method the associated development started more than 50 years ago with the work of [8, 17, 24]. It was obvious that general forms of finite elements were not able to fulfill \(C^1\)-continuity when using the Kirchhoff-Love plate theory as a starting point. The first elements that met \(C^1\)-continuity were developed within the TUBA series of elements in [4]. Composite elements consisting of four triangles were provided by [12, 18], which allowed such a composite element to be used as a general quadrilateral; other formulations for strictly rectangular elements can be found e.g. in [14]. All these formulations have the disadvantage that, besides the deflection w and the rotations \(w_{,x}\,, w_{,y}\), additional higher order kinematical quantities, e.g. the curvatures \(w_{,xx}\,, w_{,yy}\), appear as nodal unknowns, which impedes the use of such elements in an engineering environment.
The latter complication was circumvented by the introduction of Reissner-Mindlin elements, starting with the work of [20, 39]. Due to some disadvantages in the thin plate limit, many different variants were considered in the following years, see the textbooks e.g. [19, 30, 31]. One of these variants is based on the discrete, point-wise formulation of \(C^1\)-continuity, which leads to the discrete Kirchhoff triangles (DKT), see [6], and quadrilaterals (DKQ), see [7].
As shown in the work of [13], the virtual element method (VEM) can be applied to develop \(C^1\)-continuous Kirchhoff-Love plate elements. VEM actually has the advantage that no higher order kinematical quantities have to be introduced; thus these elements can be easily integrated in engineering codes, which is due to the simplicity of the definition of the ansatz functions, these being defined only at the element boundary. Hence the virtual element method allows one to revisit the construction of Kirchhoff-Love elements. This was illustrated for two simple plate elements in [16], which includes error estimates. [26] applied the virtual element methodology to dynamic plate problems and also showed that these elements can be used for the buckling analysis of plates, see [25, 27]. At the same time other virtual plate elements using Reissner-Mindlin kinematics were developed, see [11], and non-conforming discretisations were considered in [3] to model plate problems with VEM.
Here we present two simple VEM elements suitable for conforming Kirchhoff-Love plates which were already discussed for isotropic materials in [16]. We extend this formulation to isotropic and anisotropic material responses, and provide a detailed description of the matrix forms and new stabilisations. The restriction to low order ansatz functions is motivated by the fact that virtual elements of low order are the most suitable choice for non-linear extensions, see e.g. [1, 10, 15, 38].
The main difficulty in VEM concerns the stabilisation parts of the stiffness matrix. This question is even more crucial in the case of \(C^1\)-continuous elements than in the case of \(C^0\)-continuous elements, because the rank deficiency of the stiffness matrix associated with the projected part of the ansatz function is larger. Therefore this paper aims, besides providing a formulation in engineering terms, at introducing different possibilities for the stabilisation of virtual plate elements. Compared to the approach in [26], this work aims to improve the stabilisation and additionally introduces an energy based stabilisation which relies on a sub-triangulation of the VEM elements, as was proposed in [38] for hyperelastic solids. In the latter stabilisation, different simple conforming and non-conforming plate elements, see [6, 29], are compared regarding their contribution to the stiffness matrix and the precision and efficiency of the resulting VEM element.
Numerical examples will be considered to study the convergence behaviour of both developed elements and to illustrate their applicability to engineering problems. We first deal with problems with an analytical solution and then examine the case of rhombic plates, where it is known that higher degree elements are preferable, see [5]. Examples including anisotropic material behaviour show the applicability of the method to composite plates.
Since the developed virtual plate elements only introduce deflections and rotations as nodal unknowns, despite being \(C^1\)-continuous, they can be easily incorporated in finite element software when they are reduced to triangular elements with three/six nodes or quadrilateral elements with four/eight nodes. Based on classical benchmark examples it can be concluded that these reduced virtual elements have a much higher efficiency and accuracy than comparable traditional finite plate elements.
Governing equation of Kirchhoff plates
Description of the plate and of the constitutive relation
The geometry of the plate is described through its mid surface \(\Sigma _m\) and its thickness 2h, which will be assumed to be constant. A point \(\varvec{X}\) within the plate is given by its orthogonal projection \(\varvec{X}_m\) onto the mid surface and its coordinate Z along the normal \({\varvec{E}_Z}\) of \(\Sigma _m\) (see Fig. 1):
$$\begin{aligned} \varvec{X} = \varvec{X}_m + Z {\varvec{E}_Z}= X {\varvec{E}_X} + Y {\varvec{E}_Y} + Z {\varvec{E}_Z} \end{aligned}$$
where \(({\varvec{E}_X},{\varvec{E}_Y},{\varvec{E}_Z})\) is an orthonormal, right-handed Cartesian basis.
Plate geometry
In the Kirchhoff-Love theory the displacement \(\varvec{u}(\varvec{X}_m,Z) \) of point \(\varvec{X}\) can be described by the deflection w of the plate
$$\begin{aligned} \varvec{u}(\varvec{X}_m,Z) = w(\varvec{X}_m)\varvec{E}_Z-Z\,\nabla _{\varvec{X}_m}\left( w(\varvec{X}_m)\right) \end{aligned}$$
where \(\nabla _{m}\), simply denoted by \(\nabla \), is the gradient operator with respect to \(\varvec{X}_m\) and w belongs to \(H^2(\Sigma _m)\).
Therefore the rotation \(\varvec{\theta }\) of the normal segment is equal to
$$\begin{aligned} \varvec{\theta }= \theta _X {\varvec{E}_X}+ \theta _Y {\varvec{E}_Y} = - {\varvec{E}_Z} \wedge \,\nabla w = w,_{Y} {\varvec{E}_X} - w,_{X}{\varvec{E}_Y} \end{aligned}$$
The in-plane strain \(\varvec{\varepsilon }\) associated with the displacement \(\varvec{u}(\varvec{X}_m,Z)\) is given by
$$\begin{aligned} \varvec{\varepsilon }(\varvec{u})= Z \varvec{\chi } \end{aligned}$$
where the curvature operator \(\varvec{\chi }\) is equal to
$$\begin{aligned} \varvec{\chi }= - {\nabla }( \nabla w )\,. \end{aligned}$$
In what follows we will assume that the plate is symmetrical with respect to its mid-surface. Introducing the classical Hooke's law under plane stress condition yields the constitutive tensor \(\mathbb {K}_{cp}\). With that one obtains a constitutive relation linking the moment of the in-plane stress to the curvature
$$\begin{aligned} \varvec{M} = <Z^2\,\mathbb {K}_{cp}>\varvec{\chi } \end{aligned}$$
where \(\langle a \rangle \) defines an integral of quantity a over the thickness \(t=2h\) of the plate
$$\begin{aligned} \langle a \rangle = \int _{-h}^{h} a \, \mathrm{d}Z \,. \end{aligned}$$
For a plate that is homogeneous through the thickness the integral (6) yields
$$\begin{aligned} \varvec{M} = \frac{2h^3}{3}\,\mathbb {K}_{cp}\, \varvec{\chi }\,. \end{aligned}$$
The local bilinear form associated with the Kirchhoff-Love model can be written as
$$\begin{aligned} a(w,v) = \text{ tr }[\varvec{M}(w)\varvec{\chi }(v)]=\frac{2h^3}{3}\text{ tr }[\mathbb {K}_{cp}\, \varvec{\chi }(w)\varvec{\chi }(v)] \end{aligned}$$
where the moment follows from (8) for an isotropic homogeneous plate with Young's modulus E and Poisson ratio \(\nu \)
$$\begin{aligned} \varvec{M} = \frac{2h^3}{3}\, \frac{E}{1+\nu } \left( \varvec{\chi }+ \dfrac{\nu }{1-\nu } \text{ tr }(\varvec{\chi }) \, \mathbb {I}_{2}\right) \,. \end{aligned}$$
With the definition of the bending stiffness
$$\begin{aligned} D=2\frac{h^3}{3}\, \frac{E}{1-\nu ^2} \end{aligned}$$
the bilinear form a(w, v) of the Kirchhoff-Love theory over \(H^2(\Sigma _m)\) follows
$$\begin{aligned} a(w,v)= & {} D [\,(1-\nu ) (w,_{XX}v,_{XX}+w,_{YY}v,_{YY}\nonumber \\&+ 2w,_{XY}v,_{XY}) +\nu \bigtriangleup w\bigtriangleup v\,] \end{aligned}$$
which leads to the bilinear form
$$\begin{aligned} A(v,w)=\int _{\Sigma _{m}} a(v,w)\ \mathrm{d}\Sigma _m \end{aligned}$$
characterising the internal virtual work of the Kirchhoff-Love problem.
Alternatively the strain energy density can be introduced
$$\begin{aligned} \psi (w)=\frac{1}{2} a(w,w) =\frac{1}{2} \text{ tr }[\varvec{M}(w)\varvec{\chi }(w)] = \frac{1}{2} \varvec{\chi }(w) \cdot \mathbb {D} \,\varvec{\chi }(w) \,. \end{aligned}$$
In what follows the hat symbol \(\hat{(\cdot )}\) is associated with the Voigt notation. For example, for the moment and the curvature one introduces
$$\begin{aligned} \hat{M}= \begin{bmatrix} M_{XX} \\ M_{YY} \\ M_{XY} \end{bmatrix}, \, \, \hat{\chi } = \begin{bmatrix} w_{,XX} \\ w_{,YY} \\ 2 w_{,XY} \end{bmatrix}. \end{aligned}$$
The constitutive relation may be expressed through the matrix \(\hat{\mathbb {D}}\) that is defined with (11) by
$$\begin{aligned} \hat{M}= \hat{\mathbb {D}}\, \hat{\chi } = D \begin{bmatrix} 1 &{} \nu &{} 0 \\ \nu &{} 1&{} 0 \\ 0 &{}0 &{} \frac{1-\nu }{2} \end{bmatrix} \hat{\chi } \,. \end{aligned}$$
Thus the strain energy in (14) can be rewritten in Voigt notation as
$$\begin{aligned} \psi (w)= \frac{1}{2} \hat{\chi } ^T\hat{M}= \frac{1}{2} \hat{\chi } ^T \, \hat{\mathbb {D}}\, \hat{\chi }\,. \end{aligned}$$
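As a quick numerical illustration of (16) and (17), the following sketch (not part of the authors' AceGen implementation; all names and values are illustrative assumptions) builds the isotropic bending stiffness matrix in Voigt notation and evaluates the corresponding moments and strain energy density for a given curvature vector.

```python
import numpy as np

def bending_stiffness_isotropic(E, nu, h):
    """Isotropic bending stiffness matrix of (16); h is the half thickness."""
    D = 2.0 * h**3 / 3.0 * E / (1.0 - nu**2)          # scalar bending stiffness (11)
    return D * np.array([[1.0, nu,  0.0],
                         [nu,  1.0, 0.0],
                         [0.0, 0.0, 0.5 * (1.0 - nu)]])

# curvature vector (w_,XX, w_,YY, 2 w_,XY) in Voigt notation, cf. (15)
chi = np.array([0.01, -0.005, 0.002])
Dhat = bending_stiffness_isotropic(E=1.0e4, nu=0.3, h=0.05)
M = Dhat @ chi                        # moments
psi = 0.5 * chi @ Dhat @ chi          # strain energy density, cf. (17)
```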
For a laminated plate it is necessary to model the anisotropic behaviour, see e.g. [32]. A laminate consists of a stack of composite plies, each having a fiber direction oriented at an angle \(\phi \) to the X-axis. Let us note that we consider in this paper only stacking sequences that are symmetric with respect to their mid-plane in order to avoid coupling between tension and bending. For each ply k the orthotropic constitutive matrix may be defined with respect to the orthotropic basis (a bar refers to the orthotropic basis, while quantities without a bar refer to the global basis):
$$\begin{aligned} \bar{\hat{\mathbb {D}}}_{k} = \frac{1}{1-\nu _{12} \nu _{21}} \begin{bmatrix} E_1 &{} \nu _{12} E_2 &{} 0 \\ \nu _{12} E_2 &{} E_2 &{} 0 \\ 0 &{}0 &{}G_{12} \end{bmatrix} \end{aligned}$$
where \(E_1\) relates to the stiffness in the fiber direction and \(E_2\) is the stiffness perpendicular to the fiber direction. The Poisson ratio of the ply is given by \(\nu _{12}\) and the shear modulus is \(G_{12}\). The Poisson ratio \(\nu _{21}= \frac{E_2}{E_1}\nu _{12}\) is a dependent variable. This matrix has to be transformed to the \(X-Y\) coordinate system by the transformation
$$\begin{aligned} \hat{\mathbb {D}}_{k} = \varvec{T}^{-1} \bar{\hat{\mathbb {D}}}_{k} \varvec{T}^{-T} \end{aligned}$$
with
$$\begin{aligned} \varvec{T}^{-1} = \begin{bmatrix} \cos ^2\phi &{} \sin ^2\phi &{} -2 \sin \phi \cos \phi \\ \sin ^2\phi &{} \cos ^2\phi &{} 2 \sin \phi \cos \phi \\ \sin \phi \cos \phi &{}- \sin \phi \cos \phi &{} \cos ^2\phi - \sin ^2\phi \end{bmatrix} \,. \end{aligned}$$
Now an integration over the thickness has to be performed considering all \(n_{pl}\) plies. Analogously to (7), summing over all plies leads to the global material stiffness matrix \(\hat{\mathbb {D}}_{G}\)
$$\begin{aligned} \hat{\mathbb {D}}_{G} = \frac{1}{3} \sum _{k=1}^{n_{pl}} \hat{\mathbb {D}}_{k} (Z_k^3 - Z_{k-1}^3) \end{aligned}$$
where the integration is executed for each ply over the ply thickness \(h_k= Z_k - Z_{k-1}\). Now the strain energy for the laminated plate is given by
$$\begin{aligned} \psi (w)=\frac{1}{2} \hat{\chi } ^T \, \hat{\mathbb {D}}_{G}\, \hat{\chi } \end{aligned}$$
and the moments follow from \(\hat{M}= \hat{\mathbb {D}}_{G}\, \hat{\chi }\).
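For readers who want to reproduce the laminate stiffness, a minimal sketch of (18)–(21) is given below; the helper name, ply data and interface coordinates are assumptions made here for illustration and are not taken from the paper.

```python
import numpy as np

def laminate_bending_stiffness(plies, Z):
    """Global bending stiffness of a symmetric laminate, following (18)-(21).

    plies : list of dicts with keys E1, E2, nu12, G12, phi (fiber angle, radians)
    Z     : through-thickness interface coordinates, len(Z) = len(plies) + 1
    """
    D_G = np.zeros((3, 3))
    for k, p in enumerate(plies):
        nu21 = p['E2'] / p['E1'] * p['nu12']            # dependent Poisson ratio
        fac = 1.0 / (1.0 - p['nu12'] * nu21)
        Dbar = fac * np.array([[p['E1'],             p['nu12'] * p['E2'], 0.0],
                               [p['nu12'] * p['E2'], p['E2'],             0.0],
                               [0.0,                 0.0,                 p['G12']]])
        c, s = np.cos(p['phi']), np.sin(p['phi'])
        Tinv = np.array([[c * c,  s * s, -2 * s * c],
                         [s * s,  c * c,  2 * s * c],
                         [s * c, -s * c,  c * c - s * s]])
        Dk = Tinv @ Dbar @ Tinv.T                        # rotate ply stiffness, (19)
        D_G += Dk * (Z[k + 1]**3 - Z[k]**3) / 3.0        # through-thickness sum, (21)
    return D_G

# symmetric [0/90/90/0] layup of identical plies, total thickness 2h = 0.04
ply = dict(E1=140.0e3, E2=10.0e3, nu12=0.3, G12=5.0e3)
layup = [dict(ply, phi=a) for a in (0.0, np.pi / 2, np.pi / 2, 0.0)]
D_G = laminate_bending_stiffness(layup, Z=np.linspace(-0.02, 0.02, 5))
```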
Basis of the virtual element method for Kirchhoff plates
Most of the mathematical theory of the virtual element method (VEM) for Kirchhoff plates was established in [13]. The theory has been extended to vibrations in [26] and to buckling analysis notably in [25, 27].
We briefly recall some aspects of the theory useful for this paper. The domain \(\Sigma _m\) is partitioned into non-overlapping polygonal elements which need not be convex but are assumed to be star-shaped. Following the framework of the VEM, the space of a virtual Kirchhoff-Love element has to be \(C^1\)-continuous at the inter-element level. A zoo of elements may be defined by four integers, see [13]:
The first one, r, defines the regularity of the polynomial chosen for the deflection w on the contour of each element (and thus of its tangential derivative \(w,_t\)),
the second one, s, defines the regularity of the polynomial chosen for its normal derivative \(w,_n\),
the third one, m, defines the polynomial order of the bi-laplacian inside the element and
the fourth one, k, possibly the most important, corresponds to the order of the method.
Not all possible choices of the quadruplet r, s, m, k are acceptable.
In what follows E denotes an element of a given mesh and e any of the corresponding edges of the element. The space \(P_m\) consists of polynomials of degree less than or equal to m on the corresponding geometrical entity
$$\begin{aligned} V_h= \{w \in H^2(\Sigma _m), \bigtriangleup ^2 w \in P_m(E), w,_t \in P_r(e), w,_n \in P_s(e)\} \end{aligned}$$
Among all relevant choices of r, s, m, k some appear more natural than others. Since the local equation of equilibrium involves the fourth order derivative of w, the choice \(m = k-4\) is the one minimising the number of internal d.o.f.s for a given order. We adopt this condition, which avoids the introduction of internal nodes for the elements studied in this paper. We make use of the convention that \(P_m(E)=\{0\}\) for \(m <0\). The condition \( \bigtriangleup ^2 w \in P_m(E)\) is fundamental for the element to be uni-solvent, that is, ensuring the uniqueness of w inside the element knowing its degrees of freedom. The order of the method is ensured by the following consistency conditions, where \(A_h\) is the approximation of the bilinear form associated with the strain energy of the model:
$$\begin{aligned} A_h(p, q)= A(p, q ) ,\, \forall p \in V_h, \, \forall q \in P_k(E) \end{aligned}$$
A method used in many papers on VEM to recover stability along with consistency, see [9] for solids and [13] for plates, is the following: Let \(\Pi _k\) be the energy projector onto the set of polynomials of degree k, where k corresponds to the degree of the projection \(\Pi _k(w)\), which can be uniquely determined from the degrees of freedom of the element. \(\Pi _k\) is defined as follows:
$$\begin{aligned} \begin{array}{rl} &{}A_h(\Pi _k (p), q)= A_h(p, q ),\, \forall p \in V_h, \, \forall q \in P_k(E) \\ &{} \int _{\Sigma _E}(\nabla \Pi _k p- \nabla p) \,\text{ d } \Sigma =0\\ &{} \int _{\Sigma _E}( \Pi _k p- p) \,\text{ d } \Sigma =0 \end{array} \end{aligned}$$
Here \(A_h(p,q)\) follows from (12) and (13). Generally the projection in (25) is not sufficient for defining a proper stiffness matrix because it leads to a rank deficient matrix. Using the following property of \(\Pi _k\):
$$\begin{aligned} A_h(v_h, w_h)= & {} A_h(\Pi _k (v_h), \Pi _k (w_h) )\nonumber \\&+ A_h(v_h-\Pi _k (v_h), w_h-\Pi _k (w_h) ) ,\, \nonumber \\&\forall v_h \in V_h, \, \forall w_h \in V_h, \end{aligned}$$
we note that the first term is computable as a function of the degrees of freedom of the virtual elements but the second one is not because \((v_h,w_h)\) are not defined by the ansatz (23) within the element. The stability of the method may nevertheless be ensured by replacing
$$\begin{aligned} A_h(v_h-\Pi _k (v_h), w_h-\Pi _k (w_h) ) \end{aligned}$$
by an equivalent partial bilinear form which scales as the original one and is defined by the value of the element unknowns. Let us note that, for a given element, there exist several ways to complement the stiffness matrix following from (25) in a stable manner. All possible stable methods behave asymptotically in the same manner but may lead to rather different results for coarse meshes. This is precisely one of the issues addressed in this paper.
Formulation of the chosen virtual element method for Kirchhoff-Love plates
By using the specific strain energy (9) the virtual element for the Kirchhoff-Love theory can be derived based on
$$\begin{aligned} \begin{aligned} A(v,w) =&\int _{\Sigma _{E}} a(w,v) \ \mathrm{d}\Sigma _{E}= \int _{\Sigma _{E}} \text{ tr }[\varvec{M}(v)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E}\\&=\int _{\Sigma _{E}} div({div}\varvec{M}(v))w \ \mathrm{d}\Sigma _{E}\\&\quad + \int _{\Gamma _{e}} M_{nn}(v){\nabla } w . \, \varvec{N}\, \ \mathrm{d}\Gamma - \int _{\Gamma _{e}} [ {div}\varvec{M}(v). \, \varvec{N}\\&\quad +M_{nt},_{s}(v)]w\, \ \mathrm{d}\Gamma + \sum _{i} \left[ \left| M_{nt}(v)(\varvec{X}_i)\right| \right] w(\varvec{X}_i) \end{aligned} \end{aligned}$$
where \(\varvec{X}_i\) are the coordinates of possible loaded points in the domain and \(\left[ \left| M_{nt}(v)(\varvec{X}_i)\right| \right] \) corresponds to the jump of \(M_{nt}\) at \(\varvec{X}_i\), computed along the curvilinear abscissa of the boundary of the domain.
Let us denote by \(V^1_h\) the space corresponding to element 1 with \(k=2\)
$$\begin{aligned} \begin{aligned} V^1_h=&\{w \in H^2(E), \bigtriangleup ^2 w \in P_{-2}(E), w,_t \in P_2(e), w,_n \in P_1(e)\} \\&= \{w \in H^2(E), \bigtriangleup (\bigtriangleup w )=0, w,_t \in P_2(e), w,_n \in P_1(e)\} \end{aligned} \end{aligned}$$
One obtains from (28) the consistency condition (24) for element 1
$$\begin{aligned} A_h(p,w)= & {} \int _{\Gamma _{e}} M_{nn} (p){\nabla } w . \, \varvec{N}\, \ \mathrm{d}\Gamma \nonumber \\&+ \Sigma _{i} \left[ \left| M_{nt}(p)(\varvec{X}_i)\right| \right] w(\varvec{X}_i) ,\, \nonumber \\&\forall w \in V^1_h, \, \forall p \in P_2\{E\} \,. \end{aligned}$$
Since \(w,_n\) and w are defined on the boundary the left hand side of the equation is known and the consistency condition is easily enforced. It leads to three conditions.
Let us now denote the space \(V^2_h\) corresponding to element 2. For element 2, \(k=3\) and thus
$$\begin{aligned} \begin{aligned} V^2_h&= \{w \in H^2(E), \bigtriangleup ^2 w \in P_{-1}(E), w,_t \in P_2(e), w,_n \in P_2(e)\} \\&= \{w \in H^2(E), \bigtriangleup ^2 w =0, w,_t \in P_2(e), w,_n \in P_2(e)\} \,. \end{aligned} \end{aligned}$$
Again relation (28) yields the consistency condition \(\forall p \in P_3\{E\} \)
$$\begin{aligned} \begin{aligned} A_h(p,w)&=A(p,w)=- \int _{\Gamma _{e}} [ {\text {div}}\varvec{M}(p). \, \varvec{N}+M_{nt},_{s}(p)]w\, \ \mathrm{d}\Gamma \\&\quad + \int _{\Gamma _{e}} M_{nn} (p){\nabla } w . \, \varvec{N}\, \ \mathrm{d}\Gamma \\&\quad + \Sigma _{i} \left[ \left| M_{nt}(p)(\varvec{X}_i)\right| \right] w(\varvec{X}_i) ,\, \forall w \in V^2_h, \, \forall p \in P_3\{E\} \,. \end{aligned} \end{aligned}$$
where w and \(w,_n\) are known at the boundary. Hence the left hand side of the equation is known and the consistency condition is easily enforced.
Let us now consider the projector \(\Pi _k\). For element 1, \(\Pi _2\) is defined as
$$\begin{aligned} A_h(\Pi _2 ({p}), q)&= A_h({p}, q ),\, \forall {p} \in V^1_h, \, \forall q \in P_2(E) \,,\nonumber \\ \int _{\Sigma _E}\nabla \Pi _2 p \,\text{ d } \Sigma&= \int _{\Sigma _E} \nabla p \,\text{ d } \Sigma \,,\, \forall {p} \in V^1_h \,, \\ \int _{\Sigma _E} \Pi _2 p \, \text{ d } \Sigma&= \int _{\Sigma _E} p \,\text{ d } \Sigma \,,\, \forall {p} \in V^1_h \nonumber \end{aligned}$$
Thanks to the consistency conditions the right hand side of the first relation defining \(\Pi _2\) is known. The second and third condition give rise to three additional conditions which completely define \(\Pi _2 ({p})\). It depends on 6 scalars as function of the degrees of freedom of the element. For the same reason, when deriving element 2, the projector \(\Pi _3\), involving 10 constants defining \(\Pi _3 ({p})\), is also perfectly defined.
In what follows we denote the number of elements by \(n_T\). For each element \(n_V\) denotes the number of vertices which is equal to the number \(n_E\) of edges, \(n_N\) is the number of nodes and \(n_D\) describes the number of degrees of freedom. The number of unknowns associated with the projection \(\Pi \) is given by \(n_{\Pi }\). Table 1 summarizes the data for element 1 and element 2 and provides the rank deficiency \(n_R\) associated with the use of the sole projection \(\Pi \) for the definition of the stiffness matrix of the element. Let us note that the normal rank deficiency of the stiffness matrix in bending should be 3, due to rigid body modes.
Table 1 Data for element 1 and 2
Basic aspects of the chosen elements
As previously explained, we have selected the two simplest elements with \(k = 2\) (\((r,s,m,k)=(2,1,-2,2)\)) and \(k=3\) (\((r,s,m,k)=(2,2,-1,3)\)), which correspond to elements suggested in [13].
In both cases no internal degrees of freedom are needed which simplifies possible extensions of the method to non-linear cases. Nevertheless this choice implies some approximation when computing the virtual power of external body load for the element of \(V_h\).
The ansatz for both elements is based on Hermite cubic functions for w, ensuring the continuity of the normal component of the gradient along the edges. In each case, the cubic ansatz for the deflection w along the element edge is given for a boundary segment k of the virtual element, defined by the local nodes (1)–(2) (respectively (1)–(2)–(3)) with three unknowns for w and \(\varvec{\theta }\) per vertex, see Fig. 2.
For element 1 the gradient of w along the edge is quadratic while its normal derivative is only linear. Element 1 is of order 2.
Element 1: lowest order VEM element for Kirchhoff-Love plate
To construct a quadratic ansatz for the gradient along the edge one has to introduce an additional unknown per edge, for example the value of \(\theta _{t}\) at the mid node of the edge. This element is denoted element 2 (Fig. 3).
Element 2: lowest order VEM element for Kirchhoff-Love plate with a quadratic gradient on the edge
It is interesting to compare the second element with the first one since it is of order 3, at the price of only one additional degree of freedom per edge. A second issue studied in the paper is related to the relative effective performance of elements 1 and 2 with respect to the type of stabilisation.
The segments of the elements are indexed by \(k= 1, \ldots , n_E\). In order to use the same notation for the two element formulations we adopt the following local convention for an edge k of the element. The origin of the local coordinate system is situated at node 1 (\(X_k=0\)) and node 2 is located at \(X_k=L_k\) (see Fig. 2). In element 2 a node 3 is added at the middle of the edge (\(X_k=\frac{L_k}{2}\)) with the unknown \(\theta _{t}\). Moreover a non-dimensional local coordinate \(\xi _k=\frac{X_k}{L_k}\) is introduced to define the ansatz functions of both elements.
Hence, in the case of element 1, all element nodes carrying w and its gradient are placed at the vertices. The discrete space of test functions on \(\Sigma _m\) is denoted by \(V_{h}\), and for a conforming approach we require that \(V_{h} \subset V\). This requirement is met by the definition of the shape or basis functions in \(V_{h}\). The \(C^1\)-continuous functions of an element \(\Sigma _{E}\) include quadratic functions (but the space is larger than that).
The tangential and normal vectors \( (\varvec{T},\varvec{N})\) change from segment to segment. In the 2D case, normal \(\varvec{N}\) and tangent \(\varvec{T}\) are given for a segment k as
$$\begin{aligned}&\varvec{N}_k= \left\{ \begin{matrix} n_X \\ n_Y \end{matrix} \right\} _k =\frac{1}{L_k} \left\{ \begin{matrix} -(Y_2-Y_1) \\ X_2-X_1 \end{matrix} \right\} _k\nonumber \\&\quad \text{ and } \,\, \varvec{T}_k= \left\{ \begin{matrix} t_X \\ t_Y \end{matrix} \right\} _k =\frac{1}{L_k} \left\{ \begin{matrix} X_2-X_1 \\ Y_2-Y_1 \end{matrix} \right\} _k \end{aligned}$$
where \(L_k\) is the length of the segment k and (\(X_i\,,Y_i\)) with \(i = \{1,2\}\) are the nodes of the vertices defining the segment.
VEM ansatz functions
The virtual element ansatz functions are defined within the local orthonormal basis associated with an edge, denoted by \( (\varvec{N}, \varvec{T}, {\varvec{E}_Z})\), where \(\varvec{N}\) points outward. To ensure the \(C^1\) continuity requirement of the ansatz functions, the values of w and its normal derivative \(w,_n \) have to match along the edge between two adjacent elements. The following relations hold for the derivatives in normal and tangential direction
$$\begin{aligned}&\nabla w \cdot \varvec{T}=w,_s = \varvec{\theta }\cdot \varvec{N}= \theta _n \,, \end{aligned}$$
$$\begin{aligned}&\nabla w \cdot \varvec{N}= w,_n = -\varvec{\theta }\cdot \varvec{T}= -\theta _t \,. \end{aligned}$$
With the introduced notation the cubic ansatz for the deflection along the element edge is given for both elements on a segment k by
$$\begin{aligned} ({w}_{h})_k\ = w_1 H_1(\xi _k)+ L_k \theta _{1n} H'_1(\xi _k)+ w_2 H_2(\xi _k)+ L_k \theta _{2n} H'_2(\xi _k) \end{aligned}$$
where the basis functions are defined in terms of Hermite cubic splines
$$\begin{aligned} \begin{aligned} H_1(\xi _k)&= 2\xi _k^3-3\xi _k^2+1 \\ H'_1(\xi _k)&= \xi _k^3-2\xi _k^2+\xi _k\\ H_2(\xi _k)&= -2\xi _k^3+3\xi _k^2\\ H'_2(\xi _k)&= \xi _k^3-\xi _k^2 \end{aligned} \end{aligned}$$
The normal rotation \(\theta _n = w,_s\) is obtained as the tangential derivative of the edge ansatz, \((\theta _{hn})_k = \frac{1}{L_k}\,\frac{\partial ({w}_{h})_k}{\partial \xi _k}\), which yields the explicit form
$$\begin{aligned} ({\theta }_{hn})_k= & {} \frac{6}{L_{k}}(\xi _{k}^{2}-\xi _{k}) w_{1} +(3\xi _{k}^{2}-4\xi _{k} +1) \theta _{1n} \nonumber \\&+ \frac{6}{L_{k}} (-\xi _{k}^{2}+\xi _{k}) w_{2} + (3\xi _{k}^{2}-2\xi _{k}) \theta _{2n} \end{aligned}$$
for element 1 and 2.
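The edge ansatz above is easy to check numerically. The following sketch (a hypothetical helper, not the paper's code) evaluates the Hermite basis and the cubic deflection along an edge and verifies the interpolation property at the end nodes.

```python
import numpy as np

def hermite_basis(xi):
    """Hermite cubic basis on the unit interval, as given above."""
    H1  = 2 * xi**3 - 3 * xi**2 + 1
    dH1 = xi**3 - 2 * xi**2 + xi        # basis function of the slope DOF at node 1
    H2  = -2 * xi**3 + 3 * xi**2
    dH2 = xi**3 - xi**2                 # basis function of the slope DOF at node 2
    return H1, dH1, H2, dH2

def w_edge(xi, w1, th1n, w2, th2n, L):
    """Cubic deflection along an edge of length L in terms of the end-node DOFs."""
    H1, dH1, H2, dH2 = hermite_basis(xi)
    return w1 * H1 + L * th1n * dH1 + w2 * H2 + L * th2n * dH2

# interpolation property at the end nodes: w(0) = w1 and w(1) = w2
assert np.isclose(w_edge(0.0, 1.0, 0.2, -0.5, 0.1, L=2.0), 1.0)
assert np.isclose(w_edge(1.0, 1.0, 0.2, -0.5, 0.1, L=2.0), -0.5)
```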
The tangential rotation \((\theta _{ht})_k\) is defined by the ansatz
$$\begin{aligned} ({\theta }_{ht})_k&= (1-\xi _k )\,\theta _{1t} + \xi _k \, \theta _{2t} \,\quad {\text {for element 1, }} \end{aligned}$$
$$\begin{aligned} ({\theta }_{ht})_k&= 2\left( \xi _k-\frac{1}{2} \right) (\xi _k-1) \,\theta _{1t} + 2\xi _k \left( \xi _k-\frac{1}{2} \right) \, \theta _{2t}\nonumber \\&\quad + 4 \xi _k(1-\xi _k) \, \theta _{3t} \quad {\text {for element 2. }} \end{aligned}$$
Derivation of the \(L^2\) projector
Following the virtual element method, we split the ansatz space into a projected part \(\Pi ({w}_h) \) and a remainder.
$$\begin{aligned} {w}_h = \Pi ({w}_h) + [ {w}_h - \Pi ({w}_h)] \end{aligned}$$
As previously discussed, a main aspect of the virtual element method (VEM) is the choice of the projector of the deformation onto a specific ansatz space. This has now to be specified for the ansatz related to the two elements discussed above.
Some useful properties
One of the advantages of the virtual element method is, as we will see in the subsequent sections, that integrals only have to be computed at the boundary. However, polynomials are often given as functions over the surface \(\Sigma _{E}\) of an element. Those polynomials can always be integrated as a line integral over the element boundary. Using the divergence theorem, for a 2D domain one can write in general
$$\begin{aligned} \int _{\Sigma _{E}} f(X,Y) \ \mathrm{d}\Sigma= & {} \frac{1}{2} \int _{\Gamma _{e}} \left[ \begin{array}{c c} \int f(X,Y) \mathrm{d}\,X&, \int f(X,Y) \mathrm{d}\,Y\; \end{array}\right] \nonumber \\&\cdot \varvec{N} \, \mathrm{d}\Gamma \,. \end{aligned}$$
This theorem can now be used for polynomial functions, which yields a simplified treatment
$$\begin{aligned} \int _{\Sigma _{E}} X^p Y^q \ \mathrm{d}\Sigma = \frac{1}{2} \int _{\Gamma _{e}}\left( \frac{X^{p+1} Y^q}{p+1} n_X + \frac{X^p Y^{q+1}}{q+1} n_Y\right) \, \mathrm{d}\Gamma \,. \end{aligned}$$
Hence any polynomial can be integrated exactly over a polygon of arbitrary shape; this even holds for a non-convex virtual element.
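A minimal sketch of this boundary-only integration is given below; the function name and the assumption of counter-clockwise vertex ordering (so that the outward normal can be formed directly from the edge vector) are choices made here for illustration only.

```python
import numpy as np

def polygon_monomial_integral(verts, p, q, ngauss=4):
    """Integrate X^p Y^q over a polygon via the boundary formula above.

    verts : (n, 2) array of vertices, assumed ordered counter-clockwise.
    """
    gp, gw = np.polynomial.legendre.leggauss(ngauss)
    gp, gw = 0.5 * (gp + 1.0), 0.5 * gw              # map Gauss rule to [0, 1]

    total = 0.0
    n = len(verts)
    for k in range(n):
        X1, Y1 = verts[k]
        X2, Y2 = verts[(k + 1) % n]
        L = np.hypot(X2 - X1, Y2 - Y1)
        nx, ny = (Y2 - Y1) / L, -(X2 - X1) / L       # outward normal for CCW ordering
        for xi, w in zip(gp, gw):
            X, Y = X1 + xi * (X2 - X1), Y1 + xi * (Y2 - Y1)
            total += 0.5 * w * L * (X**(p + 1) * Y**q / (p + 1) * nx
                                    + X**p * Y**(q + 1) / (q + 1) * ny)
    return total

# unit square: the exact value of the integral of X*Y is 1/4
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(polygon_monomial_integral(square, 1, 1))       # approx. 0.25
```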
Derivation of element 1
Since element 1 is of order 2, due to the choice of a specific ansatz for \(\theta _t\) in (41), the projection \(\Pi \) has to be quadratic. It is defined at element level by
$$\begin{aligned} \Pi ({w}_h) = \mathbf {H} \,\mathbf {a} = \left[ \begin{matrix} 1&X&Y&\frac{1}{2}X^2&\frac{1}{2}Y^2&XY \end{matrix} \right] \, \left\{ \begin{matrix} a_1 \\ a_2 \\ \ldots \\ a_6 \end{matrix} \right\} \end{aligned}$$
where \(a_i\) are unknown parameters that have to be linked to the element unknowns by the projection procedure.
The curvature of the projection follows from (46) as
$$\begin{aligned} - \left. {\nabla }( \nabla \Pi (w) )_h \right| _e =- \left[ \begin{matrix} a_4 &{} a_6 \\ a_6 &{} a_5 \end{matrix} \right] \end{aligned}$$
which is constant at element level. Let us recall that the \(L^2\) projection is defined as follows, \(\forall w \in V^1_h, \, \forall p \in P_{2}\{E\} \)
$$\begin{aligned} \begin{aligned} \int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(\Pi (w))] \ \mathrm{d}\Sigma _{E}=\int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E} \end{aligned} \end{aligned}$$
Since \(\varvec{\chi }(p)\), with (47), spans the space of constant curvatures, this condition is equivalent to
$$\begin{aligned}&\Sigma _{E} \left. \varvec{\nabla }( \nabla \Pi (w) )_h \right| _e {\mathop {=}\limits ^{\smash {\scriptscriptstyle \mathrm {!}}}}\int _{\Sigma _{E}} \varvec{\nabla }( \nabla w )_h \ \mathrm{d}\Sigma _{E}\nonumber \\&\quad = \frac{1}{2} \int _{\Gamma _{e}} ( \nabla w_h \otimes \varvec{N}+ \varvec{N} \otimes \nabla w_h) \,\ \mathrm{d}\Gamma \,. \end{aligned}$$
The right hand side of (49) can be exactly integrated which yields
$$\begin{aligned} \sum _{k=1}^{n_E}&\int _{0}^{1}\left[ \left( \frac{1}{2 L} w_h(\xi ),_{\xi } \, (\varvec{T} \otimes \varvec{N}+\varvec{N} \otimes \varvec{T}) - \theta _t (\xi ) \,\varvec{N} \otimes \varvec{N}\right) L\,\ \mathrm{d}\xi \right] _k \nonumber \\&= \sum _{k=1}^{n_E}\left[ \frac{1}{2} (w_2-w_1) (\varvec{T} \otimes \varvec{N}+\varvec{N} \otimes \varvec{T}) - L \frac{ \theta _{t1}+\theta _{t2}}{2} \varvec{N} \otimes \varvec{N}\right] _k \,. \end{aligned}$$
The dyadic products \( \varvec{T} \otimes \varvec{N}\) and \( \varvec{N} \otimes \varvec{N}\) change from segment to segment. Hence, by comparing (47) and (50), the unknowns \(a_4\) to \(a_6\) are obtained by inspection, where all contributions related to the segments have to be added. We observe that \(a_4\) to \(a_6\) are defined as linear combinations of the nodal values w and \(\theta _{t}\).
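To make this step concrete, the following sketch assembles the constant curvature coefficients \(a_4\) to \(a_6\) of element 1 from the nodal deflections and rotations. It is a direct transcription of (49)–(50), assuming counter-clockwise vertex ordering; the helper and its interface are illustrative, not the paper's implementation.

```python
import numpy as np

def curvature_coefficients_element1(verts, w, theta):
    """Coefficients a4, a5, a6 of the quadratic projection, following (49)-(50).

    verts : (n, 2) vertex coordinates (counter-clockwise)
    w     : deflection at each vertex
    theta : (n, 2) global rotations (theta_X, theta_Y) at each vertex
    """
    X, Y = verts[:, 0], verts[:, 1]
    area = 0.5 * abs(np.dot(X, np.roll(Y, -1)) - np.dot(Y, np.roll(X, -1)))

    H = np.zeros((2, 2))                 # second gradient of the projection
    n = len(verts)
    for k in range(n):
        j = (k + 1) % n
        d = verts[j] - verts[k]
        L = np.hypot(*d)
        T = d / L
        N = np.array([T[1], -T[0]])      # outward normal for CCW ordering
        th_t = (theta[k] @ T + theta[j] @ T) / 2.0   # mean tangential rotation on edge
        H += 0.5 * (w[j] - w[k]) * (np.outer(T, N) + np.outer(N, T)) \
             - L * th_t * np.outer(N, N)
    H /= area
    return H[0, 0], H[1, 1], H[0, 1]     # a4, a5, a6
```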
The constants \(a_1\) to \(a_3\) still have to be determined. These follow from the last two conditions in (25), which lead to
$$\begin{aligned} \int _{\Sigma _{E}} ( \nabla \Pi (w) )_h \ \mathrm{d}\Sigma _{E} {\mathop {=}\limits ^{\smash {\scriptscriptstyle \mathrm {!}}}}\int _{\Sigma _{E}} ( \nabla w )_h \ \mathrm{d}\Sigma _{E} = \int _{\Gamma _{e}} w_h \varvec{N}\,\ \mathrm{d}\Gamma \end{aligned}$$
$$\begin{aligned} \int _{\Gamma _{e}} \Pi (w_h) \,\ \mathrm{d}\Gamma {\mathop {=}\limits ^{\smash {\scriptscriptstyle \mathrm {!}}}}\int _{\Gamma _{e}} w_h \,\ \mathrm{d}\Gamma \,. \end{aligned}$$
The term on the left hand side in (51) yields with (46)
$$\begin{aligned} \int _{\Sigma _{E}} ( \nabla \Pi (w) )_h \ \mathrm{d}\Sigma _{E}= \int _{\Sigma _{E}} \left\{ \begin{matrix} a_2+a_4X+a_6Y \\ a_3+a_5Y+a_6X \end{matrix} \right\} \,\ \ \mathrm{d}\Sigma _{E} \end{aligned}$$
and involves only polynomials. The right hand side of (51) can be explicitly determined as
$$\begin{aligned} \int _{\Gamma _{e}} w_h \varvec{N}\,\ \mathrm{d}\Gamma&= \sum _{k=1}^{n_E}\int _{0}^{1}[( w_1 H_1(\xi )+ L_k \theta _{1n} H'_1(\xi )\nonumber \\&\quad + w_2 H_2(\xi )+ L_k \theta _{2n} H'_2(\xi ) ) \,\ \mathrm{d}\xi ]_k L_k\, \varvec{N}_k \nonumber \\&= \sum _{k=1}^{n_E} \left[ \frac{1}{2}(w_1+w_2) + \frac{L_k}{12} (\theta _{1n}-\theta _{2n}) \right] _k \, L_k\, \varvec{N}_k \end{aligned}$$
From Eqs. (53) and (51) one derives with (54)
$$\begin{aligned}&\left\{ \begin{matrix} a_2 \\ a_3 \end{matrix} \right\} = \frac{1}{\Sigma _{E}}\left[ \sum _{k=1}^{n_E} \left[ \frac{1}{2}(w_1+w_2) + \frac{L_k}{12} (\theta _{1n}-\theta _{2n}) \right] _k L_k\, \varvec{N}_k\right. \nonumber \\&\quad \left. - \int \limits _{\Sigma _{E}} \left\{ \begin{matrix} a_4X+a_6Y\\ a_5Y+a_6X \end{matrix} \right\} \ \mathrm{d}\Sigma _{E} \right] \end{aligned}$$
Here the last term can be simply integrated over the boundary using relations (45). Thus (55) leads to a linear mapping of \(a_2\) and \(a_3\) in terms of the element unknowns.
The last term that has to be connected to the nodal values of the ansatz is the coefficient \(a_1\). Using Eq. (52) the coefficient \(a_1\) follows directly. With (46) and (52) one writes
$$\begin{aligned}&\int \limits _{\Gamma _{e}} (a_1+a_2 X+ a_3 Y+\frac{a_4}{2} X^2 + \frac{a_5}{2} Y^2 + a_6 X Y)\,\ \mathrm{d}\Gamma \nonumber \\&\quad = \sum _{k=1}^{n_E} \left[ \frac{1}{2}(w_1+w_2) + \frac{L_k}{12} (\theta _{1n}-\theta _{2n}) \right] _k \, L_k \end{aligned}$$
which yields \(a_1\) immediately since \(a_2\) to \(a_6\) are known by the relations (50) and (55).
With this result the \(L^2 \) projection \(\Pi ({w}_h)\) of the virtual element is completely defined in terms of the nodal unknowns of the virtual element. We can combine (50), (55) and (56) in the mapping
$$\begin{aligned} \Pi (w_h) = \mathbb {M}_1(X,Y)\,\mathbf {u}_1 \end{aligned}$$
where \(\mathbf {u}_1= \{ w_1,\theta _{X1},\theta _{Y1} \,, w_2,\theta _{X2},\theta _{Y2}\,\ldots \,, w_{n_N},\theta _{Xn_N},\theta _{Yn_N}\}^T\) is the vector of all nodal unknowns of element 1. Note that the global rotations \((\theta _{X},\theta _{Y})\) are related to the local rotations \((\theta _n,\theta _t)\) by the transformation
$$\begin{aligned} \left[ \begin{array}{c} \theta _{X}\\ \theta _{Y}\end{array}\right] = \left[ \begin{array}{cc}n_X &{} t_X \\ n_Y &{} t_Y\end{array}\right] ^T \left[ \begin{array}{c} \theta _{n}\\ \theta _{t} \end{array}\right] \end{aligned}$$
using (34). The mapping in (57) will not be formed explicitly since it is not necessary when using the software AceGen, see e.g. [22], to automatically derive and code the stiffness matrix associated with element 1.
Derivation of the \(L^2\) projector for element 2
Due to the choice of the ansatz for w and \(\theta _t\) in (38) and (42) element 2 is of order 3. Thus the projection \(\Pi \) is cubic and defined at element level by
$$\begin{aligned} \Pi _3 {w}_h = \mathbf {H} \,\mathbf {a} = \left[ \begin{matrix} 1&X&Y&\frac{1}{2}X^2&\frac{1}{2}Y^2&XY&\frac{1}{6}X^3&\frac{1}{2}X^2Y&\frac{1}{2}XY^2&\frac{1}{6}Y^3 \end{matrix} \right] \, \left\{ \begin{matrix} a_1 \\ a_2 \\ \ldots \\ a_{10} \end{matrix} \right\} \end{aligned}$$
The curvature of the projection follows with (59) as a linear function in X and Y
$$\begin{aligned} - \left. \varvec{\nabla }( \nabla \Pi _3 (w) )_h \right| _e =- \left[ \begin{matrix} (a_4 + a_7 X + a_8 Y)&{} (a_6 + a_8 X + a_9 Y) \\ (a_6 + a_8 X + a_9 Y) &{}( a_5 + a_9 X + a_{10} Y) \end{matrix} \right] \,. \end{aligned}$$
Let us consider again the \(L^2\) projection which is, for element 2, defined as follows, \(\forall w \in V^2_h, \, \forall p \in P_{3}\{E\} \) :
$$\begin{aligned} \begin{aligned} \int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(\Pi _3 (w))] \ \mathrm{d}\Sigma _{E}=\int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E}\,. \end{aligned} \end{aligned}$$
The integral on the right hand side yields with the divergence theorem
$$\begin{aligned} \int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E}= & {} \int _{\Gamma _e} \varvec{\chi }(p) \cdot (\nabla w_h \otimes \varvec{N}) \mathrm{d}\Gamma \nonumber \\&- \int _{\Sigma _{E}} div{\varvec{\chi }(p)} \cdot \nabla w_h \ \mathrm{d}\Sigma _{E}\,. \end{aligned}$$
Note that \(\varvec{\chi }(p)\) is symmetric. Hence the integrand of the first term of the right hand side can also be expressed as \(\varvec{\chi }(p) \cdot (\nabla w_h \otimes \varvec{N})= \varvec{\chi }(p) \cdot \frac{1}{2} (\nabla w_h \otimes \varvec{N} +\varvec{N} \otimes \nabla w_h)\).
Again the integrand of the last term can be transformed using
$$\begin{aligned} div [div{\varvec{\chi }(p)} \, w_h]= div{\varvec{\chi }(p)} \cdot \nabla w_h + w_h \,div [div{\varvec{\chi }(p)} ] \end{aligned}$$
which yields for (62)
$$\begin{aligned} \int \limits _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E}= & {} \int \limits _{\Gamma _e} \varvec{\chi }(p) \cdot (\nabla w_h \otimes \varvec{N}) \mathrm{d}\Gamma \nonumber \\&- \int \limits _{\Gamma _e} div{\varvec{\chi }(p)} \cdot \varvec{N} \,w_h \,d \gamma \nonumber \\&+ \int \limits _{\Sigma _{E}} div [div{\varvec{\chi }(p)}] \,w_h \ \mathrm{d}\Sigma _{E} \end{aligned}$$
For the cubic ansatz in (59) the last term is zero. All integrals on the right hand side can be evaluated using the ansatz functions (38) and (42). Due to the fact that \(div{\varvec{\chi }(p)}\) is constant and the normal vector \( \varvec{N}_k\) is constant at each edge k of the element we can rewrite (64) as
$$\begin{aligned}&\int \limits _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E}= \int \limits _{\Gamma _e} \varvec{\chi }(p) \cdot (\nabla w_h \otimes \varvec{N}) \mathrm{d}\Gamma \nonumber \\&\quad - \sum _{k=1}^{n_E} div{\varvec{\chi }(p)} \cdot \varvec{N}_k \int \limits _{\Gamma _k} w_h \,d \gamma \,. \end{aligned}$$
The first integral of the right hand side can be evaluated analogously to (55). The only difference is that now some integral terms occur where the coordinates X and Y appear. The second integral is exactly the same as the right hand side in (56).
The additional conditions (51) and (52) have to be evaluated as well. The first one yields for the integral on the left side
$$\begin{aligned}&\int _{\Sigma _{E}} ( \nabla \Pi (w) )_h \ \mathrm{d}\Sigma _{E}\nonumber \\&\quad = \int _{\Sigma _{E}} \left\{ \begin{matrix} a_2+a_4X+a_6 Y + \frac{1}{2} a_7 X^2 + a_8 X Y + \frac{1}{2} a_9 Y^2 \\ a_3+a_5Y+a_6X + \frac{1}{2} a_8 X^2 +a_9 X Y + \frac{1}{2} a_{10} Y^2 \end{matrix} \right\} \,\ \ \mathrm{d}\Sigma _{E} \end{aligned}$$
which can be simply integrated over the boundary using relations (45). The right hand side of (51) has the same form as (54)
$$\begin{aligned} \int _{\Gamma _{e}} w_h \varvec{N}\,\ \mathrm{d}\Gamma = \sum _{k=1}^{n_E} \left[ \frac{1}{2}(w_1+w_2) + \frac{L_k}{12} (\theta _{1n}-\theta _{2n}) \right] _k \, L_k\, \varvec{N}_k \end{aligned}$$
and equating (66) and (67) determines \(a_2\) and \(a_3\) of (59) as linear combinations of the element unknowns. The explicit form for the coefficients \(a_2\) and \(a_3\) is equivalent to (55).
Finally the coefficient \(a_1\) can be computed as in (56); the only difference is that now the ansatz function (59) has to be inserted on the left side.
Also for element 2 a mapping can be introduced,
$$\begin{aligned} \Pi _3 (w_h) = \mathbb {M}_2(X,Y)\,\mathbf {u}_2 \end{aligned}$$
where \(\mathbf {u}_2= \{ w_1,\theta _{X1},\theta _{Y1}, \theta _{t1} \,, w_2,\theta _{X2},\theta _{Y2}, \theta _{t2} \,\ldots \,, w_{n_N},\theta _{Xn_N},\theta _{Yn_N}, \theta _{tn_N} \}^T\) is the vector of all nodal unknowns of element 2. Again the local rotations associated with specific edges can be transformed to global rotations using (58).
The generation of the vectors and matrices that appear in the description of both elements can be unified, see Sect. A. This is especially useful when employing the software tool AceGen, see [21, 22] to generate the software code.
Construction of the virtual stiffness matrix: different strategies
The basis for the element development is the total energy, which can be obtained by assembling all element contributions for the \(n_T\) virtual elements,
$$\begin{aligned} U(w_h) = \sum _{e=1}^{n_T} \left[ U_c^e (\Pi (w_h)) + U_{stab}^e (w_h - \Pi (w_h)) \right] \,. \end{aligned}$$
In the following we will first discuss the formulation of the element part that stems from the projection, see last Section. Furthermore, different possibilities for stabilising the virtual plate elements 1 and 2 will be considered.
Part of the virtual element due to projection
With the strain \(\varvec{\chi }\) of the plate and the loading q we obtain the potential energy of an element
$$\begin{aligned} U_c^e [\Pi (w_h)]= \int \limits _{\Sigma _E}\left[ \frac{1}{2} \varvec{\chi } [\Pi (w_h)] \cdot \mathbb {D} \, \varvec{\chi } [\Pi (w_h)] - q\, \Pi (w_h) \right] \,\mathrm{d}\Sigma \end{aligned}$$
with the constitutive tensor \(\mathbb {D} \).
Note that the strain \(\varvec{\chi }\) is constant for element 1 which leads to a trivial evaluation of the first term. For element 2 the strain \(\varvec{\chi }\) is linear in X and Y, hence the integration involves terms up to second order in the coordinates which again can be performed over the boundary using (45).
The projection \(\Pi (w_h)\) is known in terms of the unknowns \((w\,,\theta _X\,,\theta _Y)_k\) at the vertices for elements 1 and 2, and additionally in terms of the rotation \(\theta _t\) at the mid node of each edge for element 2, see (57) and (68). These mappings can be inserted into (70), which yields for element \(e=1,2\)
$$\begin{aligned}&U_c^e [\mathbb {M}_e(X,Y)\,\mathbf {u}_e]\nonumber \\&\quad = \int \limits _{\Sigma _E}\left[ \frac{1}{2} \varvec{\chi } [\mathbb {M}_e(X,Y)\,\mathbf {u}_e] \cdot \mathbb {D} \,\varvec{\chi } [\mathbb {M}_e(X,Y)\,\mathbf {u}_e] - q\, \mathbb {M}_e(X,Y)\,\mathbf {u}_e \right] \mathrm{d}\Sigma \,.\nonumber \\ \end{aligned}$$
Once the integration over the element area is carried out, the potential is just a function of the unknowns of the element. Then the residual vector \(\mathbf{R}_e\) and the stiffness matrix \(\mathbf{K}_e\) follow from
$$\begin{aligned} \mathbf{R}_e = \frac{\partial U_c^e [\mathbf {u}_e]}{\partial \mathbf {u}_e} \quad \text{ and } \,\, \mathbf{K}_e = \frac{\partial \mathbf{R}_e}{\partial \mathbf {u}_e} \,. \end{aligned}$$
for element e.
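Equation (72) states that residual and stiffness are simply the first and second derivatives of the element potential with respect to the element unknowns. In the paper this differentiation is carried out symbolically with AceGen; purely as an illustrative stand-in (all names here are assumptions), the same quantities can be approximated for any scalar potential by finite differences:

```python
import numpy as np

def residual_and_stiffness(U, u, eps=1e-4):
    """Residual R = dU/du and stiffness K = d2U/du2 by central finite differences.

    U : callable returning the element potential for a DOF vector u
    u : current element DOF vector
    """
    n = len(u)
    R = np.zeros(n)
    K = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = eps
        R[i] = (U(u + ei) - U(u - ei)) / (2.0 * eps)
        for j in range(n):
            ej = np.zeros(n); ej[j] = eps
            K[i, j] = (U(u + ei + ej) - U(u + ei - ej)
                       - U(u - ei + ej) + U(u - ei - ej)) / (4.0 * eps**2)
    return R, K

# quadratic potential U = 1/2 u^T A u - f^T u recovers R = A u - f and K = A
A = np.array([[2.0, 0.5], [0.5, 1.0]]); f = np.array([1.0, -1.0])
R, K = residual_and_stiffness(lambda u: 0.5 * u @ A @ u - f @ u, np.array([0.3, 0.7]))
```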
Classical stabilisation
Within the virtual element method the curvature of the projected part of the deflection w is approximated by a constant or a linear part, depending on the element type, as discussed in the last Section. A construction of a virtual element which is based purely on this projection leads to a rank deficient element, see Table 1. Thus the formulation has to be stabilised, see [13, 26]. In what follows we first use the stabilisation suggested in these papers. This leads to a stabilisation operator which addresses the error at all element nodes (vertices and midpoints):
$$\begin{aligned}&\widehat{U}_{stab}^e (w_h- \Pi ({w}_h)) = \frac{D}{2\,h_e^2} \sum _{i=1}^{n_V}\nonumber \\&\left[ \widehat{w}(\mathbf {X}_i)^2 + \left\| \frac{L_{i-1}+L_i}{2} \nabla \widehat{w}(\mathbf {X}_i)\right\| ^2 + \left\| L_i \, \widehat{\theta }_t(\mathbf {X}_i)\right\| ^2 \right] \\&\text{ with } \,\, \widehat{w}(\mathbf {X}_i) = w_h(\mathbf {X}_i) - \Pi ({w}_h)(\mathbf {X}_i)\,, \nonumber \\&\nabla \widehat{w}(\mathbf {X}_i) = \nabla w_h(\mathbf {X}_i) - \nabla \Pi ({w}_h)(\mathbf {X}_i) \,\,\, \text { and } \nonumber \\&\widehat{\theta }_t(\mathbf {X}_i) = \left\{ \begin{array}{ll} \theta _{t\,h}(\mathbf {X}_i) - \Pi (\theta _{t\,h})(\mathbf {X}_i) &{} \text { for el 2} \\ 0 &{} \text { for el 1.} \end{array} \right. \nonumber \end{aligned}$$
Here \(h_e\) is the maximum diameter of the virtual element e; thus \(h_e^2\) can be interpreted as a measure of the element area \(\Sigma _e\). The function \(w_h\) and the projection \(\Pi ({w}_h)\) have to be evaluated at the vertices \(\mathbf {X}_i\).
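Assuming the differences between \(w_h\) and its projection (and their gradients) have already been evaluated at the vertices, the nodal stabilisation energy (73) can be sketched as follows; the helper and its argument layout are illustrative only.

```python
import numpy as np

def nodal_stabilisation_energy(D, h_e, L, w_hat, grad_w_hat, theta_t_hat=None):
    """Classical nodal-error stabilisation energy, following (73).

    D           : bending stiffness
    h_e         : element diameter
    L           : edge lengths; L[i] is the edge from vertex i to vertex i+1
    w_hat       : values of w_h - Pi(w_h) at the vertices
    grad_w_hat  : (n, 2) gradients of w_h - Pi(w_h) at the vertices
    theta_t_hat : tangential-rotation differences (element 2 only)
    """
    U = 0.0
    for i in range(len(w_hat)):
        Lm, Lp = L[i - 1], L[i]                      # the two edges meeting at vertex i
        U += w_hat[i]**2
        U += np.sum((0.5 * (Lm + Lp) * np.asarray(grad_w_hat[i]))**2)
        if theta_t_hat is not None:                  # zero for element 1
            U += (Lp * theta_t_hat[i])**2
    return D / (2.0 * h_e**2) * U
```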
The formulation (73) is now extended to an integral describing the total error on the edge instead of discrete values. This new stabilisation takes into account the distribution of each degree of freedom along the edge
$$\begin{aligned}&\text {Stab 2: }\ \widehat{U}_{stab}^e (w_h- \Pi ({w}_h))\nonumber \\&\quad = \frac{D}{2\,\Sigma _e} \sum _{k=1}^{n_E} \frac{1}{L_k} \int \limits _{\Gamma _{k}}\left[ \widehat{w}(\varvec{X}_k)^2 + \Vert L_k \nabla \widehat{w}(\varvec{X}_k) \Vert ^2 \right] \mathrm{d}\Gamma \,. \end{aligned}$$
Again \(h_e^2\) can be used in this equation instead of \(\Sigma _e\). The edge integral in (74) is evaluated numerically using Gauss quadrature
$$\begin{aligned} \int \limits _{\Gamma _{k}} \,f(\varvec{x}_k) \mathrm{d}\Gamma =L_k \sum _{g=1}^{n_g} w_g f(\varvec{x}_g) \end{aligned}$$
where the Jacobian \(J_\xi \) in the transformation \(\mathrm{d}\Gamma = J_\xi \mathrm{d}\xi \) is the length of the \(k^{\text {th}}\) edge, \(J_\xi =\Vert \frac{\partial \varvec{X}^\Gamma }{\partial \xi }\Vert =L_k\), and \(\xi \in \left[ 0,1\right] \) is the local coordinate.
When employing stabilisation (74) the resulting tangent matrix has full rank. Note that stabilisation (73) is easier to implement than the second one.
Energy stabilisation
A stabilisation technique based on energy differences was first introduced for virtual solid elements in [38]. It is based on the idea of introducing the following stabilisation energy, see e.g. [23]
$$\begin{aligned} U_{stab}(w_h-\Pi (w_h)) = \widehat{U}(w_h) - \widehat{U}(\Pi (w_h))\,. \end{aligned}$$
The second term on the right hand side ensures consistency of the total potential energy
$$\begin{aligned} U(w_h) = U_c(\Pi (w_h))+\widehat{U}(w_h) - \widehat{U}(\Pi (w_h)) \end{aligned}$$
since \(U_{stab}(w_h-\Pi (w_h)) \rightarrow 0\) for \(h \rightarrow 0\) where h relates to the element size.
Often it is convenient to use the same strain energy \(\psi \), see (17), for the consistency part \(U_c (\Pi (w_h))\) and the stabilisation part \(U_{stab}(w_h-\Pi (w_h))\). In that case a stabilisation parameter \(\beta \) is introduced to control the amount of stabilisation. By multiplying \(U_{stab}(w_h-\Pi (w_h))\) by \(\beta \) one obtains with \(U_c(\Pi (w_h))= U [\psi (\Pi (w_h))]\) and \(\widehat{U} (w_h) = U [\psi (w_h)]\)
$$\begin{aligned} U(w_h)= U[\psi (\Pi (w_h))]+ \beta \left[ U[\psi (w_h)] - U[\psi (\Pi (w_h))] \right] \end{aligned}$$
which can be reformulated as
$$\begin{aligned} U(w_h) = (1-\beta )\, U[\psi (\Pi (w_h))] + \beta \,U[\psi (w_h)] \end{aligned}$$
In this formulation it remains to compute \( U[\psi (w_h)] \), which at first glance seems to be impossible since \(w_h\) is not known in the interior of a virtual plate element. The solution is to subdivide the virtual element into triangles as shown in Fig. 4 and to use specific triangular plate elements to evaluate \(U[\psi (w_h)] \).
Internal triangular mesh: a Morley element and b DKT element
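At the level of the element stiffness matrix, (79) amounts to blending the consistency stiffness obtained from the projection with the stiffness assembled on the internal triangle submesh. A minimal sketch (assuming the submesh stiffness has already been expressed in the degrees of freedom of the virtual element, e.g. after condensation of internal mid-node unknowns) reads:

```python
def stabilised_element_stiffness(K_proj, K_sub, beta):
    """Blend of consistency and submesh stiffness corresponding to (79).

    K_proj : stiffness from the projected (consistency) part
    K_sub  : stiffness assembled from the internal Morley/DKT triangles,
             expressed in the DOFs of the virtual element
    beta   : stabilisation parameter
    """
    return (1.0 - beta) * K_proj + beta * K_sub
```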
The chosen elements should, if possible, not introduce extra degrees of freedom. Thus two simple choices are considered: the Morley element, see [29], and the discrete Kirchhoff plate element (DKT), see [6]. In the following we will describe the main features of these elements.
Morley element It is based on six degrees of freedom and has six nodes with one degree of freedom at each node. The nodal values are related to the deflection \(w_i\) at the vertices and the rotations \(\theta _k\) around the tangent of the element at the mid nodes. Thus the Morley element will match only part of the unknowns of the consistency part of element 1 and 2 and introduces extra degrees of freedom at the mid nodes in the interior of the virtual elements due to the submesh, see Fig. 4a.
The unknown nodal values can be combined in the vector \(\mathbf {p}_{\mathcal T}^M = \{w_1\,,w_2\,,w_3\,,\theta _4\,,\theta _5\,,\theta _6\}^T\) which yields
$$\begin{aligned} w_h= \mathbf {N}\, \mathbf {p}_{\mathcal T} = N_1 w_1 + N_2 w_2+ N_3 w_3 + N_4 \theta _4 + N_5 \theta _5 + N_6\theta _6 \end{aligned}$$
where \(N_I\) are quadratic shape functions that can be found explicitly in [37]. This ansatz leads to a non-conforming plate bending element with constant curvature and moments. With the ansatz (80) the plate curvatures can be computed from (15) leading to
$$\begin{aligned} \hat{\chi } = \mathbf {B}_M \,\mathbf {p}_{\mathcal T}^M \end{aligned}$$
where \( \mathbf {B}_M\) includes the derivatives of the ansatz functions \(N_I\). The strain energy of one element \(\Sigma _k^i\) can then be defined following (17) with (16) as:
$$\begin{aligned} U[\psi (w_h)] = \frac{1}{2} \int \limits _{\Sigma _k^i} \hat{\chi } ^T \, \hat{\mathbb {D}}\, \hat{\chi } \,\mathrm{d}A=\frac{1}{2} [\mathbf {p}_{\mathcal T}^M]^T \int \limits _{\Sigma _k^i} \mathbf {B}_M^T\hat{\mathbb {D}}\,\mathbf {B}_M \,\mathrm{d}A \,\mathbf {p}_{\mathcal T}^M \end{aligned}$$
Note that the integrand is constant. Hence the strain energy can simply be written as
$$\begin{aligned} U[\psi (w_h)] ^M = \frac{{\Sigma _k^i} }{2} [\mathbf {p}_{\mathcal T}^M]^T \, \mathbf {B}_M^T \hat{\mathbb {D}} \,\mathbf {B}_M \,\mathbf {p}_{\mathcal T}^M \end{aligned}$$
where \({\Sigma _k^i} \) is the area of one of the internal triangles. Since no integration is needed this yields a very efficient code for the stabilisation part \(\widehat{U}(w_h) \).
DKT element This element uses a shear deformable theory, but enforces the Kirchhoff constraint pointwise. It has three nodes with 3 degrees of freedom per node which are the deflection w and two rotations \(\gamma _x\) and \(\gamma _y\) around the X and Y axis, respectively. Hence the element has the same degrees of freedom as the virtual element 1 but misses the rotation at the mid side used in the virtual element 2. The unknowns are at the vertices, see the submesh in Fig. 4b. The DKT stabilisation does not introduce new unknowns compared to the consistency part. The interpolation leads to the vector of unknowns in the element \(\mathbf {p}_{\mathcal T}^D = \{w_1\,,\gamma _{x1}\,,\gamma _{y1}\,,w_2\,,\gamma _{x2}\,,\gamma _{y2}\,,w_3\,,\gamma _{x3}\,,\gamma _{y3}\,\}^T\). Based on the introduced constraints special quadratic shape functions were derived in [6] which yield compatible rotations \(\beta _x\) and \(\beta _y\)
$$\begin{aligned} \varvec{\beta }= \begin{pmatrix} \beta _X \\ \beta _Y \end{pmatrix} = \begin{pmatrix} \mathbf{H}^T_X \\ \mathbf{H}^T_Y \end{pmatrix} \mathbf {p}_{\mathcal T}^D \end{aligned}$$
in terms of the nodal unknowns \( \mathbf {p}_{\mathcal T}^D\). The derivative of these rotations with respect to X and Y leads to the curvatures within the element
$$\begin{aligned} \hat{\chi } = \begin{pmatrix} \beta _{X,X} \\ \beta _{Y,Y} \\ \beta _{X,Y}+ \beta _{Y,X} \end{pmatrix} = \mathbf {B}_D \,\mathbf {p}_{\mathcal T}^D \end{aligned}$$
which can be used to formulate the strain energy of one element following (17) and (16)
$$\begin{aligned} U[\psi (w_h)] ^D =\frac{1}{2} [\mathbf {p}_{\mathcal T}^D]^T \int \limits _{\Sigma _k^i} \mathbf {B}_D^T \hat{\mathbb {D}} \,\mathbf {B}_D \,\mathrm{d}A \,\mathbf {p}_{\mathcal T}^D\,. \end{aligned}$$
Note that the integrand is of quadratic order and thus has to be numerically integrated using an adequate Gauss quadrature.
Both elements used within the energy stabilisation need a local assembly of all triangular elements in the submesh of the virtual element.
Comparison of the different stabilisations and numerical examples
In this section we first study and compare the two classical stabilisations and then the energetic ones. This allows us to select one which will be used subsequently throughout the paper. The performance of the proposed virtual plate elements will be illustrated by means of numerical examples that are related to applications in engineering. Isotropic material behaviour as well as anisotropic materials and composites will be investigated. All numerical solutions are compared with analytical solutions where available.
Notation used in the examples
In general the following element formulations for plates will be analysed:
VE-1: first order VEM formulation. Degrees of freedom per element edge are: \(\lbrace (w_k, \varvec{\theta }_{k}),\) \((w_{k+1}, \varvec{\theta }_{k+1}) \rbrace \), see Sect. 4.3.2,
VE-2: second order VEM formulation. DOFs per element edge are: \(\lbrace (w_k, \varvec{\theta }_{k})\), \(({\theta }_{t\,k}),\) \((w_{k+1}, \varvec{\theta }_{k+1}) \rbrace \), see Sect. 4.3.3,
FE-M: first order FEM Morley formulation. DOFs per element edge are: \(\lbrace (w_k)\), \(({\theta }_{t\,k}),\) \((w_{k+1}) \rbrace \), see Sect. 5.3
FE-DKT: DKT plate element. DOFs per element edge are: \(\lbrace (w_k, \varvec{\theta }_{k}),\) \((w_{k+1},\) \(\varvec{\theta }_{k+1}) \rbrace \), see Sect. 5.3
The following types of stabilisation will be employed for the virtual plate elements:
st-1a/b: classical, nodal error stabilisation (73), where "a" refers to using the element size \(h_e^2\) and "b" to employing the element area \(\Sigma _e\), both used in the denominator.
st-2a/b: classical, edge-integrated error stabilisation (74) where "a" and "b" have the same meaning as above,
st-M: Morley stabilisation, see Sect. 5.3 and
st-K: Stabilisation with DKT element, see Sect. 5.3.
In general the following types of elements and meshes will be considered
Q1: quadrilateral mesh with 4 edges per element,
T1: triangle mesh with 3 edges per element,
VO-U: Voronoi-type mesh, with uniformly distributed cell seeds,
VO-R: Voronoi-type mesh, with randomly distributed cell seeds.
This means that Q1 meshes consist of quadrilateral elements which are virtual elements with four edges and 12 unknowns (element 1) or 16 unknowns (element 2). In the same way a T1 mesh consists of virtual elements with triangular shape having 9 unknowns (element 1) or 12 unknowns (element 2). The actual number of nodes in a Voronoi type mesh depends on the shape of the Voronoi cells. In our examples the number of edges in a Voronoi-type mesh varies from three to ten.
Study and comparison of the different stabilisations
The first example is used to compare results related to the different stabilisation techniques. Additionally the general convergence characteristics of elements 1 and 2 are reported. A fully clamped square plate of size \(B \times H = 8 \,\mathrm{m} \times 8 \,\mathrm{m}\) is used for this purpose since an analytical solution is available. The plate is subjected to a uniform load \(\bar{q}\), see Fig. 5a. The material parameters are given in Table 2.
Table 2 Material parameters
The analytical results of the clamped plate problem can be found in [34]. Table 3 provides the reference solutions for various quantities for the given data.
Table 3 Analytical solutions for the clamped plate
The study is based on the meshes depicted in Fig. 5b–d for a discretisation with square, elongated rectangular and Voronoi elements, respectively.
Boundary value problem (a), and 3 types of meshes used: square quadrilateral Q1 (b), elongated quadrilateral Q1 (c) and random-seed Voronoi-type VO-R mesh (d)
Study and comparison of classical stabilisation variants
The convergence study for virtual element 1 (VE-1) and virtual element 2 (VE-2) using the two types of classical stabilisation is shown for square elements in Fig. 6, for rectangular elongated elements in Fig. 7 and for regular and unstructured Voronoi meshes in Fig. 8. The convergence is assessed in terms of the deflection at the center of the plate and the energy.
Convergence study of deflection w for elements 1 and 2 for square (\(n_X:n_Y =1:1\)) Q1 meshes
Convergence study of deflection w for elements 1 and 2 for elongated (\(n_X:n_Y =1:8\)) Q1 meshes
Convergence study of energy W and deflection w for elements 1 and 2 for different forms of the elements (structured and unstructured meshes)
One can observe that the choice of stabilisation does not have a large influence for the quadrilateral element VE-1. We will discuss the influence of the stabilisation when using elements of triangular shape in Sect. 7. The difference between stabilisation types "a" and "b", i.e. normalisation with the element size \(h_e^2\) (st-\(\square \)a) or the area \(\Sigma _e\) (st-\(\square \)b), is negligible, although using the area yields slightly better results in all cases. There is also a slight difference between stabilisations 1 and 2: for Q1 meshes stabilisation 1 performs best, whereas for Voronoi meshes stabilisation 2 is marginally better. The best stabilisation thus appears to depend on the mesh type. As expected, the structured meshes perform better than the unstructured Voronoi-type meshes. It is interesting to note that in this example the Q1 meshes yield the best results, which will be explored further in Sect. 7.
Figure 9 depicts contour plots of the deflection w and the bending moment \(M_{XX}\) for the virtual element VE-2:st-1b. Graphically, all mesh types produce similar results and demonstrate that the developed virtual plate elements are capable of computing meaningful engineering solutions. The contour plots of the structured meshes as well as of the unstructured Voronoi meshes report minimum and maximum bending moments (\(\text{ max } \,M_{XX}=3.285\) and \(\text{ min }\, M_{XX}=-1.466\)) that match the analytical results in Table 3 exactly.
Deflection w (a–c) and moments \(M_{XX}\) (d–f) for structured and unstructured meshes; note that \(M_{YY}\) is the same but mirrored across the domain diagonal
Study of energetic stabilisation
While studying the energy stabilisation using the Morley and DKT elements, different issues were encountered. We therefore first discuss the stabilisation of VE-1 using the Morley element before studying the stabilisation of VE-2 using the DKT element.
Energetic stabilisation of VE-1 using the Morley element
Elements VE-1 and Morley are based on ansatz functions that lead to constant curvature within the element. Thus the Morley element was selected for stabilisation. One aspect of the energetic stabilisation is the parameter \(\beta \) which controls the influence of the stabilisation part of the energy. Figure 10 shows convergence curves obtained for the clamped plate under uniform load depending on \(\beta \).
It appears that values of \(\beta \) ranging from 0.4 to 0.8 yield similar results. For smaller values of \(\beta \) the convergence behaviour deteriorates. In total it can be concluded that a stabilisation with the Morley element is not satisfactory, since the best results are achieved with the classical stabilisation.
Energetic stabilisation of VE-2 using the DKT element
Figure 11 depicts the convergence of the DKT element and the VE-2 element for a Q1 mesh. We first note that the convergence order of DKT is lower than two. The convergence curves for a stabilisation with \(\beta =0.2\) and \(\beta =0.05\) show an increased coarse mesh accuracy when compared to the DKT solution. A smaller value of \(\beta \) yields a better convergence. Up to a certain level of mesh refinement both stabilisations even lead to the second order rate of convergence that is achieved with the classical stabilisation (st-1b); for finer meshes, however, the rate degrades. This is in fact not surprising, since a non-vanishing value of \(\beta \) at small element sizes prevents the second order convergence rate expected for VE-2 in the limit \(h_e \rightarrow 0\). This observation led to a recipe in which \(\beta \) is scaled by the area of the element \(\Sigma _e\) in the refined mesh with respect to the element area \(\Sigma _0\) in the initial coarse mesh consisting of one element, leading to the formula \(\beta _e= \frac{1}{2} \frac{\Sigma _e}{\Sigma _0}\). This scaling makes it possible to recover the expected rate of convergence for VE-2, which is parallel to the curve obtained with the classical stabilisation.
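As a small illustration of this scaling rule (a sketch only, not part of the element implementation), the scaled stabilisation parameter decays with the element area and therefore fades away under mesh refinement:

```python
def beta_scaled(area_e, area_0):
    """Scaled stabilisation parameter beta_e = 1/2 * (Sigma_e / Sigma_0).

    area_0 is the element area of the initial coarse mesh (a single element),
    area_e the area of an element of the current refined mesh.
    """
    return 0.5 * area_e / area_0

# Uniform refinement quarters the element area in every step, so
# beta_e = 0.5, 0.125, 0.03125, ... and the stabilisation energy vanishes
# in the limit h_e -> 0, restoring the expected convergence rate of VE-2.
print([beta_scaled(0.25**k, 1.0) for k in range(3)])
```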
Convergence study related to the parameter \(\beta \) on a Q1 mesh for VE-1
Convergence study related to the parameter \(\beta \) at DKT-stabilisation on a Q1 and Voronoi mesh for VE-2
In conclusion, the proposed classical stabilisation is simpler and more efficient than the energetic one and will therefore be used in the rest of the paper. This conclusion could be different in the case of material non-linearities, for which it is not obvious how to propose an efficient classical stabilisation. The results for stabilisations 1b and 2b are quite similar, but stabilisation 1b seems generally to give the best results. In what follows we therefore use stabilisation 1b systematically without referring to it explicitly.
Dealing with singularities, the rhombic plate
Another example is the simply supported rhombic plate, see Fig. 12a, which is more complex due to a singularity. The plate is loaded by a uniform load, with \(H=B= 8\) m. The angle \(\alpha \) in Fig. 12a is chosen as \(\alpha = 30^\mathrm{o}\), which yields an internal, obtuse angle of \(150^\mathrm{o}\). The material parameters were taken from Table 2.
Near the obtuse angles the solution is singular, so that it belongs to \(H^{\gamma -\epsilon }, \, \forall \epsilon >0\) with \(\gamma = 2- \frac{\alpha }{\pi -\alpha }\); for \(\alpha = 30^\mathrm{o}\) this yields \(\gamma = 1.8\).
Rhombic plate geometry and meshes
Rhombic plate with \(\alpha = 30^\mathrm{o}\): Convergence study of the deflection w for Elements 1 and 2 st-2b for regular, Fig. 12b, and Voronoi, see Fig. 12d and e, meshes and adaptively refined meshes, Fig. 12c
Therefore the convergence is governed by the singularity for uniform meshes.
One of the advantages of the VEM is that a mesh, see Fig. 12b, can easily be refined adaptively in a non-uniform manner, see Fig. 12c, which allows a better convergence rate to be recovered. The refinement is based on bisecting \(h_e\) as shown in Fig. 12c. This can simply be done by adding a new node at the middle of an element edge, since the number of nodes can be arbitrary in the virtual element scheme, which avoids the burden of hanging nodes. Here the adaptive refinement is driven by the difference of the energy between element nodes: to keep this difference small, the elements at the obtuse corners are refined, as depicted in Fig. 12c; additionally, the energy of the previous refinement step is used as an indicator for the refinement in the next step. In total five refinement steps were executed towards the obtuse corner to obtain the solution in Fig. 13. Due to the local refinement one can observe a drastic increase of the convergence rate compared to uniform refinement of the mesh by a factor of 2, as shown in Fig. 13. The refinement used is, however, not sufficient to recover the optimal convergence rate, see [5], but it shows the sensitivity of the solution with respect to mesh refinement at the singularity.
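The edge bisection itself is straightforward for a polygonal (virtual) element, since the new mid-edge node is simply appended to the element connectivity. The following lines give a schematic illustration of this step; they are not the refinement driver used to produce Fig. 12c.

```python
import numpy as np

def bisect_edge(coords, elements, elem_id, k):
    """Insert a node at the middle of edge k of element elem_id.

    coords   : list of [x, y] nodal coordinates (extended in place)
    elements : list of node-index lists, one entry per polygonal element
    """
    elem = elements[elem_id]
    a, b = elem[k], elem[(k + 1) % len(elem)]
    mid = 0.5 * (np.asarray(coords[a]) + np.asarray(coords[b]))
    coords.append(mid.tolist())
    new_id = len(coords) - 1
    elem.insert(k + 1, new_id)
    # The same node index has to be inserted into the connectivity of the
    # neighbouring element sharing this edge; because a virtual element may
    # have an arbitrary number of vertices, no hanging nodes appear.
    return new_id
```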
An analytical solution can be found in [28] for different plate angles; however, it is only provided up to three digits. Convergence results for the rhombic plate obtained in finite element studies were reported in [5, 36] with the same three-digit accuracy; for this reason the results obtained on the adaptively refined mesh do not converge linearly. We note that the asymptotic convergence rate of 0.2 (related to the singularity) is recovered for all element formulations with a uniformly refined mesh, which is clearly depicted in Fig. 13.
Moment plot for VE-2:Q1 with \(32\times 32\) elements, St-1b
Simply supported plate, deflection and moment plot on deformed configuration
Finally, Fig. 14 shows the distribution of the bending moments \(M_{XX}\) and \(M_{YY}\) for the rhombic plate using a uniform mesh with \(32 \times 32\) Q1 virtual elements.
Rectangular orthotropic plate
To investigate the convergence behaviour of the new plate elements for orthotropic material behaviour, a rectangular plate is considered for which the analytical solution is provided in [35]. The orthotropic directions coincide with the main axes of the plate.
Using the same notation as in [35] the problem is characterised by a simplified material stiffness matrix, see (21),
$$\begin{aligned} \hat{\mathbb {D}}_{G} = \begin{bmatrix} D_x &{} D_1&{} 0 \\ D_1&{} D_y &{} 0 \\ 0 &{}0 &{}D_{xy}\end{bmatrix}= \frac{t^3}{12} \begin{bmatrix} E_x &{} \hat{E}&{} 0 \\ \hat{E} &{} E_y &{} 0 \\ 0 &{}0 &{}G \end{bmatrix} \end{aligned}$$
where t is the thickness of the plate, whose length is denoted by \(a = 2\) mm and its width by \(b = 1\) mm. The plate is simply supported and subjected to a uniform load q. The data chosen for this example are listed in Table 4.
Table 4 Material parameters of Timoshenko plate example
By noting that \(H = D_1 + 2 D_{xy}\), the deflection, see Fig. 15, can be expressed by the sum of the series
$$\begin{aligned} w(X,Y)= & {} \frac{16 \,q }{\pi ^6} \sum _{m=1,3,5 ..} \sum _{n=1,3,5 ...}\frac{1 }{m n (\frac{m^4}{a^4} D_x + 2 \frac{m^2n^2}{a^2b^2} H + \frac{n^4}{b^4} D_y)} \nonumber \\&\sin \left[ \frac{m \pi }{a}X\right] \sin \left[ \frac{n \pi }{b}Y\right] \,. \end{aligned}$$
We consider for the convergence analysis the deflection at the center \((X,Y)=(\frac{a}{2},\frac{b}{2})\) of the plate, denoted by \(w_c\). The value of \(w_c\) is computed for the given data with Mathematica by employing (88) with \(m=n=1,3,\ldots ,31\) which yields the result \(w_{ref}=- 1.5835858431216134\) mm.
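The evaluation of the series can be sketched as follows; the plate stiffnesses and the load are passed in as arguments (the values of Table 4 are not repeated here), and only the odd indices contribute.

```python
import numpy as np

def navier_deflection(X, Y, q, a, b, Dx, Dy, D1, Dxy, n_max=31):
    """Deflection series (88) of the simply supported orthotropic plate."""
    H = D1 + 2.0 * Dxy
    w = 0.0
    for m in range(1, n_max + 1, 2):
        for n in range(1, n_max + 1, 2):
            denom = m * n * (Dx * m**4 / a**4
                             + 2.0 * H * m**2 * n**2 / (a**2 * b**2)
                             + Dy * n**4 / b**4)
            w += np.sin(m * np.pi * X / a) * np.sin(n * np.pi * Y / b) / denom
    return 16.0 * q / np.pi**6 * w

# Centre deflection used as reference value in the convergence study:
# w_c = navier_deflection(a / 2, b / 2, q, a, b, Dx, Dy, D1, Dxy)
```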
Convergence study of the deflection w for Timoshenko type VEM element 2: st-2b for regular (Fig. 12b) and Voronoi (Fig. 5c) meshes
Figure 16 depicts the error in the maximum deflection in the middle of the plate. We observe a quadratic convergence rate for element 2 for Q1 meshes and for the Voronoi meshes VO-R and VO-U. As in the previous example, the Q1 element performs best; however, its convergence is more erratic. This example underlines that the Kirchhoff-Love virtual elements converge for these more intricate materials with the same order as for isotropic materials.
Plate with anisotropic material as cantilever beam
Next we deal with double cantilever plate specimens made of composite material. This setup was used in [2] to investigate the value of the critical energy release rate depending on the relative angle of a stack of plies adjacent to the considered interlaminar interface.
Composite DCB plate geometry (mm)
The cantilever beam, moment plot on deformed configuration for one \(22.5^\mathrm{o}\) ply and for a \([22.5^\mathrm{o}, -22.5^\mathrm{o}, -22.5^\mathrm{o}, 22.5^\mathrm{o}, -22.5^\mathrm{o}, 22.5^\mathrm{o}, 22.5^\mathrm{o},-22.5^\mathrm{o} ]_s\) composite plate, scaled displacements
For the test setup shown in Fig. 17, bending solutions for three stacking sequences are computed. The plate has length \(B=\) 50 mm and width \(H=\) 20 mm. The plate is assumed to be clamped at the left side while a line load \(\bar{q}\) is applied on the right part. The material under consideration is made of M18/M55J unidirectional plies. In what follows, 1 refers to the fiber direction and 2 to the orthogonal direction inside the ply, see Table 5.
Table 5 Material parameters and geometry data for the orthotropic cantilever beam
We first compute the response for a solution with one ply whose orientation is at \(22.5^\mathrm{o}\). In Fig. 18a the distribution of \(M_{XX}\) is plotted with respect to an amplified deformed configuration. Since the orthotropic axes of the plate do not coincide with the global axes of the cantilever plate, a coupled bending-torsion type of response is produced, as expected. This is the reason why such tests are usually not performed. One prefers testing stacking sequences which avoid this kind of coupling, as is the case for stacking sequences of the type \([\phi , -\phi ,-\phi , \phi ,-\phi ,\phi ,\phi ,-\phi ]_s\). Due to the symmetry of the stacking sequence, a decoupling between tension and bending occurs with respect to the mid-plane, but the bending response remains orthotropic. The total material matrix is computed from (21). For such a stacking sequence and for \(\phi = 22.5^\mathrm{o}\) the distribution of \(M_{XX}\) is plotted on an amplified deformed configuration, see Fig. 18b.
Use of the \(C^1\)-continuous virtual element in a finite element environment
Since the developed virtual plate element can have an arbitrary number of nodes, it is possible to construct a triangular or quadrilateral \(C^1\)-continuous plate element using element 1 or 2. Those elements can be implemented easily in conventional finite element codes. The advantage is that the construction of \(C^1\)-continuous finite plate elements otherwise requires a high ansatz order, as in the TUBA family of triangular elements in [4], or, in the case of quadrilaterals, a composition of four triangular elements, see [18]. The developed formulation provides the necessary \(C^1\)-continuity for Kirchhoff plates with much simpler elements.
It is interesting to investigate how the above derived virtual plate elements compare, as triangular (3-noded) and quadrilateral (4-noded) elements, with classical finite elements for Kirchhoff plates. Within two standard benchmark examples, see e.g. [30], we illustrate the performance of the derived virtual plate elements. The results are compared with finite elements that approximate the curvature in the same way as the derived virtual elements 1 and 2. For the virtual element 1 (VE-1) an equivalent Kirchhoff plate element with constant curvature is the nonconforming Morley element, already described in Sect. 5.3. For element 2 (VE-2) we select a non-conforming triangular plate element developed by [33], and the DKT element, see Sect. 5.3, both of which rely on a linear curvature approximation.
Convergence study of the deflection w under uniform load for VE-1 and -2, compared with finite elements
Clamped plate under uniform load
As a first example we use a clamped plate under uniform load \(\bar{q}\), already discussed in Sect. 6.2. The geometry and material data are the same as in Sect. 6.2 and the analytical solution is given in Table 3. Results related to the convergence behaviour of a number of conforming and non-conforming plate elements are provided in [30] for the case of a uniform load.
Convergence of the deflection w for Elements 1 and 2, compared to Morley, Specht and DKT finite elements for plate under uniform load
We compare with the solutions obtained with the Morley and Specht finite elements and with the DKT formulation, which are plotted next to the results of the VE-1 and VE-2 elements used as standard triangles and quadrilaterals, see Fig. 19. For the pointwise stabilisation "st-1", see (73), Fig. 19a demonstrates that VE-1 has a better coarse mesh accuracy than the Morley element, despite the fact that both elements have a constant approximation of the curvature. In the same way, VE-2 outperforms the DKT and Specht elements, although both elements approximate the curvature with a linear polynomial. It is interesting that the triangular element VE-1 has an extremely good coarse mesh accuracy when using the continuous stabilisation "st-2" in (74), as shown in Fig. 19b. The triangular element VE-1:T1 with stabilisation 2b is simple, efficient and has only three nodes. Thus it qualifies as a \(C^1\)-continuous Kirchhoff plate element for legacy codes.
The rate of convergence is depicted in Fig. 20a for the constant curvature elements and demonstrates the same order of asymptotic convergence for all element formulations. Contrary to that, VE-2 has a higher convergence order when compared to the DKT and Specht elements, as demonstrated in Fig. 20b. Thus the conforming virtual element 2 performs extremely well, both as a triangular and as a quadrilateral element.
Distorted meshes for a \(2 \times 2\), \(4 \times 4\) and \(32 \times 32\) density
A distorted mesh, as depicted in Fig. 21 for quadrilateral and triangular element meshes with different densities, is used to investigate the behaviour of the quadrilateral and triangular virtual plate elements.
The convergence results are depicted in Fig. 22 for the continuous stabilisation "st-2". They show the same behaviour as for the uniform mesh. Again the triangular element has better coarse grid characteristics than the quadrilateral element when using the continuous stabilisation, see Fig. 22a. Also the asymptotic convergence rate is maintained for the distorted mesh, as demonstrated in Fig. 22b.
Convergence study of the deflection w under uniform load for VE-1 and -2 with "st-2b", compared with finite elements on distorted meshes in Fig. 21
Clamped plate under point load
As a second example we apply the triangular and quadrilateral virtual elements to the clamped plate under the point load \(F = 64\) kN. The geometrical and material data are described in Sect. 6.2. The analytical solution is reported in [35] as \(w=-0.0896 \frac{F}{D}\). The plot in Fig. 23 depicts the convergence behaviour of the different element formulations.
As shown in Fig. 23a, the pointwise stabilisation "st-1" yields the best results for VE-2, while Fig. 23b depicts the superior coarse mesh accuracy of element VE-1:T1 for the continuous stabilisation "st-2", as in the previous example. The asymptotic convergence behaviour can be observed in Fig. 24. Here again the coarse mesh accuracy of the VE-1:T1 element is demonstrated in Fig. 24a. As expected, the same asymptotic convergence rate is achieved for VE-1:T1 and VE-1:Q1. Due to the point load the rate of convergence is lower for the higher order ansatz, which can be seen in Fig. 24b. Here VE-2:Q1 shows the best performance when using the pointwise stabilisation.
Convergence study of the deflection w under point load for VE-1 and -2, compared with finite elements
Convergence of the deflection w for Elements 1 and 2, compared to Morley and DKT finite elements for plate under point load
It is impressive to see in both examples that the proposed virtual elements (VE-1:T1 st-2b and VE-2:Q1 st-1b) outperform the DKT element, which, in the engineering literature, is known to be an excellent element. This qualifies both virtual plate elements as candidates for engineering software related to the analysis of thin plates. Especially the very simple triangular element VE-1:T1 st-2b is a good candidate since it fits easily into existing software packages, having only three nodes and the same number of unknowns at each node, so that from an implementation point of view it is equivalent to using the Specht or DKT element.
The virtual element method is a simple and powerful tool to construct \(C^1\)-continuous discretisations as needed in the Kirchhoff-Love theory for plates. This even works for elements with arbitrary shape and number of nodes. In this paper we have recast the theory presented in [13] in a more engineering-oriented form, extended its range of applications compared to [16], and introduced and discussed several possible stabilisation techniques. Various examples and convergence studies were provided to demonstrate the ability of the virtual plate elements to solve engineering problems based on two simple element formulations. The second element is very appealing because, despite its simple formulation, it leads to second order accuracy and has, compared to existing plate elements, a very high coarse mesh accuracy. Moreover, both developed elements behave similarly well for isotropic and anisotropic cases. With its flexibility the VEM is also well suited for non-uniform mesh refinement, which is required to recover the optimal rate of convergence in the case of singularities encountered, for example, in rhombic plates. Finally, it appears that restricting the element shapes to those of classical finite plate elements makes it possible to enrich the set of elements available in legacy codes with accurate \(C^1\)-continuous plate elements that are simpler than the known \(C^1\)-continuous finite plate elements and also much faster. This work could be extended to plate structures in the nonlinear regime, undergoing large deflections and rotations.
Aldakheel F, Hudobivnik B, Wriggers P (2019) Virtual elements for finite thermo-plasticity problems. Comput Mech 64:1347–1360
Allix O, Lévèque D, Perret L (1998) Interlaminar interface model identification and forecast of delamination in composite laminates. Compos Sci Technol 58–5:671–678
Antonietti PF, Manzini G, Verani M (2018) The fully nonconforming virtual element method for biharmonic problems. Math Models Methods Appl Sci 28(02):387–407
Argyris JH, Fried I, Scharpf DW (1968) The TUBA family of plate elements for the matrix displacement method. Aeronaut J 72(692):701–709
Babuska I, Scapolla T (1989) Benchmark computation and performance evaluation for a rhombic plate bending problem. IJNME 28:155–179
Batoz JL, Bathe KJ, Ho LW (1980) A study of three-node triangular plate bending elements. Int J Numer Methods Eng 15(12):1771–1812
Batoz JL, Tahar MB (1982) Evaluation of a new quadrilateral thin plate bending element. Int J Numer Methods Eng 18(11):1655–1677
Bazeley GP, Cheung YK, Irons BM, Zienkiewicz O (1965) Triangular elements in plate bending-conforming and non-conforming solutions. In: Conference on matrix methods in structural mechanics, air force institute of technology, Wright–Patterson, Ohio
Beirão da Veiga L, Brezzi F, Marini L (2013) Virtual elements for linear elasticity problems. SIAM J Numer Anal 51:794–812
Beirão da Veiga L, Lovadina C, Mora D (2015) A virtual element method for elastic and inelastic problems on polytope meshes. Comput Methods Appl Mech Eng 295:327–346
Beirão da Veiga L, Mora D, Rivera G (2019) Virtual elements for a shear-deflection formulation of Reissner–Mindlin plates. Math Comput 88(315):149–178
Bell K (1969) A refined triangular plate bending finite element. Int J Numer Methods Eng 1(1):101–122
Brezzi F, Marini LD (2013) Virtual element methods for plate bending problems. Comput Methods Appl Mech Eng 253:455–462
Bufler H, Stein E (1970) Zur Plattenberechnung mittels finiter Elemente. Ingenieurarchiv 39:248–260
Chi H, Beirão da Veiga L, Paulino G (2017) Some basic formulations of the virtual element method (VEM) for finite deformations. Comput Methods Appl Mech Eng 318:148–192
Chinosi C, Marini LD (2016) Virtual element method for fourth order problems: L2-estimates. Comput Math Appl 72(8):1959–1967
Clough RW, Tocher JL (1965) Finite element stiffness matrices for analysis of plate bending. In: Proceedings of the first conference on matrix methods in structural mechanics, pp 515–546
De Veubeke BF (1968) A conforming finite element for plate bending. Int J Solids Struct 4(1):95–108
Hughes TJR (2012) The finite element method: linear static and dynamic finite element analysis. Courier Corporation, North Chelmsford
Hughes TJR, Taylor RL, Kanoknukulchai W (1977) A simple and efficient finite element for plate bending. Int J Numer Methods Eng 11:1529–1547
Korelc J (2000) Automatic generation of numerical codes with introduction to AceGen 4.0 symbolic code generator. http://www.fgg.uni-lj.si/Symech
Korelc J, Wriggers P (2016) Automation of finite element methods. Springer, Berlin
Krysl P (2015) Mean-strain eight-node hexahedron with stabilization by energy sampling. Int J Numer Methods Eng 103:437–449
Melosh RJ (1961) A stiffness matrix for the analysis of thin plates in bending. J Aerospace Sci 28(1):34–42
Meng J, Mei L (2020) A linear virtual element method for the Kirchhoff plate buckling problem. Appl Math Lett 103:106188
Mora D, Rivera G, Velásquez I (2018) A virtual element method for the vibration problem of Kirchhoff plates. ESAIM Math Modell Numer Anal 52(4):1437–1456
Mora D, Velásquez I (2020) Virtual element for the buckling problem of Kirchhoff-love plates. Comput Methods Appl Mech Eng 360:112687
Morley L (1962) Bending of a simply supported rhombic plate under uniform normal loading. Q J Mech Appl Math 15(4):413–426
Morley LSD (1968) The triangular equilibrium element in the solution of plate bending problems. Aeronaut Q 19(2):149–169
Onate E (2013) Structural analysis with the finite element method, beams, plates and shells, vol 2. Springer, Berlin
Reddy JN (1999) Theory and analysis of elastic plates and shells. CRC Press, Boca Raton
Reddy JN (2004) Mechanics of laminated composite plates and shells: theory and analysis. CRC Press, Boca Raton
Specht B (1988) Modified shape functions for the three-node plate bending element passing the patch test. Int J Numer Methods Eng 26:705–715
Taylor RL, Govindjee S (2004) Solution of clamped rectangular plate problems. Commun Numer Methods Eng 20(10):757–765
Timoshenko S, Woinowsky-Krieger S (1959) Theory of plates and shells, vol 2. McGraw-hill, New York
Wang D, Katz I, Szabo B (1984) h-and p-version finite element analyses of a rhombic plate. Int J Numer Methods Eng 20(8):1399–1405
Wood R (1984) A shape function routine for the constant moment triangular plate bending element. Eng Comput 1:189–197
Wriggers P, Reddy B, Rust W, Hudobivnik B (2017) Efficient virtual element formulations for compressible and incompressible finite deformations. Comput Mech 60:253–268
Zienkiewicz OC, Taylor RL, Too JM (1971) Reduced integration technique in general analysis of plates and shells. Int J Numer Methods Eng 3:275–290
O. Allix would like to thank the Alexander von Humboldt Foundation for its support through the Gay-Lussac-Humboldt prize, which made it possible to closely interact with the colleagues from the Institute of Continuum Mechanics at Leibniz University Hannover in this joint work. B. Hudobivnik and P. Wriggers were funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD, EXC 2122 (Project No.: 390833453).
Open Access funding enabled and organized by Projekt DEAL.
Institute for Continuum Mechanics, Leibniz Universität Hannover, Hannover, Germany
P. Wriggers & B. Hudobivnik
Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering - Innovation Across Disciplines), Leibniz Universität Hannover, Hannover, Germany
ENS Paris-Saclay, CNRS, LMT - Laboratoire de Mécanique et Technologie, Université Paris-Saclay, 91190, Gif-sur-Yvette, France
O. Allix
P. Wriggers
B. Hudobivnik
Correspondence to P. Wriggers.
Calculating the projection with AceGen
For the actual generation and implementation of the two elements and the associated stabilisations, the software package AceGen, see [21, 22], was employed. In the following, the essential steps and matrices that form the basis of the code derivation are summarized.
The general way to derive the elements with the help of the automatic software generation tool AceGen is described in this appendix. Some steps of the derivation of residual and tangent can be greatly simplified by using this software.
To obtain the unknown parameters \(\mathbf{{a}}\) in (46) and (59), the following relations are considered, where \(w_\Pi = \Pi (w_h)\) and \(\nabla w_\Pi = \nabla \Pi (w_h)\) are used to simplify notation.
1. Equality of mean displacements:
$$\begin{aligned} \int _{\Gamma _{e}} w_\Pi \,\ \mathrm{d}\Gamma {\mathop {=}\limits ^{\smash {\scriptscriptstyle \mathrm {!}}}}\int _{\Gamma _{e}} w_h \,\ \mathrm{d}\Gamma \end{aligned}$$
2. Equality of gradients:
$$\begin{aligned} \int _{\Sigma _{E}} \nabla w _\Pi \ \mathrm{d}\Sigma _{E} = \int _{\Sigma _{E}} ( \nabla w )_h \ \mathrm{d}\Sigma _{E} \end{aligned}$$
3. Equality of energetic projections in terms of the ansatz (46) and (59). Additionally, \(p=\mathbf{{H}} \mathbf{{a}}^P\) is introduced, which has the role of a test function:
$$\begin{aligned} \int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w_\Pi )] \ \mathrm{d}\Sigma _{E}=\int _{\Sigma _{E}} \text{ tr }[\varvec{\chi }(p)\varvec{\chi }(w)] \ \mathrm{d}\Sigma _{E} , \end{aligned}$$
where \(\mathbf{{a}}^p\) are the 10/6 parameters of the test function p for element 1 and 2, respectively.
First, the left hand side of equations (89) to (91) is defined by applying the divergence theorem (45) and integration by parts (see Sect. 4.3.3) to transform area integrals into edge integrals
$$\begin{aligned} G =&\sum _{k=1}^{n_V} \sum _{g=1}^{n_g} L_k w_g \Bigg ( w_\Pi (\varvec{x}_g) a^p_1 \nonumber \\&+ \Big (\left( \left[ \begin{array}{cc}a^p_2&a^p_3\end{array}\right] + \nabla \cdot \varvec{\chi }_p\right) w_\Pi (\varvec{x}_g) - \nabla w_\Pi (\varvec{x}_g) \varvec{\chi }(p) \Big )\cdot \varvec{N}_k \Bigg ), \end{aligned}$$
and similarly, the right hand side is formulated
$$\begin{aligned} b =&\sum _{k=1}^{n_V} \sum _{g=1}^{n_g}L_k w_g \Bigg ( w_h(\varvec{x}_g) a^p_1 \nonumber \\&+ \Big (\left( \left[ \begin{array}{cc}a^p_2&a^p_3\end{array}\right] + \nabla \cdot \varvec{\chi }_p\right) w_h(\varvec{x}_g) - \nabla w_h(\varvec{x}_g) \varvec{\chi }(p) \Big )\cdot \varvec{N}_k \Bigg ) \end{aligned}$$
Then a matrix \(\varvec{G}\) and a vector \(\varvec{b}\) can be obtained by differentiation of the expressions G and b with respect to the unknowns of the test function \(\mathbf{{a}}^p\) and the unknown parameters \(\mathbf{{a}}\)
$$\begin{aligned} \mathbf{{G}} = \frac{{\mathrm{d}}^2 G}{{\mathrm{d}}\mathbf{{a}}^p\; {\mathrm{d}}\mathbf{{a}}} \text { and } \mathbf{{b}} = \frac{{\mathrm{d}}b}{{\mathrm{d}}\mathbf{{a}}^p} \end{aligned}$$
This yields a system of equations \(\mathbf{{G}}\; \mathbf{{a}} = \mathbf{{b}}(\mathbf{{u}}_e)\), where the parameters \(\mathbf{{a}}\) depend on the unknowns of the element \(\mathbf{{u}}_e\). Note that \(\mathbf{{u}}_e=\mathbf{{w}}_e \bigcup \varvec{\theta }_e\) is the vector of all element unknowns, i.e. displacements and rotations. By solving the linear system \( \mathbf{{a}} = \mathbf{{G}}^{-1} \mathbf{{b}}(\mathbf{{u}}_e)\) one obtains \(\mathbf{{a}}\) as a function of \(\mathbf{{u}}_e\). Inserting this result into the ansatz functions in (46) and (59) for elements 1 and 2, the mappings in (57) and (68) can be computed.
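Outside of AceGen, in a plain numerical setting, this last step amounts to one small linear solve per element. The sketch below assumes that G and the matrix representation of b, which is linear in the element unknowns, have already been assembled.

```python
import numpy as np

def projection_map(G, b_mat, u_e):
    """Solve G a = b(u_e) for the projection parameters a.

    G     : (n_a, n_a) matrix  d^2 G / (d a^p d a)
    b_mat : (n_a, n_dof) matrix with b(u_e) = b_mat @ u_e
    u_e   : element unknowns (deflections and rotations)

    P = G^{-1} b_mat is the discrete projection operator that maps the
    element unknowns onto the polynomial parameters of the ansatz.
    """
    P = np.linalg.solve(G, b_mat)   # computed once per element
    return P @ u_e, P
```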
Wriggers, P., Hudobivnik, B. & Allix, O. On two simple virtual Kirchhoff-Love plate elements for isotropic and anisotropic materials. Comput Mech (2021). https://doi.org/10.1007/s00466-021-02106-1
Received: 25 May 2021
Virtual element method (VEM)
Kirchhoff-Love theory
Virtual plate element
Isotropic/anisotropic material
Finite plate element
Should the US have locked heaven's door?
Reassessing the benefits of postwar immigration
Xavier Chojnicki1,2,
Frédéric Docquier3,4 &
Lionel Ragot5,6
Journal of Population Economics, volume 24, pages 317–359 (2011)
This paper examines the economic impact of the second great immigration wave (1945–2000) on the US economy. Our analysis relies on a computable general equilibrium model combining the major interactions between immigrants and natives (labor market impact, fiscal impact, capital deepening, endogenous education, endogenous inequality). Contrary to recent studies, we show that immigration induced important net gains and small redistributive effects among natives. According to our simulations, the postwar US immigration is beneficial for all native cohorts and all skill groups. Nevertheless, the gains would have been larger if the US had conducted a more selective immigration policy.
See Section 2 for a survey of the mechanisms at work.
The term heaven's door refers here to the title of Borjas' remarkable book (Borjas 1999b).
See Borjas (1994), Lee and Miller (1997, 2000), Bonin et al. (2000), and Auerbach and Oreopoulos (2000) on the public finance impact of immigration.
Immigrants are defined as individuals who were foreign-born and whose parents were non US citizens.
RiosRull (1992), Storesletten (2000), or Fehr et al. (2004) use a different method. They assume that agents aged j′ to j″ (say, 23 to 45) give birth to fractions of children at the beginning of each period.
Another solution in the literature to deal with uncertainty consists in assuming, as in Imrohoroglu (1998), that bequests are taxed at a 100% rate by the government and redistributed as a lump-sum uniform amount to all surviving adults. As demonstrated thereafter, the approach adopted here allows for reasonable wage and wealth profiles.
At each date, the composite good is taken as a numeraire. The spot price is thus normalized to one.
We do not explicitly model the impact of illegal immigration as in Storesletten (2000).
The log-linear process \(\ln \big[ \beta _{j,t}^{m}\big] =.2\times \ln \big[ \beta _{j,t}^{l}\big] +.8\times \ln \big[ \beta _{j,t}^{h}\big]\) gives a good approximation of mortality differential per race and per age.
This figure is only used as a target value since the consumption tax rate is endogenously calculated to balance the government budget constraint.
Including medicare, medicaid, unemployment, AFDC, food stamps, and general welfare.
The actual wage gap is computed from Census data.
In 1960, 66% of male immigrants (against 53% for natives) were high school dropouts and 10% (against 11% for natives) were college graduates. Things have changed since the implementation of the amendment act. Nowadays, migrants are concentrated at the two extremities of the skill structure. For example, in 1998, 34% of male immigrants were high school dropouts against 9% for natives, and 13% held a master's degree against 10% for natives.
Henceforth, BFK.
Auerbach AJ, Kotlikoff LJ (1987) Dynamic fiscal policy. Cambridge University Press, Cambridge.
Auerbach AJ, Oreopoulos P (2000) The fiscal effects of US immigration: a generational accounting perspective. In: Poterba J (ed) Tax policy and the economy, vol 14. MIT, Cambridge, pp 123–156
Ben-Porath Y (1967) The production of human capital and the life cycle of earnings. J Polit Econ 75(4):352–365
Blondal S, Scarpetta S (1997) Early retirement in OECD countries: the role of social security systems. OECD Econ Stud 29:7–54
Bonin H, Raffelhüschen B, Walliser J (2000) Can immigration alleviate the demographic burden. FinanzArchiv 57
Borjas GJ (1990) Friends or strangers: the impact of immigrants on the US economy. Basic Books, New York
Borjas GJ (1993) The impact of immigrants on employment opportunities of natives in OECD. The changing course of international migration. Paris
Borjas GJ (1994) The economics of immigration. J Econ Lit 32(4):1667–1717
Borjas GJ (1995) The economic benefits from immigration. J Econ Perspect 9(2):3–22
Borjas GJ (1999a) Immigration and welfare magnets. J Labor Econ 17(4):607–637
Borjas GJ (1999b) Heaven's door: immigration policy and the American economy. Princeton University Press, Princeton
Borjas GJ (2003) The labor demand curve is downward sloping: reexamining the impact of immigration on the labor market. Q J Econ 118(4):1335–1374
Borjas GJ (2009) The analytics of the wage effect of immigration. NBER working paper 14796
Borjas GJ, Katz LF (2007) The evolution of the Mexican-born workforce in the US labor market. In: Borjas GJ (ed) Mexican immigration to the United-States. University of Chicago Press, Chicago, pp 13–55
Borjas GJ, Freeman RB, Katz LF (1997) How much do immigration and trade affect labor market outcomes? Brookings Pap Econ Act 10:1–90
Borjas GJ, Grogger J, Hanson GH (2008) Imperfect substitution between immigrants and natives: a reappraisal. NBER working paper 13887
Brown C (1990) Episodes in the public debt history of the United States. In: Dornbush R, Draghi M (eds) Public debt management: theory and history. Cambridge University Press, Cambridge, pp 229–254
Card D, Lemieux T (2001) Can falling supply explain the rising return to college for younger men? A cohort-based analysis. Q J Econ 116(2):705–746
Cheeseman Day J, Bauman KJ (2000) Have we reached the top? Educational attainment projections of the US population. Working paper 43, Population Division, US Census Bureau
Chiswick B (1989) The impact of immigration on the human capital of natives. J Labor Econ 7(4):464–486
De la Croix D, Docquier F (2007) School attendance and skill premiums in France and the US: a general equilibrium approach. Fisc Stud 28(4):383–416
De la Croix D, Docquier F, Liégeois P (2007) Income growth in the 21st century: forecasts with an overlapping generations model. Int J Forecast 23(4):621–635
Fehr H, Jokisch S, Kotlikoff LJ (2004) The role of immigration in dealing with the developed world's demographic transition. NBER working paper 10512
Friedberg RM, Hunt J (1995) The impact of immigrants on the host country wages, employment and growth. J Econ Perspect 9(2):23–44
Gokhale J, Page BR, Sturrock JR (1999) Generational accounts for the United States: an update. In: Auerbach AJ, Kotlikoff LJ, Leibfritz W (eds) Generational accounting around the world. NBER Books, The University of Chicago Press, Chicago
Hao L (2004) Wealth of immigrant and native-born Americans. Int Migr Rev 38(2):518–546
Heckman J, Lochner L, Taber C (1998) Explaining rising wage inequality: explorations with a dynamic general equilibrium model of labor earnings with heterogeneous agents. Rev Econ Dyn 1(1):1–58
Imrohoroglu S (1998) A quantitative analysis of capital income taxation. Int Econ Rev 39(2):307–328
Jean S, Jimenez M (2007) The unemployment impact of immigration in OECD countries. Economic Department working paper 563. OECD
Lee RD, Miller TW (1997) The lifetime fiscal impacts of immigrants and their descendants: a longitudinal analysis. In: Smith J, Edmonston B (eds) The new Americans. National Academy Press, Washington, DC, pp 297–362
Lee RD, Miller TW (2000) Immigration, social security and broader fiscal impacts. Am Econ Rev 90(2):350–354
Levine P, Lotti E, Pearlman J (2003) The immigration surplus revisited in a general equilibrium model with endogenous growth. Discussion paper 02/03, University of Surrey
Ottaviano G, Perri G (2006) Rethinking the effects of immigration on wages. NBER working paper 12497
Ottaviano G, Perri G (2008) Immigration and national wages: clarifying the theory and the empirics. NBER working paper 14188
Razin A, Sadka E (1999) Migration and pension with international capital mobility. J Public Econ 74(1):141–150
Razin A, Sadka E (2004) Welfare migration: is the net fiscal burden a good measure of its economic impact on the welfare of the native-born population? NBER working paper 10682
RiosRull J-V (1992) Population changes and capital accumulation: the aging of the baby boom. Manuscript, Carnegie Mellon University, Pittsburgh
Simon JL (1989) The economic consequences of immigration. Basil Blackwell, Oxford
Sims C (1990) Solving the stochastic growth model by backsolving with a particular non linear form for the decision rule. J Bus Econ Stat 8(1):45–47
Storesletten K (2000) Sustaining fiscal policy through immigration. J Polit Econ 108(2):300–323
Wasmer E (2001a) Measuring human capital in the labour market: the supply of experience in 8 OECD countries. Eur Econ Rev 45(4–6):861–874
Wasmer E (2001b) Between-group competition in the labour market and the rising returns to skill: US and France 1964–2000. CEPR discussion paper 2798
Yaari M (1965) Uncertain lifetime, life insurance and the theory of the consumer. Rev Econ Stud 32(2):137–150
We are grateful to Alan Auerbach, Tim Miller, and Philip Oreopoulos for transmitting their dataset. The second author acknowledges financial support from the ARC convention on "Geographical mobility of factors" (convention ARC 09/14-019) and from the Marie-Curie research and training network "Transnationality of Migrants" (TOM). We thank two anonymous referees for their helpful comments and suggestions on an earlier version of this paper. The usual disclaimers apply.
EQUIPPE, University of Lille 2, 1 place Déliot, 59000, Lille, France
CEPII, 9 rue Georges Pitard, Paris, 75015, France
FNRS, National Fund for Scientific Research, Brussels, Belgium
Frédéric Docquier
IRES, Catholic University of Louvain, 3, Place Montesquieu, 1348, Louvain-La-Neuve, Belgium
EQUIPPE, Faculté des Sciences Économiques et Sociales, University of Lille 1, 59655, Villeneuve d'Ascq Cedex, France
CES, University of Paris 1, Paris, France
Correspondence to Xavier Chojnicki.
Responsible editor: Alessandro Cigno
Appendix A: Income inequality: "factor proportions approach" vs general equilibrium
So as to correctly account for migration effects on the labor market, the technological assumptions regarding the production function have to integrate the fact that workers belonging to different skill and experience groups are not perfect substitutes. To simplify the analysis, we assume in this paper that the stocks of labor, education, and experience are homogeneous, so that an additional year of experience of a high-skill worker contributes to productivity in the same way as an additional year of experience of a low-skill worker. Recently, numerous papers have accounted for the effect of migration on the labor market by assuming that the labor supply incorporates the contribution of workers according to their education and experience level (Borjas 2003, 2009; Borjas and Katz 2007; Borjas et al. 2008; Ottaviano and Perri 2006, 2008). These papers use the "factor proportions approach", which consists in a partial equilibrium analysis based on nested CES production functions. The aggregate production function is given by Eq. 2. The aggregate labor input Q_t is defined as:
$$ Q_{t}=\left[ \sum_{i}\alpha _{it}L_{it}^{\rho }\right] ^{1/\rho}, \label{aeq1} $$
where i is an index representing the educational level, 1/(1 − ρ) is the elasticity of substitution between workers with different educational levels, and ∑_i α_it = 1. Borjas, Freeman, and Katz (BFK) have used this production function with only two inputs (high-skill labor, L_st, and low-skill labor, L_u).
More recently, Borjas (2003) takes into account the experience level and assumes, within each educational group, that workers with different experience are imperfect substitutes:
$$ L_{it}=\left[ \sum_j \alpha_{ij} L_{ijt}^{\eta } \right] ^{1/\eta}, \label{aeq2} $$
where L_ijt gives the number of workers with education i and experience j at time t, 1/(1 − η) is the elasticity of substitution between workers in the same education group but with different experience levels, and ∑_j α_ij = 1.
Accounting for imperfect substitution between foreign and US workers within the same education-experience group is one of the main methodological contributions of Ottaviano and Perri (2006) to the "factor proportions approach." The aggregate L_ij incorporates the contributions of home-born workers (L_ijn) and foreign-born workers (L_ijm):
$$ L_{ijt}=\left[ \sum_{k=n,m} \alpha_{ijkt} L_{ijkt}^{\beta } \right] ^{1/\beta}, \label{aeq3} $$
where 1/(1 − β) is the elasticity of substitution between US-born and foreign-born workers belonging to the same education and experience group.
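For illustration, the three nested aggregators can be evaluated with a short routine of the following kind; it is only a sketch of the nesting structure, and the share parameters and labor supplies are placeholders rather than calibrated values.

```python
import numpy as np

def ces(alphas, inputs, rho):
    """CES aggregator  [ sum_i alpha_i * x_i**rho ]**(1/rho)."""
    alphas = np.asarray(alphas, dtype=float)
    inputs = np.asarray(inputs, dtype=float)
    return (alphas * inputs**rho).sum() ** (1.0 / rho)

def aggregate_labor(alpha_edu, alpha_exp, alpha_origin, L, rho, eta, beta):
    """Three-level nesting: origins within experience cells within education groups.

    L[i][j] lists the labor supplies of natives and immigrants in education
    group i and experience group j; the alpha arrays hold the matching share
    parameters, assumed to sum to one within each nest.
    """
    L_edu = []
    for i, groups in enumerate(L):
        L_exp = [ces(alpha_origin[i][j], groups[j], beta) for j in range(len(groups))]
        L_edu.append(ces(alpha_exp[i], L_exp, eta))
    return ces(alpha_edu, L_edu, rho)
```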
BFK approach
Basically, the BFK approach yields the following relationship between relative wages and relative labor supplies:
$$ \ln \frac{W_{st}}{W_{ut}}=(1-\rho )\left( D_{t}-\ln \frac{L_{st}}{L_{ut}} \right), $$
where D_t stands for the log of relative demand shifts for high-skill workers.
Denoting by L_in,t and L_im,t the labor supply of skill i = s,u of natives and immigrants, respectively, the national supply of skill group i at time t can be written as:
$$ L_{it}=L_{in,t}+L_{im,t} $$
$$ \ln \frac{L_{st}}{L_{ut}}=\ln \frac{L_{sn,t}}{L_{un,t}}+\ln \left( 1+\frac{ L_{sm,t}}{L_{sn,t}}\right) -\ln \left( 1+\frac{L_{um,t}}{L_{un,t}}\right) $$
so that the contribution of immigration (IMC_t) to the log of relative wages is given, in the BFK approach, by:
$$ IMC_{t}=(1-\rho )\left[ \ln \left( 1+\frac{L_{sm,t}}{L_{sn,t}}\right) -\ln \left( 1+\frac{L_{um,t}}{L_{un,t}}\right) \right] $$
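For illustration, the expression above can be evaluated directly; the labor supplies in the example call are placeholders and not the Census figures used below.

```python
import numpy as np

def imc_bfk(L_sn, L_sm, L_un, L_um, rho):
    """Immigration contribution IMC_t in the BFK specification.

    L_sn, L_sm : native and immigrant high-skill labor supplies
    L_un, L_um : native and immigrant low-skill labor supplies
    rho        : CES parameter, 1/(1 - rho) is the elasticity of substitution
    """
    return (1.0 - rho) * (np.log1p(L_sm / L_sn) - np.log1p(L_um / L_un))

# Purely illustrative inputs:
print(imc_bfk(L_sn=60.0, L_sm=5.0, L_un=40.0, L_um=10.0, rho=0.7))
```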
Applying such a "factor proportion" technique to our population data and using ρ = 0.7, the post-1940 immigration accounts for 33% of the 0.313 log point increase in wage differential between medium-skill and low-skill workers from 1940 to 2000. As shown in Table 5, such a contribution falls to 11% with ρ = 0.9 and rises to 55% with ρ = 0.5. The range of the immigration contribution is thus fully compatible with BFK results.
Table 5 Estimated contribution of immigration to wage differentials, 1940–2000
Our production function builds on the microeconometric wage equation (a la Mincer) and distinguishes three major wage components: raw labor, experience, and education. To simplify the exposition, let us temporarily disregard experience. Compared to BFK, we consider an aggregate production function \(F\left( L,H\right) \) with two inputs (raw labor, L, and education, H). The return to schooling is then given by:
$$ \ln \frac{W_{t}^{H}}{W_{t}^{L}}=(1-\rho )\left( D_{t}-\ln \frac{H_{t}}{L_{t}} \right) $$
It can be reasonably assumed that the stock of human capital related to education is proportional to the number of high-skill workers \((H_{t}=\alpha L_{t}^{s})\) and that the supply of raw labor sums up high-skill and low-skill workers \((L_{t}=L_{t}^{s}+L_{t}^{u})\). We then have
$$ \begin{array}{rcl} \ln \frac{H_{t}}{L_{t}} &=&\ln (\alpha )+\ln \left( \frac{L_{t}^{s}}{ L_{t}^{s}+L_{t}^{u}}\right) \\ &=&\ln (\alpha )+\ln \left( \frac{L_{n,t}^{s}}{L_{n,t}^{s}+L_{n,t}^{u}} \right) +\ln \left( 1+\frac{L_{m,t}^{s}}{L_{n,t}^{s}}\right) \\ &&-\,\ln \left( 1+ \frac{L_{m,t}^{s}}{L_{n,t}^{s}+L_{n,t}^{u}}+\frac{L_{m,t}^{u}}{ L_{n,t}^{s}+L_{n,t}^{u}}\right) \end{array} $$
so that the contribution of immigration (IMC t ) to the return to schooling becomes:
$$ IMC_{t}=(1-\rho )\left[ \ln \left( 1+\frac{L_{m,t}^{s}}{L_{n,t}^{s}}\right) -\ln \left( 1+\frac{L_{m,t}^{s}}{L_{n,t}^{s}+L_{n,t}^{u}}+\frac{L_{m,t}^{u}}{ L_{n,t}^{s}+L_{n,t}^{u}}\right) \right] $$
This contribution is much lower than in the BFK specification. Using the same population data as before, immigration explains only 6% of the high school premium changes between 1940 and 2000. The impact on the wage ratio is lower since \(\ln \frac{W_{t}^{s}}{W_{t}^{u}}\approx \ln \left(1+ \frac{W_{t}^{H}}{W_{t}^{L}}\right) \). General equilibrium effects are likely to reduce the impact of immigration on the wage differential since natives' education choices are endogenous. However, the choice of the relevant production function has a major impact on the contribution of immigration to wage inequality. As shown in Table 5, our approach predicts a 0.5% contribution of immigration to the wage differential between medium- and low-skilled workers. With a lower elasticity of substitution, such a contribution could rise to 1.5%. Consequently, more than 98% of the wage differential is explained by native supply and demand changes.
Borjas (2003) approach
Applying the Borjas (2003) methodology to our population data, the net impact of immigration on the log wage of group (x,y) is:
$$ \Delta log W_{x,y}= \epsilon_{xy,xy} m_{xy} + \sum_{j \neq y} \epsilon_{xy,xj} m_{xj} + \sum_{i \neq x} \sum_{j} \epsilon_{xy,ij} m_{ij}, \label{aeq4} $$
where m_ij is the percentage change in labor supply due to immigration in group (i,j), ε_xy,xy the own factor price elasticity, ε_xy,xj the (within education branch) cross-factor price elasticity, and ε_xy,ij the (across education branch) cross-factor price elasticity. Applying this methodology to our data, i = l,m,h and j = (0,...,4). The corresponding elasticities are directly taken from Borjas (2003) and adapted to our age and skill structure (Table 6).
Table 6 Factor price elasticities
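Since the three sums together run over every education-experience cell exactly once, the computation reduces to a single matrix-vector product once the elasticities of Table 6 are arranged in a matrix. A minimal sketch (the elasticity values themselves are not reproduced here):

```python
import numpy as np

def wage_impacts(eps, m):
    """Net log-wage impact of immigration for every education-experience cell.

    eps : (G, G) array of factor price elasticities, G = n_edu * n_exp,
          where eps[g, h] couples the wage of cell g to the supply shock in h
    m   : (G,) array of percentage changes in labor supply due to immigration
    """
    return np.asarray(eps, dtype=float) @ np.asarray(m, dtype=float)
```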
The results of this exercise are summarized in Table 7. The contribution of post-Second World War immigration to the wage differential between medium-skill and low-skill workers with 10–19 years of experience, from 1940 to 2000, is evaluated at 47% with the Borjas (2003) methodology (i.e., applying the elasticities of Table 6), while this contribution is less than 3% with our production function and general equilibrium approach. Whatever the experience group, we find the same order of magnitude. However, we have to keep in mind that the retained elasticities of substitution, and particularly the assumption of perfect substitution between US-born and foreign-born workers of the same education-experience group, are highly controversial given the totally different results obtained by Ottaviano and Perri (2006) with the same methodological approach.
Appendix B: Robustness to the elasticity of substitution
The parameter ρ determines the magnitude of wage responses (the intensity of the relationship between changes in factor proportions and changes in wages). Tables 8 and 9 give the economic consequences of immigration in alternative models. The model behind Table 8 is calibrated with a low elasticity of substitution (ρ = 0.5 and 1/(1 − ρ) = 2). The model behind Table 9 is calibrated with a high elasticity of substitution (ρ = 0.9 and 1/(1 − ρ) = 10). The conclusions are similar to the baseline simulation in Table 3 and to the magnitude of Table 4.
Table 8 Economic consequences of the US immigration (lower elasticity of substitution, ρ = .5)
Table 9 Economic consequences of the US immigration (higher elasticity of substitution, ρ = .9)
Chojnicki, X., Docquier, F. & Ragot, L. Should the US have locked heaven's door?. J Popul Econ 24, 317–359 (2011). https://doi.org/10.1007/s00148-009-0286-z
Computable general equilibrium
Interpolation of Weighted Banach Lattices: A Characterization of Relatively Decomposable Banach Lattices
by Michael Cwikel, Per G. Nilsson, Gideon Schechtman
Interpolation of Weighted Banach Lattices

It is known that for many, but not all, compatible couples of Banach spaces $(A_{0},A_{1})$ it is possible to characterize all interpolation spaces with respect to the couple via a simple monotonicity condition in terms of the Peetre $K$-functional. Such couples may be termed Calderon-Mityagin couples. The main results of the present paper provide necessary and sufficient conditions on a couple of Banach lattices of measurable functions $(X_{0},X_{1})$ which ensure that, for all weight functions $w_{0}$ and $w_{1}$, the couple of weighted lattices $(X_{0,w_{0}},X_{1,w_{1}})$ is a Calderon-Mityagin couple. Similarly, necessary and sufficient conditions are given for two couples of Banach lattices $(X_{0},X_{1})$ and $(Y_{0},Y_{1})$ to have the property that, for all choices of weight functions $w_{0}, w_{1}, v_{0}$ and $v_{1}$, all relative interpolation spaces with respect to the weighted couples $(X_{0,w_{0}},X_{1,w_{1}})$ and $(Y_{0,v_{0}},Y_{1,v_{1}})$ may be described via an obvious analogue of the above-mentioned $K$-functional monotonicity condition. A number of auxiliary results developed in the course of this work can also be expected to be useful in other contexts. These include a formula for the $K$-functional for an arbitrary couple of lattices which offers some of the features of Holmstedt's formula for $K(t,f;L^{p},L^{q})$, and also the following uniqueness theorem for Calderon's spaces $X^{1-\theta }_{0}X^{\theta }_{1}$: Suppose that the lattices $X_0$, $X_1$, $Y_0$ and $Y_1$ are all saturated and have the Fatou property. If $X^{1-\theta }_{0}X^{\theta }_{1} = Y^{1-\theta }_{0}Y^{\theta }_{1}$ for two distinct values of $\theta$ in $(0,1)$, then $X_{0} = Y_{0}$ and $X_{1} = Y_{1}$. Yet another such auxiliary result is a generalized version of Lozanovskii's formula $\left( X_{0}^{1-\theta }X_{1}^{\theta }\right) ^{\prime }=\left (X_{0}^{\prime }\right) ^{1-\theta }\left( X_{1}^{\prime }\right) ^{\theta }$ for the associate space of $X^{1-\theta }_{0}X^{\theta }_{1}$.

A Characterization of Relatively Decomposable Banach Lattices

Two Banach lattices of measurable functions $X$ and $Y$ are said to be relatively decomposable if there exists a constant $D$ such that whenever two functions $f$ and $g$ can be expressed as sums of sequences of disjointly supported elements of $X$ and $Y$ respectively, $f = \sum^{\infty }_{n=1} f_{n}$ and $g = \sum^{\infty }_{n=1} g_{n}$, such that $\| g_{n}\| _{Y} \le \| f_{n}\| _{X}$ for all $n = 1, 2, \ldots$, and it is given that $f \in X$, then it follows that $g \in Y$ and $\| g\| _{Y} \le D\| f\| _{X}$. Relatively decomposable lattices appear naturally in the theory of interpolation of weighted Banach lattices. It is shown that $X$ and $Y$ are relatively decomposable if and only if, for some $r \in [1,\infty ]$, $X$ satisfies a lower $r$-estimate and $Y$ satisfies an upper $r$-estimate. This is also equivalent to the condition that $X$ and $\ell ^{r}$ are relatively decomposable and also $\ell ^{r}$ and $Y$ are relatively decomposable.
Memoirs of the American Mathematical Society Series, #165
Sequences and Series
Introduction to Sequences
Introduction to Arithmetic Progressions
Recurrence relationships for AP's
Terms in Arithmetic Progressions
Graphs and Tables - AP's
Notation for a Series
Arithmetic Series (defined limits)
Arithmetic Series (using graphics calculators)
Applications of Arithmetic Progressions
Introduction to Geometric Progressions
Recurrence relationships for GP's
Finding the Common Ratio
Terms in Geometric Progressions
Graphs and Tables - GP's
Geometric Series
Geometric Series (using graphics calculators)
Infinite sum for GP's
Applications of Geometric Progressions
Applications of Geometric Series
Sequences and Saving Money (Investigation)
First Order Linear Recurrences Introduction
Graphs and Tables - Recurrence Relations
Solutions to Recurrence Relations
Steady state solutions to recurrence relations
Applications of Recurrence Relations
Writing out the partial sum of a sequence can take up a lot of space and time. A number of centuries ago, mathematicians succeeded in developing a shorthand notation for many sequences, and they borrowed from the Greek alphabet to do it.
The symbol $\Sigma$ (pronounced "sigma") is the capital letter S in the Greek alphabet. When $\Sigma$ is used to express a series in mathematics, it stands for the word "Sum". To explain how the symbol is used, consider the following expression:
$\sum_{n=1}^{n=5}n^2$
The equation $n=1$ directly below and to the right of the $\Sigma$ sign tells us that we start the series by substituting $n=1$ into the sequence formula shown as $n^2$. So the series begins:
$1^2$
Then we increase $n$ by $1$ so that $n$ becomes $2$. The new value of $n$ is substituted into the sequence formula to reveal $2^2$ so that the series becomes:
$1^2+2^2$
Again we increase $n$ by $1$ and substitute for the third term so that the series becomes:
$1^2+2^2+3^2$
This process of increasing $n$ by $1$ continues until $n=5$ is reached, as shown directly above and to the right of the $\Sigma$ sign. The complete sum becomes:
$\sum_{n=1}^{n=5}n^2=1^2+2^2+3^2+4^2+5^2$
Thus this series sums to $55$.
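If you have access to a programming environment such as Python, you can check this sum directly:

```python
# Sum of n^2 for n = 1, 2, 3, 4, 5
print(sum(n**2 for n in range(1, 6)))   # 55
```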
When it is clear that the variable being referred to is $n$, we are allowed to drop the "$n$" in the superscripted and subscripted expression so that the above example can be written:
$\sum_1^5 n^2=1^2+2^2+3^2+4^2+5^2$
As a second example, the sum of the first $100$ terms of the arithmetic series whose first term is $a=10$ and whose common difference is $d=3$ is given, with our new notation, as:
$\sum_1^{100}\left(7+3n\right)$
Following the same strategy, you should be able to see that the series written out would look like:
$10+13+16+\ldots+304+307$
Note that the common difference, the first term and last term are easy to spot, and this means we can use the sum formula for an arithmetic progression to show that:
$S_{100}=\frac{100}{2}\left(10+307\right)=15850$
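As a numerical check (again an illustrative sketch rather than part of the original text), the same series can be expanded in Python and the direct sum compared with the arithmetic series formula:

# First 100 terms of the AP with a = 10 and d = 3, written as 7 + 3n for n = 1..100
terms = [7 + 3 * n for n in range(1, 101)]          # 10, 13, 16, ..., 307
direct_sum = sum(terms)
formula_sum = 100 * (terms[0] + terms[-1]) // 2     # S_100 = (n/2)(first + last)
print(direct_sum, formula_sum)                      # 15850 15850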
In some instances of geometric series, we can add up an infinite number of terms and obtain a finite sum. Specifically, this happens when the common ratio $r$ is between, but not including, $-1$ and $1$.
Take for example the expression:
$\sum_{n=1}^{\infty}32\times\left(\frac{1}{2}\right)^{n-1}$
Using our strategy, we can write the first few terms as $32+16+8+4+\ldots$ and, using the limiting sum formula, we find that the finite sum is given by:
$S_{\infty}=\frac{a}{1-r}=\frac{32}{1-\left(\frac{1}{2}\right)}=64$
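The convergence towards this limiting sum can also be checked numerically; the sketch below (illustrative only) prints a few partial sums, which approach but never exceed 64:

# Partial sums of 32 * (1/2)^(n-1): 32, 48, 56, 60, 62, ...
partial = 0.0
for n in range(1, 21):
    partial += 32 * 0.5 ** (n - 1)
    if n <= 5 or n == 20:
        print(n, partial)          # partial sum after n terms
print(32 / (1 - 0.5))              # limiting sum a / (1 - r) = 64.0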
As a final note, the summation notation is quite versatile in its application to series. For example, the mean of a set of scores, say in general terms the scores $x_1,x_2,x_3,\ldots,x_n$, is their sum divided by $n$.
We can write this using what is known as a dummy variable $i$ and write:
$\sum_{i=1}^n \frac{x_i}{n}$
In expanded form this expression reads:
$\frac{x_1}{n}+\frac{x_2}{n}+\frac{x_3}{n}+\ldots+\frac{x_n}{n}$
or equivalently $\frac{x_1+x_2+x_3+\ldots+x_n}{n}$.
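In code the dummy-variable idea becomes an ordinary loop or a built-in sum; the sketch below is illustrative and the scores are invented:

# Mean of a set of scores: (1/n) * sum_{i=1}^{n} x_i
scores = [12, 15, 9, 20, 14]     # hypothetical scores x_1, ..., x_n
n = len(scores)
mean = sum(scores) / n
print(mean)                       # 14.0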
Write the following series using sigma notation.
$4+8+12+16+20+\ldots$
Give the answer in the form $\sum_{k=1}^{\infty}\left(\ \right)$, filling in the general term.
Consider the series:
$\frac{1}{1^3}+\frac{1}{2^3}+\frac{1}{3^3}+\frac{1}{4^3}+\frac{1}{5^3}$
Rewrite the series using sigma notation in the form $\sum_{k=?}^{?}\left(?\right)$, filling in the lower limit, the upper limit and the general term.
Find the value of $\sum_{r=1}^4\frac{1}{r+2}$.
Use arithmetic and geometric sequences and series
Apply sequences and series in solving problems | CommonCrawl |
British Journal of Nutrition
Soya isoflavone consumption in relation to carotid intima–media thickness in Chinese equol excretors aged 40–65 years
Published online by Cambridge University Press: 29 February 2012
Yun Cai ,
Kaiping Guo ,
Chaogang Chen ,
Ping Wang ,
Bo Zhang ,
Quan Zhou ,
Fang Mei and
Yixiang Su
Yun Cai
Faculty of Nutrition, School of Public Health, Sun Yat-Sen University, Guangzhou 510080, People's Republic of China
Kaiping Guo
Chaogang Chen
Department of Clinical Nutrition, The Second Affiliated Hospital of Sun Yat-Sen University, Guangzhou, People's Republic of China
Ping Wang
Quan Zhou
Fang Mei
Yixiang Su*
*Corresponding author: Y. Su, email [email protected]
Previous studies have suggested that the daidzein metabolite equol rather than daidzein itself contributes to the beneficial effect of soya foods in the prevention of CVD. The aim of the present study is to examine the proportion of equol excretion in Chinese adults and compare plasma lipids and carotid artery intima–media thickness (IMT) between equol excretors and non-excretors, and to evaluate the effect of soya isoflavone intakes on serum lipids and IMT in either equol excretors or non-excretors. Subjects (n 572; women n 362, men n 210) were recruited for the present study. An overnight urine sample was provided by each subject on their usual diet to quantify urinary concentrations of daidzein and equol. Far-wall IMT was determined by B-mode ultrasound in the right carotid at two sites, carotid bulb (CB-IMT) and common carotid artery (CCA-IMT), and fasting serum lipids were measured. Habitual dietary intakes were estimated with a FFQ, and soya isoflavone intake derived from the FFQ was assessed. Of the 572 subjects, the proportion of equol excretors on their usual diet was 25·0 % (n 143). Compared with non-excretors, equol excretors showed significantly lower serum TAG ( − 38·2 (95 % CI − 70·4, − 5·9) %, P = 0·012) and CCA-IMT ( − 4·9 (95 % CI − 9·7, − 0·3) %, P = 0·033). Equol excretors with higher daily isoflavone intakes ( − 5·4 mg/d) had significantly lower IMT ( − 16·2 %, P = 0·035) and tended to have higher HDL-cholesterol (P = 0·055) than did those with lower daily isoflavone intakes (1·5 mg/d), while no association was observed between soya isoflavone intakes and serum lipids or IMT in non-excretors. In conclusion, the benefits of soya isoflavones in preventing CVD may be apparent among equol excretors only.
IsoflavonesEquolBlood lipidsCarotid artery intima–media thickness
Full Papers
British Journal of Nutrition, Volume 108, Issue 9, 14 November 2012, pp. 1698–1704
Soya isoflavones, mainly daidzein and genistein, have a similar chemical structure to that of oestrogens(Reference Setchell1), and consequently bind to oestrogen receptors to exert either oestrogenic or anti-oestrogenic activity(Reference Setchell, Clerici and Lephart2). Many studies(Reference Lichtenstein3, Reference Zhang, Shu and Gao4) have indicated that higher soya intake is associated with a lower incidence of CVD.
The bioactive effects of isoflavones with or without soya protein have been fully studied. In three recent meta-analyses, it has been confirmed that soya protein with isoflavones, when compared with other proteins, had clinically important favourable effects on LDL-cholesterol and other CVD risk factors(Reference Zhan and Ho5–Reference Sacks, Lichtenstein and Van Horn7), whereas the results of studies comparing soya protein that did or did not contain isoflavones or comparing isoflavones in pill form with placebo were inconsistent(Reference Sacks, Lichtenstein and Van Horn7, Reference Weggemans and Trautwein8). One possible explanation for complexity in the responsiveness to isoflavones is due to the differences in the metabolism of isoflavones among individuals, specifically the variance in equol-synthesising capacity(Reference Watanabe, Yamaguchi and Sobue9).
Equol is produced by the intestinal bacterial flora from daidzein in approximately one-third to one-half of human subjects(Reference Setchell1). Previous studies have shown a substantial individual difference in the ability to produce equol(Reference Watanabe, Yamaguchi and Sobue9), and Asian populations have a higher equol-producer prevalence(Reference Song, Atkinson and Frankenfeld10). Equol is believed to be the active form of soya isoflavones and has a stronger oestrogen-like activity than daidzein. Thus, the equol phenotype may modify the protection value of soya isoflavones against hormone-dependent diseases and affect the lipid profile and vascular health(Reference Karr, Lampe and Hutchins11, Reference Guo, Zhang and Chen12).
Therefore, the present study aims to investigate the prevalence of equol excretion under the usual diet in southern Chinese adults who are usual soya consumers, and to assess the association between the ability of equol excretion and lipid profiles and intima–media thickness (IMT), and to evaluate the relationship between isoflavone intakes and lipid profiles and IMT in either equol excretors or non-excretors.
Subjects and study procedure
A total of 600 community-based volunteers aged 40–65 years were recruited in Guangzhou, China by poster and telephone. After initially screening for their eligibility using a short questionnaire, which recorded sex, age and exclusion criteria, potential subjects were then invited to the First or the Second Affiliated Hospital of Sun Yat-sen University, Guangzhou, China. Staff with relevant knowledge in medical sciences screened the subject for eligibility via a face-to-face interview to determine whether they met the inclusion and exclusion criteria. Subjects were excluded if they had a history of diseases, which may affect personal lifestyle, such as CVD, diabetes, cancer, dyslipidaemia, or had recently (3 months) used drugs or healthy products affecting blood lipids. Carotid IMT was measured unilaterally (right) at the carotid bulb and common carotid artery (CCA-IMT, 20 mm proximal to the bulb) at the far wall of the artery using Technos MPX DU8 (Esaote) with the frequency of the probe set at 10·0 MHz. All subjects were examined in a supine position after they had rested for 10 min. B-mode images at the diastolic phase of the cardiac cycle were recorded by a single trained technician who was kept unaware of the subjects' backgrounds. On a longitudinal, two-dimensional ultrasound image of the carotid artery, the far walls of the carotid artery were displayed as two bright white echogenic lines separated by a hypoechogenic space. The distance between the leading edge of the first bright line and the leading edge of the second bright line indicates the IMT. Plaque thickness was avoided in IMT measurements.
The study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures were approved by the Medical Ethics Committee of Sun Yat-sen University. All participants signed the written informed consent form.
Information on sociodemographic data, medication and health history, lifestyles, socio-economic status and physical condition was obtained from a structured questionnaire. Information about habitual dietary intakes within the past 12 months before the investigating day was estimated using a FFQ designed for Chinese populations via a face-to-face interview by trained investigators. The first urine passed in the morning was provided by each subject on their usual diet to quantify urinary concentrations of isoflavones(Reference Franke, Morimoto and Yeh13, Reference Tseng, Olufade and Kurzer14), and 10 ml of the samples were stored at − 80°C before analysis. Height, weight, waist and hip circumferences and blood pressure were measured. Fasting blood samples (10 ml) were collected for blood lipid analysis.
Of the 600 subjects, twenty-eight subjects (ten men and eighteen women) were excluded for failure to return urine collection. A total of 572 participants successfully completed the study and were included in the data analysis.
Food models were used to help subjects in the quantification of their diet.
The FFQ includes 118 foods and food groups that cover the most commonly consumed foods in Guangzhou, China. The FFQ includes the following soya and isoflavone-rich food items or groups: soya milk, tofu, fried tofu, dried soyabeans, mung beans, soyabean sprouts, fresh beans and peanuts. For each food item or food group, subjects were asked about the frequency (daily, weekly, monthly, annually or never) and the amount in Liang (Liang = 50 g) per unit of time. The percentage of energy from fat for each subject was calculated according to the FFQ. The soya isoflavone intake levels derived from the FFQ were assessed using the Chinese Food Composition Table. Total isoflavone intake was calculated using the following formula:
$$\text{Total isoflavone intake} = \Sigma\,(\text{amount of soya food from the FFQ} \times \text{proportion of edible part} \times \text{isoflavone amount of each soya food}).$$
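To make the bookkeeping concrete, the sketch below shows how such a summation over FFQ items could be coded; it is not code from the study, and the food items, edible proportions and isoflavone contents are invented for illustration:

# Hypothetical FFQ-style calculation of total isoflavone intake
# Each entry: (daily amount in g, proportion of edible part, isoflavone content in mg/g)
foods = {
    "soya milk": (200.0, 1.00, 0.10),          # illustrative values only
    "tofu": (50.0, 1.00, 0.25),
    "dried soyabeans": (10.0, 0.95, 1.20),
}
total_isoflavone = sum(amount * edible * content
                       for amount, edible, content in foods.values())
print(round(total_isoflavone, 1))              # total isoflavone intake in mg/d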
Height, weight, waist and hip circumferences and blood pressure were measured on the day of urine collection. A 12 h fasting venous blood sample was collected in vacuum tubes containing EDTA. Plasma was separated after centrifugation at 1500 g for 15 min at 4°C within 2 h and was stored at − 80°C until analysis. Plasma lipids were measured using a Hitachi 7600-010 automatic analyser (Hitachi). The CV for lipid measurements were 2·17 % (at 5·03 mm-total cholesterol), 2·86 % (at 1·14 mm-TAG), 3·47 % (at 1·70 mm-HDL-cholesterol) and 4·67 % (at 2·65 mm-LDL-cholesterol). Biochemical assays were performed by two technicians who were not involved in the questionnaire interview.
Soya isoflavone metabolites in the first urine passed in the morning were assayed by HPLC with UV detection. In brief, urine samples were extracted by ethyl acetate (5 ml × 2) after deconjugation by β-glucuronidase/sulphatase. After evaporated to dryness under N2 at 40°C, the extract was reconstituted in a mobile-phase solution (1 ml) for analysis. The HPLC system consisted of a C18 stationary-phase extraction (5 μm, 4·6Φ × 250 mm) column, and separation of isoflavones was achieved with the mobile phase at a flow rate of 1·0 ml/min with the following gradient elution: A – acetic diethyl ether–methanol–0·05 % phosphate buffer (1:10:40, by vol.); B – acetic diethyl ether–methanol (1:49, v/v); 0·0–35·0 min, 0–35 % B; 35·0–45·0 min, 35–35 % B; 45·0–50·0 min, 35–40 % B; 50·0–55·0 min, 40–70 % B; 55·0–60·0 min, 70–0 % B; 60·0–70·0 min, 0–0 % B. Isoflavones were detected from their UV absorbance at 254 and 280 nm. The CV for daidzein and equol measurements were < 5 %.
Subjects with urinary equol:daidzein ratios >0·018 were defined as equol excretors(Reference Setchell and Cole15).
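A minimal sketch of this classification rule (illustrative only; the urinary values below are invented) could look like:

# Classify equol excretor status from urinary equol and daidzein concentrations
def is_equol_excretor(equol, daidzein, threshold=0.018):
    """Return True when the urinary equol:daidzein ratio exceeds the cut-off."""
    if daidzein == 0:
        return False                  # ratio undefined without detectable daidzein
    return equol / daidzein > threshold

print(is_equol_excretor(equol=0.50, daidzein=12.0))   # True  (ratio ~ 0.042)
print(is_equol_excretor(equol=0.10, daidzein=15.0))   # False (ratio ~ 0.007)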
Data were checked for normality, and skewed parameters were log transformed before statistical analysis when possible. Data that were normally distributed either before or after log transformation were presented as means and standard deviations. Isoflavone intakes were categorised by the quartile distribution of the study population in the analysis. Differences between the two groups were assessed by Student's t test. Comparisons of variables between different quartiles of isoflavone intakes were performed with one-way ANOVA with post hoc Bonferroni's correction for multiple comparisons and the test for a linear trend. ANCOVA was used to compare the mean differences in blood lipids and IMT after adjustment for other potential confounding factors. The level of significance was set at P < 0·05. All analyses were conducted using SPSS for Windows (version 11.5; SPSS, Inc.).
Of the 572 subjects, 76·4 % (n 437) had detectable daidzein, 25·2 % (n 144) had detectable urinary equol and 25 % (n 143) were defined as equol excretors. The proportions of equol excretors were 26·2 % in men and 24·4 % in women, with no statistical difference between men and women (P>0·05).
There was no significant association between equol-metabolising phenotype in sociodemographic factors of subjects, such as age, sex, socio-economic status and education (Table 1). Subjects with different equol-metabolising phenotypes had no significant differences in the dietary intakes of soya isoflavones, energy and fat.
Table 1 Socio-economic, dietary, clinical and lipid profiles of the participants by equol status (Mean values and standard deviations)
WHR, waist:hip ratio; SBP, systolic blood pressure; DBP, diastolic blood pressure; TC, total cholesterol; HDL-C, HDL-cholesterol; LDL-C, LDL-cholesterol; CB-IMT, intima–media thickness at the carotid bulb; CCA-IMT, intima–media thickness at the common carotid artery.
* Student's t test for continuous variables and χ2 tests for proportion.
† Log transformed.
Equol excretors showed lower serum TAG ( − 38·2 (95 % CI − 70·4, − 5·9) %, P = 0·012) and CCA-IMT ( − 4·9 (95 % CI − 9·7, − 0·3) %, P = 0·033) compared with non-excretors. No significant differences in BMI, waist:hip ratio and blood pressure were observed between equol excretors and non-excretors (Table 1).
Subjects were categorised into four quartiles by soya isoflavone intake. For both equol excretors and non-excretors, subjects in the four quartiles of daily isoflavones had significant differences in the intakes of daily energy and fat (% energy, P < 0·05 for both). There were no significant differences in BMI, waist:hip ratio, and systolic and diastolic blood pressure with increasing quartiles of isoflavones (P>0·05 for all; Table 2).
Table 2 Dietary and anthropometric characteristics across quartiles of soya isoflavone intakes in equol excretors or non-excretors* (Mean values and standard deviations)
WHR, waist:hip ratio; SBP, systolic blood pressure; DBP, diastolic blood pressure.
* For contrasting groups using ANOVA.
After adjustment for age, sex, daily energy intake and fat (% energy), there was a significant linear trend in serum HDL-cholesterol concentration with increasing quartiles of isoflavones (P = 0·017; Table 3) in equol excretors. Also, subjects in the subpopulation with more daily isoflavone intakes tended to have higher HDL-cholesterol concentration (P = 0·055). Similarly, there was a significant linear decreasing trend of carotid CCA-IMT in equol excretors in different quartiles of isoflavone intake (P = 0·044). As shown in Table 3, subjects in the first quartile of daily isoflavone intake had a significantly higher CCA-IMT than did those in the second, third and fourth quartiles (P < 0·05 for all).
Table 3 Blood lipids and clinical characteristics across quartiles of soya isoflavone intakes in equol excretors or non-excretors (Mean values and standard deviations)
TC, total cholesterol; HDL-C, HDL-cholesterol; LDL-C, LDL-cholesterol; CB-IMT, intima–media thickness at the carotid bulb; CCA-IMT, intima–media thickness at the common carotid artery.
* Mean values were significantly different compared with group 4 (P < 0·05).
† Mean values were significantly different compared with group 1 (P < 0·05).
‡ Model 1, unadjusted; model 2, adjusted for age, energy, fat (% energy).
§ P for differences between the quartiles.
∥ P for trend.
¶ Log transformed.
There were no significant trends towards blood lipids or IMT in non-excretors, and no significant differences were observed in blood lipids or IMT in non-excretors in the different quartiles of dietary isoflavone intake.
In the present study, we demonstrated that equol excretors had a lower lipid profile and IMT than non-excretors, and the lipid-lowering and vascular health effects of isoflavones were limited to equol excretors.
Equol excretors on their usual diet in the present study represented 25·0 % of the southern Chinese adults. The finding is consistent with the previously reported frequency of equol excretion in Chinese adults(Reference Guo, Zhang and Chen12, Reference Liu, Qin and Liu16), but lower than that reported in Japan(Reference Arai, Uehara and Sato17).
In our previous study(Reference Guo, Zhang and Chen12), we found a significantly higher percentage of equol excretion in Chinese adults who had consumed soya before urine collection than in those on their usual diet. However, in the present study, we did not observe a significantly higher excretion percentage of equol in Chinese adults across quartiles of soya isoflavone consumption, suggesting that usual dietary intake did not affect the equol excretion status. The half-lives of daidzein and equol in the circulation are 5–8 h(Reference Setchell and Cassidy18). Consequently, equol status determined from the first urine passed in the morning is liable to be influenced by the dinner of the previous day, although in adults the ability to produce equol does not appear to be easily altered by dietary means(Reference Frankenfeld, Atkinson and Thomas19–Reference Vedrine, Mathey and Morand21).
Carotid artery stenosis is a subclinical marker of systemic atherosclerosis, including coronary artery diseases and cerebrovascular diseases. An increased carotid IMT is a stronger predictor of CHD and stroke(Reference O'Leary, Polak and Kronmal22, Reference Hollander, Hak and Koudstaal23). A previous study by our group found that higher habitual soya food consumption was associated with decreased bifurcation IMT, plasma total cholesterol and LDL-cholesterol in middle-aged Chinese adults(Reference Zhang, Chen and Huang24). Other studies have examined the association between habitual soya food intake and the risk of CVD, and yielded inconsistent results. The Shanghai Women's Health Study, including 64 915 women and with a mean follow-up of 2·5 years, showed that covariate-adjusted CHD risk decreased by 75 % in middle-aged women consuming a median soya protein of 1·99 g/d when compared with those with a median intake of 0·47 g/d(Reference Zhang, Shu and Gao4). Another 7-year prospective study in Japan reported a marginally significantly inverse relationship between soya product intake and total mortality in 13 355 males and 15 724 females, respectively(Reference Nagata, Takatsuka and Shimizu25). However, a report from the Japan Public Health Center-Based Study Cohort exhibited sex-specific relationships between soya and isoflavone intake and the risk of cerebral and myocardial infarctions(Reference Kokubo, Iso and Ishihara26). High isoflavone intake was associated with a reduced risk of cerebral and myocardial infarctions in Japanese women. The risk reduction was pronounced for postmenopausal women, but no significant association of the dietary intake of soya and isoflavones with cerebral or myocardial infarctions was present in men(Reference Kokubo, Iso and Ishihara26). The inconsistent relationships between cardiovascular benefits and soya protein or isoflavones were postulated to be, at least partly, due to the ability to produce equol. Consistent with the hypothesis, we observed a favourable relationship between TAG, IMT and the equol phenotype among middle-aged Chinese adults, suggesting that the ability to produce equol might play a role in modulating cardiovascular risk factors.
The precise molecular mechanisms linking daidzein or its metabolite equol to cardiovascular health remain unclear. A likely explanation for the TAG reduction was that equol can activate PPAR, which leads to decreased TAG concentrations via increased fatty acid oxidation(Reference Ricketts, Moore and Banz27). Equol has a greater antioxidant activity than the parent isoflavone compounds genistein and daidzein, and its antioxidant effects during J774 cell-mediated LDL modification are based on a down-regulation of $\mathrm{O}_2^{\cdot -}$ production that is achieved, at least in part, through inhibited NADPH oxidase activity(Reference Hwang, Wang and Morazzoni28).
An objective of the present study was to determine whether an individual's ability to convert daidzein to equol enhanced the lipid-lowering effects of isoflavones. The study by Meyer et al. (Reference Meyer, Larkin and Owen29) was the first human intervention to investigate retrospectively the correlation between equol production and the lipid-lowering benefits of soya supplementation. It showed significant reductions in the lipid profiles of equol producers compared with equol non-producers. In the present study, when subjects were categorised as either equol excretors or non-excretors, there were significant differences in their responses to the dietary isoflavone intake. We observed that the benefits of isoflavones on CVD were limited to equol excretors. The results of the present study showed that a higher daily intake of isoflavones was associated with higher HDL-cholesterol and lower IMT in equol excretors but not in equol non-excretors. The present finding confirms the hypothesis by Watanabe et al.(Reference Watanabe, Yamaguchi and Sobue9), who suggested that the inconsistency in results from trials investigating the effects of isoflavones may be related to the ability of individuals to convert the soya isoflavone daidzein into equol.
The major limitation of the present study is that we examined the equol metabolic phenotype on the individuals' usual diet. According to our previous study(Reference Guo, Zhang and Chen12), the prevalence of equol producers increased from 13·4 to 50 % after a 3 d soya isoflavone challenge, which suggested that a proportion of subjects had bacteria capable of producing equol but the insufficient isoflavone intake before urine collection resulted in them being classified as equol non-producers. The misclassification would bias the findings. However, such misclassification grouped some equol producers into equol non-producers, attenuating the true differences in blood lipids or IMT between equol producers and non-producers, which means that we are likely to underestimate the effects of equol metabolic phenotype on cardiovascular risk factors. In addition, we determined equol status with the urinary equol:daidzein ratio but not the detectable value of equol in the urine, which will reduce the possibility of misclassification to a certain degree.
There are some further limitations of the present study. We did not have enough power to detect small differences in blood lipids due to the relatively small study size. The results of the present study should be confirmed in a larger sample. Due to the limitation of funding, only the right carotid IMT was measured. The bias from other dietary factors such as vegetables and fish, even the parent isoflavone compounds genistein and daidzein and other daidzein metabolites, O-desmethylangolensin, could not be excluded.
In conclusion, higher dietary isoflavone consumption is associated with lower serum TAG and CCA-IMT in middle-aged Chinese equol producers, but not in equol non-producers. The present findings suggest that the benefits of soya isoflavones in preventing CVD may be limited to individuals with a high intrinsic capacity to convert daidzein to equol.
The present study was supported by the National Natural Science Foundation of China (30872102). The authors wish to thank the volunteers who participated in the study. Y. C. and K. G. contributed equally to the data collection, analysis and the manuscript preparation. C. C., P. W. and B. Z. were responsible for the statistical analysis and revision of the manuscript. Q. Z. and F. M. contributed to the assay of the isoflavone metabolites. B. Z. and Y. S. were responsible for the conception and design of the study. None of the authors had any financial or personal conflict of interest.
1. Setchell, KD (1998) Phytoestrogens: the biochemistry, physiology, and implications for human health of soy isoflavones. Am J Clin Nutr 68, 1333S–1346S.
2. Setchell, KD, Clerici, C, Lephart, ED, et al. (2005) S-Equol, a potent ligand for estrogen receptor beta, is the exclusive enantiomeric form of the soy isoflavone metabolite produced by human intestinal bacterial flora. Am J Clin Nutr 81, 1072–1079.
3. Lichtenstein, AH (1998) Soy protein, isoflavones and cardiovascular disease risk. J Nutr 128, 1589–1592.
4. Zhang, X, Shu, XO, Gao, YT, et al. (2003) Soy food consumption is associated with lower risk of coronary heart disease in Chinese women. J Nutr 133, 2874–2878.
5. Zhan, S & Ho, SC (2005) Meta-analysis of the effects of soy protein containing isoflavones on the lipid profile. Am J Clin Nutr 81, 397–408.
6. Reynolds, K, Chin, A, Lees, KA, et al. (2006) A meta-analysis of the effect of soy protein supplementation on serum lipids. Am J Cardiol 98, 633–640.
7. Sacks, FM, Lichtenstein, A, Van Horn, L, et al. (2006) Soy protein, isoflavones, and cardiovascular health: an American Heart Association Science Advisory for professionals from the Nutrition Committee. Circulation 113, 1034–1044.
8. Weggemans, RM & Trautwein, EA (2003) Relation between soy-associated isoflavones and LDL and HDL cholesterol concentrations in humans: a meta-analysis. Eur J Clin Nutr 57, 940–946.
9. Watanabe, S, Yamaguchi, M, Sobue, T, et al. (1998) Pharmacokinetics of soybean isoflavones in plasma, urine and feces of men after ingestion of 60 g baked soybean powder (kinako). J Nutr 128, 1710–1715.
10. Song, KB, Atkinson, C, Frankenfeld, CL, et al. (2006) Prevalence of daidzein-metabolizing phenotypes differs between Caucasian and Korean American women and girls. J Nutr 136, 1347–1351.
11. Karr, SC, Lampe, JW, Hutchins, AM, et al. (1997) Urinary isoflavonoid excretion in humans is dose dependent at low to moderate levels of soy-protein consumption. Am J Clin Nutr 66, 46–51.
12. Guo, K, Zhang, B, Chen, C, et al. (2010) Daidzein-metabolising phenotypes in relation to serum lipids and uric acid in adults in Guangzhou, China. Br J Nutr 104, 118–124.
13. Franke, A, Morimoto, Y, Yeh, L, et al. (2006) Urinary isoflavonoids as a dietary compliance measure among premenopausal women. Asia Pac J Clin Nutr 15, 88–94.
14. Tseng, M, Olufade, T, Kurzer, MS, et al. (2008) Food frequency questionnaires and overnight urines are valid indicators of daidzein and genistein intake in U.S. women relative to multiple 24-h urine samples. Nutr Cancer 60, 619–626.
15. Setchell, KD & Cole, SJ (2006) Method of defining equol-producer status and its frequency among vegetarians. J Nutr 136, 2188–2193.
16. Liu, B, Qin, L, Liu, A, et al. (2010) Prevalence of the equol-producer phenotype and its relationship with dietary isoflavone and serum lipids in healthy Chinese adults. J Epidemiol 20, 377–384.
17. Arai, Y, Uehara, M, Sato, Y, et al. (2000) Comparison of isoflavones among dietary intake, plasma concentration and urinary excretion for accurate estimation of phytoestrogen intake. J Epidemiol 10, 127–135.
18. Setchell, KD & Cassidy, A (1999) Dietary isoflavones: biological effects and relevance to human health. J Nutr 129, 758S–767S.
19. Frankenfeld, CL, Atkinson, C, Thomas, WK, et al. (2005) High concordance of daidzein-metabolizing phenotypes in individuals measured 1 to 3 years apart. Br J Nutr 94, 873–876.
20. Lampe, JW, Skor, HE, Li, S, et al. (2001) Wheat bran and soy protein feeding do not alter urinary excretion of the isoflavan equol in premenopausal women. J Nutr 131, 740–744.
21. Vedrine, N, Mathey, J, Morand, C, et al. (2006) One-month exposure to soy isoflavones did not induce the ability to produce equol in postmenopausal women. Eur J Clin Nutr 60, 1039–1045.
22. O'Leary, DH, Polak, JF, Kronmal, RA, et al. (1999) Carotid-artery intima and media thickness as a risk factor for myocardial infarction and stroke in older adults. Cardiovascular Health Study Collaborative Research Group. N Engl J Med 340, 14–22.
23. Hollander, M, Hak, AE, Koudstaal, PJ, et al. (2003) Comparison between measures of atherosclerosis and risk of stroke: the Rotterdam Study. Stroke 34, 2367–2372.
24. Zhang, B, Chen, YM, Huang, LL, et al. (2008) Greater habitual soyfood consumption is associated with decreased carotid intima–media thickness and better plasma lipids in Chinese middle-aged adults. Atherosclerosis 198, 403–411.
25. Nagata, C, Takatsuka, N & Shimizu, H (2002) Soy and fish oil intake and mortality in a Japanese community. Am J Epidemiol 156, 824–831.
26. Kokubo, Y, Iso, H, Ishihara, J, et al. (2007) Association of dietary intake of soy, beans, and isoflavones with risk of cerebral and myocardial infarctions in Japanese populations – The Japan Public Health Center-Based (JPHC) Study Cohort I. Circulation 116, 2553–2562.
27. Ricketts, ML, Moore, DD, Banz, WJ, et al. (2005) Molecular mechanisms of action of the soy isoflavones includes activation of promiscuous nuclear receptors. A review. J Nutr Biochem 16, 321–330.
28. Hwang, J, Wang, J, Morazzoni, P, et al. (2003) The phytoestrogen equol increases nitric oxide availability by inhibiting superoxide production: an antioxidant mechanism for cell-mediated LDL modification. Free Radic Biol Med 34, 1271–1282.
29. Meyer, BJ, Larkin, TA, Owen, AJ, et al. (2004) Limited lipid-lowering effects of regular consumption of whole soybean foods. Ann Nutr Metab 48, 67–78.
| CommonCrawl |
Symbols:General/Ellipsis
$\ldots$ or $\cdots$
An ellipsis is used to indicate that there are omitted elements in a set or a sequence whose presence need to be inferred by the reader.
$1, 2, \ldots, 10$
is to be understood as meaning:
$1, 2, 3, 4, 5, 6, 7, 8, 9, 10$
There are two forms of the horizontal ellipsis, one on the writing line which is to be used for punctuation separated lists:
$a, b, \ldots, z$
and one centrally placed in the line, to be used in other circumstances, for example, in expressions assembled using arithmetic operations:
$a + b + \cdots + k$
There also exist vertically and diagonally arranged ellipses, for use in the structure of matrices:
$\begin{array}{c} a \\ \vdots \\ b \end{array} \qquad \begin{array}{ccc} a \\ & \ddots \\ & & b \end{array}$
The $\LaTeX$ code for \(1, 2, \ldots, 10\) is 1, 2, \ldots, 10 .
The $\LaTeX$ code for \(1 + 2 + \cdots + 10\) is 1 + 2 + \cdots + 10 .
The $\LaTeX$ code for \(\vdots\) is \vdots .
The $\LaTeX$ code for \(\ddots\) is \ddots .
Linguistic Note
The plural of ellipsis is ellipses.
This is pronounced ell-ip-seez, where the final syllable is long.
Do not confuse with the plural of ellipse, which is spelt the same but is pronounced ell-ip-siz.
Definition:Summation
Definition:Iterated Operation
Retrieved from "https://proofwiki.org/w/index.php?title=Symbols:General/Ellipsis&oldid=321731"
This page was last modified on 20 October 2017, at 13:10. | CommonCrawl |
Ergodic Theory and Dynamical Systems
Potential kernel, hitting probabilities and distributional asymptotics
Part of: Ergodic theory, Markov processes, Stochastic processes, Limit theorems, Dynamical systems with hyperbolic behavior
Published online by Cambridge University Press: 26 January 2019
FRANÇOISE PÈNE and
DAMIEN THOMINE
FRANÇOISE PÈNE
Université de Brest and Institut Universitaire de France, Laboratoire de Mathématiques de Bretagne Atlantique, UMR CNRS 6205, 29238 Brest Cedex, France email [email protected]
Département de Mathématiques d'Orsay, Université Paris-Sud, UMR CNRS 8628, F-91405 Orsay Cedex, France email [email protected]
$\mathbb{Z}^{d}$ -extensions of probability-preserving dynamical systems are themselves dynamical systems preserving an infinite measure, and generalize random walks. Using the method of moments, we prove a generalized central limit theorem for additive functionals of the extension of integral zero, under spectral assumptions. As a corollary, we get the fact that Green–Kubo's formula is invariant under induction. This allows us to relate the hitting probability of sites with the symmetrized potential kernel, giving an alternative proof and generalizing a theorem of Spitzer. Finally, this relation is used to improve, in turn, the assumptions of the generalized central limit theorem. Applications to Lorentz gases in finite horizon and to the geodesic flow on Abelian covers of compact manifolds of negative curvature are discussed.
central limit theorem, probabilistic potential theory, classical ergodic theory, infinite measure, null recurrent process
Primary: 60F05: Central limit and other weak theorems
Secondary: 60J45: Probabilistic potential theory 60G50: Sums of independent random variables; random walks 37D40: Dynamical systems of geometric origin and hyperbolicity (geodesic and horocycle flows, etc.) 37A05: Measure-preserving transformations
Ergodic Theory and Dynamical Systems, Volume 40, Issue 7, July 2020, pp. 1894–1967
DOI: https://doi.org/10.1017/etds.2018.136
© Cambridge University Press, 2019
| CommonCrawl |
GPS-derived geocenter motion from the IGS second reprocessing campaign
Liansheng Deng1, Zhao Li2 (corresponding author), Na Wei3, Yifang Ma3,4 and Hua Chen5
Accepted: 26 June 2019
Published: 5 July 2019
GPS data processing methods and theories have been under continuous refinement over the past 30 years, and using the latest products is expected to provide more stable and reliable geocenter estimates. In this paper, geocenter estimates from the deformation inversion approach with new observations of the IGS second data reprocessing campaign (IG2) are investigated. Results indicate that our IG2-derived geocenter motion estimates agree well with solutions from the network approach for SLR. The truncated degree 5 exhibits the highest consistency between GPS-inverted geocenter estimates and the SLR results in both annual amplitudes and phases. The GPS-derived geocenter motions are then compared with results from other approaches. We find that, except for a discrepancy in the annual phase estimates of the Z component, geocenter motions predicted with the IG2 data are in line with those based on other techniques. In addition, the effects of the translational parameters and the comparison with the IGS first data reprocessing campaign (IG1)-estimated geocenter motions are investigated; the results demonstrate that the translation parameters should be estimated when inverting the geocenter motion with the new IG2 solutions, and they show the advantage of the IG2 data reprocessing over the previous IG1 efforts. Finally, we address the impacts of post-seismic effects and of missing ocean data on the IG2-derived solutions. After removing the stations affected by large earthquakes, the amplitudes of the Y component become higher, but the annual phases of the Y component move further away from the SLR solutions. Comparisons of the equivalent water height from the IG2-estimated coefficients with the solutions from the estimation of the circulation and climate of the ocean indicate that the differences between the two types of solutions vary with the truncated degree, and the consistency degrades as the truncated degree grows. Further research is still needed to invert surface mass variation coefficients from various combinations of GPS observations, ocean models and other datasets.
Geocenter motion
Degree-1 deformation
Seasonal signals
Truncated degree
According to IERS Conventions 2010, the origin of the International Terrestrial Reference System (ITRS) is located in the center of mass (CM) of the total Earth system, including the solid Earth, oceans and atmosphere (Argus 2012; Petit et al. 2010). In fact, after combining all space geodetic solutions, the realization of the origin of the International Terrestrial Reference Frame (ITRF) is defined as the long-term mean CM (Altamimi et al. 2016; Bloßfeld et al. 2014; Dong et al. 2014). Over short and seasonal time scales, the ITRF origin is located approximately in the center of figure (CF) of the solid Earth surface (Blewitt 2003; Dong et al. 1997, 2002; Collilieux et al. 2009, 2012). The motion between CM and CF (CF–CM) is commonly called geocenter motion.
With the development of space geodesy, geocenter motion plays a pivotal role in further improving the accuracy of the terrestrial reference frame (TRF) (Altamimi et al. 2016; Bloßfeld et al. 2014; Chambers et al. 2007; Melachroinos et al. 2013). Estimating the geocenter motion based on geodetic measurements is crucial since it is fundamentally related to how we realize the TRF. Normally, in the realizations of the ITRS, the ITRF origin is determined by satellite laser ranging (SLR). Considering the sparse and non-uniform distribution of the SLR network as well as the system noise, the SLR geocenter estimates display a rather large scatter at sub-annual time scales (Altamimi et al. 2016; Cheng et al. 2013; Collilieux et al. 2009; Feissel-Vernier et al. 2006; Riddell et al. 2017; Sun et al. 2017; Urschl et al. 2007). Owing to its globally distributed observations and full-time operation, the global navigation satellite system (GNSS) has attracted researchers' attention for the determination of geocenter motion since the late 1990s. Various approaches have been proposed for the determination of the geocenter motions, among which two main categories of methods are used to derive the geocenter series with GPS: the translational approach and the inverse approach (Fritsche et al. 2010; Kang et al. 2009; Lavallée et al. 2006, 2010; Meindl et al. 2013; Wu et al. 2002, 2012; Zannat and Tregoning 2017a, b; Zhang and Jin 2014).
According to orbit dynamics theory, the center of the GPS satellite constellation is tied to the position of the CM. If fiducial-free or minimal constraints are applied during the data processing step, when station coordinates and satellite orbits are estimated simultaneously, the results are related to the satellites' positions and thus refer to the CM. Under this circumstance, a Helmert parameter transformation can be used to link these CM results to the defined TRF, for example the ITRF (Altamimi et al. 2016; Jin et al. 2013; Ferland and Piraszewski 2009). The obtained translation parameters are the geocenter coordinates; thus, the translational approach is also called the network shift approach, which obtains the geocenter series by estimating the translation parameters of a tracking network relative to the center of the geodetic satellite orbits. The second approach, also called the degree-1 deformation approach, models the geocenter motion using the deformation expression of the change in the CM induced by the surface load; namely, it inverts the geocenter motion by observing the deformation of the solid Earth due to the surface mass load. Blewitt et al. (2001) first adopted this approach to invert the geocenter motion with a globally distributed GPS network and found a clear mass exchange within the Earth system. Dong et al. (2002) and Wu et al. (2002) then also adopted this method to invert geocenter motion; they suggested that the ignored higher degrees could generate significant error. Based on these two methods, many researchers have tried to estimate the geocenter motion using GPS data. They point out that the quality of the GPS-estimated geocenter results is related to the distribution of GPS stations as well as the GPS data processing accuracy, and that the improved precision of modern geodetic techniques should bring further improvements to the GPS-estimated geocenter motions (Rebischung et al. 2014, 2016; Rietbroek et al. 2012, 2014; Wu et al. 2015, 2017).
GNSS data processing methods and theories have been under continuous refinement over the past 30 years. To date, the newest products come from the second data reprocessing campaign implemented by the International GNSS Service (IGS), hereafter called the IG2 products. Using the latest models and methodology, the IG2 products result from a reanalysis of the full history of GPS data (Altamimi et al. 2016; Rebischung et al. 2016). As the accuracy of the tracking systems increases, a more uniform and denser network geometry is required to provide more stable and reliable geocenter estimates. Considering that the IG2 data processing strategies and models will be implemented in GNSS data processing in the next few years, it is worthwhile to apply these newest solutions to the estimation of geocenter motions and to assess their quality against previous solutions. With public access to the IG2 solutions, geocenter motion estimates using the network shift approach have been analyzed in detail by Rebischung et al. (2015, 2016). However, there are few results related to the inverse approach using this new data set. Would it be superior to the network shift approach? Compared with the other existing solutions using the inverse method, is there any advantage? These questions are the focus of this investigation (Additional file 1).
Moreover, a terrestrial reference frame is realized by surface networks, and its origin is tied to the center of the GPS tracking network (CN) rather than to the CF. Therefore, the displacements observed by GPS are referred to the CN frame (Dong et al. 2002; Wu et al. 2012), whereas the loading coefficients of the CF frame are used when estimating the spherical harmonic coefficients. One potential source of error is therefore the approximation of using the motion of the CF in place of the motion of the CN when estimating the geocenter motion with surface load theory; the translation parameters are mainly used to reduce the errors arising from this CF-for-CN approximation. The differences between CF and CN depend on the network distribution, data quality, etc., and whether the translational parameters can be ignored in the inversion process is another question discussed below.
In this paper, we detail the characteristics of the geocenter motions obtained with the degree-1 deformation approach and the newest IG2 products. First, the inversion model and the GPS observations from the IG2 products are described in detail. Then the GPS-inverted geocenter motions for different truncated degrees are compared with SLR results in terms of correlation coefficients. Moreover, both the seasonal and non-seasonal signals of the GPS-derived geocenter time series are analyzed quantitatively and qualitatively. Finally, the effects of the translational parameters and the improvements over the IGS first data reprocessing campaign (IG1) are investigated. The analyses in this paper help readers obtain a thorough understanding of the ability of GPS to recover geocenter motions and provide numerical support for interpreting and making use of the IGS products.
Degree-1 deformation inverse approach
The three-dimensional displacements of a point on the Earth's surface induced by surface mass loading can be described as a spherical harmonic expansion according to the Love number theory (Farrell 1972; Zou et al. 2014):
$$\begin{aligned} N(\theta ,\lambda ) & = \frac{{\rho_{\text{S}} }}{{\rho_{\text{E}} }}\sum\limits_{{n = n_{{\rm min} } }}^{{n_{{\rm max} } }} {\sum\limits_{m = 0}^{n} {\sum\limits_{\varPhi = C,S}^{{}} {\frac{{3l_{n}^{'} }}{(2n + 1)}} } } T_{nm}^{\varPhi } \partial_{\theta } P_{nm}^{{}} (\cos \theta ) \\ E(\theta ,\lambda ) & = \frac{{\rho_{\text{S}} }}{{\rho_{\text{E}} }}\sum\limits_{{n = n_{{\rm min} } }}^{{n_{{\rm max} } }} {\sum\limits_{m = 0}^{n} {\sum\limits_{\varPhi = C,S}^{{}} {\frac{{3l_{n}^{'} }}{(2n + 1)}} } } T_{nm}^{\varPhi } \frac{{\partial_{\lambda } P_{nm}^{{}} (\cos \theta )}}{\cos \theta } \\ U(\theta ,\lambda ) & = \frac{{\rho_{\text{S}} }}{{\rho_{\text{E}} }}\sum\limits_{{n = n_{{\rm min} } }}^{{n_{{\rm max} } }} {\sum\limits_{m = 0}^{n} {\sum\limits_{\varPhi = C,S}^{{}} {\frac{{3h_{n}^{'} }}{(2n + 1)}} } } T_{nm}^{\varPhi } P_{nm}^{{}} (\cos \theta ) \\ \end{aligned}$$
where \(T_{nm}^{\varPhi }\) are the spherical harmonic (SH) coefficients of the surface load density, \(P_{nm}\) are the fully normalized associated Legendre polynomials, \(h_{n}^{'}\) and \(l_{n}^{'}\) are the load Love numbers of degree n, and \(\rho_{\text{S}}\) and \(\rho_{\text{E}}\) are the densities of seawater (1.025 × 10^3 kg/m^3) and the Earth (5.517 × 10^3 kg/m^3), respectively. Formula (1) can be expanded as:
$$\begin{aligned} \Delta N(\theta ,\lambda ) & = \frac{\rho_{\text{S}} }{\rho_{\text{E}} }\sum\limits_{n = 1}^{N_{{\rm max}} } \sum\limits_{m = 0}^{n} \frac{3l_{n}^{'} }{(2n + 1)}\frac{\partial \overline{P}_{n,m} (\cos \theta )}{\partial \theta } \cdot ( - \Delta C_{nm} \cos (m\lambda ) - \Delta S_{nm} \sin (m\lambda )) \\ \Delta E(\theta ,\lambda ) & = \frac{\rho_{\text{S}} }{\rho_{\text{E}} }\frac{R}{\sin \theta }\sum\limits_{n = 1}^{N_{{\rm max}} } \sum\limits_{m = 0}^{n} \frac{3l_{n}^{'} }{(2n + 1)}\overline{P}_{n,m} (\cos \theta ) \cdot ( - m\Delta C_{nm} \sin (m\lambda ) + m\Delta S_{nm} \cos (m\lambda )) \\ \Delta U(\theta ,\lambda ) & = \frac{\rho_{\text{S}} }{\rho_{\text{E}} }\sum\limits_{n = 1}^{N_{{\rm max}} } \sum\limits_{m = 0}^{n} \frac{3h_{n}^{'} }{(2n + 1)}\overline{P}_{n,m} (\cos \theta ) \cdot (\Delta C_{nm} \cos (m\lambda ) + \Delta S_{nm} \sin (m\lambda )) \\ \end{aligned}$$
where \(\Delta C_{nm}\) and \(\Delta S_{nm}\) are the time-varying SH coefficients of the surface density anomaly. The geocenter motion can then be described as follows (Blewitt and Clarke 2003):
$$\Delta r_{{{\text{CF}} - {\text{CM}}}} = \sqrt 3 \left( {\frac{{\left[ {h_{1}^{'} } \right]_{CE} + 2\left[ {l_{1}^{'} } \right]_{CE} }}{3} - 1} \right)\frac{{\rho_{\text{S}} }}{{\rho_{\text{E}} }}\left( {\begin{array}{*{20}c} {\Delta C_{11}^{{}} } \\ {\Delta S_{11}^{{}} } \\ {\Delta C_{10}^{{}} } \\ \end{array} } \right)$$
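To make the mapping in Eq. (3) concrete, the short Python/NumPy sketch below converts a set of degree-1 load coefficients into a CF–CM geocenter offset. It is a minimal illustration, not the authors' processing code: the CE-frame degree-1 load Love numbers are illustrative values commonly quoted in the loading literature and should be replaced by the values actually adopted, and the output units simply follow whatever convention the input coefficients use.

```python
import numpy as np

# Illustrative CE-frame degree-1 load Love numbers (assumed values; substitute
# the ones actually used in the inversion).
h1_CE, l1_CE = -0.290, 0.113
rho_S, rho_E = 1.025e3, 5.517e3   # seawater and mean Earth densities (kg/m^3)

def geocenter_from_degree1(dC11, dS11, dC10):
    """Evaluate Eq. (3): map degree-1 load coefficients to the CF-CM offset."""
    scale = np.sqrt(3.0) * ((h1_CE + 2.0 * l1_CE) / 3.0 - 1.0) * rho_S / rho_E
    return scale * np.array([dC11, dS11, dC10])   # (X, Y, Z) components
```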
Combining formulas (3) and (2), the functional model of geocenter motion using GPS observations can then be expressed as:
$$\Delta X = t + \left( {\frac{\sqrt 3 }{{\left[ {h_{1}^{'} } \right]_{CE} + 2\left[ {l_{1}^{'} } \right]_{CE} - 3}}} \right)G^{T} {\text{diag}}[l_{1}^{'} ,l_{1}^{'} ,h_{1}^{'} ]_{CF} G\Delta r_{{\text{CF}}-{\text{CM}}} + H[\begin{array}{*{20}c} {\Delta C_{20}^{{}} } & {\Delta C_{21}^{{}} } & {\Delta S_{21}^{{}} } & \cdots \\ \end{array} ]$$
where \(\Delta X\) is the vector of surface displacements from GPS observations, \(G = \left[ {\begin{array}{*{20}c} { - \sin \theta \cos \lambda } & { - \sin \theta \sin \lambda } & {\cos \theta } \\ { - \sin \lambda } & {\cos \lambda } & 0 \\ {\cos \theta \cos \lambda } & {\cos \theta \sin \lambda } & {\sin \theta } \\ \end{array} } \right]\) is the matrix that rotates geocentric into topocentric displacements at a point with geodetic longitude \(\lambda\) and latitude \(\theta\), \(H\) is the design block for the higher-degree (degree ≥ 2) coefficients and \(t\) represents the translational parameters denoting the differences between CF and CN. To solve the inversion problem, the linear estimation model of geocenter motion is written as:
$$y = Ax + v,\quad E(v) = 0,\quad D(v) = \sigma_{0}^{2} P^{ - 1}$$
where \(y\) denotes the GPS displacements for the three components, \(A\) is the design matrix and \(P\) is the diagonal weight matrix for the displacements. The unknown parameter vector \(x\) is
$$x = \left( {\begin{array}{*{20}c} {t_{x} } & {t_{y} } & {t_{z} } & {\Delta r_{{\text{CF}}-{\text{CM}}} } & {\Delta C_{22} \Delta S_{22} \Delta C_{21} \Delta S_{21} \Delta C_{20} \ldots \Delta C_{{N_{{\rm max} } 0}} } \\ \end{array} } \right)$$
with the translational parameters \(t\), the geocenter motion \(\Delta r_{\text{CF}-\text{CM}}\) and the higher degrees of the surface mass load (i.e., up to degree \(N_{\rm max}\)) as unknowns. The design matrix is
$$A = \left( {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \quad \left( {\frac{\sqrt 3 }{{\left[ {h_{1}^{'} } \right]_{CE} + 2\left[ {l_{1}^{'} } \right]_{CE} - 3}}} \right)G_{i}^{T} {\text{diag}}[l_{1}^{'} ,l_{1}^{'} ,h_{1}^{'} ]_{CF} G_{i} \quad B_{i} } \right)$$
where the matrix \(B_{i}\) contains the partial derivatives with respect to the higher-degree (n > 1) coefficients. Based on least-squares theory, the solution of Eq. (5) is then obtained as
$$\hat{x} = (A^{T} P^{{}} A)^{ - 1} A^{T} P^{{}} y$$
where \(\hat{x}\) is the vector of least-squares estimates and \(\hat{v} = A\hat{x} - y\) is the residual vector. In this way, the geocenter motion (i.e., the degree-1 spherical harmonic coefficients) can be obtained.
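To make Eqs. (4) and (8) concrete, the sketch below shows, in Python/NumPy, the rotation from geocentric to topocentric displacements and the weighted least-squares solution. The assembly of the full design matrix from the Love-number and higher-degree terms is omitted, and the function names are ours rather than part of the actual IG2 processing chain.

```python
import numpy as np

def rotation_matrix(lam, theta):
    """Rotate geocentric (X, Y, Z) displacements to local (N, E, U) at a
    station with longitude lam and latitude theta (radians), as in Eq. (4)."""
    sl, cl = np.sin(lam), np.cos(lam)
    st, ct = np.sin(theta), np.cos(theta)
    return np.array([[-st * cl, -st * sl, ct],
                     [-sl,       cl,      0.0],
                     [ ct * cl,  ct * sl, st]])

def solve_geocenter(A, y, sigma):
    """Weighted least squares, x_hat = (A^T P A)^-1 A^T P y, Eq. (8).
    sigma: per-observation standard deviations forming P = diag(1/sigma^2)."""
    P = np.diag(1.0 / sigma**2)
    normal = A.T @ P @ A
    x_hat = np.linalg.solve(normal, A.T @ P @ y)
    residuals = A @ x_hat - y
    return x_hat, residuals
```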
GPS observations from IG2 products
The geocenter motion from the degree-1 deformation approach depends highly on the quality of the GPS observations. As a public service, the IGS continuously provides high-quality products in support of Earth science research, multi-disciplinary applications and education. To date, the IG2 products provide the most reliable and accurate GPS results worldwide, covering January 1994 to February 2015. Therefore, in this paper, GPS observations from IG2 are adopted to invert geocenter motion estimates from global GPS data.
To accurately estimate geocenter motion with GPS data, the site distribution should be globally homogeneous and stable. Thus, the core network of IG2, a well-distributed subset of IGb08 stations, is selected (Fig. 1). This well-distributed sub-network is recommended for the comparison or alignment of global solutions to the IGb08 reference frame, since it mitigates the aliasing of stations' nonlinear motion into transformation parameters (IGSMAIL-6663: https://igscb.jpl.nasa.gov/pipermail/igsmail/). Among the global IGS sites, the core sites are also the most stable ones with the longest observation time, which helps ensure the reliability of the estimates. Finally, 188 globally, evenly distributed GPS sites with high geodetic quality are selected from the IGS core sites. They satisfy the following criteria: (a) evenly distributed worldwide; (b) continuously observed for at least 10 years; (c) located far from plate boundaries and deforming zones; and (d) velocity accuracy better than 3 mm/year. Their spatial distribution is shown in Fig. 1. The GPS coordinate time series of the selected network can be obtained directly from the IGS SINEX files (ftp://cddis.gsfc.nasa.gov/gps/products/repro2/). Before these data are used in the geocenter motion inversion, the original coordinate time series are simultaneously detrended and corrected for step discontinuities by least-squares fitting of a linear trend, annual and semiannual sinusoids, and visually obvious offset terms at epochs corresponding to equipment changes and coseismic displacements as reported in the ITRF2014 IGS discontinuities file (http://itrf.ensg.ign.fr/ITRF_solutions/2014/computation_strategy.php?page=2). Owing to this preprocessing step, our results are not affected by coseismic offsets.
Map of the selected IGS core sites. Pink circles denote the stations affected by large earthquakes as reported by ITRF2014 IGS discontinuities file
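The following sketch illustrates the kind of preprocessing described above for one coordinate component: trend, annual and semiannual terms and step offsets are fitted jointly, and only the trend and offsets are removed so that the loading signal is retained. The function name and the list of step epochs are ours; the actual IG2 preprocessing may differ in detail.

```python
import numpy as np

def remove_trend_and_offsets(t, y, steps):
    """Fit trend + annual + semiannual + step offsets jointly, then subtract
    only the trend and offsets so the seasonal loading signal is retained.
    t: decimal years, y: one coordinate component, steps: offset epochs."""
    cols = [np.ones_like(t), t,
            np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
            np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)]
    cols += [(t >= e).astype(float) for e in steps]   # Heaviside step terms
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    keep = [0, 1] + list(range(6, A.shape[1]))        # constant, trend, offsets
    return y - A[:, keep] @ coef[keep]
```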
Theoretically, if the GPS stations were homogeneously distributed, the effects of the higher-degree coefficients on the GPS-estimated geocenter motion would be small. However, an ideal globally distributed network is difficult to realize, since few or no data are available in polar and oceanic areas. Therefore, to investigate the aliasing errors of the higher degrees, the spherical harmonic expansion of the load-induced geometric displacements is truncated at degrees from 1 to 10.
GPS-derived geocenter motion
General features of the geocenter estimates
After the GPS-derived geocenter motion time series are obtained, the variations of the geocenter estimates become apparent. Figure 2 shows the time series of the inverted geocenter motion with truncated degrees 1, 5 and 6, from January 2000 to December 2015. The geocenter variations obtained with other truncated degrees are similar to those in Fig. 2, and in general the geocenter variations obtained with different truncated degrees resemble one another. Significant seasonal variations exist in all three components, and the amplitude of the Z component is clearly higher than those of the X and Y components.
Detrended geocenter time series at different truncated degrees. The blue, red and black solid lines represent the results for truncated degrees 1, 5 and 6. The X and Z components have been shifted by 20 mm and − 25 mm, respectively
To further examine the periodic signals in the GPS-inverted geocenter time series, we compute the power spectral density (PSD) of the geocenter time series for truncated degrees 1 to 10, using the Lomb–Scargle periodogram method (Deng et al. 2016; Jiang et al. 2014; Williams 2003; Williams et al. 2004). After the spectra are obtained, all spectra of the geocenter motion series with truncated degrees from 1 to 10 are stacked and then smoothed with a Gaussian smoother, so that an overall view of the geocenter motion PSD variations is obtained. Figure 3 presents the final filtered PSD stacking results of the geocenter motion time series inverted from GPS observations. A dominant annual period at a frequency of 1 cycle per year (cpy) is present in all three components; the Z component has the highest power, followed by the X component and then the Y component. The semiannual signal is relatively weak but still clear, and the power of the semiannual signals of the X and Y components is less pronounced than that of the Z component. In addition, the spectra of the stacked GPS-inverted geocenter series show peaks at the harmonics of the GPS draconitic year (1.04 cpy harmonics) in all three components, as also seen in the IG1-inverted results.
Stacked periodogram of the GPS-derived geocenter motion time series with truncated degree from 1 to 10. The blue, green and red lines represent the PSD results of X, Y and Z components, respectively. The black and green vertical dashed lines represent the 1.0 cpy and 1.04 cpy harmonics
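A stacked Lomb–Scargle periodogram of the kind shown in Fig. 3 can be produced along the following lines. This is a schematic sketch: the frequency grid, smoothing width and normalization are arbitrary choices of ours, not the settings used for the figure.

```python
import numpy as np
from scipy.signal import lombscargle
from scipy.ndimage import gaussian_filter1d

def stacked_psd(series_by_degree, t, fmax_cpy=6.0, nfreq=2000, smooth=5):
    """Stack Lomb-Scargle spectra of geocenter series obtained with different
    truncated degrees and smooth the stack with a Gaussian filter.
    series_by_degree: list of 1-D arrays (one per truncated degree),
    t: decimal-year epochs shared by all series."""
    freqs_cpy = np.linspace(0.1, fmax_cpy, nfreq)   # cycles per year
    ang_freqs = 2.0 * np.pi * freqs_cpy             # rad per year
    stack = np.zeros(nfreq)
    for y in series_by_degree:
        stack += lombscargle(t, y - y.mean(), ang_freqs)
    return freqs_cpy, gaussian_filter1d(stack / len(series_by_degree), smooth)
```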
Correlations of the GPS geocenter estimates with SLR results
To assess the quality of the IG2-inverted geocenter motion, we compare our results with SLR to investigate the correlations between geocenter estimates obtained from the two space geodetic techniques. We adopt as reference the UT/CSR monthly geocenter time series from the analysis of SLR data to five geodetic satellites (LAGEOS-1, LAGEOS-2, Starlette, Stella and Ajisai) (ftp://ftp.csr.utexas.edu/pub/slr/geocenter/GCN_RL06_2018_11.txt). Although the SLR ground stations are sparse and unevenly distributed, with only about 20 active stations on average, and correlated noise exists in the SLR time series, SLR still appears to exhibit the most reliable sensitivity to the CM owing to its simpler orbit dynamics; it provided the first competitive geocenter motion results and is adopted to estimate the ITRF origin (Altamimi et al. 2016; Riddell et al. 2017; Wu et al. 2017). Considering the data spans of SLR and IG2, results from 2002.04107 (2002/01/15) to 2015.04107 (2015/01/15) are used when comparing the two data sets.
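The per-component correlation coefficients quoted below can be reproduced schematically as follows, assuming the GPS and SLR series have already been reduced to common (e.g., monthly) epochs over the 2002–2015 overlap; the interpolation step is omitted here.

```python
import numpy as np

def component_correlations(gps_xyz, slr_xyz):
    """Pearson correlation per component (X, Y, Z) between GPS-inverted and SLR
    geocenter series given as arrays of shape (n_epochs, 3) on common epochs."""
    return [np.corrcoef(gps_xyz[:, k], slr_xyz[:, k])[0, 1] for k in range(3)]
```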
The left panel of Fig. 4 shows the correlation between SLR and the GPS estimates with translational parameters coestimated at different truncated degrees, with the truncated degrees from 1 to 10 on the horizontal axis. For all three components, the geocenter estimates from GPS observations correlate favorably with the SLR results when the truncated degree is below 6. The highest correlation occurs in the Z component, while the X and Y components have relatively lower correlations. The average correlation coefficients for truncated degrees 1 to 6 are 0.35, 0.44 and 0.74 for the X, Y and Z components, respectively, and the variations of the correlation coefficients from degree 1 to 6 are relatively moderate for all three components. From truncated degree 7 onward, however, the correlation coefficients of all three components decrease markedly, especially for the X and Z components, implying that the more truncated degrees are estimated, the more significant the impact of the aliasing errors of higher degrees on the GPS geocenter motions.
Correlation between SLR and IG2-inverted geocenter motion truncated by different degrees. Blue, green and red lines denote the X, Y and Z components, respectively. Left: results with translational parameters coestimated, right: results without translational parameters coestimated
The UT/CSR geocenter vector is consistent with the IERS conventions, approximating the vector from the origin of the TRF to the instantaneous mass center of the Earth (ftp://ftp.csr.utexas.edu/pub/slr/geocenter/README_RL06). That is to say, the SLR time series reflects the motion of the CN with respect to the CM, which is not the same as the CM–CF estimates obtained from GPS. To investigate the effect of the differences between SLR CN–CM and SLR CF–CM data, we also use GCN_L1_L2_30d_CN-CM (CN–CM reflects the correction required to best center the SLR network on the geocenter in the absence of modeling any local site loading) and GCN_L1_L2_30d_CF-CM (CF–CM is intended to reflect the true degree-1 mass variations without being affected by the higher-degree site loading effects, particularly at the annual frequency) as references. The average correlation coefficients between the IG2-inverted geocenter motions and these two SLR series are 0.20, 0.39 and 0.61 and 0.26, 0.41 and 0.56 for X, Y and Z, respectively, which shows overall agreement with both the CF–CM and CN–CM SLR results. Therefore, from a practical point of view, we consider it feasible to use the UT/CSR CN–CM estimates as a reference to evaluate the quality of the IG2-derived geocenter motions.
Seasonal signal analyses
Annual variations
As discussed in the preceding sections, obvious seasonal signals (mainly annual and semiannual) are present in the GPS-inverted geocenter motions. In this section, we quantify the seasonal amplitudes for the different truncated degrees and compare them with the SLR monthly results. Both the annual and semiannual signals are determined by unweighted least squares with the model \(y(t_{i} ) = a + bt_{i} + A^{a} \cos (2\pi t_{i} - \varphi_{a} ) + A^{sa} \cos (4\pi t_{i} - \varphi_{sa} )\), where \(a\) is a constant, \(b\) is the trend, \(A^{a}\) and \(A^{sa}\) are the annual and semiannual amplitudes, and \(\varphi_{a}\) and \(\varphi_{sa}\) are the corresponding initial phases.
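A minimal least-squares implementation of this seasonal model is sketched below; amplitudes and phases follow from the cosine and sine coefficients of the fit. The function is ours and returns phases in radians, whereas the tables in this paper report phases in days or degrees.

```python
import numpy as np

def seasonal_fit(t, y):
    """Unweighted least-squares fit of y(t) = a + b*t + annual + semiannual.
    Returns (annual_amp, annual_phase, semiannual_amp, semiannual_phase),
    with phases in radians following the A*cos(2*pi*t - phi) convention."""
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                         np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    amp_a, phi_a = np.hypot(c[2], c[3]), np.arctan2(c[3], c[2])
    amp_sa, phi_sa = np.hypot(c[4], c[5]), np.arctan2(c[5], c[4])
    return amp_a, phi_a, amp_sa, phi_sa
```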
The left panels of Figs. 5 and 6 show the annual amplitudes and phases derived from GPS (with translational parameters coestimated) and from SLR for truncated degrees 1 to 10. Compared with the SLR results, the GPS-derived annual amplitudes of the X and Y components are smaller, especially for the estimates with truncated degrees 2 and 3. As the truncated degree increases, the GPS-derived annual amplitudes of the X and Y components remain stable, with the results for truncated degrees 1, 5 and 10 closest to SLR.
Annual amplitudes of the GPS-derived and SLR obtained geocenter motions with truncated degrees from 1 to 10. Blue, green and red lines denote the X, Y and Z components. The solid and dashed lines denote the annual amplitudes from GPS and SLR, respectively. Left: results with translation parameters coestimated, right: results without translation parameters coestimated
Annual phases of the GPS-derived and SLR obtained geocenter motions with truncated degrees from 1 to 10. The phases of Y component have been shifted 180 degrees. The symbol meanings are the same as Fig. 5. Left: results with translational parameters coestimated, right: results without translational parameters coestimated. Note that in the left panel, the phase angle difference of the X component at 1, 7, 8, 9 and 10 is too big and is not shown in the figure. The situation is the same for the Y component at truncated degree 2
For the Z component, the GPS-derived annual amplitudes are higher than the SLR estimates and approach the SLR amplitudes as the truncated degree increases up to 8, with the estimates for truncated degrees 5 and 8 closest to the SLR amplitudes. In terms of the annual phases, results with truncated degrees 4 to 6 show better consistency with SLR for all three components. The X and Y components exhibit slightly larger differences between truncated degrees, while the Z component is relatively stable and close to the SLR-estimated phases.
Semiannual variations
The semiannual signals of the GPS-derived geocenter motion time series obtained with the translational parameters coestimated are also investigated for the different truncated degrees, as shown in the left panels of Figs. 7 and 8. Generally, the semiannual amplitudes of the geocenter estimates with low truncated degrees (below 5) are closer to the SLR results in all three components. Except for the Z component, which decreases slowly with increasing truncated degree, the variations of the X and Y components with truncated degree are relatively stable. The semiannual amplitudes of the Y and Z components are close to the SLR estimates, while those of the X component are smaller than the SLR results. In terms of the semiannual phases, the Z component is very stable and close to the SLR results, while the phases of the X and Y components vary noticeably.
Semiannual amplitudes of the GPS-derived and SLR obtained geocenter motions with truncated degrees from 1 to 10. Symbols are the same as in Fig. 5. Left: results with translation parameters coestimated, right: results without translation parameters coestimated
Semiannual phases of the GPS-derived and SLR obtained geocenter motions with truncated degrees from 1 to 10. Left: results with translation parameters coestimated, right: results without translation parameters coestimated. Symbols are the same as in Fig. 5. Note that in the left panel, the phase angle difference of the Y component at 6 is too big and is not shown in the figure
The above correlation and seasonal signal analyses show that, although the seasonal signals (both annual and semiannual) of the GPS-inverted geocenter motion vary with the truncated degree, the estimates with truncated degree 5 are the closest to the SLR geocenter estimates. Therefore, in the following analysis, the GPS-derived geocenter estimates with translation parameters estimated and truncated degree 5 are selected as the optimal results.
Non-seasonal signal analyses
It is well known that a large part of the geocenter motion consists of seasonal variations, so the correlations discussed above mainly reflect how well the seasonal signals agree. To assess the agreement of the non-seasonal signals, we calculate the correlation of the GPS-derived geocenter motion (with translational parameters coestimated) with the SLR results after removing the seasonal signals; the result is shown in the left panel of Fig. 9. The agreement of the non-seasonal signals is relatively weak, with average correlation coefficients for truncated degrees 1 to 10 of 0.13, 0.24 and 0.04 for the X, Y and Z components, respectively. The GPS-estimated geocenter motions thus mainly capture the seasonal rather than the non-seasonal signals. Only the Y component of the non-seasonal GPS geocenter estimates remains consistent with the SLR result, indicating that the Y component of the GPS results may have some ability to detect non-seasonal geocenter signals, for example the long-term trend.
Non-seasonal geocenter correlations of GPS estimates with SLR results. Blue, green and red lines denote the X, Y and Z components, respectively. Left: results with translation parameters coestimated, right: results without translation parameters coestimated
Comparison between different geocenter estimating approaches
To provide a service under a unified and homogeneous reference frame, the IGS transforms all results into the IGS reference frame (such as IGb08) using a Helmert similarity transformation after obtaining solutions from the different analysis centers (ACs, ftp://cddis.gsfc.nasa.gov/gps/products/repro2/). The translation parameters are provided as geocenter estimates in the IGS SINEX files, which corresponds to the network shift approach; hereafter we call this the IG2 SINEX solution. To compare our GPS-derived results with those of the network shift approach, we also estimate the seasonal amplitudes and phases of the IG2 SINEX solution.
In this section we mainly compare the annual signals, consistent with previous studies (Rietbroek et al. 2014; Wu et al. 2015, 2017). The annual amplitudes and phases of the geocenter estimates from the different methods are listed in Table 1. Following the conclusion of the "Seasonal signal analyses" section, we select the GPS-inverted geocenter motions with truncated degree 5 as the optimal results of the degree-1 deformation approach. In terms of the annual signals, all three components of the geocenter estimates from the degree-1 deformation approach are better than the results from the network shift approach (compared to slr_itrf2014). In particular, the amplitude and phase of the Z component obtained from the network shift approach differ significantly from the SLR estimates, while the results of the degree-1 deformation approach are very close to those of SLR.
Annual amplitudes and phases of the geocenter coordinate time series from IG2 data and the SLR data
Columns: Amp (mm), Phase (day). Rows: IG2 estimates; IG2 estimates without trans.; IG2 sinex; slr_GCN_RL06; slr_itrf2014
IG2 sinex refers to the network shift approach, and slr_itrf2014 refers to the SLR contribution to the ITRF2014 (Rebischung et al. 2015)
As denoted in "Correlations of the GPS geocenter estimates with SLR results" section, the SLR geocenter time series reflect the motion of the CN with respect to the CM, which are not the same as CM-CF. Here, we also list the geocenter motion estimates from SLR contribution to ITRF2014 (referred as slr_itrf2014) (Rebischung et al. 2015) in Table 1. We find that although the amplitudes from slr_itrf2014 are higher in the Y and Z components, the annual phases from slr_itrf2014 are identical to those from slr_GCN_RL06. Therefore, we conclude that the GPS-estimated geocenter motions are consistent with SLR results despite the definition differences in the CN and CF.
Since GNSS stations are unevenly distributed geographically, with sparse coverage of oceanic areas and the Southern Hemisphere, the combination of global GPS data, assimilated ocean bottom pressure (OBP) models and GRACE data has the potential to provide a complete and independent global measure of the spectral mass loading coefficients, including degree 1. Accordingly, another type of geocenter inverse method is to invert for degree-1 surface mass variation coefficients from various combinations of GNSS observations, OBP models, Jason ocean altimetry and GRACE data (Blewitt and Clarke 2003; Rietbroek et al. 2014; Wu et al. 2015, 2017). To investigate the possible ill-posedness of the GPS loading inversion over sparsely sampled regions such as the oceans, we also compare the geocenter motions estimated with such combined techniques in terms of annual variations (Table 2). For the X component, the annual estimates from the different approaches agree quite well in both amplitude and phase. However, the annual amplitude of the Y component of the IG2 GPS-derived geocenter motion is persistently smaller than that of the results derived from other methods, while the Z component of the IG2 results has a higher annual amplitude. As a whole, the annual phases predicted with the IG2 data are in line with those based on other techniques, except for a discrepancy in the Z component, where the IG2-inverted solutions and those obtained with the KALREF approaches lead those based on other techniques by more than one month.
Estimated amplitudes and phases of the annual variations of the geocenter motions obtained from different combination approaches
Approaches and references: GRACE + OBP-improved, Sun et al. (2017), 2002.8–2014.6; Combination approach: Jason 1/2 Alt. + GRACE, Rietbroek et al. (2014); GNSS Unified, Lavallée et al. (2006); KALREF-week 82-site, Wu et al. (2015); KALREF + GRACE
Possible reasons for the above differences fall into two categories. On the one hand, there are GNSS observation errors, such as time-correlated errors in station displacements, remaining draconitic errors in the reprocessed GNSS data, inaccurate or neglected variance and correlation structures in the covariance matrices, unmodeled technique-specific systematic errors, and the truncation of higher-degree terms during the geocenter motion inversion. The impacts of these factors on the parameter estimation are, for the most part, not precisely known at present. On the other hand, including an OBP model and GRACE data clearly improves the geographic coverage and the separation of spherical harmonic coefficients (Rietbroek et al. 2014; Swenson et al. 2008; Wu et al. 2015, 2017). However, OBP models generally do not conserve mass or contain accurate mass input/output information (i.e., from evaporation, precipitation and discharge). They are also built on a static geoid without considering the time-variable self-gravitation and loading effects of surface mass variations. While these problems can be and have been corrected (e.g., Sun et al. 2017), a more serious concern is that ocean circulation models, even when many oceanographic data are assimilated, may remain poorly skilled in reproducing OBP. These unknown errors in the GNSS data and the small oceanic contribution to geocenter motion directly affect the global inversion results and can thus amplify the errors in the estimated geocenter motion (Wu et al. 2017).
Comparison with the GPS geocenter estimates from IG1
To date, the IGS has performed two reanalyses of the full history of GPS data collected by the IGS global network: IG1 and IG2. In this section, GPS observations from IG1 are used to invert the geocenter motions, and the same analyses as for IG2 are applied to the IG1 solutions to investigate the possible improvements. Because the IG1 data span ends in early 2010, we compare results from 2002.04107 to 2009.95547. Figure 10 shows the geocenter correlations of the IG1 results with the SLR results, together with the annual signal comparison, with (left panels) and without (right panels) coestimating the translation parameters. Generally, the IG1 estimates with truncated degree 4 are the closest to the SLR results. The average correlation coefficients with the SLR results for IG1 are 0.58, 0.48 and 0.66 for the X, Y and Z components, while for the IG2 results over the same time span the mean values are 0.42, 0.46 and 0.75. The correlations of the X component for IG1 are better than for IG2, but the Z component exhibits worse correlations. Compared to the IG2 estimates, although the IG1-derived annual amplitude of the Y component is more consistent with the SLR estimates, its annual phase is far (nearly two months) from SLR. For IG1 there is no single truncated degree for which both the annual amplitudes and phases are consistent with SLR, which indicates the improvement of IG2 over the previous IG1 reprocessing efforts (see details at http://acc.igs.org/reprocess2.html). We also find that the IG1 geocenter estimates without coestimating the translational parameters show more moderate variations, but exhibit obvious discrepancies with the IG1 results with translational parameters coestimated.
Geocenter correlations and annual estimates of the IG1-inverted geocenter motions with respect to the SLR results. Blue, green and red lines denote the X, Y and Z components, respectively. Left: results with translation parameters coestimated, right: results without translation parameters coestimated
Effects of the translational parameters in the inversion model
As mentioned in "Method and data" section, we find that there are three translation parameters (\((t_{x} \mathop {}\nolimits_{{}}^{{}} t_{y} \mathop {}\nolimits_{{}}^{{}} t_{z} )\)) in the functional model of the degree-1 deformation approach. However, there are no agreements about whether the translational parameters should be estimated. In this section, we investigate the effects of the translational parameters on the IG2-inversed geocenter motions. For clear comparison, the right panel of Figs. 4, 5, 6, 7, 8 and 9 shows the results without coestimating the translational parameters.
As the truncated degree grows, the variations of the correlation coefficients obtained without coestimating the translational parameters are more moderate than those with the translational parameters coestimated. The average correlation coefficients for truncated degrees 1 to 10 are 0.45, 0.50 and 0.73 for the X, Y and Z components, respectively, which is close to the results with translational parameters coestimated. In terms of the annual geocenter motion estimates, the annual amplitudes with truncated degree 5 are the closest to the SLR amplitudes for all three components, while for the annual phases only the Z component shows little variation between truncated degrees and remains close to the SLR-estimated phases. The most significant differences between the solutions with and without coestimated translational parameters appear in the Y component. When the truncated degree is 1, the annual phase of the Y component is close to SLR; when truncated at degree 2, the phase becomes about 30° larger than the SLR result; and for truncated degrees between 3 and 10, the phases stabilize at approximately 45° smaller than the SLR results. For the X component, the annual phases with truncated degrees 3 to 6 show better consistency with SLR.
Overall, although the variations of the correlation coefficients across truncated degrees are more moderate without the translational parameters coestimated, for every truncated degree a large discrepancy exists between the annual signal estimates of the GPS-derived results without translational parameters and the SLR results. The GPS geocenter motion estimated with translational parameters therefore gives better results than that without them. Thus, we propose that the translational parameters should be estimated when determining geocenter motion from the IG2 core stations' coordinate time series.
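In practice, the difference between the two variants is only whether the translation columns appear in the design matrix. A schematic sketch, with hypothetical per-station load blocks as input, is given below.

```python
import numpy as np

def assemble_design(load_blocks, estimate_translation=True):
    """Stack per-station design blocks as in Eq. (7). Each load block maps the
    geocenter + higher-degree coefficients to one station's N/E/U displacements;
    the leading 3x3 identity carries the CN-CF translation parameters and is
    simply dropped when they are not co-estimated."""
    rows = []
    for L in load_blocks:                       # L has shape (3, n_coeff)
        if estimate_translation:
            rows.append(np.hstack([np.eye(3), L]))
        else:
            rows.append(L)
    return np.vstack(rows)
```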
Effect of the post-seismic deformation
Post-seismic deformation following large earthquakes may cause problems if it is interpreted as load-induced deformation, and it is a possible reason why the Y amplitude is so small. In this section, we remove the 64 stations affected by large earthquakes identified by ITRF2014 (Fig. 1) and then repeat the same analyses as for the full station set (Fig. 11). The average correlation coefficients with the SLR solutions for truncated degrees 1 to 6 are 0.44, 0.20 and 0.72 for the X, Y and Z components, respectively. Compared with the results from the full network, the values of the X component become higher, while those of the Y component become lower. In terms of the annual amplitudes, the differences mainly appear in the X component for truncated degrees above 7 and in the Y component. Except for the solutions with truncated degree 3, the amplitudes of the Y component are higher than those of the SLR results, and the average annual amplitude of the Y component for truncated degrees 1 to 10 is 3.3 mm, which is very close to the results derived from other methods (Table 2). With respect to the annual phases, both the X and Z components show the same variations as the full-network solutions; the only significant difference occurs in the Y component, whose annual phase (except for truncated degree 3) is nearly three months away from the SLR estimates. Therefore, we conclude that after removing the stations affected by large earthquakes, the amplitudes of the Y component do become higher, but its annual phases move further away from the SLR results. We also find that without coestimating the translational parameters, the geocenter estimates are nearly the same as the full-network solutions.
Geocenter correlations and annual estimates of the IG2-inverted geocenter motions after removing the stations affected by large earthquakes identified by ITRF2014. Blue, green and red lines denote the X, Y and Z components, respectively. Left: results with translation parameters coestimated, right: results without translation parameters coestimated
An equivalent water height map of the estimated SH coefficients
As mentioned in "Comparison between different geocenter estimating approaches" section, the lack of the GPS stations in oceanic areas can lead to unstable solutions when inversing global measure of spectral mass loading coefficients, including degree one. In order to declare this ill-posedness, we create an equivalent water height (EWH) map from the IG2-estimated SH coefficients (including higher-order terms) and compare the results with the solutions provided by the Estimation of the Circulation and Climate of the Ocean (ECCO). The SH coefficients from 2008/04 (2008 April) are adopted to compute the equivalent water height to check whether the ocean values obtain reasonable physical values. Figure 12 illustrates the results from SH coefficients with truncated degree at 5, 7, 9, 10 (a–d) as well as the ECCO solutions (e). We can notice that the differences between IG2 SH coefficients and ECCO solutions vary with different truncated degrees. Despite the fact that the IG2 coefficient-inversed EWH with truncated degree 5 is most closely to ECCO solutions, there are still obvious differences occurring in northern and central Atlantic Ocean areas, as well as the central Indian Ocean areas. With the truncated degree grows, the differences become more and more significant, and the consistency between the two different types of solutions is getting worse and worse. Therefore, to obtain stable geocenter motions from the global GPS data, and to expand GPS to inverse the global surface mass loading displacements, further researches still need to be done to combining the OBP models, as well as other datasets (Rietbroek et al. 2014; Sun et al. 2017).
Equivalent water height maps from the IG2-estimated SH coefficients and ECCO OBP solutions. a–d Results from truncated degrees 5, 7, 9 and 10. e ECCO results
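EWH maps of this kind can be synthesized from the estimated coefficients along the following lines. This is a rough sketch under the assumption that the coefficients are already expressed in equivalent-water-height units (the actual scaling depends on the load-density convention adopted); the helper functions and grid resolution are ours, and the Condon–Shortley phase of SciPy's Legendre routine is removed to match the geodetic convention.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def pbar(n, m, x):
    """Fully normalized associated Legendre function; the (-1)**m factor
    cancels the Condon-Shortley phase included by scipy.special.lpmv."""
    norm = np.sqrt((2.0 - (m == 0)) * (2 * n + 1) * factorial(n - m) / factorial(n + m))
    return (-1.0) ** m * norm * lpmv(m, n, x)

def ewh_grid(dC, dS, nmax, nlat=91, nlon=181):
    """Synthesize a lat/lon grid from coefficient arrays dC[n, m], dS[n, m],
    assumed to be in equivalent-water-height units (an assumption)."""
    lat = np.radians(np.linspace(-90.0, 90.0, nlat))
    lon = np.radians(np.linspace(0.0, 360.0, nlon))
    grid = np.zeros((nlat, nlon))
    for n in range(1, nmax + 1):
        for m in range(0, n + 1):
            P = pbar(n, m, np.sin(lat))[:, None]            # shape (nlat, 1)
            grid += P * (dC[n, m] * np.cos(m * lon) + dS[n, m] * np.sin(m * lon))
    return np.degrees(lat), np.degrees(lon), grid
```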
As more and more global GPS products become available, geocenter motions inverted from GPS need to be assessed further. In this paper, we focus on a thorough evaluation of the GPS-derived geocenter motions using the state-of-the-art IG2 reprocessed data and the degree-1 deformation approach. Considering that the performance of the GPS geocenter estimates depends mainly on the truncated degree, the spherical harmonics in the inversion of the geocenter motion are truncated at degrees from 1 to 10, and detailed comparisons are made between our GPS-inverted geocenter motion and the SLR results. The results demonstrate that the IG2 GPS-derived geocenter estimates obtained by coestimating the translational parameters agree with the SLR results in all three components, especially the Z component. The variations of the correlation coefficients with different truncated degrees are moderate when the truncated degree is below 6, which reflects the overall consistency between the GPS-inverted results and the SLR results; the average correlation coefficients for truncated degrees 1 to 6 are 0.35, 0.44 and 0.74 for the X, Y and Z components, respectively. From truncated degree 7 onward, the correlations of all three components decrease markedly, especially for the X and Z components, implying that the more truncated degrees are estimated, the more significant the impact of the aliasing errors of higher degrees on the GPS-derived geocenter motions.
Our stacked periodogram analysis of the GPS geocenter estimates indicates that a dominant annual period at the 1 cpy frequency is present in all three components, while the semiannual signal is relatively weak. The annual amplitudes of the X and Y components are smaller than the SLR results and remain stable across truncated degrees, whereas the annual amplitudes of the Z component of the inverted geocenter are higher than SLR. In terms of the annual phases, the X and Y components show larger differences between truncated degrees, while the Z component is relatively stable and close to the SLR-estimated phases. For all three components, the GPS geocenter estimates with truncated degree 5 are the closest to the SLR estimates. Besides the periodic analysis, we also carry out a non-seasonal analysis of the IG2-inverted and SLR geocenter motions, which shows that the Y component of the residual GPS geocenter estimates still correlates with SLR.
The GPS-derived geocenter motions are then compared with results from other geocenter estimation approaches. For both annual amplitudes and phases, the geocenter estimates obtained from the degree-1 deformation approach are better than the results from the network shift approach using the same GPS data, and they are also consistent with the SLR RL06 results and with the geocenter estimates from the SLR contribution to ITRF2014. As a whole, the annual phases predicted with the IG2 data are in line with those based on other combination techniques, the exception being a discrepancy in the annual phase estimates of the Z component, where the solutions based on the IG2 data, as well as those based on the KALREF approaches, lead those based on other techniques by more than a month. Moreover, the same analysis is applied to the IG1 solutions to investigate the possible improvements. We find that although the IG1 annual amplitude of the Y component is more consistent with the SLR estimates, its annual phase is far (nearly two months) from the SLR-estimated phase both with and without the translational parameters coestimated, and there is no truncated degree for which both the annual amplitudes and phases are consistent with SLR, indicating the improvement of IG2 over the previous IG1 reprocessing efforts.
Furthermore, the effects of the translational parameters are investigated. In terms of the annual geocenter motion estimates, for every truncated degree a large discrepancy exists between the annual signals of the GPS-derived results without translational parameters coestimated and the SLR results. The most significant differences appear in the Y component, which is approximately 45 days apart from the SLR results for the various truncated degrees when the translational parameters are not coestimated. Thus, we propose that the translational parameters should be estimated when determining geocenter motion from the IG2 core stations' coordinate time series.
Finally, we remove the stations affected by large earthquakes identified by ITRF2014 to investigate the impact of post-seismic effects on the geocenter inversion. After removing these stations, the amplitudes of the Y component do become higher, but its annual phases move further away from the SLR solutions. We also carry out an EWH analysis to address the effect of the missing ocean data, creating equivalent water height maps from the IG2-estimated SH coefficients and using the ECCO solutions as reference. The results show that the differences between the IG2 SH coefficient results and the ECCO solutions vary with the truncated degree. Although the IG2 coefficient-derived EWH with truncated degree 5 is closest to the ECCO solution, obvious differences remain in the northern and central Atlantic Ocean as well as the central Indian Ocean, and as the truncated degree grows the differences become more significant and the consistency between the two types of solutions degrades. To obtain stable geocenter motions from global GPS data, and to extend GPS to inverting the global surface mass loading displacements, further research is needed on combining GPS with OBP models and other datasets.
We are grateful to the International GNSS Service (IGS) for providing the original data sets. We also thank the editor and the JEO assistant for important suggestions during manuscript processing, and the two anonymous reviewers for their constructive comments and suggestions, which helped improve the manuscript significantly.
This research is supported by the National Science Fund for Distinguished Young Scholars (No. 41525014), together with the Scientific Research Project of Hubei Provincial Department of Education, Surveying and Mapping Basic Research Program of the National Administration of Surveying, Mapping and Geoinformation (No. 17-01-01) and Research Program of Hubei Polytechnic University (No. 18xjz09R).
All the authors contributed to the design of the study. LD came up with the idea of the study. LD, ZL and YM carried out the experiments and drafted the manuscript. NW and HC participated in the experimental analysis. All authors read and approved the final manuscript.
Additional file 1. IG2-derived geocenter data (40623_2019_1054_MOESM1_ESM.rar).
Optics Valley BeiDou International School, Hubei Polytechnic University, 16 Guilinbei Road, Huangshi, 435003, China
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, 181 Chatham Road South, Hung Hom, Kowloon, 999077, Hong Kong, China
GNSS Research Center, Wuhan University, 129 Luoyu Road, Wuhan, 430079, China
IGN LAREG, Université Paris Diderot, Paris, France
School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan, 430079, China
Altamimi Z, Rebischung P, Métivier L, Collilieux X (2016) ITRF2014: a new release of the international terrestrial reference frame modeling nonlinear station motions. J Geophys Res Solid Earth 121(8):6109–6131Google Scholar
Argus DF (2012) Uncertainty in the velocity between the mass center and surface of earth. J Geophys Res Solid Earth 117(B10405):1–15Google Scholar
Blewitt G (2003) Self-consistency in reference frames, geocenter definition, and surface loading of the solid earth. J Geophys Res Solid Earth 108(B2):2103–2013Google Scholar
Blewitt G, Clarke P (2003) Inversion of Earth's changing shape to weigh sea level in static equilibrium with surface mass redistribution. J Geophys Res (Solid Earth) 108:2311. https://doi.org/10.1029/2002JB002290 Google Scholar
Blewitt G, Lavallée D, Clarke P, Nurutdinov K (2001) A new global mode of earth deformation: seasonal cycle detected. Science 294(5550):2342–2345Google Scholar
Bloßfeld M, Seitz M, Angermann D (2014) Non-linear station motions in epoch and multi-year reference frames. J Geod 88(1):45–63Google Scholar
Chambers DP, Tamisiea ME, Nerem RS, Ries JC (2007) Effects of ice melting on grace observations of ocean mass trends. Geophys Res Lett 34(5):5610Google Scholar
Cheng MK, Ries JC, Tapley BD (2013) Geocenter variations from analysis of SLR data. In: Altamimi Z, Collilieux X (eds) Reference frames for applications in geosciences. International association of geodesy symposia, vol 138. Springer, Berlin, HeidelbergGoogle Scholar
Collilieux X, Altamimi Z, Ray J, Dam TV, Wu X (2009) Effect of the satellite laser ranging network distribution on geocenter motion estimation. J Geophys Res Solid Earth 114(B4402):1–17Google Scholar
Collilieux X, Van Dam T, Ray JR, Coulot D, Metivier L, Altamimi Z (2012) Strategies to mitigate aliasing of loading signals while estimating GPS frame parameters. J Geod 86(1):1–14Google Scholar
Deng L, Jiang W, Li Z, Chen H, Wang K, Ma Y (2016) Assessment of second- and third-order ionospheric effects on regional networks: case study in China with longer CMONOC GPS coordinate time series. J Geod 91(2):1–21Google Scholar
Dong D, Dickey JO, Chao Y, Cheng MK (1997) Geocenter variations caused by atmosphere, ocean and surface ground water. Geophys Res Lett 24(15):1867–1870Google Scholar
Dong D, Fang P, Bock Y, Cheng MK, Miyazaki S (2002) Anatomy of apparent seasonal variations from GPS-derived site position time series. J Geophys Res Solid Earth 107(B4):ETG-1-ETG 9-16Google Scholar
Dong D, Qu W, Fang P, Peng D (2014) Non-linearity of geocentre motion and its impact on the origin of the terrestrial reference frame. Geophys J Int 198(2):1071–1080Google Scholar
Farrell WE (1972) Deformation of the earth by surface loads. Rev Geophys 10(3):761–797Google Scholar
Feissel-Vernier M, Bail KL, Berio P, Coulot D, Ramillien G, Valette JJ (2006) Geocentre motion measured with DORIS and SLR, and predicted by geophysical models. J Geod 80(8–11):637–648Google Scholar
Ferland R, Piraszewski M (2009) The IGS-combined station coordinates, earth rotation parameters and apparent geocenter. J Geod 83(3–4):385–392Google Scholar
Fritsche M, Dietrich R, Rülke A, Rothacher M, Steigenberger P (2010) Low-degree earth deformation from reprocessed GPS observations. GPS Solut 14(2):165–175Google Scholar
Jiang W, Deng L, Li Z, Zhou X, Liu H (2014) Effects on noise properties of GPS time series caused by higher-order ionospheric corrections. Adv Space Res 53(7):1035–1046Google Scholar
Jin S, Dam TV, Wdowinski S (2013) Observing and understanding the earth system variations from space geodesy. J Geodyn 72(12):1–10Google Scholar
Kang Z, Tapley B, Chen J, Ries J, Bettadpur S (2009) Geocenter variations derived from GPS tracking of the grace satellites. J Geod 83(10):895–901Google Scholar
Lavallée DA, Dam TV, Blewitt G, Clarke PJ (2006) Geocenter motions from GPS: a unified observation model. J Geophys Res Solid Earth 111(B05405):1–15Google Scholar
Lavallée DA, Moore P, Clarke PJ, Petrie EJ, van Dam T, King MA (2010) J2: an evaluation of new estimates from GPS, grace, and load models compared to SLR. Geophys Res Lett 37(22):707–716Google Scholar
Meindl M, Beutler G, Thaller D, Dach R, Jäggi A (2013) Geocenter coordinates estimated from GNSS data as viewed by perturbation theory. Adv Space Res 51(7):1047–1064Google Scholar
Melachroinos SA, Lemoine FG, Zelensky NP, Rowlands DD, Luthcke SB, Bordyugov O (2013) The effect of geocenter motion on Jason-2 orbits and the mean sea level. Adv Space Res 51(8):1323–1334Google Scholar
Petit G, Luzum B, Al E (2010) IERS conventions (2010). IERS Tech Note 36:1–95Google Scholar
Rebischung P et al (2015) Repro2 TRF combinations: combination of the IGS repro2 terrestrial frames a poster presentation at the 2015 European Geosciences UnionGoogle Scholar
Rebischung P, Altamimi Z, Springer T (2014) A collinearity diagnosis of the GNSS geocenter determination. J Geod 88(1):65–85Google Scholar
Rebischung P, Altamimi Z, Ray J, Garayt B (2016) The IGS contribution to ITRF2014. J Geod 90(7):611–630Google Scholar
Riddell AR, King MA, Watson CS, Sun Y, Riva R, Rietbroek R (2017) Uncertainty in geocenter estimates in the context of ITRF2014. J Geophys Res 122(5):4020–4032Google Scholar
Rietbroek R, Fritsche M, Brunnabend SE, Daras I, Kusche J, Schröter J et al (2012) Global surface mass from a new combination of grace, modelled OBP and reprocessed GPS data. J Geodyn 59–60(5):64–71Google Scholar
Rietbroek R, Fritsche M, Dahle C, Brunnabend S, Behnisch M, Kusche J et al (2014) Can GPS-derived surface loading bridge a GRACE mission gap? Surv Geophys 35(6):1267–1283Google Scholar
Sun Y, Ditmar P, Riva R (2017) Statistically optimal estimation of degree-1 and C 20 coefficients based on GRACE data and an ocean bottom pressure model. Geophys J Int 210(3):1305–1322Google Scholar
Swenson SC, Chambers DP, Wahr J (2008) Estimating geocenter variations from a combination of GRACE and ocean model output. J Geophys Res 113(B08410):1–12Google Scholar
Urschl C, Beutler G, Gurtner W, Hugentobler U, Schaer S (2007) Contribution of SLR tracking data to GNSS orbit determination. Adv Space Res 39(10):1515–1523Google Scholar
Williams SDP (2003) The effect of coloured noise on the uncertainties of rates estimated from geodetic time series. J Geod 76(9–10):483–494Google Scholar
Williams SDP, Bock Y, Fang P, Jamason P, Nikolaidis RM, Prawirodirdjo L et al (2004) Error analysis of continuous GPS position time series. J Geophys Res Solid Earth 109(B03412):1–19Google Scholar
Wu X, Argus DF, Heflin MB, Ivins ER, Webb FH (2002) Site distribution and aliasing effects in the inversion for load coefficients and geocenter motion from GPS data. Geophys Res Lett 29(24):63-1–63-4Google Scholar
Wu X, Ray J, Dam TV (2012) Geocenter motion and its geodetic and geophysical implications. J Geodyn 58(3):44–61Google Scholar
Wu X, Abbondanza C, Altamimi Z, Chin TM, Collilieux X, Gross RS et al (2015) KALREF—a Kalman filter and time series approach to the International Terrestrial Reference Frame realization. J Geophys Res 120(5):3775–3802Google Scholar
Wu X, Kusche J, Landerer FW (2017) A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data. Geophys J Int 209(3):1398–1402Google Scholar
Zannat UJ, Tregoning P (2017a) Estimating network effect in geocenter motion: applications. J Geophys Res Solid Earth 122(10):2017JB014,247. https://doi.org/10.1002/2017JB014247 Google Scholar
Zannat UJ, Tregoning P (2017b) Estimating network effect in geocenter motion: Theory. J Geophys Res Solid Earth 122(10):2017JB014,246. https://doi.org/10.1002/2017JB014246 Google Scholar
Zhang X, Jin S (2014) Uncertainties and effects on geocenter motion estimates from global GPS observations. Adv Space Res 54(1):59–71Google Scholar
Zou R, Freymueller JT, Ding K, Yang S, Wang Q (2014) Evaluating seasonal loading models and their impact on global and regional reference frame alignment. J Geophys Res Solid Earth 119(2):1337–1358Google Scholar
6. Geodesy | CommonCrawl |
About Contribute Source
main/mps/
Matrix Product State / Tensor Train
The matrix product state (MPS)[1][2][3][4] or tensor train (TT)[5] tensor network is a factorization of a tensor with N indices into a chain-like product of three-index tensors. The MPS/TT is one of the best understood tensor networks for which many efficient algorithms have been developed. It is a special case of a tree tensor network.
A matrix product state / tensor train factorization of a tensor $T$ can be expressed in tensor diagram notation as
where for concreteness $T$ is taken to have six indices, but the pattern above can be generalized for a tensor with any number of indices.
Alternatively, the MPS/TT factorization of a tensor can be expressed in traditional notation as
\begin{equation} T^{s_1 s_2 s_3 s_4 s_5 s_6} = \sum_{\{\mathbf{\alpha}\}} A^{s_1}_{\alpha_1} A^{s_2}_{\alpha_1 \alpha_2} A^{s_3}_{\alpha_2 \alpha_3} A^{s_4}_{\alpha_3 \alpha_4} A^{s_5}_{\alpha_4 \alpha_5} A^{s_6}_{\alpha_5} \end{equation}
where the bond indices $\alpha$ are contracted, or summed over. Note that each of the $A$ tensors can in general be different from each other; instead of denoting them with different letters, it is a useful convention to just distinguish them by their indices.
Any tensor can be exactly represented in MPS / TT form for a large enough dimension of the bond indices $\alpha$. [4][5]
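As a concrete illustration of the factorization above, the following NumPy sketch (our own code, not part of the original page) contracts a list of MPS/TT cores back into the full tensor; the core shapes follow the index convention used here, with one bond index on the boundary tensors and two on the interior ones.

```python
# Minimal sketch (assumptions ours): contract MPS/TT cores into the full tensor.
import numpy as np

def mps_to_full(cores):
    """cores[0]: shape (d, m1); cores[j]: (m_j, d, m_{j+1}); cores[-1]: (m_{N-1}, d)."""
    full = cores[0]                                          # shape (d_1, m_1)
    for core in cores[1:-1]:
        # contract the current open bond index with the next core's left bond index
        full = np.tensordot(full, core, axes=([-1], [0]))    # (..., d_j, m_j)
    return np.tensordot(full, cores[-1], axes=([-1], [0]))   # (..., d_N)

# quick self-check on a random 4-index tensor with bond dimension 3
rng = np.random.default_rng(0)
d, m = 2, 3
cores = [rng.normal(size=(d, m)),
         rng.normal(size=(m, d, m)),
         rng.normal(size=(m, d, m)),
         rng.normal(size=(m, d))]
T = mps_to_full(cores)
assert T.shape == (d, d, d, d)
```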
Bond Dimension / Rank
A key concept in understanding the matrix product state or tensor train factorization is the bond dimension or tensor-train rank, sometimes also called the virtual dimension. This is the dimension of the bond index connecting one tensor in the chain to the next, and can vary from bond to bond. The bond dimension can be thought of as a parameter controlling the expressivity of a MPS/TT network. In the example above, it is the dimension of the $\alpha$ indices.
Given a large enough bond dimension or rank, an MPS/TT can represent an arbitrary tensor. Consider a tensor $T^{s_1 s_2 \cdots s_N}$ having N indices all of dimension $d$. Then this tensor can always be represented exactly as an MPS/TT with bond dimension $m=d^{N/2}$.
However, in most applications the MPS/TT form is used as an approximation. In such cases, the bond dimension or rank is either fixed at a moderate size, or determined adaptively.
Number of Parameters
Consider a tensor with $N$ indices, each of dimension $d$. Generically, such a tensor must be specified by $d^N$ parameters. In contrast, representing such a tensor by an MPS/TT network of bond dimension $m$ requires
\begin{equation} N d m^2 \end{equation}
parameters, and this number can be reduced even further by imposing or exploiting certain constraints on the factor tensors.
If the MPS/TT representation of the tensor is a good approximation, then it represents a massive compression from a set of parameters growing exponentially with $N$, to a set of parameters growing just linearly with $N$.
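To make the comparison concrete, here is a tiny back-of-the-envelope check (the numbers are chosen by us, not taken from the text) of the $d^N$ versus $N d m^2$ parameter counts:

```python
# Rough illustration of the compression ratio, using the N*d*m^2 estimate above.
N, d, m = 20, 2, 10
full_params = d ** N           # 1,048,576 parameters for the full tensor
mps_params = N * d * m ** 2    # 4,000 parameters for the MPS/TT representation
print(full_params, mps_params)
```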
It is possible to reduce the number of parameters even further, without loss of expressive power, by exploiting the redundancy inherent in the MPS/TT network. For more information about this redundancy, see the section on MPS/TT gauges below.
For a tensor with an infinite number of indices, the MPS/TT parameters can be made independent of $N$ by assuming that all of the factor tensors are the same (or the same up to a gauge transformation).
Elementary Operations Involving MPS/TT
The MPS/TT tensor network format makes it possible to efficiently carry out operations on a large, high-order tensor $T$ by manipulating the much smaller factors making up the MPS/TT representation of $T$.
There are many known algorithms for computations involving MPS/TT networks. Below, we highlight some of the simplest and most fundamental examples.
To read about other MPS/TT algorithms, see this page.
Retrieving a Component from an MPS/TT
Consider an order-$N$ tensor $T$. In general, the cost of storing and retrieving its components scales exponentially with $N$. However, if $T$ can be represented or approximated by an MPS/TT network, one can obtain specific tensor components with an efficient algorithm.
Say we want to obtain the specific component $T^{s_1 s_2 s_3 \cdots s_N}$, where the values $s_1, s_2, s_3, \ldots, s_N$ should be considered fixed, yet arbitrary.
If $T$ is given by the MPS/TT \begin{equation} T^{s_1 s_2 s_3 \cdots s_N} = \sum_{\{\mathbf{\alpha}\}} A^{s_1}_{\alpha_1} A^{s_2}_{\alpha_1 \alpha_2} A^{s_3}_{\alpha_2 \alpha_3} \cdots A^{s_N}_{\alpha_{N-1}} \end{equation} then the algorithm to retrieve this component is very simple. Fix the $s_j$ indices on each of the $A$ factor tensors. Then, thinking of the tensor $A^{(s_1)}_{\alpha_1}$ as a row vector and $A^{(s_2)}_{\alpha_1 \alpha_2}$ as a matrix, contract these over the $\alpha_1$ index. The result is a new vector $L_{\alpha_2}$ that one can next contract with $A^{(s_3)}_{\alpha_2 \alpha_3}$. Continuing in this manner, one obtains the $s_1,s_2,s_3,\ldots,s_N$ component via a sequence of vector-matrix multiplications.
Diagrammatically, the algorithm for retrieving a tensor component can be written as (for the illustrative case of $N=6$):
If the typical bond dimension of the MPS/TT is $m$, then the computational cost for retrieving a single tensor component scales as $N m^2$.
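The sequence of vector-matrix multiplications described above can be written in a few lines of NumPy. This is a sketch with our own function and variable names; it assumes the same core shapes as the mps_to_full sketch earlier.

```python
# Sketch of the component-retrieval algorithm: fix s_1..s_N and sweep a row
# vector through the chain of matrices.
import numpy as np

def mps_component(cores, s):
    """cores as in mps_to_full above; s is a list of N fixed index values."""
    vec = cores[0][s[0], :]                       # row vector of length m_1
    for core, sj in zip(cores[1:-1], s[1:-1]):
        vec = vec @ core[:, sj, :]                # vector-matrix product, cost m^2
    return float(vec @ cores[-1][:, s[-1]])

# agrees with indexing the fully contracted tensor, e.g.
# mps_component(cores, [0, 1, 1, 0]) == mps_to_full(cores)[0, 1, 1, 0]
```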
In the physics literature, the name matrix product state refers to the fact that an individual tensor component (in the context of a quantum state or wavefunction) is parameterized as a product of matrices as in the algorithm above. (This name is even clearer in the case of periodic MPS.)
Inner Product of Two MPS/TT [6]
Consider two high-order tensors $T^{s_1 s_2 s_3 s_4 s_5 s_6}$ and $W^{s_1 s_2 s_3 s_4 s_5 s_6}$. Say that we want to compute the inner product of $T$ and $W$, viewed as vectors. That is, we want to compute:
\begin{equation} \langle T, W\rangle = \sum_{\{\mathbf{s}\}} T^{s_1 s_2 s_3 s_4 s_5 s_6} W^{s_1 s_2 s_3 s_4 s_5 s_6} \end{equation}
In the case of $T = W$, then this operation computes the Frobenius norm of $T$.
Now assume that $T$ and $W$ each can be efficiently represented or approximated by MPS/TT tensor networks as follows:
\begin{equation} T^{s_1 s_2 s_3 s_4 s_5 s_6} = \sum_{\{\mathbf{\alpha}\}} A^{s_1}_{\alpha_1} A^{s_2}_{\alpha_1 \alpha_2} A^{s_3}_{\alpha_2 \alpha_3} A^{s_4}_{\alpha_3 \alpha_4} A^{s_5}_{\alpha_4 \alpha_5} A^{s_6}_{\alpha_5} \end{equation}
\begin{equation} W^{s_1 s_2 s_3 s_4 s_5 s_6} = \sum_{\{\mathbf{\beta}\}} B^{s_1}_{\beta_1} B^{s_2}_{\beta_1 \beta_2} B^{s_3}_{\beta_2 \beta_3} B^{s_4}_{\beta_3 \beta_4} B^{s_5}_{\beta_4 \beta_5} B^{s_6}_{\beta_5} \end{equation}
The strategy to efficiently compute $\langle T, W\rangle$ is to contract $A^{s_1}$ with $B^{s_1}$ over the $s_1$ index, forming a tensor $E^{\alpha_1}_{\beta_1}$. Then this tensor $E$ is contracted with $A^{s_2}$ and $B^{s_2}$ to form another intermediate tensor $E^{\alpha_2}_{\beta_2}$, etc.
Let us express this process more simply in diagrammatic notation:
The above algorithm makes no approximations, yet is very efficient. A careful analysis of each step shows that the cost of the algorithm scales as
\begin{equation} N m^3\,d \end{equation}
where $m$ is the bond dimension or rank of the MPS/TT networks and $d$ is the dimension of the external indices. In contrast, if one worked with the full $T$ and $W$ tensors and did not use the MPS/TT form, the cost of calculating $\langle T, W\rangle$ would be $d^N$.
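A minimal NumPy sketch of this inner-product algorithm is given below; the function name and the core-shape convention are ours, and the intermediate array plays the role of the $E$ tensors above.

```python
# Sketch of the MPS/TT inner product: build the intermediate E tensors site by site.
import numpy as np

def mps_inner(a_cores, b_cores):
    # first site: contract over the external index s_1
    E = np.tensordot(a_cores[0], b_cores[0], axes=([0], [0]))   # shape (m_A, m_B)
    for A, B in zip(a_cores[1:-1], b_cores[1:-1]):
        # attach the next A core, then the next B core, summing over s_j
        E = np.tensordot(E, A, axes=([0], [0]))                  # (m_B, d, m_A')
        E = np.tensordot(E, B, axes=([0, 1], [0, 1]))            # (m_A', m_B')
    E = np.tensordot(E, a_cores[-1], axes=([0], [0]))            # (m_B, d)
    return float(np.tensordot(E, b_cores[-1], axes=([0, 1], [0, 1])))
```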
Compression / Rounding [5][7]
A particularly powerful operation is the compression of a tensor network into MPS/TT form. Here we will focus on the compression of a larger bond dimension MPS/TT into one with a smaller dimension, but the algorithm can be readily generalized to other inputs, such as sums of MPS/TT networks, sums of rank-1 tensors, other tree tensor network formats, and more, with the result that these inputs are controllably approximated by a single MPS/TT.
The algorithm we follow here was proposed in Ref. 7. Other approaches to compression include SVD-based compression,[8] such as the TT-SVD algorithm,[5] or variational compression.[6]
For concreteness, say we want to compress an MPS/TT of bond dimension $M$ into one of bond dimension $m$, such that the new MPS/TT is as close as possible to the original one, in the sense of Euclidean distance (see above discussion of inner product).
The compression procedure begins with contracting two copies of the input MPS/TT network over all of the external indices, except the last external index.
The intuition here is that the MPS/TT form is generated by a repeated SVD of a tensor. Just as an SVD of a matrix $M$ can be computed by diagonalizing $M^\dagger M$ and $M M^\dagger$, the MPS/TT appears twice in the diagrams above for the same reason.
To compute the contraction of the two MPS/TT copies efficiently, one forms the intermediate tensors labeled $E_1$, $E_2$, etc. as shown above. These tensors are saved for use in later steps of the algorithm, as will become clearer below.
Having computed all of the $E_j$ tensors, it is now possible to form the object $\rho_6$ below, which in physics is called a reduced density matrix, but more simply is the square of the tensor represented by the network, summed over all but its last index:
To begin forming the new, compressed MPS/TT, one diagonalizes $\rho_6$ as shown in the last expression above. It is a manifestly Hermitian matrix, so it can always be diagonalized by a unitary $U_6$, which is the box-shaped matrix in the last diagram above.
Crucially, at this step one only keeps the $m$ largest eigenvalues of $\rho_6$, discarding the rest and truncating the corresponding columns of $U_6$. The reason for this truncation is that the unitary $U_6$ is actually the first piece of the new, compressed MPS/TT we are constructing, which we wanted to have bond dimension $m$. In passing, we note that other truncation strategies are possible, such as truncating based on a threshold for small versus large eigenvalues, or even stochastically truncating by sampling over the eigenvalues.[9]
To obtain the next tensor of the new, compressed MPS/TT, one next forms the following "density matrix" and diagonalizes it (with truncation) as shown below
In defining this matrix, $U_6$ was used to transform the basis of the last site. As each $U$ is obtained, it is used in a similar way to compress the space, otherwise the cost of the algorithm would become unmanageable and each next $U$ tensor would not be defined in a basis compatible with the previous $U$'s.
Having obtained $U_5$ as shown above, one next computes the following density matrix, again using all previous $U$ tensors to rotate and compress the space spanned by all previous external indices as shown:
The pattern is repeated going further leftward down the chain to obtain $U_4$:
(In passing, note that, as a matter of efficiency, the contraction of the $U$ and $U^\dagger$ tensors with the uncompressed MPS/TT tensors does not have to be recalculated at each step, but partial contractions can be saved to form the next one efficiently. This sub-step of the algorithm is in fact identical to the inner product algorithm described above, except that it proceeds from right to left.)
Once all of the $U$ tensors are obtained by repeating the steps above, the last step of the algorithm is to obtain the first tensor of the new MPS/TT by the following contracted diagram:
Assembling all of the pieces—the first tensor $A_1$ together with the $U_j$ tensors computed above—the final compressed version of the original MPS/TT is:
Note that in this last expression, the indices of various tensors have been oriented differently on the page than in their original definitions above. But recall that it is the connectivity of indices, not their orientation, that carries the meaning of tensor diagrams.
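For readers who prefer code to diagrams, the sketch below performs a related compression by a single right-to-left sweep of truncated SVDs. It is not the density-matrix procedure described above, nor the orthogonalize-then-truncate TT-SVD of Ref. 5, so its local truncations are not globally optimal; it is only meant to show the mechanics of lowering the bond dimension. The names and core-shape conventions are ours.

```python
# Simplified one-pass truncated-SVD compression of an MPS/TT to bond dimension m.
import numpy as np

def compress_mps(cores, m):
    cores = [c.copy() for c in cores]
    for j in range(len(cores) - 1, 0, -1):
        core = cores[j]
        mat = core.reshape(core.shape[0], -1)              # (left bond, everything else)
        U, S, Vh = np.linalg.svd(mat, full_matrices=False)
        k = min(m, S.size)                                  # keep the m largest singular values
        cores[j] = Vh[:k, :].reshape((k,) + core.shape[1:])
        # absorb U * diag(S) into the neighbour on the left
        cores[j - 1] = np.tensordot(cores[j - 1], U[:, :k] * S[:k], axes=([-1], [0]))
    return cores
```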
The compression, or rounding, algorithm above leads to an interesting observation: the MPS/TT network after the compression can be made arbitrary close to the original one, but is made of isometric tensors $U_j$ (technically partial isometries). Because these tensors were the result of diagonalizing Hermitian matrices, they have the property that $U^\dagger U = I$, or diagrammatically:
Written in the orientation these tensors take when viewed as part of an MPS/TT, they have the property that:
Connections to Other Formats and Concepts
An MPS/TT network can be viewed as a maximally unbalanced case of a tree tensor network.
MPS/TT with all factor tensors chosen to be the same and with a specified choice of boundary conditions are equivalent to weighted finite automata (WFA).[10] (However, the interpretation and applications of MPS and WFA can be rather different. For an interesting connection between the WFA and quantum physics literature see Ref. 11.)
MPS/TT networks constrained to have strictly non-negative entries can be mapped to hidden Markov models (HMM). Such constrained MPS have been studied using physics concepts in Ref. 12.
Further Reading and Resources
The Density-Matrix Renormalization Group in the Age of Matrix Product States[8] A very thorough review article focused on matrix product state tensor networks in a physics context, with many helpful figures.
Tensor Train Decomposition[5] A clear exposition of the MPS/TT network and algorithms geared at an applied mathematics audience.
A practical introduction to tensor networks: Matrix product states and projected entangled pair states[13] A friendly overview of tensor networks using physics terminology but aiming to be non-technical.
Hand-waving and Interpretive Dance: An Introductory Course on Tensor Networks[14] Detailed review article about tensor networks with a quantum information perspective.
Finitely correlated states on quantum spin chains, M. Fannes, B. Nachtergaele, R. F. Werner, Communications in Mathematical Physics 144, 443–490 (1992)
Groundstate properties of a generalized VBS-model, A. Klumper, A. Schadschneider, J. Zittartz, Zeitschrift fur Physik B Condensed Matter 87, 281–287 (1992)
Thermodynamic Limit of Density Matrix Renormalization, Stellan Ostlund, Stefan Rommer, Phys. Rev. Lett. 75, 3537–3540 (1995), cond‑mat/9503107
Efficient Classical Simulation of Slightly Entangled Quantum Computations, Guifre Vidal, Phys. Rev. Lett. 91, 147902 (2003), quant‑ph/0301063
Tensor-Train Decomposition, I. Oseledets, SIAM Journal on Scientific Computing 33, 2295-2317 (2011)
Matrix Product State Representations, D. Garcia, F. Verstraete, M. M. Wolf, J. I. Cirac, Quantum Info. Comput. 7, 401–430 (2007), quant‑ph/0608197
From density-matrix renormalization group to matrix product states, Ian P. McCulloch, J. Stat. Mech., P10014 (2007), cond‑mat/0701428
The density-matrix renormalization group in the age of matrix product states, U. Schollwock, Annals of Physics 326, 96–192 (2011), arxiv:1008.3477
Unbiased Monte Carlo for the age of tensor networks, Andrew J. Ferris (2015), arxiv:1507.00767
Spectral learning of weighted automata, Borja Balle, Xavier Carreras, Franco M Luque, Ariadna Quattoni, Machine learning 96, 33 (2014)
Quadratic weighted automata: Spectral algorithm and likelihood maximization, Raphael Bailly, Journal of Machine Learning Research 20, 147–162 (2011)
Stochastic Matrix Product States, Kristan Temme, Frank Verstraete, Phys. Rev. Lett. 104, 210502 (2010), arxiv:1003.2545
A practical introduction to tensor networks: Matrix product states and projected entangled pair states, Roman Orus, Annals of Physics 349, 117 - 158 (2014)
Hand-waving and interpretive dance: an introductory course on tensor networks, Jacob C Bridgeman, Christopher T Chubb, Journal of Physics A: Mathematical and Theoretical 50, 223001 (2017), arxiv:1603.03039
Giant nonlinear optical effects induced by nitrogen-vacancy centers in diamond crystals
Mari Motojima,1 Takara Suzuki,1 Hidemi Shigekawa,1 Yuta Kainuma,2 Toshu An,2 and Muneaki Hase1,*
1Department of Applied Physics, Faculty of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8573, Japan
2School of Materials Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan
*Corresponding author: [email protected]
https://doi.org/10.1364/OE.27.032217
Mari Motojima, Takara Suzuki, Hidemi Shigekawa, Yuta Kainuma, Toshu An, and Muneaki Hase, "Giant nonlinear optical effects induced by nitrogen-vacancy centers in diamond crystals," Opt. Express 27, 32217-32227 (2019)
Original Manuscript: July 24, 2019
Revised Manuscript: September 27, 2019
Manuscript Accepted: October 7, 2019
We investigate the effect of nitrogen-vacancy (NV) centers in single crystal diamond on nonlinear optical effects using 40 fs femtosecond laser pulses. The near-infrared femtosecond pulses allow us to study purely nonlinear optical effects, such as the optical Kerr effect (OKE) and two-photon absorption (TPA), related to the unique optical transitions associated with the electronic structure of NV centers. It is found that both nonlinear optical effects are enhanced by the introduction of NV centers at the N$^{+}$ dose levels of 2.0$\times$10$^{11}$ and 1.0$\times$10$^{12}$ N$^{+}$/cm$^{2}$. In particular, our data demonstrate that the OKE signal is strongly enhanced for the heavily implanted type-IIa diamond. We suggest that the strong enhancement of the OKE possibly originates from a cascading OKE, where the high-density NV centers effectively break the inversion symmetry near the surface region of diamond.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
N. Mizuochi, T. Makino, H. Kato, D. Takeuchi, M. Ogura, H. Okushi, M. Nothaft, P. Neumann, A. Gali, F. Jelezko, J. Wrachtrup, and S. Yamasaki, "Electrically driven single-photon source at room temperature in diamond," Nat. Photonics 6(5), 299–303 (2012).
M. Pelliccione, A. Jenkins, P. Ovartchaiyapong, C. Reetz, E. Emmanouilidou, N. Ni, and A. C. B. Jayich, "Scanned probe imaging of nanoscale magnetism at cryogenic temperatures with a single-spin quantum sensor," Nat. Nanotechnol. 11(8), 700–705 (2016).
K. Sasaki, Y. Monnai, S. Saijo, R. Fujita, H. Watanabe, J. Ishi-Hayase, K. M. Itoh, and E. Abe, "Broadband, large-area microwave antenna for optically detected magnetic resonance of nitrogen-vacancy centers in diamond," Rev. Sci. Instrum. 87(5), 053904 (2016).
H. Clevenson, M. E. Trusheim, C. Teale, T. Schröder, D. Braje, and D. Englund, "Broadband magnetometry and temperature sensing with a light-trapping diamond waveguide," Nat. Phys. 11(5), 393–397 (2015).
Y. Sekiguchi, N. Niikura, R. Kuroiwa, H. Kano, and H. Kosaka, "Optical holonomic single quantum gates with a geometric spin under a zero field," Nat. Photonics 11(5), 309–314 (2017).
N. Naka, T. Kitamura, J. Omachi, and M. Kuwata-Gonokami, "Low-temperature excitons produced by two-photon excitation in high-purity diamond crystals," Phys. Status Solidi B 245(12), 2676–2679 (2008).
K. Ishioka, M. Hase, M. Kitajima, and H. Petek, "Coherent optical phonons in diamond," Appl. Phys. Lett. 89(23), 231916 (2006).
K. G. Nakamura, K. Ohya, H. Takahashi, T. Tsuruta, H. Sasaki, S. Uozumi, K. Norimatsu, M. Kitajima, Y. Shikano, and Y. Kayanuma, "Spectrally resolved detection in transient-reflectivity measurements of coherent optical phonons in diamond," Phys. Rev. B 94(2), 024303 (2016).
H. Sasaki, R. Tanaka, Y. Okano, F. Minami, Y. Kayanuma, Y. Shikano, and K. G. Nakamura, "Coherent control theory and experiment of optical phonons in diamond," Sci. Rep. 8(1), 9609 (2018).
S. Maehrlein, A. Paarmann, M. Wolf, and T. Kampfrath, "Terahertz sum-frequency excitation of a raman-active phonon," Phys. Rev. Lett. 119(12), 127402 (2017).
Y. R. Shen, "Nonlinear optical susceptibilities," in Principles of Nonlinear Optics, (Wiley-Interscience, 1984).
B. Zhang, S. Liu, X. Wu, T. Yi, Y. Fang, J. Zhang, Q. Zhong, X. Peng, X. Liu, and Y. Song, "Ultrafast dynamics of carriers and nonlinear refractive index in bulk polycrystalline diamond," Optik 130, 1073–1079 (2017).
J. M. P. Almeida, C. Oncebay, J. P. Siqueira, S. R. Muniz, L. De Boni, and C. R. Mendonca, "Nonlinear optical spectrum of diamond at femtosecond regime," Sci. Rep. 7(1), 14320 (2017).
J. I. Dadap, G. B. Focht, D. H. Reitze, and M. C. Downer, "Two-photon absorption in diamond and its application to ultraviolet femtosecond pulse-width measurement," Opt. Lett. 16(7), 499–501 (1991).
F. Trojánek, K. Zídek, B. Dzurnák, M. Kozák, and P. Malý, "Nonlinear optical properties of nanocrystalline diamond," Opt. Express 18(2), 1349–1357 (2010).
M. Sheik-Bahae, A. A. Said, T. H. Wei, D. J. Hagan, and E. W. V. Stryland, "Sensitive measurement of optical nonlinearities using a single beam," IEEE J. Quantum Electron. 26(4), 760–769 (1990).
G. R. Meredith, "Second-order cascading in third-order nonlinear optical processes," J. Chem. Phys. 77(12), 5863–5871 (1982).
R. Mondal, Y. Saito, Y. Aihara, P. Fons, A. V. Kolobov, J. Tominaga, S. Murakami, and M. Hase, "A cascading nonlinear magneto-optical effect in topological insulators," Sci. Rep. 8(1), 3908 (2018).
J. R. Maze, A. Gali, E. Togan, Y. Chu, A. Trifonov, E. Kaxiras, and M. D. Lukin, "Properties of nitrogen-vacancy centers in diamond: the group theoretic approach," New J. Phys. 13(2), 025025 (2011).
M. Hase, M. Katsuragawa, A. M. Constantinescu, and H. Petek, "Frequency comb generation at thz frequencies by coherent phonon excitation in si," Nat. Photonics 6(4), 243–247 (2012).
J. P. Biersack and J. F. Ziegler, "The stopping and range of ions in solids," in Ion Implantation Techniques, H. Ryssel and H. Glawischnig, eds. (Springer, 1982).
D. Kikuchi, D. Prananto, K. Hayashi, A. Laraoui, N. Mizuochi, M. Hatano, E. Saitoh, Y. Kim, C. A. Meriles, and T. An, "Long-distance excitation of nitrogen-vacancy centers in diamond via surface spin waves," Appl. Phys. Express 10(10), 103004 (2017).
S. Pezzagna, B. Naydenov, F. Jelezko, J. Wrachtrup, and J. Meijer, "Creation efficiency of nitrogen-vacancy centres in diamond," New J. Phys. 12(6), 065017 (2010).
M. Yin, H. P. Li, S. H. Tang, and W. Ji, "Determination of nonlinear absorption and refraction by single z-scan method," Appl. Phys. B: Lasers Opt. 70(4), 587–591 (2000).
H. P. Godfried, G. A. Scarsbrook, D. J. Twitchen, E. P. Houwman, W. G. M. Nelissen, A. J. Whitehead, C. E. Hall, and P. M. Martineau, "Optical quality diamond material," US Patent 8,936,774 B2 (2015).
X. Liu, B. Zhang, Q. Zhong, X. Peng, and S. Liu, "Ultrafast dynamics of photoexcited free carriers in CVD diamonds," J. Phys.: Conf. Ser. 867, 012013 (2017).
A. J. Sabbah and D. M. Riffe, "Femtosecond pump-probe reflectivity study of silicon carrier dynamics," Phys. Rev. B 66(16), 165217 (2002).
A. M. Zaitsev, "Refraction," in Optical Properties of Diamond, (Springer, 2001).
M. Kozák, F. Trojánek, B. Dzurňák, and P. Malý, "Two- and three-photon absorption in chemical vapor deposition diamond," J. Opt. Soc. Am. B 29(5), 1141–1145 (2012).
E. Bourgeois, A. Jarmola, P. Siyushev, M. Gulka, J. Hruby, F. Jelezko, D. Budker, and M. Nesladek, "Photoelectric detection of electron spin resonance of nitrogen-vacancy centers in diamond," Nat. Commun. 6(1), 8577 (2015).
A. Wotherspoon, J. W. Steeds, B. Catmull, and J. Butler, "Photoluminescence and positron annihilation measurements of nitrogen doped CVD diamond," Diamond Relat. Mater. 12(3-7), 652–657 (2003).
R. DeSalvo, D. J. Hagan, M. Sheik-Bahae, G. Stegeman, E. W. Van Stryland, and H. Vanherzeele, "Self-focusing and self-defocusing by cascaded second-order effects in KTP," Opt. Lett. 17(1), 28–30 (1992).
A. T. Collins, "The fermi level in diamond," J. Phys.: Condens. Matter 14(14), 3743–3750 (2002).
G. D. Fuchs, V. V. Dobrovitski, D. M. Toyli, F. J. Heremans, C. D. Weis, T. Schenkel, and D. D. Awschalom, "Excited-state spin coherence of a single nitrogen–vacancy centre in diamond," Nat. Phys. 6(9), 668–672 (2010).
S. D. Subedi, V. V. Fedorov, J. Peppers, D. V. Martyshkin, S. B. Mirov, L. Shao, and M. Loncar, "Laser spectroscopic characterization of negatively charged nitrogen-vacancy (NV$^{-}$) centers in diamond," Opt. Mater. Express 9(5), 2076–2087 (2019).
N. Aslam, G. Waldherr, P. Neumann, F. Jelezko, and J. Wrachtrup, "Photo-induced ionization dynamics of the nitrogen vacancy defect in diamond investigated by single-shot charge state detection," New J. Phys. 15(1), 013064 (2013).
S. Dhomkar, J. Henshaw, H. Jayakumar, and C. A. Meriles, "Long-term data storage in diamond," Sci. Adv. 2(10), e1600911 (2016).
S. Dhomkar, H. Jayakumar, P. R. Zangara, and C. A. Meriles, "Charge dynamics in near-surface, variable-density ensembles of nitrogen-vacancy centers in diamond," Nano Lett. 18(6), 4046–4052 (2018).
T. M. Babinec, B. J. M. Hausmann, M. Khan, Y. Zhang, J. R. Maze, P. R. Hemmer, and M. Lončar, "A diamond nanowire single-photon source," Nat. Nanotechnol. 5(3), 195–199 (2010).
I. Aharonovich, A. D. Greentree, and S. Prawer, "Diamond photonics," Nat. Photonics 5(7), 397–405 (2011).
B. J. M. Hausmann, I. Bulu, V. Venkataraman, P. Deotare, and M. Lončar, "Diamond nonlinear photonics," Nat. Photonics 8(5), 369–374 (2014).
Fig. 1. (a) The experimental set-up for Z-scan measurement using the closed aperture mode. (b) Schematics of femtosecond pump-probe experiment with the reflection mode. The difference of the photocurrent of the two Si-pin detectors was amplified, and averaged in a digital oscilloscope using the scanning time delay at 20 Hz.
Fig. 2. Schematic for the production of NV centers by N$^{+}$ ion implantation and subsequent annealing in diamond crystalline samples. The N$^{+}$ ions implanted into diamond produce vacancies (V), which are required to make nitrogen-vacancy (NV) centers. Annealing the diamond sample induces diffusion of the vacancies, which can then be trapped by the implanted nitrogen atoms.
Fig. 3. Closed-aperture Z-scan results at $I_{0}\approx$20 mJ/cm$^{2}$ obtained for the pure diamond crystal and NV center introduced diamond at different dose levels, (a) non-implanted, (b) 2.0$\times$10$^{11}$N$^{+}$/cm$^{2}$, and (c) 1.0$\times$10$^{12}$ N$^{+}$/cm$^{2}$. The black solid lines represent the fit to the data using Eq. (2).
Fig. 4. Pump-probe reflectivity results obtained for the different dose levels of CVD diamond crystals at $I_{0}\approx$30 mJ/cm$^{2}$.
Fig. 5. Pump fluence dependence of the $|\Delta R/R|$ signal observed in diamond samples. The solid curves are the fit using a function of $aI_{0} + bI_{0}^{2}$.
Fig. 6. Energy diagram for the NV$^-$ and NV$^0$ states in diamond. The NV introduced samples have natively the mid-gap electronic state of the negatively charged nitrogen-vacancy, i.e., NV$^-$ state (Left panel). The energy levels of the NV$^-$ and NV$^0$ states are defined by the binding energy of the NV center. The Fermi level of our sample in the NV$^-$ state (the nitrogen concentration of $\sim$10$^{17}$cm$^{-3}$) would be $\approx$4 eV above the valence band (VB) [33], and thereby electrons are populated at the $^{3}$A$_{2}$ ground state in the equilibrium. The lifetime of the $^{3}$E state is generally nanosecond time scale [34], however, in the present non-resonant case, intermediate state given by the dashed line should have very short lifetime (an order of the pulse length) and it does not significantly contribute to the TPA signal. The pumping action promotes an electron from the $^{3}$A$_{2}$ ground state into the conduction band (CB) via TPA (transition ①), resulting in generation of a free electron, which is immediately followed by the formation of the neutrally charged NV$^0$ state (Ionization; right panel). Since femtosecond laser pulses can follow the ultrafast carrier dynamics in the CB, a part of the free carriers can absorb a probe photon (free carrier absorption; transition ② ) when the pump and probe pulses overlap each other at $t$ = 0, resulting in the negative peak signal on $\Delta R/R$. For the NV$^0$ state, unlike in the case of irradiation of green laser, TPA (or one-phonon excitation) from the $^{2}$E ground state with a 1.55 eV probe photon (transition ③) is hard to occur because of the energy mismatch and therefore recombination process via further excitation of electron from the VB into the $^{2}$E ground state (transition ④) is not available.
(1) $\chi^{(3)} = \chi^{(3)}_{\mathrm{bulk}} + \chi^{(3)}_{\mathrm{NV}} + \chi^{(2)}_{\mathrm{NV}} \, \chi^{(2)}_{\mathrm{NV}}$
(2) $T(z, \Delta\phi_{0}, \Delta\psi_{0}) = 1 + \dfrac{4x}{(x^{2}+9)(x^{2}+1)} \, \Delta\phi_{0} - \dfrac{2(x^{2}+3)}{(x^{2}+9)(x^{2}+1)} \, \Delta\psi_{0}$
(3) $\dfrac{\Delta R}{R} = \dfrac{4}{n_{0}^{2}-1} \, \Delta n \approx 0.84 \, (\Delta n_{\mathrm{OKE}} + \Delta n_{\mathrm{TPA}})$
(4) $\Delta n_{\mathrm{TPA}} = -\dfrac{2\pi e^{2} N}{n_{0} m^{*} \omega^{2}} = -\dfrac{2\pi e^{2}}{n_{0} m^{*} \omega^{2}} \, \dfrac{\beta I_{0}^{2}}{2\hbar\omega\tau_{p}} = \kappa \, \beta I_{0}^{2}$
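For reference, Eq. (2) is easy to evaluate numerically. The sketch below is our own code (not from the article), with x = z/z0 taken as the normalised sample position; it returns the closed-aperture Z-scan transmittance for given on-axis nonlinear phase shifts.

```python
# Sketch: closed-aperture Z-scan transmittance of Eq. (2).
import numpy as np

def zscan_transmittance(z, z0, dphi0, dpsi0):
    x = z / z0
    denom = (x**2 + 9.0) * (x**2 + 1.0)
    return 1.0 + 4.0 * x * dphi0 / denom - 2.0 * (x**2 + 3.0) * dpsi0 / denom
```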
Table 1. The fitting parameters obtained using Eq. (2). The standard deviations of the coefficients were obtained during the fitting procedure with Igor Pro.
Diamond (non-implanted): $\Delta \phi_{0}$ = 0.394 $\pm$ 0.010, $\Delta \psi_{0}$ = 0.0596 $\pm$ 0.0067, $z_{0}$ = 0.221 $\pm$ 0.008 mm
Diamond (2.0$\times$10$^{11}$ N$^{+}$/cm$^{2}$): $\Delta \phi_{0}$ = 0.528 $\pm$ 0.017, $\Delta \psi_{0}$ = 0.0816 $\pm$ 0.0106, $z_{0}$ = 0.221 $\pm$ 0.009 mm
Table 2. The nonlinear refraction coefficient, $n_{2}$, and the nonlinear absorption coefficient, $\beta$, obtained for different N$^{+}$ dose levels, by using the fitting parameters described in the main text. The standard deviations of the coefficients were obtained during the fitting procedure with Igor Pro.
Diamond (non-implanted): $|n_{2}|$ = 0.73 $\pm$ 0.63 $\times$10$^{-20}$ m$^{2}$/W, $\beta$ = 0.90 $\pm$ 0.06 $\times$10$^{-1}$ cm/GW
Diamond (2.0$\times$10$^{11}$ N$^{+}$/cm$^{2}$): $|n_{2}|$ = 1.20 $\pm$ 0.90 $\times$10$^{-20}$ m$^{2}$/W, $\beta$ = 1.75 $\pm$ 0.09 $\times$10$^{-1}$ cm/GW
Diamond (1.0$\times$10$^{12}$ N$^{+}$/cm$^{2}$): $|n_{2}|$ = 24.2 $\pm$ 0.50 $\times$10$^{-20}$ m$^{2}$/W, $\beta$ = 1.01 $\pm$ 0.05 $\times$10$^{-1}$ cm/GW
Parent magnetic field models for the IGRF-12 GFZ candidates
Vincent Lesur1,
Martin Rother1,
Ingo Wardinski1,
Reyko Schachtschneider1,
Mohamed Hamoudi2 &
Aude Chambodut3
We propose candidate models for IGRF-12. These models were derived from parent models built from 10 months of Swarm satellite data and 1.5 years of magnetic observatory data. Using the same parameterisation, a magnetic field model was built from a slightly extended satellite data set. As a result of discrepancies between magnetic field intensity measured by the absolute scalar instrument and that calculated from the vector instrument, we re-calibrated the satellite data. For the calibration, we assumed that the discrepancies resulted from a small perturbing magnetic field carried by the satellite, with a strength and orientation dependent on the Sun's position relative to the satellite. Scalar and vector data were reconciled using only a limited number of calibration parameters. The data selection process, followed by the joint modelling of the magnetic field and Euler angles, leads to accurate models of the main field and its secular variation around 2014.0. The obtained secular variation model is compared with models based on CHAMP satellite data. The comparison suggests that pulses of magnetic field acceleration that were observed on short time scales average-out over a decade.
The International Geomagnetic Reference Field (IGRF) is a geomagnetic field model used for numerous scientific and industrial applications. It is updated every 5 years (Macmillan et al. 2003; Maus et al. 2005; Finlay et al. 2010). The model is determined by a group of geomagnetic field modellers associated with the International Association of Geomagnetism and Aeronomy (IAGA) and built by comparison of different model candidates generated by scientists affiliated to different institutions around the world. A single institution can propose only one series of models. In this short paper, we present the model candidates proposed by scientists of the GFZ German Research Centre for Geosciences, in collaboration with scientists from other institutions.
Three model candidates were provided for the IGRF-12: two main field model snapshots, one for 2010 and one for 2015, and a predictive linear secular variation (SV) covering years 2015 to 2020. The main field model for 2015 and the SV model were derived from a parent model built from a combination of Swarm satellites and observatory data. This parent model includes a complex time-dependent parameterisation of the core field, a static representation of the lithospheric field, the external fields and their induced counterparts. Weaker signals, such as the field generated by the tidal motion of the oceans, are not modelled. Hereinafter, we do not present the derivation of this parent model but a very similar model built following the same approach and using a slightly longer time series of satellite data that have been re-calibrated. In the same way, the parent model of the main field snapshot for 2010 is not described. It has been derived from observatory and CHAMP satellite data but otherwise follows the same model parameterisation as the parent model derived from Swarm data.
The Swarm constellation of satellites was launched in November 2013, but the satellites reached their survey orbits only by mid-April 2014. At this early stage of the mission, the data are not yet fully calibrated and a specific data set has been provided by the European Space Agency (ESA) to be used for modelling purposes. However, some difficulties had to be handled in order to use these data. In particular, each satellite carries two instruments for magnetic measurements, and a correction had to be applied to explain the observed differences between the magnetic field strength calculated from the vector fluxgate measurements (VFM) and the magnetic total intensity obtained from the absolute scalar measurements (ASM). The first part of the second section of this paper is dedicated to the description of this correction. We made the choice to present here the latest version of the different correction processes we studied and the field model associated with it. It will be shown in the last section of this paper that our IGRF candidates are very similar to the model obtained with this corrected data set. Of course, the parent model of our IGRF candidates also used corrected data, but with a slightly less robust correction process than the one presented below. Apart from this re-calibration, the processing path used to obtain accurate models of the magnetic field is similar to that of previous models of the magnetic field model series GRIMM (Lesur et al. 2008, 2010; Mandea et al. 2012; Lesur et al. 2015). We note, however, that, unlike during the CHAMP epoch, only a very short time span of satellite data was available at the end of September 2014 when the IGRF candidates were submitted. Therefore, observatory data had to be used to obtain robust models. Furthermore, specific regularisation processes had to be introduced in order to obtain models of acceptable quality.
The next section describes the methods used to obtain the field models. In the first step of the modelling effort, data are selected and processed. This is described in detail in the second sub-section. The model parameterisation is explained in the third sub-section and the data inversion process in the fourth. The results are presented and discussed in the third section.
Corrections of the ASM-VFM differences
On each Swarm satellite, two types of instruments are providing magnetic data. The scalar instrument provides ASM data. These are absolute data, possibly corrected for stray fields (e.g. the magnetic field generated by the torquers), but otherwise not sensitive to temperature changes or ageing. In contrast, the vector data provided by the fluxgates - i.e. VFM data - need full calibration. After correction of stray fields and possible temperature drift, nine parameters per satellite have to be estimated for this calibration:
Three scaling values, one for each of the three sensors, in the directions (defined below) \(E_1\), \(E_2\) and \(E_3\), respectively. These three scaling parameters are \(s_1\), \(s_2\) and \(s_3\).
Three offset values, one for each of the three sensors, in the directions \(E_1\), \(E_2\) and \(E_3\), respectively. These three offset parameters are \(o_1\), \(o_2\) and \(o_3\).
Three angles, called the non-orthogonality angles, that are calculated to ensure that the three magnetic field components are in orthogonal directions. These three angles are \(a_{12}\), \(a_{23}\) and \(a_{31}\). They describe deviations from 90° of the angles between the \(E_1 E_2\), \(E_2 E_3\) and \(E_3 E_1\) sensor directions, respectively – i.e. if \(a_{12}\), \(a_{23}\) and \(a_{31}\) are zero, then the \(E_1\), \(E_2\) and \(E_3\) sensor directions are already along orthogonal directions. In the process of estimating these three angles, the \(E_1\) direction is assumed fixed and is not modified. The re-orientation of the obtained orthogonal set of directions relative to the Earth-fixed, Earth-centred coordinate system is performed at a later stage of the processing, sometimes simultaneously with the field modelling process.
The VFM sensor \(E_1\), \(E_2\) and \(E_3\) directions correspond roughly to the direction perpendicular to the satellite boom oriented down, the direction perpendicular to the boom oriented right relative to the satellite flying direction, and the direction along the boom oriented toward the scalar magnetometer, respectively. From the experience gained during previous satellite missions, it is known that the calibration parameters estimated on ground have to be re-estimated in flight. A description of the parameters defined for CHAMP and Ørsted satellites can be found in (Merayo et al. 2000; Olsen et al. 2003; Yin and Lühr 2011; Yin et al. 2013). In the case of Swarm satellites, the non-orthogonality angles and the offsets are not expected to change with time, whereas the scaling is likely to change slowly with time because of the ageing of the magnetometers. Due to the structure and mechanical properties of the magnetometers onboard Swarm satellites, it is expected that the rate of change of these scaling values is the same for the three orthogonal directions.
The calibration process requires that the strength of the magnetic field measured by the VFM matches the ASM scalar data. If data are collected over a full day, this requirement allows the estimation of snapshot values of these nine parameters. This is possible because the orientation of the measured magnetic field changes during the orbits and the day, relative to the sensors. Nonetheless, the estimated calibration parameters are not very robust and often only few of these nine parameters are estimated, or they are estimated on periods longer than one day. Technically, this match is obtained by least-squares, adjusting the parameters to minimise the sum of the squared differences between ASM and VFM field strengths (see Olsen et al. 2003; Yin and Lühr 2011). For a datum i, the relation between ASM and VFM data is:
$$ F_{\text{ASM}}(i) = \sqrt{B_{c1}^{2}(i) + B_{c2}^{2}(i) + B_{c3}^{2}(i)} + \epsilon(i), $$
where \(F_{\mathrm{ASM}}\) are the scalar ASM data. \(B_{c1}\), \(B_{c2}\) and \(B_{c3}\) are the corrected vector VFM data in the \(E_1\), \(E_2\) and \(E_3\) directions, respectively. \(\epsilon\) is the difference between the ASM and corrected VFM field strengths. The relation between corrected VFM data and measured VFM data is:
$$ \left\{ \begin{array}{l} B_{c1} = \frac{B_{o1} - o_{1}}{s_{1}}\\ B_{c2} = \left[ \frac{B_{o2} - o_{2}}{s_{2}} - \sin a_{12} \; B_{c1} \right] \frac{1}{\cos a_{12}}\\ B_{c3} = \left[ \frac{B_{o3} - o_{3}}{s_{3}} - \cos a_{31} \, \sin a_{23} \; B_{c2} - \sin a_{31} \; B_{c1}\right] \frac{1}{\cos a_{23} \, \cos a_{31}} \end{array} \right. $$
where \(B_{o1}\), \(B_{o2}\) and \(B_{o3}\) are observed vector VFM data. We note that these relations are non-linear. Also, \(B_{c2}\) depends on \(B_{c1}\), and \(B_{c3}\) depends on \(B_{c1}\) and \(B_{c2}\). These three relations become independent as soon as the non-orthogonality angles are zero. Slightly different, but equivalent, relations have been used for Ørsted, CHAMP and in official Swarm processing chains (Olsen et al. 2003; Yin and Lühr 2011). The inverse transform to compute the observed vector field from its corrected values is:
$$ \left\{ \begin{array}{l} B_{o1} = s_{1} B_{c1} + o_{1}\\ B_{o2} = s_{2} \left[ \cos a_{12} \, B_{c2} + \sin a_{12} \, B_{c1} \right] + o_{2} \\ B_{o3} = s_{3} \left[\cos a_{31} \left[ \cos a_{23} \, B_{c3} + \sin a_{23} \, B_{c2} \right] + \sin a_{31} \, B_{c1} \right] + o_{3}\,. \end{array} \right. $$
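The two relations above are straightforward to implement. The following Python sketch (our own code, not the official Swarm processing software) maps observed VFM components to corrected ones and forms the residual ε that is minimised in the least-squares fit:

```python
# Sketch of the VFM calibration relations given above.
import numpy as np

def vfm_corrected(B_obs, s, o, a12, a23, a31):
    """B_obs: (Bo1, Bo2, Bo3); s, o: 3-element scale and offset; angles in radians."""
    Bc1 = (B_obs[0] - o[0]) / s[0]
    Bc2 = ((B_obs[1] - o[1]) / s[1] - np.sin(a12) * Bc1) / np.cos(a12)
    Bc3 = ((B_obs[2] - o[2]) / s[2]
           - np.cos(a31) * np.sin(a23) * Bc2
           - np.sin(a31) * Bc1) / (np.cos(a23) * np.cos(a31))
    return np.array([Bc1, Bc2, Bc3])

def asm_residual(F_asm, B_obs, s, o, a12, a23, a31):
    """epsilon(i) = F_ASM(i) - |B_corrected(i)|, the quantity minimised by least squares."""
    Bc = vfm_corrected(B_obs, s, o, a12, a23, a31)
    return F_asm - np.linalg.norm(Bc)
```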
To illustrate the difficulties with Swarm data, we used the full Swarm A magnetic data set (baseline 0301/0302), where the usual processing steps that apply the calibration on a daily basis are dropped. Instead, the parameters estimated on ground are applied, and a constant scaling factor, the same for all three directions, is recalculated so that roughly acceptable values of field measurements, in nT, are obtained. The data are also selected to keep only valid data as indicated by the different flags. The derivation of this data set is described in Tøffner-Clausen 2014. The data set is very large (24,345,289 vector data values); therefore, the data are sub-sampled to one vector measurement every 20 s, and data recorded during obvious manoeuvres are rejected. This way we obtained a data set from 25 November 2013 to 30 September 2014, consisting of 1,188,891 vector data values. The data set is relatively clean, and only a few noisy data points can be identified after this process, as will become evident later. We then apply our own processing, as defined above, where we assume the offsets and angles to be constant in time, and a linear variation of the scaling with time. The calculated offset, scaling and angle values are given in Table 1. The residuals of the least-squares fit to the data are shown in Figure 1. We observed unexpectedly large residuals that vary rapidly in time. Along a single orbit, variations can be as large as 7 nT peak-to-peak. Such rapid variations can be explained neither by our processing methods nor by the daily estimates of the nine parameters in place in the operational Swarm magnetic processing chain.
Residuals of the least-squares fit to ASM data. Black dots: residual differences between ASM data and the magnetic field strength calculated from VFM data for the set of calibration parameters given in Table 1. The model does not include a Sun position dependence of the offsets. Red line: estimated local time for data points selected within 1° of the equator when the satellite is flying North.
Table 1 Offsets, scaling and angles obtained for models with and without Sun dependence
The main expected changes in the satellite environment along a single orbit are the magnetic field strength and direction, and also the temperature. The latter depends on the satellite lighting orientation. We therefore display the obtained residuals as a function of the Sun position relative to the satellite, where the Sun position is parameterised by two angles, α and β. The angle α gives the Sun direction angle relative to the \(E_2\) VFM direction, whereas the β angle is the angle between the \(-E_1\) VFM direction and the projection of the Sun direction on the \(E_1 E_3\) VFM plane. The angle α is 90° and β is zero when the Sun is nearly above the satellite, shifted by 13° because of the angle between the boom and satellite body. The angles α and β are both 90° when the Sun is in the \(E_3\) direction. With such a parameterisation of the Sun position, the satellite local time (LT), shown in red in Figure 1, maps into the α angle. We therefore show in Figure 2 the residuals to the least-squares fit, displayed as a function of the Sun position for three different time spans, each corresponding to periods between dates where the satellite is flying in dawn-dusk orbits - i.e. 06h00, 18h00 LT.
Residuals to the least-squares fit displayed as a function of the Sun position. Same residuals as in Figure 1, now displayed as a function of the Sun position relative to the satellite and parameterised with the two angles α,β. The colour scales correspond to the residual amplitudes. The angles indicated on the Hammer projection plots are equal to 90° −α. The angles indicated on the polar plots correspond to the β values. The Sun is in the \((-E_1)\) direction - i.e. nearly above the satellite - when 90° −α=0 and β=0. The Sun is beneath the \(E_2 E_3\) plane when β is outside the [−90:90] range. Top panel: residuals from 25 November 2013 to 25 February 2014. Central panel: residuals from 25 February 2014 to 10 July 2014. Bottom panel: residuals from 10 July 2014 to 30 September 2014. On the central panel, two obvious small-scale features are circled in black.
The three panels of Figure 2 display similar patterns, although the central panel is of the opposite sign. This is consistent with a small magnetic perturbation carried by the satellite, generated in the vicinity of the VFM sensors, and that depends on the Sun's position. In such a scenario, the central panel where the satellite is in a descending mode on the dayside of the Earth - i.e. flying toward South on the dayside - should have roughly opposite sign anomalies compared to the two other panels where the satellite is flying North on the dayside of the Earth. We also observe that the anomalies differ dependent on the Sun being on one side of the satellite or the other - i.e. if α is smaller or larger than 90°. We note that the maximum perturbation is not observed when the Sun is just above the satellite, but rather slightly on the side. Finally, we see that there are areas, as those circled in black on the central panel in Figure 2, where the anomalies are displaying small-scale structures.
These observations suggested a simple modification of the offset parameterisation: adding a Sun position dependence to the otherwise constant offset values. This is written as:
$$ \left\{ \begin{array}{l} o_{1}={o_{1}^{0}} + \sum_{l=1}^{30} \sum_{m=-l}^{m=l} o_{1}^{l,m} \; {Y_{l}^{m}}(\alpha,\beta)\\ o_{2}={o_{2}^{0}} + \sum_{l=1}^{30} \sum_{m=-l}^{m=l} o_{2}^{l,m} \; {Y_{l}^{m}} (\alpha,\beta)\\ o_{3}={o_{3}^{0}} + \sum_{l=1}^{30} \sum_{m=-l}^{m=l} o_{3}^{l,m} \; {Y_{l}^{m}} (\alpha,\beta) {\;,}\\ \end{array} \right. $$
where the \({Y_{l}^{m}} (\alpha,\beta)\) are spherical harmonics (SH). The parameters \({o_{1}^{0}}\), \({o_{2}^{0}}\) and \({o_{3}^{0}}\) are the constant offset values, and \(o_{1}^{l,m}\), \(o_{2}^{l,m}\), \(o_{3}^{l,m}\) are the parameters for the Sun position dependence of the offsets. We choose a maximum SH degree of 30 arbitrarily. Recent experiments have shown that this maximum degree can be reduced (Lars Tøffner-Clausen, personal communication). In the results presented below, we also used an SH representation for the scaling factor up to SH degree 10, but this can probably be dropped if a sensor temperature dependence is assumed instead. Using this parameterisation of the anomaly, we proceed as before and adjust the set of parameters to minimise the differences between the ASM readings and the strength of the magnetic field observed by the VFM instruments through a least-squares fit.
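For illustration, the following sketch (our own code and sign conventions, not the authors') computes the two Sun angles from a unit Sun vector expressed in the VFM frame, consistent with the definitions of α and β given above, and evaluates a Sun-dependent offset as a generic expansion over caller-supplied basis functions; the particular choice of Schmidt semi-normalised spherical harmonics is left to the caller.

```python
# Sketch: Sun angles in the VFM frame and a generic Sun-dependent offset expansion.
import numpy as np

def sun_angles_vfm(sun_vec):
    """sun_vec = (s1, s2, s3): unit Sun direction along the E1, E2, E3 axes."""
    s1, s2, s3 = sun_vec
    alpha = np.arccos(s2)          # angle between the Sun direction and the E2 axis
    beta = np.arctan2(s3, -s1)     # angle from -E1 of the projection onto the E1-E3 plane
    return alpha, beta

def offset_with_sun(o_const, coeffs, basis, alpha, beta):
    """o_const: constant offset; coeffs[k] multiplies basis[k](alpha, beta)."""
    return o_const + sum(c * f(alpha, beta) for c, f in zip(coeffs, basis))

# e.g. Sun nearly "above" the satellite, along -E1:
# sun_angles_vfm((-1.0, 0.0, 0.0)) -> (pi/2, 0.0)
```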
The residuals of the least-squares fit are shown in Figure 3. They remain weak and do not present a structured signal, neither in time nor as a function of the Sun position. Some residuals are still large for very short periods of time, probably associated with satellite manoeuvres. The processing is therefore a success. The Sun-dependent parts of the offsets are displayed in Figure 4. The constant values obtained for the offsets, non-orthogonality angles and scaling, and the slope of the latter are given in Table 1.
Residuals of the fit to ASM data with a Sun position dependence of the offsets. Residual differences between ASM data and the magnetic field strength calculated from VFM data for the set of calibration parameters given in Table 1. The model does include a Sun position dependence of the offsets. The red line is the estimated local time for data points selected within 1° of the equator when the satellite is flying North.
Sun position dependence of the offsets. Perturbation vector strength in the \(E_1\), \(E_2\) and \(E_3\) directions in the VFM reference frame, as a function of the Sun position relative to the satellite. The projection and the definition of the angles are the same as in Figure 2.
The most interesting results are associated with the Sun dependence of the offsets along the \(E_1\) direction. The Sun dependence shows two strong offset anomalies:
A large negative anomaly when the Sun is nearly above and behind satellite A, slightly on the right side of the satellite when looking in the flight direction.
A large positive anomaly, just before the Sun lowers below the horizontal plane of the satellite, on its left side.
The offsets in the \(E_2\) component also show a relatively large anomaly when the Sun is slightly behind the satellite on its right side when looking in the flight direction. These Sun-dependent offsets, which are supposed to be independent of time, correct the VFM data so that the anomalies presented in Figure 2 vanish.
Overall, the correction is successful. It can certainly be improved and stabilised, but what is described above was the best model available in December 2014. The differences between ASM and VFM values for the other two satellites are much weaker. Nonetheless, the processing described here leads also to a good fit between ASM and VFM data. With such a correction, the standard deviation of the differences between ASM values and the field strength, as estimated from VFM data, is around 0.17 nT for all three satellites.
Data
The magnetic field models were derived from the three Swarm L1b Baseline 0301/0302 satellite data series that have been processed and corrected as described in the previous section. The corrected magnetic field values in the VFM reference frame are rotated into the required reference frame when necessary - typically the Earth-centred, Earth-fixed, North, East, Center (NEC) reference frame. We also use observatory hourly means as prepared by Macmillan and Olsen 2013.
For all these data, the time is defined in Modified Julian Days 2000 (MJD) that counts the days since 1 January 2000 at 00:00h. The time period is limited from MJD 4749 to 5479 - i.e. 2013.0 to 2015.0. Over this time span, Swarm satellite data were available for:
Satellite A: from MJD 5078.14 to 5369.87
Satellite B: from MJD 5080.02 to 5369.88
Satellite C: from MJD 5086.00 to 5369.87
The observatory data were available only up to MJD 5219.
The selection criteria used for these data are similar to those used in the GRIMM series of models - e.g. Lesur et al. 2010. We recall these criteria for completeness; a sketch of the corresponding quiet-time selection mask is given after the lists below.
The satellite and observatory vector data are selected in the solar magnetic (SM) coordinate system between ±55° magnetic latitude for magnetically quiet times according to the following criteria:
Positive value of the Z-component of the interplanetary magnetic field (IMF \(B_z\)),
Sampling points are separated by 20 s at minimum,
Data are selected at local time between 23 : 00 and 05 : 00, with the Sun below the horizon at 100 km above the Earth's reference radius (a=6371.2 km),
\(Dst\) values should be within ±30 nT and their time derivatives less than 100 nT/day, and
Quality flags set to have accurate satellite positioning and two star cameras operating.
At high latitudes - i.e. polewards of ±55° magnetic latitude - the three-component vector magnetic data are used in the NEC system of coordinates. Their selection criteria differ from those listed above in the following way:
Data are selected at all local times and independently of the Sun position.
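The sketch announced above, for the mid-latitude quiet-time selection, is given below (array names and the vectorised form are ours; the 20-s sub-sampling and the quality-flag checks are omitted for brevity):

```python
# Sketch of the mid-latitude, quiet-time selection mask using the thresholds listed above.
import numpy as np

def quiet_midlatitude_mask(mag_lat_deg, imf_bz, local_time, sun_below_horizon,
                           dst, dst_rate_per_day):
    return ((np.abs(mag_lat_deg) <= 55.0)
            & (imf_bz > 0.0)
            & ((local_time >= 23.0) | (local_time <= 5.0))
            & sun_below_horizon                      # at 100 km above a = 6371.2 km
            & (np.abs(dst) <= 30.0)
            & (np.abs(dst_rate_per_day) < 100.0))
```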
We point here to the fact that provisional values of the \(Dst\) index are used for selection and modelling. The definitive \(Dst\) index values are likely to be different. At the time of data selection, no observatory data were available to define a more suitable selection index (e.g. the VMD index defined in Thomson and Lesur (2007)).
Model parameterisation
Away from its sources, the magnetic field can be described as the negative gradient of potentials associated with sources of internal and external origin:
$$ \begin{aligned} {} &\mathbf{B}= \mathbf{-\nabla} \{ V_{{\mathrm{i}}}(\theta,\phi,r,t) + V_{{\mathrm{e}}}(\theta,\phi,r,t) \} \\ {}&V_{{\mathrm{i}}}(\theta,\phi,r,t)= a {\sum\nolimits}_{l=1}^{L_{{\mathrm{i}}}}{\sum\nolimits}_{m=-l}^{l} \left(\frac{a}{r}\right)^{l+1} {g_{l}^{m}}(t) {Y_{l}^{m}}(\theta, \phi) \\ {}&V_{{\mathrm{e}}}(\theta,\phi,r,t)= a {\sum\nolimits}_{l=1}^{L_{{\mathrm{e}}}}{\sum\nolimits}_{m=-l}^{l} \left(\frac{r}{a}\right)^{l} {q_{l}^{m}}(t) {Y_{l}^{m}}(\theta, \phi) \end{aligned} $$
where \({Y_{l}^{m}}(\theta,\phi)\) are the Schmidt semi-normalised spherical harmonics (SH). θ,ϕ,r and a are the colatitude, longitude, satellite radial position and model reference radius, respectively, in geocentric coordinates. We use the convention that negative orders, m<0, are associated with sin(|m|ϕ) terms whereas null or positive orders, m≥0, are associated with cos(m ϕ) terms.
For the largest wavelengths of the field generated in the core and lithosphere (here assumed up to SH degree \(L_{\mathrm{i}} = 18\)), the reference radius used in Equation 5 is a = 3485 km, which corresponds to the radius of the core. This choice has no effect on the final field model unless regularisation or constraints are applied to the Gauss coefficients. These are parameterised in time from 2013.0 to 2015.0 using order-six B-splines \({\psi^{6}_{j}}(t)\), with half-year time intervals between spline nodes. The time dependence of the Gauss coefficients is therefore given by:
$$ {g_{l}^{m}}(t)=\sum_{j=1}^{N_{\mathrm{t}}}g_{lj}^{m} \, {\psi^{6}_{j}}(t), $$
where \(N_{\mathrm{t}} = 9\). For the core and lithospheric field of SH degree greater than 18, the reference radius is set to a = 6371.2 km. The maximum SH degree used for modelling the field of internal origin is 30, although a constant field, defined in Lesur et al. (2013) and covering all SH degrees from 30 to 100, is subtracted from the data so that only very small contributions from the lithospheric field remain unmodelled. The remaining part of the internal field model is the induced field. It is modelled in the SM system of coordinates using four coefficients in each of \(N_{\mathrm{e}} = 4\) different 6-month time intervals, scaling the internal part of the Dst index - i.e. the Ist. These coefficients are the SH degree l=1 coefficients and the zonal SH degree l=2 coefficient. The magnetic potential for the induced field is therefore:
$$ \begin{aligned} V_{\text{induced}}(\theta,\phi,r,t) = a \sum_{j=1}^{N_{\mathrm{e}}} &\left\{ \sum_{m=-1}^{1} \left(\frac{a}{r}\right)^{2} g_{1j}^{m \, \text{Dst}} \, {Y_{1}^{m}}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) \right. \\ &\left. {}+ \left(\frac{a}{r}\right)^{3} g_{2j}^{0 \, \text{Dst}} \, {Y_{2}^{0}}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) \right\} \, {\mathcal{H}}_{j}(Ist) \end{aligned} $$
where the function \({\mathcal {H}}_{j}(X)\) takes the value X in the time interval \([t_{j}:t_{j+1}]\) and is zero otherwise. \(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}\) are the colatitude and longitude in the SM reference frame. For observatory data, we also co-estimate crustal offsets.
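The order-six B-spline basis of Equation 6 is easy to reproduce with standard tools. The sketch below is our illustration only (not the production code): it uses one common knot choice - half-year interior nodes over 2013.0 to 2015.0 with end knots of full multiplicity - which yields exactly nine basis functions, matching \(N_{\mathrm{t}} = 9\).

```python
import numpy as np
from scipy.interpolate import BSpline

order = 6                                   # order-six B-splines (polynomial degree 5)
degree = order - 1
interior_nodes = [2013.5, 2014.0, 2014.5]   # half-year spacing between spline nodes
knots = np.r_[[2013.0] * order, interior_nodes, [2015.0] * order]
n_basis = len(knots) - order                # = 9, the N_t of Equation 6

def psi(j, t):
    """Evaluate the j-th basis function psi_j^6(t)."""
    coeffs = np.zeros(n_basis)
    coeffs[j] = 1.0
    return BSpline(knots, coeffs, degree)(t)

# A Gauss coefficient is then g_l^m(t) = sum_j g_lj^m * psi_j(t); with placeholder
# spline coefficients g_j (hypothetical values, in nT) this reads:
t = np.linspace(2013.0, 2015.0, 401)
g_j = np.linspace(-1.0, 1.0, n_basis)
g_of_t = sum(g_j[j] * psi(j, t) for j in range(n_basis))
```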
The external field parameterisation also consists of independent parts. A slowly varying part of the external field model is parameterised over each 6-month time interval by a degree l=1, order m=0 coefficient in the geocentric solar magnetospheric (GSM) system of coordinates, and two coefficients of SH degree l=1 with orders m=0 and m=−1 in the SM system of coordinates. The rapidly varying part of the external field is controlled using the external part of the Dst index - i.e. the Est - and the IMF \(B_{y}\) time series. Here again, 6-month time intervals are used. Four scaling coefficients for the Est are introduced in each interval in the SM system of coordinates: three for SH degree l=1 and orders m=−1,0,1, and one for SH degree l=2 and order m=0. One scaling coefficient for the IMF \(B_{y}\) is introduced in each time interval for SH degree l=1 and order m=−1 in the SM system of coordinates.
Overall, the parameterisation of the external magnetic potential is:
$$ \begin{aligned} V_{\mathrm{e}}(\theta,\phi,r,t) &= r \sum_{j=1}^{N_{\mathrm{e}}} \left\{ q_{1j}^{0 \, \text{GSM}} \, {Y_{1}^{0}}(\theta_{\mathrm{G}}, \phi_{\mathrm{G}}) \right\} {\mathcal{H}}_{j}(1) \\ &\quad + r \sum_{j=1}^{N_{\mathrm{e}}} \left\{ q_{1j}^{0 \, \text{SM}} \, {Y_{1}^{0}}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) + q_{1j}^{-1 \, \text{SM}} \, Y_{1}^{-1}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) \right\} {\mathcal{H}}_{j}(1) \\ &\quad + r \sum_{j=1}^{N_{\mathrm{e}}} \left\{ \sum_{m=-1}^{1} q_{1j}^{m \, \text{Dst}} \, {Y_{1}^{m}}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) + \left(\frac{r}{a}\right) q_{2j}^{0 \, \text{Dst}} \, {Y_{2}^{0}}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) \right\} {\mathcal{H}}_{j}(Est) \\ &\quad + r \sum_{j=1}^{N_{\mathrm{e}}} \left\{ q_{1j}^{-1 \, \text{IMF}} \, Y_{1}^{-1}(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}) \right\} {\mathcal{H}}_{j}(\text{IMF} \, B_{y}) \end{aligned} $$
\(\theta_{\mathrm{G}}, \phi_{\mathrm{G}}\) and \(\theta_{\mathrm{S}}, \phi_{\mathrm{S}}\) are the colatitudes and longitudes in the GSM and SM reference frames, respectively.
We used independent external field parameterisations for the satellite and observatory data. For the latter, we impose that \(q_{1j}^{0 \, \text {SM}}\) is set to zero to avoid co-linearities with the observatory crustal offsets.
Independently of the parameterisation of the magnetic field, we also estimate the so-called Euler angles between the VFM orthogonal set of measurements and the star camera reference frame, such that the measured vector field in the VFM reference frame can be mapped into an Earth-centred, Earth-fixed reference frame, usually the NEC system. In practice, we only compute corrections to predefined Euler angles, for a series of 30-day windows. The parameterisation and algorithm we used are detailed in Rother et al. (2013) and are not repeated here.
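As a rough illustration of what such a correction does - a sketch only, since the actual processing chains the VFM-to-star-camera alignment with the attitude solution and Earth rotation (Rother et al. 2013) - a small Euler-angle correction can be composed with a nominal rotation before a VFM vector is mapped towards NEC. All numbers below are placeholders.

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_nominal = Rotation.identity()      # nominal VFM -> spacecraft rotation (placeholder)

arcsec = np.deg2rad(1.0 / 3600.0)    # 1 arcsec in radians
R_corr = Rotation.from_euler("xyz", [20 * arcsec, 20 * arcsec, 20 * arcsec])

B_vfm = np.array([20000.0, -3000.0, 40000.0])   # hypothetical field vector in the VFM frame, nT
B_sc = (R_nominal * R_corr).apply(B_vfm)
# ... followed by the star-camera attitude and Earth-rotation steps down to NEC.
```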
Inversion process
The relationships between data and model are given by Equations 5 to 8 and lead to a linear system of equations:
$$ {\mathbf d} = {\mathbf A}_{d} \, {\mathbf g} + \boldsymbol{\epsilon} $$
to be solved, where d is the data vector, g is the vector of Gauss coefficients defining the model, and ε is the part of the data that cannot be explained by the model. We note that for mid-latitude data, Equation 5 has to be rotated into the SM system of coordinates because the data at mid-latitudes are selected in that system. However, for the internal part of the model, the data in the \(Z_{\mathrm{SM}}\) direction are not used as they are strongly contaminated by the external fields. This approach has been used in all models of the GRIMM series. More details are provided in Lesur et al. (2008).
As we try to estimate a core field model with relatively complex behaviour in time up to SH degree 18, the model is not fully resolved by the data and needs regularisation. The chosen regularisation minimises the integral of the squared second time derivative of the core field model radial component over the model time span. Further, at two epochs close to the end points of the model time span, the integral of the squared first time derivative of the radial component of the core field model is minimised. This regularisation defines three functionals at the core radius c:
$${} \begin{aligned} &\Phi_{{1\,\text{or}\,2}}(t)=\frac{1}{4\pi \, c^{2}} \int_{\Omega(c)} \left| \partial_{t} B_{r}(\theta, \phi, r, t) \right|^{2} \; d\omega \\ &\text{for } t=t_{1} \text{ and } t=t_{2}, \text{and } \\ &\Phi_{{3}}=\frac{1}{4\pi \, c^{2} \, (t_{2}-t_{1})}\int_{t_{1}}^{t_{2}} \int_{\Omega(c)} \left| {\partial_{t}^{2}} B_{r}(\theta, \phi, r, t) \right|^{2} \; d\omega \, dt, \end{aligned} $$
which have to be minimised with respect to the core field model Gauss coefficients. Unlike Lesur et al. (2010), we control the first and second time derivatives of the radial field, as these time derivatives are poorly resolved given the brevity of the input data time span. Ultimately, this leads to three systems of linear equations, one for each functional:
$$ {\mathbf 0}= \lambda_{i} \; {\mathbf L}_{Bi} \, {\mathbf g} \hspace{1cm} i=1,2,3. $$
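For orientation, the operators \({\mathbf L}_{Bi}\) encode standard quadratic forms: assuming Schmidt semi-normalised harmonics and writing \(B_{r}\) at radius c in terms of the Gauss coefficients, the orthogonality of the \({Y_{l}^{m}}\) reduces the second-time-derivative functional of Equation 10 to

$$ \Phi_{3} = \frac{1}{t_{2}-t_{1}} \int_{t_{1}}^{t_{2}} \sum_{l=1}^{L_{\mathrm{i}}} \frac{(l+1)^{2}}{2l+1} \left(\frac{a}{c}\right)^{2l+4} \sum_{m=-l}^{l} \left[ {\partial_{t}^{2}} {g_{l}^{m}}(t) \right]^{2} \, dt. $$

Since a = c = 3485 km for these degrees, the radial factor is unity; inserting the spline expansion of Equation 6 then makes \(\Phi_{3}\) a quadratic form in the spline coefficients \(g_{lj}^{m}\), which is what \({\mathbf L}_{B3}\) represents. \(\Phi_{1}\) and \(\Phi_{2}\) are analogous, with single time derivatives evaluated at \(t_{1}\) and \(t_{2}\).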
Equations 9 and 11 are solved simultaneously by least squares, where the \(\lambda_{i}\) are scalars that are adjusted so that the satellite and observatory data are fit to their expected noise level.
Since the Swarm satellite data set contains data from manoeuvre days, and since the most recent observatory data are 'only' quasi-definitive, we expect a higher number of outliers than usual and a residual distribution deviating significantly from a Gaussian. We therefore use a re-weighted least-squares algorithm where, at iteration j+1, the datum \(d_{i}\) in Equation 9 is associated with a weight \(w_{i}^{j+1}\) given by:
$${} {w}_{i}^{j+1} = \left\{ \begin{array}{ll} \frac{1}{\sigma_{i}} &\text{for } |{d}_{i}-{\mathbf A}_{d} \cdot {\mathbf g}^{j}| \le k_{i} \, \sigma_{i},\\ \frac{1}{\sigma_{i}} \left[\frac{k_{i} \, \sigma_{i}}{|{d}_{i}-{\mathbf A}_{d} \cdot {\mathbf g}^{j}|} \right]^{1-\frac{a_{i}}{2}} \;\;\;\; & \text{for } |{d}_{i}-{\mathbf A}_{d} \cdot {\mathbf g}^{j}| > k_{i} \, \sigma_{i}. \end{array} \right. $$
The \({\mathbf g}^{j}\) are the set of Gauss coefficients obtained at iteration j. \(\sigma_{i}\) is the prior standard deviation of the noise associated with the datum \(d_{i}\). The control parameters \(\sigma_{i}\), \(k_{i}\) and \(a_{i}\) are set before starting the iterative process and are given in Table 2. They depend on the data type. The iterative process is started by setting \(a_{i} = 2\) for all data points - i.e. assuming a Gaussian distribution of residuals.
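A direct transcription of Equation 12 is short. The sketch below is ours; the arguments sigma, k and a correspond to the control parameters \(\sigma_{i}\), \(k_{i}\) and \(a_{i}\) of Table 2.

```python
import numpy as np

def irls_weights(residuals, sigma, k, a):
    """Weights of Equation 12: 1/sigma core, power-law-damped tail for large residuals."""
    r = np.abs(residuals)
    core = 1.0 / sigma
    # Guard against r == 0 in the tail branch; np.where still selects the core value there.
    tail = core * (k * sigma / np.maximum(r, 1e-30)) ** (1.0 - a / 2.0)
    # Note: with a = 2 (the first iteration) the tail equals the core, i.e. plain Gaussian weights.
    return np.where(r <= k * sigma, core, tail)
```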
Table 2. Satellite and observatory data weight parameters, residual means and rms.
At each iteration, we therefore minimise the functional:
$${} \Phi_{j} = \left[ {\mathbf d}-{\mathbf A}_{d} \, {\mathbf g}^{j} \right]^{t} {\mathbf W}^{j} \left[ {\mathbf d}-{\mathbf A}_{d} \, {\mathbf g}^{j} \right] + \sum_{i=1}^{3} \lambda_{i} \left[{\mathbf L}_{Bi} \, {\mathbf g}^{j} \right]^{t} \left[{\mathbf L}_{Bi} \, {\mathbf g}^{j} \right]\!, $$
where j is the iteration number and \({\mathbf W}^{j}\) is a diagonal matrix whose elements are \(({w}_{i}^{j})^{2}\), with \({w}_{i}^{j}\) defined by Equation 12.
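One iteration of the scheme then amounts to solving the regularised normal equations of this functional. The sketch below (illustrative only - dense linear algebra, whereas the real problem is much larger and exploits its structure) reuses the `irls_weights` helper sketched above.

```python
import numpy as np

def irls_step(A, d, g_prev, sigma, k, a, L_list, lam_list):
    """One re-weighted least-squares update minimising Phi_j (Equation 13).

    A: design matrix A_d; d: data vector; g_prev: coefficients from iteration j;
    L_list, lam_list: regularisation operators L_B1..L_B3 and damping parameters lambda_1..3.
    """
    w = irls_weights(d - A @ g_prev, sigma, k, a)
    W2 = w ** 2                                   # diagonal of W^j, i.e. the (w_i^j)^2
    lhs = A.T @ (W2[:, None] * A)
    rhs = A.T @ (W2 * d)
    for lam, L in zip(lam_list, L_list):
        lhs = lhs + lam * (L.T @ L)
    return np.linalg.solve(lhs, rhs)
```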
Once an acceptable solution is obtained, a threshold filtering is applied to remove remaining outliers. Then, the Euler angles are estimated and the inversion process is restarted. The final solution is reached when the Euler angles no longer change significantly from one iteration to the next.
In Table 2, we present the fit to the data for all subsets of data used, together with the control parameters \(\sigma_{i}\), \(k_{i}\) and \(a_{i}\). These are specified independently for each satellite when necessary. The residual rms for the low-latitude magnetic field components \(X_{\mathrm{SM}}\), \(Y_{\mathrm{SM}}\) and \(Z_{\mathrm{SM}}\) are in the same range as for the GRIMM models based on CHAMP satellite data (see Lesur et al. 2015). The \(Z_{\mathrm{SM}}\) component is included because it enters the Euler angle and external field parameter estimation, but it does not directly affect the estimation of the fields of internal origin. The misfits for the high-latitude data are slightly smaller than those obtained with CHAMP data for a similar model parameterisation. It is not yet clear why this occurs. The magnetic data selected for the extended local time window are only used for the Euler angle estimation. Since early evening and late morning data are included, the fit to the \(Z_{\mathrm{SM}}\) component is slightly degraded. Nonetheless, the fit to these data remains surprisingly good.
The misfits for the observatory data are comparable to those of the satellite data, although slightly degraded for the mid- and low-latitude SM components. The mean of the residuals is relatively high for the observatory \(X_{\mathrm{HL}}\) component, which is an indication of the strongly non-Gaussian residual distribution. Overall, accounting for the corrections applied to the satellite data and the obligation to use preliminary Dst index values for selection and modelling, the fit to the data is acceptable.
Figure 5 shows the histograms for three of the satellite data components. The residuals for other data types show similar distributions. The distributions, plotted on a semi-logarithmic scale, clearly show the deviation from a Gaussian distribution. The chosen weights correspond to a Gaussian distribution of residuals for small errors and significant tails for large errors, as shown by the blue dotted line. They lead to prior error distributions that are compatible with the posterior distributions, which proves to be an important step for obtaining high-quality magnetic field models.
Figure 5. Residual histograms for three of the satellite data components. Histogram of the residuals for the \(X_{\mathrm{SM}}\), \(Y_{\mathrm{SM}}\) and \(Z_{\mathrm{HL}}\) data types, scaled by their prior σ given in Table 2. The Gaussian distribution is shown in green, and the distributions corresponding to the weights in Equation 12 for large residuals are shown in blue.
The power spectra of a snapshot of the model core field and secular variation are presented in Figure 6 for epoch 2014.0. This epoch corresponds to a time where satellite and observatory data coverage overlap. The results discussed here are labelled DCO(o), for 'dedicated core field', where 'o' indicates the use of observatory data; the modelling approach is largely based on developments made in the Swarm SCARF DCO L2 framework (Rother et al. 2013). The power spectra of the IGRF-GFZ candidates are also displayed for comparison. The IGRF main field candidate was built from its parent model snapshot for 2014.0, forwarded in time by one year using the averaged SV of the parent model from 2013.5 to 2014.5. The IGRF SV candidate for 2015 to 2020 is this average SV derived from the parent model. While the differences between the two main field models in Figure 6 only reflect the SV between 2014.0 and 2015.0 (the reference date for the IGRF), some significant differences between the parent SV models appear above SH degree 8 (not shown in Figure 6). Besides the data updates since the IGRF candidate submission, these differences may be caused by changes in the preliminary Dst values, the corrections applied to Swarm data and, of course, slightly different values for the λ controlling the regularisation. These increasing differences from SH degree 8 give some indication of the SV model robustness. The poor modelling of the SV above SH degree 8 is to be expected given the limited time span of available Swarm and observatory magnetic field vector data.
Figure 6. Power spectra of the modelled magnetic fields. Power spectra of the magnetic field model and its SV, calculated for 2014.0 at the Earth's surface. The spectra for the model presented in this paper - i.e. DCO(o) - and for the GFZ-IGRF candidates are shown, together with the power of the differences between the two models.
Figure 7 presents maps of the vertical down component of the core field model and its SV for epoch 2014.0. The main field map shows the usual reverse patches in the southern hemisphere and the undulations of the magnetic equator. A close comparison with equivalent maps for 2005 shows weak evidence of westward drift in the southern Atlantic (not shown). Similarly, the SV map displays the usual weakness of the SV over the Pacific and Antarctic regions. With only one year of data, it is not possible to obtain reasonably accurate models of the acceleration. We can nonetheless estimate the average acceleration at large scales since the CHAMP epoch. This is displayed in Figure 8, where the acceleration has been calculated at the Earth's surface from scaled differences between the GRIMM-4 model SV for 2005 (Lesur et al. 2015) and our SV estimate for 2014.0. Interestingly, the large peaks of acceleration during the CHAMP era, first shown in Lesur et al. (2008) and as large as 30 nT/yr², average out over several years. These acceleration pulses (as described in Chulliat et al. (2010)) can then be interpreted as short-term disturbances superimposed on an otherwise smooth field evolution. The only region where this average acceleration displays significant values is under Eastern Asia. Yet, it is not clear whether this is a transient or permanent feature of the geomagnetic field.
Figure 7. Maps of the magnetic field model and its SV. Maps of the vertical down component of the magnetic field model for 2014.0 at the core-mantle boundary. Top: snapshot of the core field. Bottom: snapshot of the SV.
Figure 8. Map of the modelled average acceleration. Map of the vertical down component of the magnetic field average acceleration between 2005.0 and 2014.0 at the Earth's surface.
Figure 9 shows time series of the Euler angle corrections for 30-day segments, estimated over the satellite data time span for all three satellites. The Euler angles are constant during each 30-day period, but otherwise no constraints have been applied to limit their amplitudes or variations. Although these corrections never exceed 20 arcsec, the variability of the Euler angles is surprisingly high. This is very likely not due to a lack of stability of the optical bench, which rigidly links the VFM sensors to the star cameras that define the spacecraft reference frame. The apparent correlation of the angle variations after the first third and before the last third of the period - i.e. at MJD ≃5170 and MJD ≃5300 - is clearly associated with the changes between northward and southward flight directions of the satellites at these epochs in the selected nighttime data set. As indicated in Rother et al. (2013), the Euler angle corrections may absorb and respond to remaining noise or signals from various sources, as well as hidden timing errors or poor/incomplete modelling of the external fields. The local time dependence is less obvious than it was for the CHAMP data, possibly because not enough Swarm data have been accumulated yet. For CHAMP data, the corrections were also of much larger amplitude (up to 50 arcsec).
Figure 9. Euler angle corrections. Time series of estimated Euler angle corrections for constant rotations from the VFM into the S/C system over 30-day segments. There are three angles for each satellite. Each symbol marks the start of a 30-day interval.
With a view to providing candidates for the new IGRF-12, we processed a combination of Swarm satellite and observatory data. In order to obtain robust models of the main field, we first applied a correction to the satellite data such that the strength of the magnetic field, as measured by the VFM instrument, matches the data measured by the absolute scalar instrument. Our underlying hypothesis is that the discrepancies between these two instruments are closely linked to the position of the Sun relative to the satellite. So far, the corrections obtained under this hypothesis seem suitable. The results presented here are part of a longer-term study, conducted in close collaboration with the European Space Agency and several other European institutions. The ultimate goal is to identify the original source of this perturbative signal so that, firstly, the Swarm satellite data can be corrected to remove it and, secondly, a new design can be adopted in future satellite missions to avoid such difficulties. As stated in the text, the model presented here is not optimal but corresponds to our best results available at the end of 2014.
Using these corrected data, we have been able to derive accurate models of the geomagnetic field around epoch 2014.0. Since less than a year of satellite data had been accumulated, the SV model - i.e. the linear variation of the magnetic field in time - is only accurate for the longest wavelengths (up to SH degree 8). We therefore cannot meaningfully compare maps of the SV at the CMB with those obtained at earlier epochs. Nonetheless, the average acceleration of the magnetic field - i.e. its second time derivative - between 2005 and 2014.0 is remarkably weak at the Earth's surface compared with the acceleration values obtained during the CHAMP era. This observation strongly supports the view that the observed acceleration peaks, which have been associated with magnetic jerks in 2003, 2007 and 2010, correspond to short-term disturbances of the field over an otherwise slowly and, most of the time, smoothly evolving magnetic field.
Chulliat, A, Thébault E, Hulot G (2010) Core field acceleration pulse as a common cause of the 2003 and 2007 geomagnetic jerks. Geophys Res Lett 37(L07301). doi:10.1029/2009GL042019.
Finlay, CC, Maus S, Beggan CD, Bondar TN, Chambodut A, Chernova TA, Chulliat A, Golokov VP, Hamilton B, Hamoudi M, Holme R, Hulot G, Kuang W, Langlais B, Lesur V, Lowes FJ, Luehr H, Macmillan S, Mandea M, McLean S, Manoj C, Menvielle M, Michaelis I, Olsen N, Rauberg J, Rother M, Sabaka TJ, Tangborn A, Toffner-Clausen L, Thebault E (2010) International geomagnetic reference field: the eleventh generation. Geophys J Int 183(3): 1216–1230. doi:10.1111/j.1365-246X.2010.04804.x.
Lesur, V, Wardinski I, Rother M, Mandea M (2008) GRIMM - The GFZ Reference Internal Magnetic Model based on vector satellite and observatory data. Geophys J Int 173. doi:10.1111/j.1365-246X.2008.03724.x.
Lesur, V, Wardinski I, Hamoudi M, Rother M (2010) The second generation of the GFZ reference internal magnetic field model: GRIMM-2. Earth Planets Space 62(10): 765–773. doi:10.5047/eps.2010.07.007.
Lesur, V, Rother M, Vervelidou F, Hamoudi M, Thébault E (2013) Post-processing scheme for modeling the lithospheric magnetic field. Solid Earth 4: 105–118. doi:10.5194/sed-4-105-2013.
Lesur, V, Whaler K, Wardinski I (2015) Are geomagnetic data consistent with stably stratified flow at the core-mantle boundary? Geophys J Int. doi:10.1093/gji/ggv031.
Macmillan, S, Maus S, Bondar T, Chambodut A, Golovkov V, Holme R, Langlais B, Lesur V, Lowes F, Lühr H, Mai W, Mandea M, Olsen N, Rother M, Sabaka T, Thomson A, Wardinski I (2003) The 9th-generation International Geomagnetic Reference Field. Phys Earth Planet Int 140(4): 253–254. doi:10.1016/j.pepi.2003.09.002.
Macmillan, S, Olsen N (2013) Observatory data and the swarm mission. Earth Planets Space 65(11): 1355–1362. doi:10.5047/eps.2013.07.011.
Mandea, M, Panet I, Lesur V, De Viron O, Diament M, Le Mouël JL (2012) The earth's fluid core: recent changes derived from space observations of geopotential fields. PNAS. doi:10.1073/pnas.1207346109.
Maus, S, Macmillan S, Chernova T, Choi S, Dater D, Golovkov V, Lesur V, Lowes F, Luhr H, Mai W, McLean S, Olsen N, Rother M, Sabaka TJ, Thomson A, Zvereva T (2005) The 10th-generation International Geomagnetic Reference Field. Phys Earth Planet Int 151(3-4): 320–322. doi:10.1016/j.pepi.2005.03.006.
Merayo, JMG, Brauer P, Primdahl F, Petersen JR, Nielsen OV (2000) Scalar calibration of vector magnetometers. Meas Sci Technol 11: 120–132.
Olsen, N, Tøffner-Clausen L, Sabaka TJ, Brauer P, Merayo JMG, Jørgensen JL, Léger J-M, Nielsen OV, Primdahl F, Risbo T (2003) Calibration of the Ørsted vector magnetometer. Earth Planets Space 55: 11–18.
Rother, M, Lesur V, Schachtschneider R (2013) An algorithm for deriving core magnetic field models from swarm data set. Earth Planets Space 65: 1223–1231. doi:10.5047/eps.2013.07.005.
Thomson, AWP, Lesur V (2007) An improved geomagnetic data selection algorithm for global geomagnetic field modelling. Geophys J Int 169: 951–963. doi:10.1111/j.1365-246X.2007.03354.x.
Tøffner-Clausen, L (2014) Swarm ASM-VFM residual task force: test dataset description. Technical Report SW-TN-DTU-GS-006, Rev 2. European Space Agency.
Yin, F, Lühr H (2011) Recalibration of the CHAMP satellite magnetic field measurements. Meas Sci Technol 22. doi:10.1088/0957-0233/22/5/05510.
Yin, F, Lühr H, Rauberg J, Michaelis I, Cai H (2013) Characterization of CHAMP magnetic data anomalies: magnetic contamination and measurement timing. Meas Sci Technol 24. doi:10.1088/0957-0233/24/7/075005.
The authors acknowledge ESA for providing access to the Swarm L1b data. We acknowledge also the institutes and scientists running magnetic observatories which provide data that are essential for magnetic field modelling. IW was supported by the DFG through SPP 1488.
Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam, Germany: Vincent Lesur, Martin Rother, Ingo Wardinski & Reyko Schachtschneider
Université des Sciences et de la Technologie Houari Boumediene, El Alia, Bab Ezzouar 16111, Algiers, Algeria: Mohamed Hamoudi
Institut de Physique du Globe de Strasbourg - EOST, 5 rue René Descartes, 67084 Strasbourg, France: Aude Chambodut
Correspondence to Vincent Lesur.
VL and MR defined the magnetic field models and processed satellite and observatory data. RS and AC handled observatory data. IW and MH studied and proposed SV predictions. VL, MR, RS and IW contributed to the manuscript redaction. All authors read and approved the final manuscript.
Lesur, V., Rother, M., Wardinski, I. et al. Parent magnetic field models for the IGRF-12 GFZ-candidates. Earth Planets Space 67, 87 (2015). doi:10.1186/s40623-015-0239-6
Accepted: 21 April 2015
Keywords: IGRF-12; Swarm satellite mission; Geomagnetic field models; International Geomagnetic Reference Field - The Twelfth generation
DOE Experimental Condensed Matter Physics PI Meeting 2015 - Day 3
Things I learned from the last (half)day of the DOE PI meeting:
"vortex explosion" would be a good name for a 1980s metal band.
Pulsed high fields make possible some really amazing measurements in both high \(T_{\mathrm{C}}\) materials and more exotic things like SmB6.
Looking at structural defects (twinning) and magnetic structural issues (spin orientation domain walls) can give insights into complicated issues in pnictide superconductors.
Excitons can be a nice system for looking at coherence phenomena ordinarily seen in cold atom systems. See here and here. Theory proposes that you could play with these at room temperature with the right material system.
Thermal gradients can drive spin currents even in insulating paramagnets, and these can be measured with techniques that could be performed down to small length scales.
Very general symmetry considerations when discussing competing ordered states (superconductivity, charge density wave order, spin density wave order) can lead to testable predictions.
Hybrid, monocrystalline nanoparticles combining metals and semiconductors are very pretty and can let you drive physical processes based on the properties of both material systems.
Among the things I learned at the second day of the meeting:
In relatively wide quantum wells, and high fields, you can enter the quantum Hall insulating state. Using microwave measurements, you can see signatures of phase transitions within the insulating state - there are different flavors of insulator in there. See here.
As I'd alluded to a while ago, you can make "artificial" quantum systems with graphene-like energetic properties (for example).
In 2d hole gasses at the interface between Ge and overlying SiGe, you can get really huge anisotropy of the electrical resistivity in magnetic fields, with the "hard" axis along the direction of the in-plane magnetic field.
In single-layer thick InN quantum wells with GaN above and below, you can have a situation where there is basically zero magnetoresistance. That's really weird.
In clever tunneling spectroscopy experiments (technique here) on 2d hole gasses, you can see sharp inelastic features that look like inelastic excitation of phonons.
Tunneling measurements through individual magnetic nanoparticles can show spin-orbit-coupling-induced level spacings, and cranking up the voltage bias can permit spin processes that are otherwise blockaded. See here.
Niobium islands on a gold film are a great tunable system for studying the motion of vortices in superconductors, and even though the field is a mature one, new and surprising insights come out when you have a clean, controlled system and measurement techniques.
Scanning Josephson microscopy (requiring a superconducting STM tip, a superconducting sample, and great temperature and positional control) is going to be very powerful for examining the superconducting order parameter on atomic scales.
In magnetoelectric systems (e.g., ferroelectrics coupled to magnetic materials), combinations of nonlinear optics and electronic measurements are required to unravel which of the various possible mechanisms (charge vs strain mediated) generates the magnetoelectric coupling.
Strongly coupling light in a cavity with Rydberg atoms should be a great technique for generating many body physics for photons (e.g., the equivalent of quantum Hall).
Carbon nanotube devices can be great systems for looking at quantum phase transitions and quantum critical scaling, in certain cases.
Controlling vortex pinning and creep is hugely important in practical superconductors. Arrays of ferromagnetic particles as in artificial spin ice systems can control and manipulate vortices. Thermal fluctuations in high temperature superconductors could end up limiting performance badly, even if the transition temperature is at room temperature or more, and the situation is worse if the material is more anisotropic in terms of effective mass.
"Oxides are like people; it is their defects that make them interesting."
Things I learned at today's session of the DOE ECMP PI meeting:
In the right not-too-thick, not-too-thin layers of the 3d topological insulator Bi1.5Sb0.5Te1.7Se1.3 (a cousin of Bi2Se3 that actually is reasonably insulating in the bulk), it is possible to use top and bottom gates to control the surface states on the upper and lower faces, independently. See here.
In playing with suspended structures of different stackings of a few layers of graphene, you can get some dramatic effects, like the appearance of large, sharp energy gaps. See here.
While carriers in graphene act in some ways like massless particles because their band energy depends linearly on their crystal momentum (like photon energy depends linearly on photon momentum in electrodynamics), they have a "dynamical" effective mass, \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}}\), related to how the electronic states respond to an electric bias (a worked number follows at the end of this list).
PdCoO2 is a weird layered metal that can be made amazingly clean, so that its residual resistivity can be as small as 8 n\(\Omega\)-cm. That's about 200 times smaller than the room temperature resistivity of gold or copper.
By looking at how anisotropic the electrical resistivity is as a function of direction in the plane of layered materials, and how that anisotropy can vary with applied strain, you can define a "nematic susceptibility". That susceptibility implies the existence of fluctuations in the anisotropy of the electronic properties (nematic fluctuations). Those fluctuations seem to diverge at the structural phase transition in the iron pnictide superconductors. See here. Generically, these kinds of fluctuations seem to boost the transition temperature of superconductors.
YPtBi is a really bizarre material - nonmetallic temperature dependence, high resistivity, small carrier density, yet superconducts.
Skyrmions (see here) can be nucleated in controlled ways in the right material systems. Using the spin Hall effect, they can be pushed around. They can also be moved by thermally driven spin currents, and interestingly skyrmions tend to flow from the cold side of a sample to the hot side.
It's possible to pump angular momentum from an insulating ferromagnet, through an insulating antiferromagnet (NiO), and into a metal. See here.
The APS Conferences for Undergraduate Women in Physics have been a big hit, using attendance as a metric. Extrapolating, in a couple of years it looks like nearly all of the undergraduate women majoring in physics in the US will likely be attending one of these.
Making truly nanoscale clusters out of some materials (e.g., Co2Si, Mn5Si3) can turn them from weak ferromagnets or antiferromagnets in the bulk into strong ferromagnets in nanoparticle form. See here.
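A worked number for the dynamical mass mentioned above, assuming typical values \(n_{2d} = 10^{12}\) cm\(^{-2}\) and \(v_{\mathrm{F}} \approx 10^{6}\) m/s: \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}} \approx 1.9 \times 10^{-32}\) kg, or roughly 0.02 of the free electron mass - and it grows like \(\sqrt{n_{2d}}\) as you gate up the carrier density.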
DOE Experimental Condensed Matter Physics PI meeting 2015
As they did two years ago, the Basic Energy Sciences part of the US DOE is having a gathering of experimental condensed matter physics investigators at the beginning of next week. The DOE likes to do this (see here for proceedings of past meetings), with the idea of getting people together to talk about the current and future state of the field and ideally seed some collaborations. I will try to blog a bit about the meeting, as I did in 2013 (here and here).
CMP and materials in science fiction
Apologies for the slower posting frequency. Other obligations (grants, research, teaching, service) are significant right now.
I thought it might be fun to survey people for examples of condensed matter and materials physics as they show up in science fiction (and/or comics, which are fantasy more than hard SF). I don't mean examples where fiction gets science badly wrong or some magic rock acts as a macguffin (Infinity Stones, Sankara stones) - I mean cases where materials and their properties are thought-provoking.
A couple of my favorites:
scrith, the bizarre material used to construct the Ringworld. It's some exotic material that has 40% opacity to neutrinos without being insanely dense like degenerate matter.
From the same author, shadow square wire, which is an absurdly strong material that also doubles as a high temperature superconductor. (Science goof in there: Niven says that this material is also a perfect (!) thermal conductor. That's incompatible with superconductivity, though - the energy gap that gives you the superconductivity suppresses the electronic contribution to thermal conduction. Ahh well.)
Even better, from the same author, the General Products Hull, a giant single-molecule structure that is transparent in the visible, opaque to all other wavelengths, and where the strength of the atomic bonds is greatly enhanced by a fusion power source.
Vibranium, the light, strong metal that somehow can dissipate kinetic energy very efficiently. (Like many materials in SF, it has whatever properties it needs to for the sake of the plot. Hard to reconcile the dissipative properties with Captain America's ability to bounce his shield off objects with apparently perfect restitution.)
Old school: cavorite, the H. G. Wells wonder material that blocks the gravitational interaction.
What are your favorite examples?
Amazingly clear results: density gradient ultracentrifugation edition
Ernest Rutherford reportedly said something like, if your experiment needs statistics, you should have done a better experiment. Sometimes this point is driven home by an experimental technique that gives results that are strikingly clear. To the right is an example of this, from Zhu et al., Nano Lett. (in press), doi: 10.1021/acs.nanolett.5b03075. The technique here is called "density gradient ultracentrifugation".
You know that the earth's atmosphere is denser at ground level, with density decreasing as you go up in altitude. If you ignore temperature variation in the atmosphere, you get a standard undergraduate statistical physics problem ("the exponential atmosphere") - the gravitational attraction to the earth pulls the air molecules down, but the air has a non-zero temperature (and therefore kinetic energy). A density gradient develops so that the average gravitational "drift" downward is balanced on average by "diffusion" upward (from high density to low density).
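In equilibrium that balance gives the familiar barometric form \(n(z) \propto \exp(-mgz/k_{\mathrm{B}}T)\). In a centrifuge the role of \(gz\) is played by the (enormously larger) centrifugal potential, so a particle with buoyant mass \(m_{\mathrm{b}}\) - its mass minus that of the fluid it displaces - distributes as \(n(r) \propto \exp(+m_{\mathrm{b}}\omega^{2}r^{2}/2k_{\mathrm{B}}T)\), with \(\omega\) the rotor's angular speed (standard sedimentation-equilibrium expressions, quoted here just for orientation). In a density gradient, \(m_{\mathrm{b}}\) changes sign at the depth where the local fluid density matches the particle's, which is why each species collects into a band there.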
The idea of density gradient ultracentrifugation is to work with a solution instead of the atmosphere, and generate a vastly larger effective gravitational force (to produce a much sharper density gradient within the fluid) by using an extremely fast centrifuge. If there are particles suspended within the solution, they end up sitting at a level in the test tube that corresponds to their average density. In this particular paper, the particles in question are little suspended bits of hexagonal boron nitride, a quasi-2d material similar in structure to graphite. The little hBN flakes have been treated with a surfactant to suspend them, and depending on how many layers are in each flake, they each have a different effective density in the fluid. After appropriate dilution and repeated spinning (41000 RPM for 14 hours for the last step!), you can see clearly separated bands, corresponding to layers of suspension containing particular thickness hBN flakes. This paper is from the Hersam group, and they have a long history with this general technique, especially involving nanotubes. The results are eye-popping and seem nearly miraculous. Very cool.
The (Intel) Science Talent Search - time to end corporate sponsorship?
When I was a kid, I heard about the Westinghouse Science Talent Search, a national science fair competition that sparked the imaginations of many, many young, would-be scientists and engineers for decades. I didn't participate in it, but it definitely was inspiring. As an undergrad, I was fortunate enough to work a couple of summers for Westinghouse's R&D lab, their Science Technology Center outside of Pittsburgh, learning a lot about what engineers and applied physicists actually do. When I was in grad school, Westinghouse as a major technology corporation basically ceased to exist, and Intel out-bid rival companies for the privilege of supporting and giving their name to the STS. Now, Intel has decided to drop its sponsorship, for reasons that are completely opaque. "Intel's interests have changed," says the chair of the administrative board that runs the contest.
While it seems likely that some other corporate sponsor will step forward, I have to ask two questions. First, why did Intel decide to get out of this? Seriously, the cost to them has to be completely negligible. Is there some compelling business reason to drop this, under the assumption that someone else will take up the mantle? It's a free country, and of course they can do what they like with their name and sponsorship, but this just seems bizarre. Was this viewed as a burden? Was there a sense that they didn't get enough effective advertising or business return for their investment? Did it really consume far more resources than they were comfortable allocating?
Second, why should a company sponsor this? I ask this as it seems likely that the companies with the biggest capital available to act as sponsors will be corporations like Google, Microsoft, Amazon - companies that don't, as their core mission, actually do physical sciences and engineering research. Wouldn't it be better to establish a philanthropic entity to run this competition - someone who would not have to worry about business pressures in terms of the financing? There are a number of excellent, well-endowed foundations who seem to have missions that align well with the STS. There's the Gordon and Betty Moore Foundation, the David and Lucille Packard Foundation, the Alfred P. Sloan Foundation, the W. M. Keck Foundation, the Dreyfus Foundation, the John D. and Catherine T. MacArthur Foundation, and I'm sure I'm leaving out some possibilities. I hope someone out there gives serious consideration to endowing the STS, rather than going with another corporate sponsorship deal that may not stand the test of time.
Update: From the Wired article about this, the STS cost Intel about $6M/yr. Crudely, that means that an endowment of $120M would be enough to support this activity in perpetuity, assuming 5% payout (typical university investment assumptions, routinely beaten by Harvard and others).
Update 2: I've thought about this some more, and maybe the best solution would be for a university to sponsor this. For example, this seems tailor-made for MIT, which styles itself as a huge hub of innovation (see the Technology Review, e.g.). Stanford could do it. Harvard could do $6M a year and not even notice. It would be perfect as a large-scale outreach/high school education sponsorship effort. Comments?
Science and narrative tension
Recently I've come across some good examples of multidisciplinary science communication. The point of commonality: narrative tension, in the sense that the science is woven in as part of telling a story. The viewer/reader really wants to know how the story resolves, and either is willing to deal with the science to get there, or (more impressive, from the communication standpoint) actually wants to see the science behind the plot resolution.
Possibly the best example of the latter is The Martian, by Andy Weir. If you haven't read it, you should. There is going to be a big budget film coming out based on it, and while the preview looks very good, the book is going to be better. Here is an interview with Andy Weir by Adam Savage, and it makes the point that people can actually like math and science as part of the plot.
Another recent example, more documentary-style, is The Mystery of Matter: Search for the Elements, which aired this past month on PBS in the US. The three episodes are here, here, and here. This contains much of the same information as a Nova episode, Hunting the Elements. It's interesting to contrast the two - some people certainly like the latter's fun, jaunty approach (wherein the host plays the every-person proxy for the audience, saying "Gee whiz!" and asking questions of scientists), while the former has some compelling historical reenactments. I like the story-telling approach a bit more, but that may be because I'm a sucker for history. Note: Nice job by David Muller in Hunting the Elements, using Cornell's fancy TEM to look at the atoms in bronze.
I also heard a good story on NPR this week about Ainissa Ramirez, a materials scientist who has reoriented her career path into "science evangelist". Her work and passion are very impressive, and she is also a proponent of story-telling as a way to hold interest. We overlapped almost perfectly in time at Stanford during our doctorates, and I wish we'd met.
Now to think about the equivalent of The Martian, but where the audience longs to learn more about condensed matter physics and nanoscale science to see how the hero survives.... (kidding. Mostly.)
Nano and the oil industry
I went to an interesting lunchtime talk today by Sergio Kapusta, former chief scientist of Shell. He gave a nice overview of the oil/gas industry and where nanoscience and nanotechnology fit in. Clearly one of the main issues of interest is assessing (and eventually recovering) oil and gas trapped in porous rock, where the hydrocarbons can be trapped due to capillarity and the connectivity of the pores and cracks may be unknown. Nanoparticles can be made with various chemical functionalizations (for example, dangling ligands known to be cleaved if the particle temperature exceeds some threshold) and then injected into a well; the particles can then be sought at another nearby well. The particles act as "reporters". The physics and chemistry of getting hydrocarbons out of these environments is all about the solid/liquid interface at the nanoscale. More active sensor technologies for the aggressive, nasty down-hole environment are always of interest, too.
When asked about R&D spending in the oil industry, he pointed out something rather interesting: R&D is actually cheap compared to the huge capital investments made by the major companies. That means that it's relatively stable even in boom/bust cycles because it's only a minor perturbation on the flow of capital.
Interesting numbers: Total capital in hardware in the field for the petrochemical industry is on the order of $2T, built up over several decades. Typical oil consumption worldwide is around 90M barrels equivalent per day (!). If the supply ranges from 87-93M barrels per day, the price swings from $120 to $40/barrel, respectively. Pretty wild.
PAFway: pairwise associations between functional annotations in biological networks and pathways
Mahjoub, M. & Ezer, D., 6 Dec 2019, (Submitted).
Research output: Working paper
Situation Coverage Testing for a Simulated Autonomous Car -- an Initial Case Study
Hawkins, H. R. & Alexander, R., 15 Nov 2019, arXiv.
Applying the three core concepts of economic evaluation in health to education in the UK
Hinde, S., Walker, S. M. & Lortie-Forgues, H., 12 Nov 2019, York, UK: Centre for Health Economics, University of York, 16 p. (CHE Research Paper; no. 170).
Research output: Working paper › Discussion paper
The economic and energy impacts of a UK export shock: comparing alternative modelling approaches
Allan, G., Barrett, J., Brockway, P., Sakai, M., Hardt, L., McGregor, P. G., Ross, A. G., Roy, G., Swales, J. K. & Turner, K., 4 Sep 2019, UK: UKERC, 89 p.
Drivers of Health Care Expenditure
Mason, A., Rodriguez Santana, I. D. L. N., Aragon Aragon, M. J. M., Rice, N., Chalkley, M. J., Wittenberg, R. & Fernandez, J-L., Aug 2019, York, UK: Centre for Health Economics, University of York, p. 1-56, 57 p. (CHE Research Paper; no. 169).
The causal effect of hospital volume on health gains from hip replacement surgery
Rachet Jacquet, L., Gutacker, N. & Siciliani, L., Aug 2019, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 168).
Is an ounce of prevention worth a pound of cure? Estimates of the impact of English public health grant on mortality and morbidity
Martin, S., Lomas, J. R. S. & Grant, U. O. Y., Jul 2019, York, UK: Centre for Health Economics, University of York, 35 p. (CHE Research Paper; no. 166).
The effect of government contracting with faith-based health care providers in Malawi
Tafesse, W., Manthalu, G. & Chalkley, M. J., Jul 2019, York, UK: Centre for Health Economics, University of York, 54 p. (CHE Research Paper; no. 167).
Fitness differences suppress the number of mating types in evolving isogamous species
Krumbeck, Y., Constable, G. W. A. & Rogers, T., 17 Jun 2019, arXiv.
Economic Analysis for Health Benefits Package Design
Love-Koh, J., Walker, S. M., Kataika, E., Sibandze, S., Arnold, M., Ochalek, J. M., Griffin, S., Revill, P. & Sculpher, M. J., 11 Jun 2019, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 165).
Unifying Semantic Foundations for Automated Verification Tools in Isabelle/UTP
Foster, S., Baxter, J., Cavalcanti, A., Woodcock, J. & Zeyda, F., 14 May 2019, (arXiv).
The impact of primary care incentive schemes on care home placements for people with dementia
Kasteridis, P., Liu, D., Mason, A., Goddard, M. K., Jacobs, R., Wittenberg, R. & Howdon, D. D. H., May 2019, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 164).
Plasma Wakefield Accelerator Research 2019 - 2040: A community-driven UK roadmap compiled by the Plasma Wakefield Accelerator Steering Committee (PWASC)
Hidding, B., Hooker, S., Jamison, S., Muratori, B., Murphy, C., Najmudin, Z., Pattathil, R., Sarri, G., Streeter, M., Welsch, C., Wing, M. & Xia, G., 19 Apr 2019, 38 p.
Productivity of the English National Health Service: 2016/17 update
Castelli, A., Chalkley, M. J., Gaughan, J. M., Pace, M. L. & Rodriguez Santana, I. D. L. N., Apr 2019, York, UK: Centre for Health Economics, University of York, 76 p. (CHE Research Paper; no. 163).
Effects of market structure and patient choice on hospital quality for planned patients
Moscelli, G., Gravelle, H. S. E. & Siciliani, L., Mar 2019, York, UK: Centre for Health Economics, University of York, 56 p. (CHE Research paper; no. 162).
Assessing health opportunity costs for the Indian health care systems
Ochalek, J. M., Asaria, M., Chuar, P. F., Lomas, J. R. S., Mazumdar, S. & Claxton, K. P., Feb 2019, UK: Centre for Health Economics, University of York, p. 1-29, 29 p. (CHE Research Paper; no. 161).
Incorporating concerns for equity into health resource allocation: A guide for practitioners
Love-Koh, J., Griffin, S., Kataika, E., Revill, P., Sibandze, S. & Walker, S. M., 21 Jan 2019, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 160).
Formal Methods in Dependable Systems Engineering: A Survey of Professionals from Europe and North America
Gleirscher, M. & Marmsoler, D., 2019, (Submitted) (arXiv).
Optimal investment and contingent claim valuation with exponential disutility under proportional transaction costs
Roux, A., 2019, (Unpublished).
System Safety Practice: An Interrogation of Practitioners about Their Activities, Challenges, and Views with a Focus on the European Region
Gleirscher, M. & Nyokabi, A., 2019, (Submitted) (arXiv).
Recommendations for the development of a health sector resource allocation formula in Malawi
McGuire, F., Revill, P., Twea, P., Mohan, S., Manthalu, G. & Smith, P. C., Dec 2018, York, UK: Centre for Health Economics, University of York, 40 p. (CHE Research Paper; no. 159).
A Substrate-Independent Framework to Characterise Reservoir Computers
Dale, M., Miller, J. F., Stepney, S. & Trefzer, M. A., 16 Oct 2018, arXiv, 19 p.
Estimating the marginal productivity of the English National Health Service from 2003/04 to 2012/13
Lomas, J. R. S., Martin, S. & Claxton, K. P., Oct 2018, York, UK: Centre for Health Economics, University of York, 127 p. (CHE Research Paper; no. 158).
Quasitriangular coideal subalgebras of $U_q(\mathfrak{g})$ in terms of generalized Satake diagrams
Regelskis, V. & Vlaar, B., 6 Jul 2018.
The determinants of health care expenditure growth
Rice, N. & Aragon Aragon, M. J. M., Jul 2018, York, UK: Centre for Health Economics, University of York, 48 p. (CHE Research Paper; no. 156).
Who benefits from universal child care? Estimating marginal returns to early child care attendance
Cornelissen, T., Dustmann, C., Raute, A. & Schönberg, U., 1 Jun 2018, CReaM (Centre for Research & Analysis of Migration).
Cost, context and decisions in Health Economics and cost-effectiveness analysis
Culyer, A. J., Jun 2018, York, UK: Centre for Health Economics, University of York, p. 1-17, 17 p. (CHE Research Paper; no. 154).
Setting research priorities in Global Health: appraising the value of evidence generation activities to support decision-making in health care
Woods, B. S., Rothery, C., Revill, P., Hallett, T. & Phillips, A., Jun 2018, York: Centre for Health Economics, University of York, p. 1-54, 55 p. (CHE Research Paper; no. 155).
An Evaluation of Classification and Outlier Detection Algorithms
Hodge, V. J. & Austin, J., 2 May 2018, 4 p.
The genome of Ectocarpus subulatus highlights unique mechanisms for stress tolerance in brown algae
Dittami et al., 25 Apr 2018.
Stateful-Failure Reactive Designs in Isabelle/UTP
Foster, S. D., Baxter, J. E., Cavalcanti, A. L. C. & Woodcock, JAMES. C. P., 17 Apr 2018, (Unpublished).
Vertex Representations for Yangians of Kac-Moody algebras
Guay, N., Regelskis, V. & Wendlandt, C., 11 Apr 2018.
Generalised Reactive Processes in Isabelle/UTP
Foster, S. D. & Canham, S. J., 6 Apr 2018, (Unpublished) 56 p.
Reactive Designs in Isabelle/UTP
Foster, S. D., Baxter, J. E., Cavalcanti, A. L. C., Woodcock, JAMES. C. P. & Canham, S. J., 6 Apr 2018, (Unpublished) 108 p.
Theory of Designs in Isabelle/UTP
Foster, S. D., Nemouchi, Y. & Zeyda, F., 6 Apr 2018, (Unpublished) 47 p.
Kleene Algebra in Unifying Theories of Programming
Foster, S. D., 5 Apr 2018, (Unpublished) 6 p.
Isabelle/UTP: Mechanised Theory Engineering for the UTP
Foster, S. D., Zeyda, F., Nemouchi, Y., De Oliveira Salazar Ribeiro, P. F. & Wolff, B., 4 Apr 2018, (Unpublished) 162 p.
Accounting for the quality of NHS output
Bojke, C., Castelli, A., Grasic, K., Mason, A. & Street, A. D., Apr 2018, York, UK: Centre for Health Economics, University of York, p. 1-62, 62 p. (CHE Research Paper; no. 153).
Compositional Assume-Guarantee Reasoning of Control Law Diagrams using UTP
Ye, K., Foster, S. D. & Woodcock, JAMES. C. P., Apr 2018, (Unpublished) 202 p.
Castelli, A., Chalkley, M. J. & Rodriguez Santana, I. D. L. N., Apr 2018, York, UK: Centre for Health Economics, University of York, p. 1-78, 78 p. (CHE Research Paper; no. 152).
Bounded orbits of Diagonalizable Flows on finite volume quotients of products of SL2(R)
An, J., Ghosh, A., Guan, L. & Ly, T., 20 Mar 2018.
An optimal bound for the ratio between ordinary and uniform exponents of Diophantine approximation
Marnat, A. & Moshchevitin, N., 8 Feb 2018.
Smoking Inequality across Genders and Socioeconomic Classes. Evidence from Longitudinal Italian Data
Di Novi, C., Jacobs, R. & Migheli, M., 1 Feb 2018, Pavia, Italy: Università di Pavia, 24 p. ( DEM Working Paper Series; vol. 02-18, no. 152).
Spatial Competition and Quality: Evidence from the English Family Doctor Market
Gravelle, H. S. E., Liu, D., Propper, C. & Santos, R., Feb 2018, York, UK: Centre for Health Economics, University of York, 38 p. (CHE Research Paper; no. 151).
Poverty and Wellbeing Impacts of Microfinance: What Do We Know?
Maitrot, M. R. L. & Niño-Zarazúa, M., 29 Nov 2017, 39 p.
Win Prediction in Esports: Mixed-Rank Match Prediction in Multi-player Online Battle Arena Games
Hodge, V. J., Devlin, S. M., Sephton, N. J., Block, F. O., Drachen, A. & Cowling, P. I., 17 Nov 2017, 1711.06498 ed., arXiv, 7 p.
Does hospital competition improve efficiency? The effect of the patient choice reform in England
Longo, F., Siciliani, L., Moscelli, G. & Gravelle, H. S. E., Nov 2017, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 149).
Pricing implications of non-marginal budgetary impacts in health technology assessment: a conceptual model
Howdon, D. D. H. & Lomas, J. R. S., Nov 2017, York, UK: Centre for Health Economics, University of York, p. 1-17, 17 p. (CHE Research Paper; no. 148).
Scoping review on social care economic evaluation methods
Weatherly, H. L. A., Neves De Faria, R. I., van den Berg, B., Sculpher, M. J., O'Neill, P., Nolan, K., Glanville, J., Isojarvi, J., Baragula, E. & Edwards, M., Nov 2017, York UK: Centre for Health Economics, p. 1-53, 53 p. (CHE Research Paper; no. 150).
Nested algebraic Bethe ansatz for open spin chains with even twisted Yangian symmetry
Gerrard, A., MacKay, N. & Regelskis, V., 23 Oct 2017, 29 p.
Reference Pulse Attack on Continuous-Variable Quantum Key Distribution with Local Local Oscillator
Ren, S., Kumar, R., Wonfor, A., Tang, X., Penty, R. & White, I., 29 Sep 2017.
Modelling the Dynamics of a Public Health Care System: Evidence from Time-Series Data
Iacone, F., Martin, S., Siciliani, L. & Smith, P. C., Sep 2017, York, UK: Centre for Health Economics, University of York, 31 p. (CHE Research Paper; no. 29).
Towards atomically precise manipulation of 2D nanostructures in the electron microscope
Susi, T., Kepaptsoglou, D., Lin, Y-C., Ramasse, Q. M., Meyer, J. C., Suenaga, K. & Kotakoski, J., 29 Aug 2017, (arXiv).
Health care costs in the English NHS: reference tables for average annual NHS spend by age, sex and deprivation group
Asaria, M., Apr 2017, York, UK: Centre for Health Economics, University of York, p. 1-25, 25 p. (CHE Research Paper; no. 147).
Productivity of the English NHS: 2014/15 Update
Bojke, C., Castelli, A., Grasic, K., Howdon, D. D. H., Street, A. D. & Rodriguez Santana, I. D. L. N., Apr 2017, York, UK: Centre for Health Economics, University of York, p. 1-81, 81 p. (CHE Research Paper; no. 146).
Public financial management and health service delivery: A literature review
Goryakin, Y., Revill, P., Mirelman, A., Sweeney, R., Ochalek, J. M. & Suhrcke, M. E., Apr 2017, London, 43 p. (Overseas Development Institute Report).
The Hausdorff and dynamical dimensions of self-affine sponges: a dimension gap result
Simmons, D. S. & Das, T., 25 Mar 2017, (Accepted/In press) 42 p.
Generalised additive mixed models for dynamic analysis in linguistics: a practical introduction
Sóskuthy, M., 15 Mar 2017.
Do hospitals respond to rivals' quality and efficiency? A spatial econometrics approach
Longo, F., Siciliani, L., Gravelle, H. S. E. & Santos, R., Mar 2017, York, UK: Centre for Health Economics, University of York, p. 1-45, 45 p. (CHE Research Paper; no. 144).
The effect of hospital ownership on quality of care: evidence from England.
Moscelli, G., Gravelle, H. S. E., Siciliani, L. & Gutacker, N., Mar 2017, Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 145).
First degree cohomology of Specht modules and extensions of symmetric powers
Donkin, S. & Geranios, H., 28 Feb 2017, (Submitted) 94 p.
A measure theoretic result for approximation by Delone sets
Baake, M. & Haynes, A., 16 Feb 2017, 6 p.
The economics of health inequality in the English NHS: the long view
Asaria, M., 6 Feb 2017, York, UK: Centre for Health Economics, University of York, p. 1-13, 14 p. (CHE Research Paper; no. 142).
First do no harm: The impact of financial incentives on dental x-rays
Chalkley, M. J. & Listl, S., Feb 2017, York, UK: Centre for Health Economics, University of York, p. 1-22, 22 p. (CHE Research Papers; no. 143).
Defining and measuring unmet need to guide healthcare funding: identifying and filling the gaps.
Aragon Aragon, M. J. M., Chalkley, M. J. & Goddard, M. K., Jan 2017, York: Centre for Health Economics, University of York, p. 1-46, 46 p. (CHE Research Paper; no. 141).
Into their land and labours : a comparative and global analysis of trajectories of peasant transformation
Vanhaute, E. & Cottyn, H. D. G. J., 2017, p. 1, 21 p. (ICAS Review Paper Series; no. 8).
Sums of reciprocals and the three distance theorem
Beresnevich, V. & Leong, N. E. C., 2017, 15 p.
Paying for performance for health care in low- and middle-income countries: an economic perspective
Chalkley, M. J., Mirelman, A., Siciliani, L. & Suhrcke, M. E., Dec 2016, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 140).
Market structure, patient choice and hospital quality for elective patients
Moscelli, G., Gravelle, H. S. E. & Siciliani, L., Nov 2016, York, UK: Centre for Health Economics, University of York, p. 1-33, 33 p. (CHE Research Paper; no. 139).
Which local authorities are most unequal?
Bradshaw, J. R. & Bloor, K. E., 26 Oct 2016, 4 p.
A randomized version of the Littlewood Conjecture
Haynes, A. & Koivusalo, H., 25 Oct 2016.
Funding of mental health services: Do available data support episodic payment?
Jacobs, R., Chalkley, M. J., Aragon Aragon, M. J. M., Boehnke, J. R., Clark, M., Moran, V. & Gilbody, S. M., Oct 2016, York, UK: Centre for Health Economics, p. 1-82, 82 p. (CHE Research Paper; no. 137).
Hospital productivity growth in the English NHS 2008/09 to 2013/14
Aragon Aragon, M. J. M., Castelli, A., Chalkley, M. J. & Gaughan, J. M., Oct 2016, Centre for Health Economics, University of York, p. 1-44, 44 p. (CHE Research Paper; no. 138).
Using MinION nanopore sequencing to generate a de novo eukaryotic draft genome: preliminary physiological and genomic description of the extremophilic red alga Galdieria sulphuraria strain SAG 107.79
Davis, S. J., 20 Sep 2016, p. 1-17, 17 p.
Fairer decisions, better health for all: Health equity and cost-effectiveness analysis
Cookson, R. A., Mirelman, A., Asaria, M., Dawkins, B. & Griffin, S., Sep 2016, Centre for Health Economics, University of York, p. 1-43, 43 p. (CHE Research Paper; no. 135).
Supporting the development of an essential health package: principles and initial assessment for Malawi
Ochalek, J. M., Claxton, K. P., Revill, P., Sculpher, M. J. & Rollinger, A., Sep 2016, York, UK: Centre for Health Economics, University of York, p. 1-90, 90 p. (CHE Research Paper; no. 136).
Single-world theory of the extended Wigner's friend experiment
Sudbery, A., 20 Aug 2016.
On Spin Calogero-Moser system at infinity
Sklyanin, E., Khoroshkin, S. & Matushko, M. G., 1 Aug 2016, arXiv, 28 p.
The impact of diabetes on labour market outcomes in Mexico: a panel data and biomarker analysis.
Seuring, T., Serneels, P. & Suhrcke, M. E., Aug 2016, York, UK: Centre for Health Economics, University of York, p. 1-37, 37 p. (CHE Research Paper; no. 134).
On the Minimum of a Positive Definite Quadratic Form over Non-Zero Lattice Points. Theory and Applications
Adiceam, F. & Zorin, E., 15 Jul 2016, 46 p.
Delayed discharges and hospital type: evidence from the English NHS
Gaughan, J. M., Gravelle, H. S. E. & Siciliani, L., Jul 2016, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 133).
Years of good life based on income and health: re-engineering cost-benefit analysis to examine policy impacts on wellbeing and distributive justice
Cookson, R. A., Cotton-Barrett, O., Adler, M., Asaria, M. & Ord, T., Jul 2016, York UK: Centre for Health Economics, University of York, p. 1-25, 25 p. (CHE Research Paper; no. 132).
The Ecology of Fringe Science and its Bearing on Policy
Collins, HM., Bartlett, A. & Reyes-Galindo, LI., 18 Jun 2016.
Parents' Experiences of Administering Distressing Nursing and Healthcare Procedures as part of Supporting Children with Complex or Long-Term Conditions at Home: Final Report
Spiers, G. F., Beresford, B. A. & Clarke, S. E., Jun 2016, York: Social Policy Research Unit, University of York, 77 p.
Pathways to nuclear disarmament: delegitimising nuclear violence
Ritchie, N. E., 11 May 2016, (Unpublished) 14 p.
Optimal hospital payment rules under rationing by random waiting
Gravelle, H. S. E. & Schroyen, F., May 2016, York, UK: Centre for Health Economics, University of York, p. 1-56, 56 p. (CHE Research Paper; no. 130).
The Family Peer Effect on Mothers' Labour Supply
Tominey, E., Nicoletti, C. & Salvanes, K., 20 Apr 2016.
Assessing the impact of health care expenditures on mortality using cross-country data
Nakamura, R., Lomas, J. R. S., Claxton, K. P., Bokhari, F., Moreno Serra, R. A. & Suhrcke, M. E., Apr 2016, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 128).
Socioeconomic inequalities in health care in England
Cookson, R. A., Propper, C., Asaria, M. & Raine, R., Apr 2016, York, UK: Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 129).
Pharmaceutical Pricing: Early Access, The Cancer Drugs Fund and the Role of NICE
Claxton, K. P., Mar 2016, Centre for Health Economics, University of York, 4 p. (Policy & Research Briefing).
Sibling spillover effects in school achievement
Nicoletti, C. & Rabe, B., 16 Jan 2016, (Discussion Papers).
Eliciting the level of health inequality aversion in England
Robson, M., Asaria, M., Tsuchiya, A., Ali, S. & Cookson, R. A., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-19, 19 p. (CHE Research Paper; no. 125).
Health equity indicators for the English NHS
Cookson, R. A., Asaria, M., Ali, S., Ferguson, B., Fleetcroft, R., Goddard, M. K., Goldblatt, P., Laudicella, M. & Raine, R., Jan 2016, York: Centre for Health Economics, University of York, 265 p. (CHE Research Paper; no. 124).
Location, quality and choice of hospital: Evidence from England 2002/3-2012/13
Moscelli, G., Siciliani, L., Gutacker, N. & Gravelle, H. S. E., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-31, 31 p. (CHE Research Paper; no. 123).
Bojke, C., Castelli, A., Grasic, K., Howdon, D. & Street, A. D., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-69, 69 p. (CHE Research Paper; no. 126).
Making a drama out of learning
McCotter, S., 2016, York: The University of York, p. 20-21, (The Forum Magazine; no. 40).
Nonlinear Stochastic Partial Differential Equations of hyperbolic type driven by Lévy-type noises
Brzezniak, Z., 2016, (Unpublished) 30 p.
Cost per DALY averted thresholds for low- and middle-income countries: evidence from cross country data
Ochalek, J. M., Lomas, J. & Claxton, K. P., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-50, 50 p. (CHE Research Paper; no. 122).
Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use
Culyer, T., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-22, 22 p. (CHE Research Paper; no. 121).
Efficiency, equity and equality in health and health care
The socioeconomic and demographic characteristics of United Kingdom junior doctors in training across specialities.
Rodríguez Santana, I. & Chalkley, M. J., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-15, 15 p. (CHE Research Paper; no. 119).
Choosing and booking – and attending? Impact of an electronic booking system on outpatient referrals and non-attendances
Dusheiko, M. A. & Gravelle, H. S. E., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-51, 51 p. (CHE Research Paper; no. 116).
Hospital trusts productivity in the English NHS: uncovering possible drivers of productivity variations
Aragon Aragon, M. J. M., Castelli, A. & Gaughan, J. M., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 117).
How much should be paid for Prescribed Specialised Services?
Bojke, C., Grasic, K. & Street, A. D., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-56, 56 p. (CHE Research Paper; no. 118).
'An Unduly Moralistic Approach to Disputes?' Monetary Remedies After Coventry v Lawrence
Steele, J., 15 Sep 2015, (Unpublished).
Comparing predictive accuracy in small samples
Coroneo, L. & Iacone, F., Sep 2015, Department of Economics and Related Studies, University of York, 26 p. (Discussion Paper in Economics; vol. 15, no. 15).
Multidimensional performance assessment using dominance criteria
Gutacker, N. & Street, A. D., Sep 2015, York, UK: Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 115).
Waiting time prioritisation: evidence from England
Cookson, R. A., Gutacker, N. & Siciliani, L., Sep 2015, York, UK: Centre for Health Economics, University of York, p. 1-26, 26 p. (CHE Research Paper; no. 114).
Extremality and dynamically defined measures, part II: Measures from conformal dynamical systems
Das, T., Fishman, L., Simmons, D. & Urbański, M., 23 Aug 2015, (Submitted) 28 p.
The impact of primary care quality on inpatient length of stay for people with dementia: An analysis by discharge destination
Kasteridis, P., Goddard, M. K., Jacobs, R., Santos, R. & Mason, A., Jul 2015, York, UK: Centre for Health Economics, University of York, p. 1-41, 41 p. (CHE Research Paper; no. 113).
Socioeconomic inequality of access to healthcare: Does patients' choice explain the gradient? Evidence from the English NHS
Moscelli, G., Siciliani, L., Gutacker, N. & Cookson, R. A., Jun 2015, York, UK: Centre for Health Economics, University of York, p. 1-43, 43 p. (CHE Research Paper; no. 112).
Do patients choose hospitals that improve their health?
Gutacker, N., Siciliani, L., Moscelli, G. & Gravelle, H. S. E., May 2015, York, UK: Centre for Health Economics, University of York, p. 1-37, 37 p. (CHE Research Paper; no. 111).
Country-level cost-effectiveness thresholds: initial estimates and the need for further research.
Woods, B., Revill, P., Sculpher, M. & Claxton, K. P., Mar 2015, York, UK: Centre for Health Economics, University of York, p. 1-24, 24 p. (CHE Research Paper; no. 109).
Bojke, C., Castelli, A., Grasic, K. & Street, A. D., Mar 2015, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 110).
The UK Naval Nuclear Propulsion Programme and Highly Enriched Uranium
Ritchie, N., Mar 2015, Washington, D.C.: Federation of American Scientists, 26 p.
Hausdorff dimensions of very well intrinsically approximable subsets of quadratic hypersurfaces
Fishman, L., Merrill, K. & Simmons, D., 26 Feb 2015, (Unpublished) 9 p.
Rational curves on smooth cubic hypersurfaces over finite fields
Browning, T. & Vishe, P., 17 Feb 2015, (Unpublished) 12 p.
Cost analysis of the legal declaratory relief requirement for withdrawing Clinically Assisted Nutrition and Hydration (CANH) from patients in the Permanent Vegetative State (PVS) in England and Wales
Formby, A. P., Cookson, R. A. & Halliday, S., Feb 2015, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper; no. 108).
Health care expenditures, age, proximity to death and morbidity: implications for an ageing population
Howdon, D. D. H. & Rice, N., 28 Jan 2015, York, UK: Centre for Health Economics, University of York, p. 1-42, 42 p. (CHE Research Paper; no. 107).
Patient choice and the effects of hospital market structure on mortality for AMI, hip fracture and stroke patients
Gravelle, H., Moscelli, G., Santos, R. & Siciliani, L., Dec 2014, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 106).
Some questions about devolving "welfare"
Bradshaw, J. R., 1 Oct 2014, 4 p.
The impact of hospital financing on the quality of inpatient care in England
Martin, S., Street, A. D., Han, L. & Hutton, J., Oct 2014, York, UK: Centre for Health Economics, University of York, p. 1-68, 68 p. (CHE Research Paper; no. 105).
Understanding the differences in in-hospital mortality between Scotland and England
Aragon Aragon, M. J. M. & Chalkley, M. J., Oct 2014, York, UK: Centre for Health Economics, University of York, p. 1-20, 20 p. (CHE Research Paper; no. 104).
The costs of specialised care
Bojke, C., Grasic, K. & Street, A. D., Sep 2014, Centre for Health Economics, University of York, p. 1-54, 54 p. (CHE Research Papers; no. 103).
Testing the bed-blocking hypothesis: does higher supply of nursing and care homes reduce delayed hospital discharges?
Gaughan, J. M., Gravelle, H. S. E. & Siciliani, L., Aug 2014, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 102).
How to share it out: the value of information in teams
Gershkov, A., Li, J. & Schweinzer, P., 9 Jul 2014, York: Department of Economics and Related Studies, University of York, 43 p. (DERS Discussion Papers in Economics; vol. 14/08).
Concentric Symmetry
Silva, F. N., Comin, C. H., Peron, T. K. D., Rodrigues, F. A., Ye, C., Wilson, R. C., Hancock, E. & Costa, L. F., 1 Jul 2014.
Addressing missing data in patient-reported outcome measures (PROMs): implications for comparing provider performance
Gomes, M., Gutacker, N., Bojke, C. & Street, A. D., Jul 2014, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 101).
The impact of diabetes on employment in Mexico
Seuring, T., Goryakin, Y. & Suhrcke, M. E., Jul 2014, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 100).
Financial mechanisms for integrating funds for health and social care: an evidence review
Mason, A., Goddard, M. K. & Weatherly, H. L. A., 20 Mar 2014, York: University of York, Centre for Health Economics, 77 p. (CHE Research Paper; no. 97).
Network meta-analysis of (individual patient) time to event data alongside (aggregate) count data
Saramago Goncalves, P. R., Chuang, L-H. & Soares, M. F. O., Jan 2014, York, UK: Centre for Health Economics, University of York, 26 p. (CHE Research Paper; no. 95).
Productivity of the English National Health Service from 2004/5: updated to 2011/12
Bojke, C., Castelli, A., Grasic, K. & Street, A. D., Jan 2014, York, UK: Centre for Health Economics, University of York, p. 1-40, 40 p. (CHE Research Paper; no. 94).
Public Values for Energy Futures: Framing, Indeterminacy and Policy Making
Butler, C., Demski, C., Parkhill, K. A., Pidgeon, N. & Spence, A., 2014, UKERC.
The importance of multimorbidity in explaining utilisation and costs across health and social care settings: evidence from South Somerset's Symphony Project
Kasteridis, P., Street, A. D., Dolman, M., Gallier, L., Hudson, K., Martin, J. & Wyer, I., 2014, York, UK: Centre for Health Economics, University of York, 60 p. (CHE Research Paper; no. 96).
Unspanned macroeconomic factors in the yield curve
Coroneo, L., Giannone, D. & Modugno, M., 2014, Brussels: Federal Reserve Board, Washington, D.C., 35 p. (Finance and Economics Discussion Series ; vol. 2014, no. 57).
Using cost-effectiveness thresholds to determine value for money in low-and middle-income country healthcare systems: Are current international norms fit for purpose?
Revill, P., Walker, S. M., Madan, J., Ciaranello, A., Mwase, T., Gibb, D. M., Claxton, K. P. & Sculpher, M. J., 2014, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper ; no. 98).
WHO decides what is fair? International HIV treatment guidelines, social value judgements and equitable provision of lifesaving antiretroviral therapy
Revill, P., Asaria, M., Phillips, A., Gibb, D. M. & Gilks, C., 2014, York UK: Centre for Health Economics, University of York, 17 p. (CHE Research Paper; no. 99).
Distributional cost-effectiveness analysis of health care programmes
Asaria, M., Griffin, S., Cookson, R. A., Whyte, S. & Tappenden, P., Nov 2013, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 91).
Distributional cost-effectiveness analysis: a tutorial
Asaria, M., Griffin, S. & Cookson, R. A., Nov 2013, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 92).
Methods for the estimation of the NICE cost effectiveness threshold
Claxton, K. P., Martin, S., Soares, M. O., Rice, N., Spackman, E., Hinde, S., Devlin, N., Smith, P. C. & Sculpher, M., Nov 2013, York, UK: Centre for Health Economics, University of York, 436 p. (CHE Research Paper; no. 81).
The influence of cost-effectiveness and other factors on NICE decisions
Dakin, H., Devlin, N., Feng, Y., O'Neill, P., Rice, N. & Parkin, D., Nov 2013, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 93).
Who becomes the winner? Effects of venture capital on firms' innovative incentives
Beacham, M. I. & Datta, B., Oct 2013.
Attributing a monetary value to patients' time: A contingent valuation approach
Van Den Berg, B., Gafni, A. & Portrait, F., Sep 2013, York, UK: Centre for Health Economics, University of York, 44 p. (CHE Research Paper; no. 90).
Competition, prices, and quality in the market for physician consultations
Gravelle, H. S. E., Scott, A., Sivey, P. & Yong, J., Jul 2013, York, UK: Centre for Health Economics, University of York, 34 p. (CHE Research Paper; no. 89).
Does quality affect patients' choice of doctor? Evidence from the UK
Santos, R., Gravelle, H. S. E. & Propper, C., Jul 2013, York, UK: Centre for Health Economics, University of York, 49 p. (CHE Research Paper; no. 88).
NHS productivity from 2004/5 to 2010/11
Bojke, C., Castelli, A., Grasic, K., Street, A. D. & Ward, P., Jul 2013, York, UK: Centre for Health Economics, University of York, p. 1-38, 38 p. (CHE Research Paper; no. 87).
To block or not to block: Network Competition when Skype enters the mobile market
Datta, B. & LO, Y-S., Jul 2013.
Choice of contracts for quality in health care: Evidence from the British NHS
Fichera, E., Gravelle, H. S. E., Pezzino, M. & Sutton, M., Jun 2013, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 85).
Electromagnetic transition from the 4$^+$ to 2$^+$ resonance in $^8$Be measured via the radiative capture in $^4$He+$^4$He
Datar, V. M., Chakrabarty, D. R., Kumar, S., Nanal, V., Pastore, S., Wiringa, R. B., Behera, S. P., Chatterjee, A., Jenkins, D., Lister, C. J., Mirgule, E. T., Mitra, A., Pillay, R. G., Ramachandran, K., Roberts, O. J., Rout, P. C., Shrivastava, A. & Sugathan, P., 6 May 2013, Arxiv (Cornell University).
Long term care provision, hospital length of stay and discharge destination for hip fracture and stroke patients: ESCHRU Report to Department of Health, March 2013
Gaughan, J., Gravelle, H., Santos, R. & Siciliani, L., 1 May 2013, York, UK: Centre for Health Economics, p. 1-44, 44 p. (CHE Research Paper; no. 86).
The quality of life of female informal caregivers: from Scandinavia to the Mediterranean Sea
Di Novi, C., Jacobs, R. & Migheli, M., May 2013, York, UK: Centre for Health Economics, University of York, 29 p. (CHE Research Paper; no. 84).
Co-evolution of networks and quantum dynamics: a generalization of the Barabási-Albert model of preferential attachment
Hancock, E., Konno, N., Latora, V., Machida, T., Nicosia, V., Severini, S. & Wilson, R., 4 Feb 2013, Arxiv (Cornell University), 10 p.
Does a hospital's quality depend on the quality of other Hospitals? A spatial econometrics approach to investigating hospital quality competition
Gravelle, H. S. E., Santos, R. & Siciliani, L., Jan 2013, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 82).
Testing for optimal monetary policy via moment inequalities
Coroneo, L., Corradi, V. & Santos Monteiro, P., 2013, York: Department of Economics and Related Studies, University of York, 39 p. (Discussion Paper in Economics; no. 13/07).
Transforming the UK Energy System: Public Values, Attitudes and Acceptability - Synthesis Report
Parkhill, K. A., Demski, C., Butler, C., Spence, A. & Pidgeon, N., 2013, UKERC.
Transforming the UK Energy System: Public Values, Attitudes and Acceptability - Deliberating Energy System Transitions in the UK
Butler, C., Parkhill, K. A. & Pidgeon, N., 2013, UKERC, 72 p.
Hospital quality competition under fixed prices
Gravelle, H., Santos, R., Siciliani, L. & Goudie, R., 1 Nov 2012, York, UK: Centre for Health Economics, University of York, 53 p. (CHE Research Paper; no. 80).
Cubic hypersurfaces and a version of the circle method for number fields
Browning, T. & Vishe, P., 10 Jul 2012, (Accepted/In press) Arxiv (Cornell University), 45 p.
Incentives in the public sector: some preliminary evidence from a government agency
Burgess, S., Propper, C., Ratto, M. & Tominey, E., 1 Jul 2012, Bonn: Institute for the Study of Labor (IZA), 40 p. (IZA Discussion Papers; vol. 6738).
English hospitals can improve their use of resources: an analysis of costs and length of stay for ten treatments
Gaughan, J. M., Mason, A., Street, A. D. & Ward, P., Jul 2012, York, UK: Centre for Health Economics, University of York, 71 p. (CHE Research paper; no. 78).
Well-Being and psychological consequences of temporary contracts: the case of younger Italian employees
Carrieri, V., Di Novi, C., Jacobs, R. & Robone, S., Jul 2012, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 79).
Exotic dense matter states pumped by relativistic laser plasma in the radiation dominant regime
Colgan, J., Abdallah, Jr., J., Faenov, A. Ya., Pikuz, S., Wagenaars, E., Booth, N., Brown, C., Culfa, O., Dance, R. J., Evans, R., Gray, R., Hoarty, D., Kaempfer, T., Lancaster, K., McKenna, P., Rossall, A. K., Skobelev, I. Yu., Schulze, K., Uschmann, I., Zhidkov, A. & Woolsey, N. C., 27 Jun 2012, Arxiv (Cornell University).
Productivity of the English National Health Service 2003/4-2009/10 - Report for the Department of Health
Bojke, C., Castelli, A., Goudie, R., Street, A. & Ward, P., Mar 2012, York, UK: Centre for Health Economics, University of York, 53 p. (CHE Research Paper; no. 76).
Twenty Years of Using Economic Evaluations for Reimbursement Decisions: What Have We Achieved?
Drummond, M. F., Feb 2012, York, UK: Centre for Health Economics, University of York, 22 p. (CHE Research Paper; no. 75).
Analysing hospital variation in health outcome at the level of EQ-5D dimensions
Gutacker, N., Bojke, C., Daidone, S., Devlin, N. & Street, A., Jan 2012, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 74).
Estimating the costs of specialised care: updated analysis using data for 2009/10. Report to the Department of Health
Daidone, S. & Street, A. D., Dec 2011, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 71).
Keep it Simple? Predicting Primary Health Care Costs with Measures of Morbidity and Multimorbidity
Brilleman, S. L., Gravelle, H. S. E., Hollinghurst, S., Purdy, S., Salisbury, C. & Windmeijer, F., Dec 2011, York, UK: Centre for Health Economics, University of York, 29 p. (CHE Research Paper; no. 72).
Modelling Individual Patient Hospital Expenditure for General Practice Budgets
Gravelle, H. S. E., Dusheiko, M. A., Martin, S., Smith, P. C., Rice, N. & Dixon, J., Dec 2011, York, UK: Centre for Health Economics, University of York, 43 p. (CHE Research Paper; no. 73).
NICE's social value judgements about equity in health and health care
Shah, K., Cookson, R. A., Culyer, A. J. & Littlejohns, P., Nov 2011, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 70).
Does hospital competition harm equity? Evidence from the English National Health Service
Cookson, R. A., Laudicella, M. & Li Donni, P., Oct 2011, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 66).
Measuring Change in Health Care Equity Using Small Area Administrative Data: Evidence from the English NHS 2001-8
Truly Inefficient or Providing Better Quality of Care? Analysing the Relationship Between Risk-Adjusted Hospital Costs and Patients' Health Outcomes
Gutacker, N., Bojke, C., Daidone, S., Devlin, N., Parkin, D. & Street, A., Oct 2011, York, UK: Centre for Health Economics, University of York, 23 p. (CHE Research Paper; no. 68).
Uncertainty, evidence and irrecoverable costs: informing approval, pricing and research decisions for health technologies
Claxton, K. P., Palmer, S. J., Longworth, L., Bojke, L., Griffin, S., McKenna, C., Soares, M. O., Spackman, E. & Youn, J., Oct 2011, York, UK: Centre for Health Economics, University of York, 81 p. (CHE Research Paper; no. 69).
Does Better Disease Management in Primary Care Reduce Hospital Costs?
Dusheiko, M. A., Gravelle, H. S. E., Martin, S., Rice, N. & Smith, P. C., Aug 2011, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 65).
Voting and the macroeconomy: separating trend from cycle
Maloney, J. & Pickering, A. C., 12 Jul 2011, York, UK: Department of Economics and Related Studies, University of York, p. 1-45, 45 p. (University of York Discussion Papers in Economics).
Do Hospitals Respond to Greater Autonomy? Evidence from the English NHS
Verzulli, R., Jacobs, R. & Goddard, M., Jul 2011, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 64).
Avoidable mortality: what it means and how it is measured
Castelli, A. & Nizalova, O. Y., Jun 2011, York, UK: Centre for Health Economics, University of York, 44 p. (CHE Research Paper; no. 63).
An equity checklist: a framework for health technology assessments
Culyer, A. J. & Bombard, Y., May 2011, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 62).
Study of Electromagnetically Induced Transparency using long-lived Singlet States
Roy, S. S. & Mahesh, T. S., 17 Mar 2011.
Interventions to Promote Improved Access to Higher Education: Exploratory Paper: Evidence resource for the Bridge Group
Rudd, P., Mar 2011, London: Bridge Group, 8 p. (Evidence Resources for the Bridge Group).
Estimating the costs of specialised care
Daidone, S. & Street, A. D., Feb 2011, York, UK: Centre for Health Economics, University of York, (CHE Research Paper; no. 61).
Growing up in social housing in the new millennium: Housing, neighbourhoods and early outcomes for children born in 2000
Tunstall, B., Lupton, R., Kneale, D. & Jenkins, A., Feb 2011, London: London School of Economics and Polictical Science, 45 p. (CASE Papers; no. 143).
'Experiment Earth?' Reflections on a public dialogue on geoengineering
Corner, A., Parkhill, K. A. & Pidgeon, N., 2011, Cardiff University, 33 p.
Building the Big Society
Tunstall, R., Lupton, R., Power, A. & Richardson, L., 2011, CASE, LSE, 48 p. (CASE Reports; no. 67).
Examining instruments aimed at promoting waste reduction and recycling in achieving sustainability of the food supply chain
Thankappan, S., 2011, York, UK.: University of York, 11 p.
Social housing and social exclusion 2000-2011
Tunstall, R., 2011, CASE, LSE, (CASE Paper; no. 153).
Value-based pricing for pharmaceuticals: its role, specification and prospects in a newly devolved NHS
Claxton, K., Sculpher, M. & Carroll, S., 2011, York,UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 60).
Cause and effect relationship between post-merger operating performance changes and workforce adjustments
Kuvandikov, A., Aug 2010, Department of Management Studies, University of York.
Causes of post-merger workforce adjustments
Labour demand and wage effects of takeovers that involve employee layoffs
Shareholders and employees: rent transfer and rent sharing in corporate takeovers
Back to Life: Leadership from a Process Perspective
Wood, M., May 2010, Department of Management Studies, University of York.
Reluctant Bedfellows or Model Marriage? Postmodern Thinking Applied to Mainstream Public Sector Health Services Research Settings
Accounting and Labour Control at Boulton and Watt, c. 1775-1810
Toms, S., Mar 2010, Department of Management Studies, University of York.
How Can SMES Become More Competitive On The Graduate Labour Market?
Muenzinger, I., Mar 2010, Department of Management Studies, University of York.
Using history to help refine international business theory: ownership advantages and the eclectic paradigm.
da Silva Lopes, T., Mar 2010, Department of Management Studies, University of York.
Appropriate perspectives for health care decisions.
Claxton, K., Palmer, S., Sculpher, M. & Walker, S., 2010, York, UK: Centre for Health Economics, University of York, 86 p. (CHE Research Paper; no. 54).
Does cost-effectiveness analysis discriminate against patients with short life expectancy? Matters of logic and matters of context
Paulden, M. & Culyer, A. J., 2010, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper; no. 55).
Efficient emissions reduction
Roussillon, B. & Schweinzer, P., 2010, 27 p. (University of Manchester Discussion Paper Series; vol. EDP-1004).
Foundation trusts: a retrospective review
Bojke, C. & Goddard, M., 2010, York, UK: Centre for Health Economics, University of York, 21 p. (CHE Research Paper; no. 58).
Hospital car parking: the impact of access costs
Mason, A., 2010, York, UK: University of York, Centre for Health Economics, 65 p. (CHE Research Paper; no. 59).
Nordic welfare financiers made global portfolio investors: institutional change in pension fund governance in Sweden and Finland
Sorsa, V-P. & Roumpakis, A., 2010, Oxford: Centre for Employment Work and Finance, University of Oxford, 88 p. (Oxford University Working Papers in Employment, Work and Finance; vol. WPG10-01 ).
PSE Measures Review Paper: Children's Deprivation Items
Bradshaw, J. & Main, G., 2010, [s.l.]: Poverty and Social Exclusion, 48 p. (Poverty and Social Exclusion in the UK: The 2011 Survey, Working Paper Series; no. 7).
Regional variation in the productivity of the English National Health Service
Bojke, C., Castelli, A., Laudicella, M., Street, A. & Ward, P., 2010, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 57).
Simulation or cohort models? Continuous time simulation and discretized Markov models to estimate cost-effectiveness
Soares, M. & Canto e Castro, L., 2010, Centre for Health Economics, (CHE Research Paper; vol. 56).
Building Cross Cultural Competencies.
Richardson, H. & Warwick, P., Nov 2009, Department of Management Studies, University of York.
Risk Disclosure and Re-establishing Legitimacy in the Event of a Crisis - Did Northern Rock Use Risk Disclosure to Repair Legitimacy after their 2007 Collapse?
Edkins, A., Nov 2009, Department of Management Studies, University of York.
An economic framework for analysing the social determinants of health and health inequalities
Epstein, D., Jimenez-Rubio, D., Smith, P. C. & Suhrcke, M., Oct 2009, York, UK: Centre for Health Economics, University of York, p. 1-68, 68 p. (CHE Research Paper; no. 52).
Budget Allocation and the Revealed Social Rate of Time Preference for Health
Paulden, M. & Claxton, K. P., Oct 2009, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 53).
Does Community and Environmental Responsibility Affect Firm Risk? Evidence from UK Panel Data 1994-2006
Toms, S., Anderson, K. & Salama, A., Aug 2009, Department of Management Studies, University of York.
Risk and value in labour and capital markets: The UK corporate economy, 1980-2005.
Toms, S. & Salama, A., Aug 2009, Department of Management Studies, University of York.
Lines of Flight: Everyday Resistance along England's Backbone
Wood, M. & Brown, S., Jul 2009, University of York, The York Management School, 29 p.
The status of planning processes in family-owned businesses: A study of transformational economy and its relationship to the financial performance of family-owned Ukrainian firms
Lehkman, O., Jul 2009, University of York, The York Management School, 53 p.
The status of planning processes in family-owned businesses: A study of transformational economy and its relationship to the financial performance of family-owned Ukrainian firms.
Lehkman, O., Jul 2009, Department of Management Studies, University of York.
Charismatic Leadership and its emergence under crisis conditions: A case study from the airline industry
Kakavogianni, D., Mar 2009, University of York, The York Management School, 71 p.
Employee Share Ownership Plans: A Review
Kaarsemaker, E., Pendleton, A. & Poutsma, E., Feb 2009, University of York, The York Management School, 30 p.
'The Wife's Administration of the Earnings'? Working-Class Women and Savings in the Mid-Nineteenth Century
Maltby, J., Feb 2009, University of York, 30 p.
A Political Theory of Decentralization Dynamics.
Jurado, I., 2009, Madrid, (Juan March Institute Working Paper Series).
Exploring the impact of public services on quality of life indicators
Castelli, A., Jacobs, R., Goddard, M. & Smith, P. C., 2009, York UK: Centre for Health Economics, University of York, 148 p. (CHE Research Paper; no. 46).
Investigating Patient Outcome Measures in Mental Health
Jacobs, R., 2009, York, UK: Centre for Health Economics, University of York, 94 p. (CHE Research Paper; no. 48).
MRC-NICE scoping project: identifying the national institute for health and clinical excellence's methodological research priorities and an initial set of priorities
Longworth, L., Bojke, L., Tosh, J. & Sculpher, M., 2009, York, UK: Centre for Health Economics, University of York, 120 p. (CHE Research Paper; no. 51).
NHS Input and Productivity Growth 2003/4 - 2007/8
Street, A. & Ward, P., 2009, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 47).
Payment by results in mental health: A review of the international literature and an economic assessment of the approach in the English NHS
Mason, A. & Goddard, M., 2009, York, UK: Centre for Health Economics, University of York, 71 p. (CHE Research Paper; no. 50).
What explains variation in the costs of treating patients in English obstetrics specialties?
Laudicella, M., Olsen, K. R. & Street, A., 2009, York, UK: Centre for Health Economics, University of York, 25 p. (CHE Research Paper ; no. 49).
Asymmetric Response: Explaining Corporate Social Disclosure by Multi-National Firms in Environmentally Sensitive Industries
Toms, S., Jul 2008, University of York, 28 p.
'Strangers and Brothers': The Secret History of Profit, Value and Risk. An inaugural lecture.
Toms, S., Jun 2008, University of York, 26 p.
Dimensions of Design Space: A Decision-Theoretic Approach to Optimal Research Design
Conti, S. & Claxton, K. P., Jun 2008, York, UK: Centre for Health Economics, University of York, 28 p. (CHE Research Paper; no. 38).
Budgetary policies and available actions: a generalisation of decision rules for allocation and research decisions
McKenna, C., Chalabi, Z., Epstein, D. & Claxton, K., 2008, York, UK: Centre for Health Economics, University of York, 28 p. (CHE Research Paper; no. 44).
Determinants of general practitioners' wages in England
Morris, S., Sutton, A., Gravelle, H., Elliott, B., Hole, A., Ma, A., Sibbald, B. & Skatun, D., 2008, York, UK: Centre for Health Economics, University of York, 21 p. (CHE Research Paper; no. 36).
Doctor behaviour under a pay for performance contract: Further evidence from the quality and outcomes framework
Gravelle, H., Sutton, M. & Ma, A., 2008, York, UK: Centre for Health Economics, University of York, 39 p. (CHE Research Paper; no. 34).
Economic analysis of cost-effectiveness of community engagement to improve health
Carr-Hill, R. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 33).
Establishing a fair playing field for payment by results
Mason, A., Miraldo, M., Siciliani, L., Sivey, P. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 88 p. (CHE Research Paper; no. 39).
Fairness in primary care procurement measures of under-doctoredness: sensitivity analysis and trends
Hole, A., Marini, G., Goddard, . M. & Gravelle, H., 2008, York, UK: Centre for Health Economics, University of York, 41 p. (CHE Research Paper; no. 35).
Living with Nuclear Power in Britain: A Mixed-methods Study
Pidgeon, N. F., Henwood, K., Parkhill, K. A., Venables, D. & Simmons, P., 2008, Cardiff University.
Measuring NHS output growth
Castelli, A., Laudicella, M. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 68 p. (CHE Research paper; no. 43).
Optimal contracts and contractual arrangements within the hospital: bargaining vs. take-it-or-leave-it offers
Galizzi, M. M. & Miraldo, M., 2008, York, UK: Centre for Health Economics, University of York, 37 p. (CHE Research Paper; no. 37).
Price adjustment in the hospital sector
Miraldo, M., Siciliani, L. & Street, A. D., 2008, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 41).
Price regulation of pluralistic markets subject to provider collusion
Longo, R., Miraldo, M. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 25 p. (CHE Research Paper; no. 45).
Public health care resource allocation and the rule of rescue
Cookson, R., Tsuchiya, A. & McCabe, C., 2008, Journal of Medical Ethics.
Quality in and Equality of Access to Healthcare Services in England
Goddard, M., 2008, York, UK: Centre for Health Economics, University of York, 72 p. (CHE Research Paper; no. 40).
The link between health care spending and health outcomes for the new English Primary Care Trusts
Martin, S., Rice, N. & Smith, P. C., 2008, York, UK: Centre for Health Economics, University of York, 51 p. (CHE Research Paper; no. 42).
The limits of market-based governance and accountability - PFI refinancing and the resurgence of the regulatory state
Beck, M., Toms, S. & Asenova, D., Jul 2007, Department of Management Studies, University of York.
The private finance initiative (PFI) and finance capital: A note on gaps in the "accountability" debate
Asenova, D. & Beck, M., Jul 2007, Department of Management Studies, University of York.
BSE crisis and food safety regulation: a comparison of the UK and Germany
Beck, M., Kewell, B. & Asenova, D., Mar 2007, Department of Management Studies, University of York, 36 p.
Business strategy and firm performance: the British corporate economy, 1949-1984
Antcliff, V., Higgins, D., Toms, S. & Wilson, J. F., 2007, Department of Management Studies, University of York.
Doctor behaviour under a pay for performance contract: Evidence from the quality and outcomes framework
Further evidence on the link between health care spending and health outcomes in England
Hospital financing and the development and adoption of new technologies
Miraldo, M., 2007, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 26).
Introducing activity-based financing: a review of experience in Australia, Denmark, Norway and Sweden
Street, A., Vitikainen, K., Bjorvatn, A. & Hvenegaard, A., 2007, York, UK: Centre for Health Economics, University of York, 56 p. (CHE Research Paper; no. 30).
Keynes and the cotton industry: a reappraisal
Higgins, D., Toms, S. & Filatotchev, I., 2007, Department of Management Studies, University of York.
Leadership then at all events
Wood, M. & Ladkin, D., 2007, Department of Management Studies, University of York, 46 p.
Mark versus Luke? Appropriate methods for the evaluation of public health interventions
Claxton, K., Sculpher, M. & Culyer, A., 2007, York, UK: Centre for Health Economics, University of York, 22 p. (CHE Research Paper; no. 31).
Measurement of non-market output in education and health
Smith, P. C. & Street, A., 2007, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 23).
Moving the gender agenda or stirring chicken's entrails? Where next for feminist methodologies in accounting?
Haynes, K., 2007, Department of Management Studies, University of York, 33 p.
Oldham capitalism and the rise of the Lancashire textile industry
Toms, S., 2007, Department of Management Studies, University of York.
Political, social and economic determinants of corporate social disclosure by multi-national firms in environmentally sensitive industries
Toms, S., Hasseldine, J. & Massoud, H., 2007, York: The York Management School.
Reference pricing versus co-payment in the pharmaceutical industry: firms' pricing strategies
Reference pricing versus co-payment in the pharmaceutical industry: price, quality and market coverage
The failed promise of foreign direct investment: some remarks on 'malign' investment and political instability in former Soviet states
Beck, M. & Acc-Nikmehr, N., 2007, Department of Management Studies, University of York.
The link between health care spending and health outcomes: Evidence from English programme budgeting data
Martin, S., Rice, N. & Smith, P., 2007, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 24).
There is no such thing as an audit society
Maltby, J., 2007, Department of Management Studies, University of York, 21 p.
'Real business'? Gendered identities in accounting and management academia
Haynes, K. & Fearful, A., 2007, Department of Management Studies, University of York.
"We do not share the troubles of our trans-Atlantic cousins": The statutory framework for accounting in the UK and the US in the interwar period
Maltby, J., 2007, Department of Management Studies, University of York.
Development of company law in India : the case of the Companies Act 1956
Verma, S. & Gray, S. J., Feb 2006, York: Department of Management Studies, University of York, 44 p.
The establishment of the Institute of Chartered Accountants of India (ICAI): the first step in the development of an accounting profession in post-independence India
Valuing human resources: perceptions and practices in UK organisations.
Verma, S. & Dewe, P., Feb 2006, York: Department of Management Studies, University of York, 42 p.
(Re)figuring accounting and maternal bodies: the gendered embodiment of accounting professionals
Haynes, K., Jan 2006, York: Department of Management Studies, University of York, 42 p.
Drugs for exceptionally rare diseases: a commentary on Hughes et al
Claxton, K., McCabe, C., Tsuchiya, A. & Raftery, J., 2006, QJM.
Industrial districts as organizational environments: resources, networks and structures
Popp, A., Toms, S. & Wilson, J., 2006, York: Department of Management Studies, University of York, 35 p.
International Students in the UK : how can we give them a better experience?
Warwick, P., 2006, York: Department of Management Studies, University of York, 26 p.
Other lives in accounting: critical reflections on oral history methodology in action
Haynes, K., 2006, York: Department of Management Studies, University of York, 32 p.
Quantum conductance of homogeneous and inhomogeneous interacting electron systems
Godby, R. W., Bokes, P. & Jung, J., 2006.
Reputation in organizational settings: a research agenda
Kewell, B., 2006, York: Department of Management Studies, University of York, 14 p.
Sustained Competitive Advantage and the Modern Labour Theory of Value
Toms, S., 2006, York Management School: University of York, 46 p.
The rise and fall of the patient forum
Warwick, P., 2006, York: Department of Management Studies, University of York, 8 p.
Who invests too much in employer stock, and why do they do it? Some evidence from uk stock ownership plans
Pendleton, A., 2006, York: Department of Management Studies, University of York, 32 p.
A project management module for virtual teaching
Ward, A., Dec 2005, Department of Management Studies: University of York, 18 p.
Making sense of tragedy: the 'reputational' antecedents of a hospital disaster
Kewell, B., Dec 2005, York: Department of Management Studies, University of York, 37 p.
The association between accounting and market-based risk measures
Toms, S., Nguyen, D. T. & Salama, A., Dec 2005, York: Department of Management Studies, University of York, 26 p.
The labour theory of value, risk and the rate of profit
Toms, S., Dec 2005, York: Department of Management Studies, University of York, 20 p.
Back to the future in NHS reform
Interactive situation modelling in knowledge intensive domains
Fernandes, K., 2005, York: Department of Management Studies, University of York, 27 p.
Self as social practice: rewriting the feminine in qualitative organizational research
Linstead, A., 2005, York: Department of Management Studies, University of York, 43 p.
The resource-based view of the firm and the labour theory of value
Toms, S., 2005, York: Department of Management Studies, York, 29 p.
Asset pricing models, the labour theory of value and their implications for accounting
Toms, S., Mar 2004, York: Department of Management Studies, University of York, 31 p.
Derrida reappraised: deconstruction, critique and emancipation in management studies
Learmonth, M., Mar 2004, York: Department of Management Studies, University of York, 31 p.
Forgotten feminists: The Federation of British Professional and Business Women, 1933-1969
Perriton, L., 2004, York: Department of Management Studies, University of York, 25 p.
Modelling time-constrained software development
Powell, A., 2004, York: Department of Management Studies, University of York, 21 p.
The reform of the NHS in Portugal
Diogo, C., 2004, York: Department of Management Studies, University of York, 39 p.
Theorising path dependence: how does history come to matter in organisations, and what can we do about it?
Greener, I., 2004, York: Department of Management Studies, University of York, 30 p.
Transforming identities: accounting professionals and the transition to motherhood
Haynes, K., 2004, York: Department of Management Studies, University of York, 46 p. (The York Management School Working Paper Series; no. 6).
Determining the parameters in a social welfare function using stated preference data: an application to health
Shaw, R., Dolan, P., Tsuchiya, A., Smith, P. & Williams, A., 2002, Applied Economics.
Influence of dynamics on magic numbers for silicon clusters
Porter, A. R. & Godby, R. W., 1997, p. 1-12, 11 p.
June 2020, 13(6): 1653-1682. doi: 10.3934/dcdss.2020097
A multi-stage method for joint pricing and inventory model with promotion constrains
Li Deng 1,2,3, Wenjie Bi 1, Haiying Liu 4,* and Kok Lay Teo 5,6
Business School, Central South University, Changsha 410083, China
Department of Mathematics and Statistics, Curtin University, Perth, 6102, Australia
School of Economics and Management, Hunan University of Science and Engineering, Yongzhou 425199, China
School of Accountancy, Hunan University of Finance and Economics, Changsha 410205, China
Coordinated Innovation Center for Computable Modeling in Management Science, University of Finance and Economics, Tianjin 300222, China
* Corresponding author: Haiying Liu
Received: February 2018. Revised: August 2018. Published: September 2019.
In this paper, we consider a joint pricing and inventory problem with promotion constraints over a finite planning horizon for a single fast-moving consumer good in a monopolistic environment. The inventory decision is realized through the decision on inventory replenishment, i.e., the quantity to be ordered in each period. The demand function takes reference price mechanisms into account. The main difficulty in solving this problem is how to deal with the binary logical decision variables. It is shown that the problem is equivalent to a quadratic programming problem involving binary decision variables. This quadratic programming problem with binary decision variables can be expressed as a series of conventional quadratic programming problems, each of which can be solved easily. The global optimal solution can then be obtained readily from the global solutions of these conventional quadratic programming problems. This method works well when the planning horizon is short. For longer planning horizons, we propose a multi-stage method for finding a near-optimal solution. In numerical simulations, the accuracy and efficiency of the multi-stage method are compared with those of a genetic algorithm. The results validate the applicability of the constructed model and the effectiveness of the proposed approach, and they provide several interesting and useful managerial insights.
Keywords: pricing and inventory, promotion optimization, promotion constraints, binary decision variables, quadratic programming, multi-stage method.
Mathematics Subject Classification: Primary: 90B50, 90B05.
Citation: Li Deng, Wenjie Bi, Haiying Liu, Kok Lay Teo. A multi-stage method for joint pricing and inventory model with promotion constrains. Discrete & Continuous Dynamical Systems - S, 2020, 13 (6) : 1653-1682. doi: 10.3934/dcdss.2020097
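To make the solution strategy described in the abstract more concrete, the following minimal Python sketch enumerates the binary promotion decisions explicitly and solves a conventional, smooth quadratic subproblem in the prices and ordering quantities for each feasible promotion pattern. This is an illustration only, not the authors' implementation: the linear demand, the quadratic inventory penalty, every numerical parameter, and the use of numpy/scipy are assumptions, and the spacing rule (at least S non-promotion periods between two successive promotions) is one plausible reading of the promotion constraints.

```python
# Minimal sketch (not the authors' code) of the enumeration-plus-QP idea:
# binary promotion indicators are enumerated explicitly, and for each feasible
# pattern a smooth quadratic subproblem in prices and order quantities is solved.
# All model parameters below are illustrative assumptions.
from itertools import combinations

import numpy as np
from scipy.optimize import minimize

T, L, S = 6, 2, 1                    # horizon, max number of promotions, min separating periods
p0, u1_lo, u1_hi = 10.0, 6.0, 9.0    # full price and promotion-price bounds
u2_lo, u2_hi = 0.0, 60.0             # ordering-quantity bounds
a, b, h, c = 50.0, 4.0, 0.5, 3.0     # demand intercept/slope, inventory penalty, unit cost

def feasible_patterns():
    """Yield 0/1 promotion vectors with at most L promotions, any two more than S periods apart."""
    for k in range(L + 1):
        for idx in combinations(range(T), k):
            if all(j - i > S for i, j in zip(idx, idx[1:])):
                y = np.zeros(T, dtype=int)
                y[list(idx)] = 1
                yield y

def negative_profit(x, y):
    """Negated illustrative quadratic profit in prices and order quantities for a fixed pattern y."""
    u1, u2 = x[:T], x[T:]
    price = np.where(y == 1, u1, p0)   # promotion price only where y_t = 1, else full price p0
    demand = a - b * price             # stylised linear demand (no reference-price effect here)
    inventory = np.cumsum(u2 - demand) # inventory balance; negative values mean backorders
    # the quadratic inventory penalty stands in for holding and backorder costs
    profit = np.sum(price * demand - c * u2 - h * inventory ** 2)
    return -profit

best = None
for y in feasible_patterns():
    x0 = np.concatenate([np.full(T, u1_hi), np.full(T, 10.0)])
    bounds = [(u1_lo, u1_hi)] * T + [(u2_lo, u2_hi)] * T
    res = minimize(negative_profit, x0, args=(y,), bounds=bounds)
    if best is None or res.fun < best[0]:
        best = (res.fun, y, res.x)

print("best promotion pattern:", best[1], "profit:", -best[0])
```

The number of feasible patterns grows rapidly with the horizon length T, which is presumably why exact enumeration is reported to work well only for short horizons, with the multi-stage method used to obtain near-optimal solutions for longer ones.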
Figure 1. The paths of optimal prices
Figure 2. The paths of reference prices
Figure 3. The optimal pricing paths of models without promotion constraints
Figure 4. Comparison of the optimal pricing paths ($ 1\leq M\leq8 $)
Figure 5. Comparison of the optimal pricing paths ($ 9\leq M\leq19 $)
Figure 6. The paths of optimal replenishment quantities
Figure 7. Change of the inventory level
Figure 8. Change of the backorder level
Figure 9. Comparison of the total profit obtained by different methods
Figure 10. Comparison of the computational time used by different methods
Figure 11. Relations between L/S and the total profit
Figure 12. Relations between L/S and the total profit with L/S cost being taken into consideration
Figure 13. Relations between T and the total profit
Figure 14. Relations between h/b and the total profit
Table 1. Notations
Variable Description
$ T\; (T \in Z^+) $ length of the planning horizon
$ t\; (t \in \{1,\ldots,T\}) $ time period
$ L\; (L \in \{1,\ldots,T\}) $ the maximum total number of promotion times in the planning horizon
$ S\; (S \in \{0,\ldots,T-1\}) $ the minimum separating period (a separating period is a period which spaces out the two successive promotions)
$ M\; (M \in \{0,\ldots,T-1\}) $ length of consumers' memory window
$ \underline{u_1}\; (\underline{u_1}\in (0,\infty)) $ permitted minimum promotion price
$ p_0\; (p_0\in (\underline{u_1},\infty)) $ full/normal price without promotion
$ \overline{u_1}\; (\overline{u_1}\in (\underline{u_1},p_0)) $ permitted maximum promotion price
$ u_1(t)\; (u_1(t) \in [\underline{u_1},\overline{u_1}]\; or\; u_1(t)=p_0) $ price in period $ t $
$ \underline{u_2}\; (\underline{u_2}\in (0,\infty)) $ permitted minimum ordering quantity
$ \overline{u_2}\; (\overline{u_2} \in (\underline{u_2},\infty)) $ permitted maximum ordering quantity
$ u_2(t)\; (u_2(t) \in [\underline{u_2},\overline{u_2}]) $ ordering quantity in period $ t $
$ u_3(t)\; (u_3(t)\in \{0,1\}) $ marking decision variable of promotion in period t, set $ u_3(t)=1 $ when there exists a price discount in period t and $ u_3(t)=0 $ when the item's price in period t equals to $ p_0 $
$ y_{max}\; (y_{max}\in (0,\infty)) $ permitted maximum inventory level in each period
$ y_t\; (y_t\in (-\infty,y_{max}]) $ initial inventory level in period t, where $ y_t <0 $ means the back-order demand is $ |y_t| $ at period $ t $
$ y_1\; (y_1\in [0,\infty)) $ initial inventory level at the beginning of the planning horizon
$ c\; (c\in (0,\infty)) $ per unit ordering cost
$ h\; (h\in (0,\infty)) $ per unit inventory cost
$ b\; (b\in (0,\infty)) $ per unit back-order cost
$ d_t\; (d_t\in [0,\infty)) $ demand at period t
$ \gamma\; (\gamma\in (0,1)) $ discount factor
Table 2. Comparison of the total profit obtained by different methods ($ T\in \{x\mid x = 20+5y,\; y = 1,2,\ldots,7\} $)
Conditions (T, L, S) Enumeration Method ($) 2-stage Method ($) 3-stage Method ($) 4-stage Method ($) GA-Roulette Wheel ($) GA-Tournament ($) GA-Random ($)
T=20;S=4; 320.75 318.93 320.75 320.75 319.86 314.74 320.75
T=35;S=4; - 400.41 400.61 397.40 396.31 398.86 396.73
Table 3. Comparison of the computational time for using different methods ($ T\in \{x\mid x = 20+5y,\; y = 1,2,\ldots,7\} $)
Conditions (T, L, S) Enumeration Method (s) 2-stage Method (s) 3-stage Method (s) 4-stage Method (s) GA-Roulette Wheel (s) GA-Tournament (s) GA-Random (s)
T=20;S=4; 102.52 1.26 1.02 1.23 44.19 40.76 61.43
T=20;S=2; 186.40 8.66 1.81 1.68 78.61 102.34 100.20
T=25;S=4; 3119.32 4.59 1.93 0.98 82.83 78.76 99.75
T=25;S=3; 3302.93 7.21 1.75 4.26 98.68 109.26 102.26
T=25;S=2; 4183.17 16.69 4.18 2.54 121.41 108.78 118.15
T=30;S=4; 98508.37 13.99 3.09 2.75 107.32 128.91 154.09
T=30;S=3; 100972.64 26.12 5.46 3.03 111.41 148.17 134.50
T=35;S=4; - 40.80 6.76 4.67 164.83 151.35 126.96
T=35;S=3; - 64.80 11.00 4.24 148.59 177.05 166.54
T=40;S=4; - 177.39 12.91 5.52 166.76 197.82 166.82
T=45;S=3; - 611.07 24.03 10.58 194.80 177.25 211.04
T=50;S=4; - 3810.19 23.29 9.68 160.34 142.00 130.60
T=50;S=3; - 4411.66 39.18 14.79 149.26 128.57 147.92
T=55;S=4; - 14704.68 64.87 14.13 172.13 162.80 168.13
T=55;S=2; - 23839.38 239.74 49.88 158.11 178.72 172.18
Table 4. Comparison of the total profit obtained by different methods
Conditions (T, L, S) 6-stage Method ($) GA-Roulette Wheel ($) GA-Tournament ($) GA-Random ($)
T=50;S=4; 430.93 425.82 426.32 426.94
Table 5. Comparison of the computational time for using different methods ($T\in \{x\mid x = 50+5y,\; y = 1,2,\ldots,7\} $)
Conditions (T, L, S) 6-stage Method (s) GA-Roulette Wheel (s) GA-Tournament (s) GA-Random (s)
T=50;S=4; 5.66 160.34 142.00 130.60
T=65;S=4; 13.83 224.00 253.94 246.07
Modelling trust and risk for cloud services
Erdal Cayirci & Anderson Santana de Oliveira
Journal of Cloud Computing, volume 7, Article number: 14 (2018)
A joint trust and risk model is introduced for federated cloud services. The model is based on cloud service providers' performance history. It addresses provider and consumer concerns by relying on trusted third parties to collect soft and hard trust data elements, allowing for continuous risk monitoring in the cloud. The negative and positive tendencies in performance are differentiated and the freshness of the historic data is considered in the model. It addresses aleatory uncertainty through probability distributions and static stochastic simulation. An analytical insight into the model is also provided through the numerical analysis by Monte-Carlo simulation.
New cloud services and architectures are introduced every day, and cloud service providers (CSP) have begun federating cloud services as cloud service mashups [3, 7,8,9,10, 22,23,24, 55]. A cloud service mashup (CSM) comprises multiple cloud services of various delivery models (i.e., IaaS, PaaS or SaaS) to provide a composite service, and can take one of the following structures:
Intra data-center: All the cloud services in the fan-in of a CSM (i.e., the services that compose the CSM) are co-located in the same data center.
Inter data-centre: The cloud services in the fan-in of a CSM are located in multiple data centres owned by the same CSP.
Inter cloud service provider: The cloud services in the fan-in of a CSM are located in multiple data centres owned by multiple CSPs.
We refer to intra data-centre and inter data-centre mashups as internal CSMs, and to inter-CSP mashups as external CSMs.
The outsourcing model of CSM presents economic and technological advantages. However, it also impacts on data governance, as risks and compliance management are delegated to third parties. The security practices of these third parties may not be visible to cloud customers (CCs), raising the question about the accountability of service providers when processing data in highly dynamic and heterogeneous environments. Accountability regards the data stewardship regime in which organisations that are entrusted with personal and business confidential data are responsible and liable for processing, sharing, storing and using the data according to the contractual and legal constraints from the time it is collected until when the data is destroyed [47].
CCs need to trust that their CSP secures the CC data and provides the quality of service (QoS) and grade of service (GoS), i.e., the service level objectives (SLO) agreed in service level agreements (SLA). Hence, both CC and CSP take risk. The risk taken by CCs is that their operations may be hampered by service outages or by security or privacy breaches. The risk for CSPs is twofold: first, they may be unable to fulfil the SLOs agreed in an SLA, and therefore face penalties and lose reputation; second, a CSP may also be a CC for the services provided by the other clouds in a CSM, and is therefore subject to risks similar to those taken by CCs.
Various models have been developed to analyse and assess the risks and the trustworthiness of information systems and services. We provide a short survey of them in Section "Related work and definitions". To the best of our knowledge, there is no quantitative risk and trust model that is based on CSP performance data and can be used for both internal and external CSMs. In this paper, we introduce a new quantitative and stochastic scheme for CSMs called the joint risk and trust model (JRTM). JRTM is based on the definitions made by "Accountability for Cloud and Other Future Internet Services" (A4Cloud), a large European Union Framework Seven project with partners from academia (i.e., law, data science and computer science departments) and industry. A4Cloud consulted many stakeholders, including users and organizations such as the Cloud Security Alliance, to determine and verify the risk and trust parameters and their inter-relations. JRTM is a product of the A4Cloud project and provides a quantitative scale for analysing risk and trust jointly.
JRTM is based on a stochastic process, and therefore addresses uncertainty. It uses observations that we call evidence to assess the risk and to build trust for a CSP. These observations are at an abstract level, so details about threats, vulnerabilities, CSP architectures and security schemes are not needed for the risk assessment. Hence, JRTM is low cost, scalable and practical. JRTM depends on historic data collected by a trust as a service (TaaS) provider, a trusted third party who makes recommendations about the trustworthiness of a CSM. The TaaS approach helps overcome barriers due to the lack of transparency. The level of risk acceptable for a CSM is defined by the TaaS provider and the CC together. The CC also provides several parameters that control how strongly the historic data influence the results of the model, based on the freshness and the tendency of the data (i.e., whether it changes in a negative or positive direction).
Both risk and trust have been extensively studied in various contexts for hundreds of years. Risk management, and specifically risk assessment for IT, has also been a hot research topic for several decades [25, 32]. On the other hand, modelling risk and trust for cloud computing and associating it with the notion of accountability has attracted researchers only recently [29, 46]. In Section "Related work and definitions", we provide a short survey of this recent work on risk and trust modelling. In the same section, we also give definitions for the terms that we refer to later. We explain the details about our new model, JRTM, in Section "Joint trust and risk model for cloud service mashups". We analyse the sensitivity of JRTM to several engineering parameters by using the results from our simulation-based experiments in Section "Experimental results". The performance of JRTM is also evaluated in the same section. Finally, we conclude our paper in Section "Conclusions".
Related work and definitions
International Organization for Standardization (ISO) published a standard on Risk Management [27], ISO 31000, and the joint publication by ISO and The International Electrotechnical Commission (IEC) complemented ISO 31000 with the publication of ISO/IEC 31010 [19, 28] about risk assessment techniques. Both of these standards are generic. Information Technology (IT) Governance Institute and the Information Systems Audit and Control Association (ISACA) introduced COBIT in 1996, which is a common language to communicate the goals, objectives and results of businesses. The latest version of COBIT is from 2013 and provides recommendations also on enterprise risk management [26]. COBIT is a generic framework for information technology (IT), and its adaptation to Cloud Computing has been made for selected cases [21]. JRTM is a quantitative risk assessment scheme specifically designed for cloud service mashups and complies with the definitions made in all of these standards.
In its recommendations on risk assessment for cloud computing [18], European Network and Information Security Agency (ENISA) provides a list of relevant incident scenarios, assets and vulnerabilities. It suggests estimating the level of risk on the basis of likelihood of a risk scenario mapped against the estimated negative impact, which is the common approach for the risk formulation in the literature [4, 11, 12, 27, 28, 32]. Although ENISA's recommendations are specific for cloud approach, it is a generic framework that does not provide a way to map the specifics of CSPs and CCs to the 35 risk scenarios listed in the report [18]. This risk assessment by ENISA is based on a qualitative inductive risk model. Another qualitative inductive scheme is by "The Commission nationale de l'informatique et des libertés" (CNIL), in English "The French National Commission on Informatics and Liberty" [12] more recently. CNIL's methodology is similar to the one by ENISA with the following differences: It is a risk assessment focused on privacy risks in cloud computing. It also recommends measures to reduce the risks and assess the residual privacy risks after the application of these measures. However, it is still generic and does not differentiate CSPs or CCs. JRTM is also a risk assessment scheme similar to ENISA's assessment and CNIL's report. However, it is not generic for cloud approach as ENISA and CNIL methodologies but for a specific cloud service mashup and a CC. Another difference is that JRTM is dynamic (i.e., not a fixed risk assessment for cloud concept), quantitative and based on historic data.
The Cloud Security Alliance Consensus Assessments Initiative Questionnaire (CAIQ) [13] is a questionnaire prepared for CSPs to document the security measures they have implemented. It is based on the Cloud Control Matrix (CCM) taxonomy of security controls. The questionnaire has been answered by many CSPs, and the responses are publicly available in the CSA Security, Trust and Assurance Registry (STAR) [15]. The Cloud Adoption Risk Assessment Model (CARAM) [11] was recently developed and implemented by A4Cloud. CARAM is another qualitative model; it adapts the methodology and assessments of ENISA and CNIL to assess the risk for a given CSP-CC pair. It is a decision support tool designed to help CCs select the CSP that best fits their risk profile. It differs from JRTM in that it is a qualitative scheme and does not use performance data related to CSPs, but rather information about how CSPs implement security measures.
A risk is the product of a threat, a vulnerability and the consequences (i.e., the impact of an incident) [17, 20, 32], and cloud computing is subject to a long list of threats [14] and vulnerabilities [9]. A CC faces a special challenge in risk assessment for the cloud when compared to conventional (i.e., non-cloud) information technology customers. CSPs usually keep the locations, architecture and security details of their server farms and data centers confidential from CCs. Therefore, it is more difficult for a CC to assess all the threats and vulnerabilities. Additionally, CSPs have to prioritize which issues to address when risks are realized. A CC has to rely on the autonomic procedures of the CSP to manage the infrastructure appropriately according to the CC's security dynamics, treat the CC's issues in a timely manner, and detect, recover from and report security incidents accurately. These uncertainties increase risk and imply that CCs have to trust the CSP [50] and its certifications, without further insight into the real-time risk landscape. JRTM takes all these facts into account. For risk assessment, JRTM does not require details about the technical structure, vulnerabilities and threats specific to a CSP.
Risk and trust should not be treated as related only to security but also QoS and GoS. The centralization and mutualization of resources reduce the costs. However, shared resources may be congested from time to time. Congestion control, service differentiation, user differentiation and prioritization are complex challenges especially for large clouds with high scalability requirements. The CCs need to be assured that their SLOs on GoS and QoS requirements are fulfilled and their operations are not hampered due to congested cloud resources. Providing such an assurance, measuring and guaranteeing QoS/GoS are not trivial tasks. JRTM treats QoS and GoS related risks within the service risk domain based on the SLOs agreed in SLAs.
Accountability [46] and trust are concepts required to be realized before potential CCs embrace cloud computing approach. Therefore, "trust" with cloud computing perspective has attracted researchers recently [45, 49], and "trust as a service" is introduced to the cloud business model. Standardised trust models are needed for verification and assurance of accountability, but none of the large number of existing trust models to date is adequate for the cloud environment [34]. There are many trust models which strive to accommodate some of the factors defined by [35] and others [6] and there are many trust assessment mechanisms which aim to measure them.
Definition of trust can be a starting point for modeling it. In [39, 50], trust is defined as "the willingness of a party to be vulnerable to the action of another party based on the expectation that the other will perform a particular action important to the trusting party, irrespective to the ability to monitor or control the trusted party". This definition does not fully capture all the dynamics of trust, such as the probabilities that the trustee will perform a particular action and will not engage in opportunistic behavior [45]. There are also hard and soft aspects of trust [42, 53, 60]. Hard part of trust depends on the security measures, such as authentication and encryption, and soft trust is based on things like brand loyalty and reputation. In [51], the authors introduce not only security but also accountability and auditability as elements which impact CC trust in cloud computing and show that they can be listed among the hard aspects. In [31], SLA is identified as the only way that the accountability and auditability of a CSP is clarified and therefore a CSP can make CCs trust them. The conclusion is that "trust" is a complex notion to define. Although JRTM is a quantitative model, it differentiates the soft and hard parts of the trust. To the best of our knowledge, JRTM is the only model specifically designed for cloud service mashups and integrates all these aspects into a practical formulation.
In [49], a CC's trust in a CSP is related to the following parameters:
Data location: CCs know where their data are actually located.
Investigation: CCs can investigate the status and location of their data.
Data segregation: Data of each CCs are separated from the others.
Availability: CCs can access services and their data pervasively at any time.
Privileged CC access: The privileged CCs, such as system administrators, are trustworthy.
Backup and recovery: CSP has mechanisms and capacity to recover from catastrophic failures and not susceptible for disasters.
Regulatory compliance: CSP complies with security regulations, certified for them and open for audits.
Long-term viability: CSP has been performing the required standards for a long time.
The authors in [49] statistically analyze the results of a questionnaire answered by 72 CCs to investigate the perception of the CCs on the importance of the parameters above. According to this analysis, backup and recovery produces the strongest impact on a CC's trust in cloud computing followed by availability, privileged CC access, regulatory compliance, long-term viability and data location. Their survey showed that data segregation and investigation have weak impact on a CC's trust on cloud computing. In [33], the Authors propose giving controls to CCs, so they can monitor the parameters explained above [49]. They categorize these controls into five broad classes as controls on data stored, data during processing, software, regulatory compliance and billing. The techniques that need to be developed for these controls include remote monitoring, prevention of access to residual data, secure outsourcing, data scrambling, machine readable regulations and SLA, automatic reasoning about compliance, automatic collection of real time consumption data, and the capability of making your own bill. Although these are techniques which have already been developed for both cloud computing and the other purposes, many CSP still need time for their implementation, deployment and maturity. They also require quite an effort and expertise by CCs. Moreover, using these controls for all the services in a mashup may not always be practical. JRTM and TaaS Provider approach eliminates the requirement for the controls given to CCs for building trust.
In [5], risk is modelled in relation to trust. Reliability trust is defined as the probability of success and is included in the risk-based decision-making process for a transaction. In [62], trust is introduced for assessing risks on the basis of the organizational setting of a system: the trustworthiness of critical actors impacts the probability of a risk scenario, and [62] addresses this relation. JRTM links risk and trust, too. JRTM differs from these two models in that trust is calculated based on the probability that a CSP can eliminate a risk scenario. Moreover, JRTM is specifically designed for CSMs, while [5, 62] are for making investment decisions and managing critical systems such as an air traffic control system.
Several frameworks have been proposed to assist users in service selection based on a variety of criteria such as QoS performance [56, 58], trust and reputation level [38, 43, 57, 59, 61] and privacy [16, 36]. Please note again that JRTM is not a service selection or service mashup configuration scheme. JRTM assesses the risk level for a given CSM and makes a recommendation if it is below the risk level acceptable to the CC. JRTM can be used by a service selection scheme similar to the ones cited in this paragraph. The main difference between JRTM and the schemes listed above is that JRTM makes its recommendation based on risk and trust jointly, taking privacy, security and service risks together into account for a specific CSM.
Before explaining JRTM in detail, we would like to introduce additional definitions for our model, and then to clarify the setting and the environment where JRTM can be used:
Threat: A threat is the potential cause of an unwanted incident, which may result in harm to a system, person or organization.
Vulnerability: Vulnerability is the weakness of an asset or control that can be exploited by a threat.
Asset: An asset is something of value to the organization, which may be tangible (e.g., a building, computer hardware) or intangible (e.g., knowledge, experience, know-how, information, software, data).
Control: A Control prevents or reduces the probability of a security, privacy or service incident (preventive or deterrent control), indicates that an incident has occurred (detective control) and/or minimizes the damage caused by an incident, i.e. reduces or limits the impact (corrective control).
Personal data: Personal data relate to an individual who can be identified. The identification of the individual does not need to be direct. For example, there can be many people named John who were born on a certain date, but there may be only one John with that birth date who works at a certain company.
Personally identifiable information (PII): PII are data that identify a person, such as social security number.
Data subject: A data subject is an individual or organization who is the subject of personal data.
Data controller: A data controller is an institution, organizational entity or person who alone or jointly with others determine the purposes and means of the processing of data.
Incident: An incident is an event that results in a security, privacy or service violation/outage; e.g., respectively confidential data leakages after an attack, personal data collection without appropriate consent from the data subjects, or data cannot be recovered after a hardware failure.
Event: An event is something that creates a vulnerability which may be exploited by a threat to compromise someone's asset(s). It is important not to confuse event with incident; For instance, losing an access badge is a security event. If an outsider uses the lost badge to enter a building without authorization, then it is an incident.
Security incident: A security incident can be defined as a single attack or a group of attacks that can be distinguished from other attacks by the method of attack, identity of attackers, victims, sites, objectives or timing, etc. It results in the violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices.
Privacy Incident: A privacy incident can be an intentional or unintentional violation of the consent obtained by the data controller from the data subjects, or a violation of the applicable data protection regulatory framework. A privacy incident can be the result of a security or service incident. For example, a data controller uses data for purposes not originally declared; an attacker gains access to personally identifiable information (PII); personal data is transferred to third parties without consent.
Service Incident - A service incident is an event that violates the terms of service, service level agreement, or contracts between the CC and the CSP. It may be the result of failure (e.g. power outage, natural disaster, hardware failure, or human errors), attacks, or intervention of third parties (governmental agencies or law enforcement) preventing customers to use the services as established via contracts, resulting in service outages. Please note that we count the incidents caused by denial of service (DoS) attacks as security incidents, because their results are service outages.
A risk is a combination of the probability/likelihood of an incident and its impact/consequences. Since a risk is realized when threats exploit vulnerabilities, the probability of an incident is related to the existence of threats and vulnerabilities, as well as the capability and willingness of the threats to exploit the vulnerabilities. The consequences, on the other hand, depend on the assets owned by the subject and its security policy. In JRTM, the probability is based on historic data related to CSP performance, and the consequences are represented by means of the maximum incident probability acceptable to the CC, i.e., thresholds. If the impact on a data subject is high (i.e., the value of the assets is high), lower thresholds are set.
JRTM is designed as a risk assessment tool for TaaS providers. A TaaS provider can be an organization like the Cloud Security Alliance (CSA) or a certification agency in which all kinds of cloud ecosystem stakeholders are represented. Please note again that JRTM alone is not for composing/configuring or selecting a CSM. It assesses whether the risks related to a CSM are below the acceptable risk level for a CC. However, JRTM can also be used together with a service composition or selection tool. We explain how to do this in the next section.
Joint trust and risk model for cloud service mashups
We would first like to highlight that JRTM is not an architecture or mechanism to build trust, but a model that helps a CC decide whether a service mashup is trustworthy enough. In other words, our model computes whether the risks associated with using a CSM are below the risk level that the CC is ready to accept. This model can be embedded into an overarching framework such as the one introduced in [54].
Collecting evidence for JRTM
JRTM is based on the CSP performance data collected by a TaaS provider. Evidence (i.e., performance data) are collected (i.e., counted) for periods as shown in Fig. 1. The length of the periods depends on the CSP dynamics, such as the number of subscribers and services, and may vary from the order of hours to the order of weeks.
Collecting the evidence for risks
For collecting evidence, TaaS providers depend on the transparency of the CSP, i.e., the reports made by the CSP. A CSP reports every event and incident to the TaaS provider as soon as it is detected. Note that JRTM is independent of the protocol and the means used for reporting events/incidents. This approach (i.e., CSP transparency to a TaaS provider) is more practical compared to the approach of giving controls to every CC [33], because:
It is more secure for the CSP compared to giving controls to every CC. The probability that a TaaS provider misuses the controls to compromise the security or the performance of a cloud is lower.
The TaaS provider does not need to share all the technical data with every CC. Therefore, CSP can protect both commercially and security wise sensitive data.
CCs do not need to monitor or to control CSP for every cloud service. Instead, they take a recommendation from a third party who is an expert on this topic.
When TaaS providers are organizations accepted by the cloud industry, they may act also as a quality assurance mechanism. Therefore, accredited certification third parties can naturally become also a TaaS provider.
The TaaS provider approach assumes that accurate event and incident data can be collected by the TaaS providers. One may argue that this is not a realistic assumption because CSPs would not share this information with the TaaS providers. Therefore, our model also has a penalty scheme for the CSPs that do not report accurately. TaaS providers can detect false or incomplete incident reports during regular audits, triggering penalties. In addition to this, TaaS providers can use monitoring tools similar to the ones used for Monitoring as a Service, such as, Amazon Cloud Watch [1], Paraleap AzuroWatch [44], RackSpace CloudKick [48], Ganglia [37], Nagios [40], Zabbix [63], MonALISA [41] and GridICE [2]. Finally, there are already incident reporting frameworks such as ENISA Cloud Security Incident Reporting (CSIR) Framework [19]. A trust penalty scheme, such as the one in JRTM, can also complement frameworks like ENISA CSIR, and enforces accurate reporting of the incidents.
JRTM is a practical and scalable scheme that requires collection of only the following information (i.e., evidence) in every period:
εi: the number of CCs who were subject to at least one security event in period i
εei: the number of CCs whose all security events were eliminated before they become incidents in period i
φi: the number of CCs who were subject to at least one privacy event in period i
φei: the number of CCs whose all privacy events were eliminated before they become incidents in period i
ρi: the number of CCs who were subject to at least one service event in period i
ρei: the number of CCs whose all service events are eliminated before they prevent achieving the SLOs defined in the SLA in period i
ui: the total number of CCs in period i
D: the set of privacy event durations (i.e., the number of time periods between the time that a privacy event starts and the time that it is detected)
Two bytes suffice for storing each field of information listed above except for the last field, i.e., the privacy event durations. Let us assume that the worst case for the privacy events occurs, which is 2^16 privacy events in the last period. Then we do not expect more than 128 KB of data per CSP per period. If the periods are one day and there are 2^10 CSPs registered in the TaaS provider's database, the size of the data that the TaaS provider needs to collect and store is around 128 MB, which is not much. Even if the number of CSPs in the database increases by an order of magnitude, the size of the data collected daily by the TaaS provider stays around several GB.
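A minimal sketch of how a TaaS provider might store these per-period counters is given below, written in Python; the class name, field names and helper method are illustrative assumptions, not part of JRTM itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PeriodEvidence:
    """Evidence a TaaS provider collects from one CSP for one period."""
    security_events: int        # epsilon_i: CCs with at least one security event
    security_eliminated: int    # epsilon_e_i: CCs whose security events were all eliminated
    privacy_events: int         # phi_i
    privacy_eliminated: int     # phi_e_i
    service_events: int         # rho_i
    service_eliminated: int     # rho_e_i
    total_ccs: int              # u_i
    privacy_durations: List[int] = field(default_factory=list)  # D, in periods

    def event_rates(self):
        """Per-CC event rates s, p and g used to fit the distributions of S, P and G."""
        u = float(self.total_ccs)
        return (self.security_events / u,
                self.privacy_events / u,
                self.service_events / u)
```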
As implied by the evidence collected, JRTM distinguishes three types of risks: security, privacy and service. Privacy differs from the other two. It is very likely that a privacy event is not detected when it is initiated, and some may never be detected because their effect is not directly observable. On the other hand, the potential damage of a privacy event is higher when its duration is longer. Therefore, we collect evidence about privacy event durations and address this issue within our model. However, we cannot take undetected privacy events into account, not only because they are not measurable but also because the TaaS providers' recommendations have to be based on evidence, not speculation.
Computing risk and trust
The data collected by the TaaS provider need to be analyzed and assessed against various aspects of risk and trust. JRTM provides an aleatory approach for this purpose. Without a tool like JRTM, the collected data would not be easy to read, understand or compare, and therefore would not be useful to many users. Moreover, such an evaluation requires a qualitative or quantitative scale accepted and understood by the stakeholders.
In JRTM, risk and trust are modelled jointly by using the evidences. The real risk is the risk that cannot be (or is not) eliminated by the CSP. If the part of the security risk δε, privacy risk δφ and the service risk δρ not eliminated by the CSP is lower than the CC can take (i.e., τε, τφ and τρ), then the cloud service is viable for the CC. We further elaborate on this relation at the end of this section. As shown in (1), we perceive risk as the probabilities rε, rφ, and rρ that a security, privacy or service event occurs, and trust as the probabilities tε, tφ and tρ that the CSP can eliminate the events before they become security, privacy or service incidents.
$$ \delta_x = r_x - \left( r_x \times t_x \right) \quad \text{for } x \in \{\varepsilon, \varphi, \rho\} $$
This approach to modelling risk fits the dynamics of cloud computing well for two reasons. First, it does not require the TaaS provider to assess the consequences of the realization of a risk, which depend heavily on the CCs' functions. Instead, the consequences are represented by the thresholds τε, τφ and τρ given by the CCs. We discuss the selection of thresholds in Section "Due diligence and TaaS recommendation for accepting a service". Second, it does not need to assess all threats and vulnerabilities. For a TaaS provider or CC, it is not practical to list all threats and vulnerabilities, because it is not likely that CSPs will share all the details about their physical architecture, platforms and security systems with the public, their CCs or even with TaaS providers.
JRTM predicts the expected number of events by using the data about the past events as shown in Eqs. (2), (3) and (4). The periodical data related to risks rε, rΦ, and rρ are weighted based on their freshness as given by (2), (3) and (4), where the Period i is the latest period, and rε(i), rΦ(i) and rρ(i) are the current risk assessments for security, privacy and service respectively. The parameter ω in (2), (3) and (4) is the weight parameter, and can be given any value between 0 and 1 including 0 and 1 (i. e. , {ω∈ ℜ | 0 ≤ ω ≤ 1}). The higher ω implies the lower level of uncertainty and the higher level of influence by the statistics in the last period. When it is 1, risk is determined based on the frequency of the incidents in the last period and there is not any uncertainty for the end result. When it is 0, risk is completely random according to the distribution and the statistics of the observations.
S in (2), P in (3) and G in (4) are random variables based on the probability distribution functions derived from the statistical analysis of the observed ratios between security ε, privacy φ and service ρ events and the number of CCs u in each period, respectively. The security, privacy and service event ratios (i.e., s = ε/u, p = φ/u and g = ρ/u) are fit to a distribution and statistics (i.e., shape, scale and location parameters), and this analysis of the distribution and statistics is repeated at the end of every period. The random variables S : Ω → ℜ+, P : Ω → ℜ+ and G : Ω → ℜ+ use these distributions and statistics in their probability spaces, i.e., RS(Ω, ℑS, PS), RP(Ω, ℑP, PP) and RG(Ω, ℑG, PG), respectively. Ω is the set of positive real numbers including 0 (i.e., Ω ⊂ ℜ). ℑS is the security event rate (i.e., the number of security events per user in a period), ℑP is the privacy event rate, and ℑG is the service event rate. PS is the probability density function and statistics that best fit the security event data set (i.e., {ε0/u0, ε1/u1, …, εi/ui}), PP is the probability density function and statistics that best fit the privacy event data set (i.e., {φ0/u0, φ1/u1, …, φi/ui}), and PG is the probability density function and statistics that best fit the service event data set (i.e., {ρ0/u0, ρ1/u1, …, ρi/ui}) collected for a CSP. Please note that the distribution and statistics for PS, PP and PG include the data from the last period i.
$$ {r}_{\varepsilon (i)}=\left(1-\omega \right)S+\omega \frac{\varepsilon_i}{u_i}, $$
$$ {r}_{\varphi (i)}=\left(1-\omega \right)P+\omega \frac{\varphi_i}{u_i}, $$
$$ {r}_{\rho (i)}=\left(1-\omega \right)G+\omega \frac{\rho_i}{u_i}. $$
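To illustrate Eqs. (2)-(4), the following Python sketch produces one stochastic draw of a risk probability. For simplicity, the fitted distribution of S, P or G is replaced here by empirical resampling of the historical event rates, which is an assumption made for illustration rather than the fitting procedure JRTM prescribes.

```python
import random

def risk_probability(historical_rates, latest_rate, omega, rng=random):
    """One stochastic draw of r_x(i) as in Eqs. (2)-(4):
    (1 - omega) * X + omega * (x_i / u_i), where X stands in for S, P or G.
    Empirical resampling replaces the fitted parametric distribution here."""
    x_sample = rng.choice(historical_rates)
    return (1.0 - omega) * x_sample + omega * latest_rate

# Example: security event rates observed over past periods, 2% in the latest period.
rates = [0.021, 0.018, 0.025, 0.020]
print(risk_probability(rates, latest_rate=0.02, omega=0.7))
```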
The uncertainties in Eqs. 2 to 4 are treated as aleatory by using random parameters. Stochastic uncertainties reflect variation in populations, and therefore imply the existence of knowledge (i.e., large and analyzed data set). That must be the reason why most of the available cloud risk assessments are based on epistemic uncertainty models, where uncertainties are due to the lack of knowledge. JRTM is developed for the mature stage of the cloud ecosystems, and therefore aleatory uncertainty approach is preferred.
For service risks, a stochastic model is a natural fit. However, security and privacy risks are in the essence not random but based on the deliberate acts by adversaries. Still, stochastic processes can be used and therefore very often used to model security and privacy risks when there is enough knowledge about their dynamics. However, randomization may not be appropriate to model security risks for special time periods, such as war, or for special type of security attacks designed by sophisticated attackers (i.e., first of its kind zero-day exploit attacks). We are working also on the application of the possibility and evidence theories instead of the probability theory [30] for the special cases of security and privacy attacks.
We would like also to clarify one more time that we recommend categorizing events and incidents due to denial of service (DoS) attacks as in the service risk domain, because a DoS attack is designed to diminish the QoS or GoS, and therefore it is a service risk from the perspective of consequences.
As shown in Eq. (1), the risk is based on the number of expected events and the trust that the CSP will eliminate them before they cause harm. We explained the stochastic process for calculating the number of expected events in the previous paragraphs. Now it is time to elaborate on the trust part of our equations. Before that, however, we would like to introduce our penalty parameter α, which is intended to encourage CSPs to report incidents in a timely and accurate manner. The TaaS provider also collects data about incidents from CCs and from other sources such as the ENISA CSIR framework [19]. TaaS providers investigate the incidents reported by these other sources. If they find that a proven incident was not reported by a CSP, the trust value for the CSP decreases, which as a result increases the risk value for the CSP. In Eqs. (5) and (6), αi is the penalty parameter for the CSP reporting accuracy in period i, qi is the number of incidents not reported by the CSP in period i, and λ is the penalty degradation parameter, a positive real number larger than or equal to one (i.e., {λ ∈ ℜ | λ ≥ 1}) selected by the CC, similar to the slope value γ. TaaS providers assist the CC in determining an appropriate value. The higher the penalty degradation parameter λ is, the more quickly inaccurate reporting by a CSP is forgotten.
$$ \alpha_c = \begin{cases} \lambda - 1, & \text{if } \alpha_{i-1} = 0\\ 1, & \text{if } \alpha_{i-1} \times \lambda > 1\\ \alpha_{i-1} \times \lambda, & \text{otherwise} \end{cases} $$
$$ \alpha_i = \begin{cases} \alpha_c, & \text{if } q_i = 0\\ \dfrac{\alpha_c}{q_i}, & \text{otherwise} \end{cases} $$
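The penalty mechanism of Eqs. (5) and (6) can be expressed compactly; the sketch below assumes α is kept as a plain floating-point value per CSP, and the function name is illustrative.

```python
def penalty(alpha_prev, unreported_incidents, lam):
    """Reporting-accuracy penalty of Eqs. (5) and (6).
    alpha_prev: alpha_{i-1}; lam: degradation parameter lambda >= 1;
    unreported_incidents: q_i, proven incidents the CSP failed to report."""
    # Eq. (5): recover the penalty factor towards 1 at a speed set by lambda.
    if alpha_prev == 0:
        alpha_c = lam - 1
    elif alpha_prev * lam > 1:
        alpha_c = 1.0
    else:
        alpha_c = alpha_prev * lam
    # Eq. (6): divide by the number of unreported incidents, if any.
    return alpha_c if unreported_incidents == 0 else alpha_c / unreported_incidents
```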
Trust parameters tε, tφ and tρ consist of two parts, i.e., hard tεh, tφh, tρh and soft tεs, tφs, tρs, as shown in (7). Hard part of trust is based on the architecture (i.e., the security systems and capacity) of the CSP and the content of SLA. Therefore, it is mostly related to evidence, and we calculate it purely based on the performance of CSP. On the other hand, soft trust is sensitive to the latest incidents and more sensitive to negative incidents when compared to positive incidents. Typically trust and reputation are built slowly but can be lost very quickly. Please note that soft trust can be a negative value. Therefore, it is added to the hard trust, and may have a negative or a positive effect in overall trust value. Apart from soft and hard part of the trust, we also have the penalty parameter α introduced in (5) and (6). We capture these relations through (7) to (15).
$$ t_x = \begin{cases} 0, & \text{if } (t_{xh} + t_{xs}) \times \alpha < 0;\\ 1, & \text{if } (t_{xh} + t_{xs}) \times \alpha > 1;\\ (t_{xh} + t_{xs}) \times \alpha, & \text{otherwise;} \end{cases} \quad \text{for } x \in \{\varepsilon, \varphi, \rho\} $$
Hard trust measurement is similar to risk assessment. In (8), (9) and (10), εei, φei and ρei are the numbers of subscribers all of whose security, privacy and service events, respectively, were eliminated before they became incidents in period i. Random variables Se : Ω → ℜ+, Pe : Ω → ℜ+ and Ge : Ω → ℜ+ generate random numbers according to the distributions and statistics of the ratios between the number of eliminated security events and the total number of security events (i.e., se = εe/ε), between the number of eliminated privacy events and the total number of privacy events (i.e., pe = φe/φ), and between the number of eliminated service events and the total number of service events (i.e., ge = ρe/ρ). In (9), we have another random variable R : Ω → ℜ+, which assigns random values according to the distribution and statistics of the values in the privacy event duration set D. The probability spaces for Se, Pe, Ge and R are similar to those of S, P and G except for the data used for the probability functions. Therefore, we do not give their formal definitions here.
$$ {t}_{\varepsilon h(i)}=\left(1-\omega \right){S}_e+\omega \frac{\varepsilon_{ei}}{\varepsilon_i}. $$
$$ {t}_{\varphi h(i)}={\left(\left(1-\omega \right){P}_e+\omega \frac{\varphi_{ei}}{\varphi_i}\right)}^R. $$
$$ {t}_{\rho h(i)}=\left(1-\omega \right){G}_e+\omega \frac{\rho_{ei}}{\rho_i}. $$
The soft parts of trust tεs(i), tφs(i) and tρs(i) are calculated based on the change in the performance of the CSP. In Eq. (12), the slope value γ is a positive real number larger than or equal to one (i.e., {γ ∈ ℜ | γ ≥ 1}) and represents the relation of trust to the negative/positive change (i.e., trend) in performance. If the performance of the CSP worsens, the CSP loses its credibility quickly; the sharpness of the drop in trust is governed by the slope value γ. On the other hand, it takes more effort and time to gain trust, as captured by (12).
$$ d_{x(i)} = \frac{x_{ei}}{x_i} - \frac{x_{e(i-1)}}{x_{i-1}} \quad \text{for } x \in \{\varepsilon, \varphi, \rho\}; $$
$$ t_{xs(i)} = \begin{cases} d_{x(i)}^{\gamma}, & \text{if } d_{x(i)} \ge 0;\\ -\sqrt[\gamma]{\left| d_{x(i)} \right|}, & \text{if } d_{x(i)} < 0; \end{cases} \quad \text{for } x \in \{\varepsilon, \varphi, \rho\} $$
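Putting Eqs. (1), (7), (8)-(10), (11) and (12) together, a minimal Python sketch of the trust computation might look as follows. Function and variable names are illustrative assumptions, hard trust again uses empirical resampling in place of the fitted distribution, and the clamping to [0, 1] follows Eq. (7).

```python
import random

def hard_trust(historical_elim_ratios, latest_elim_ratio, omega, duration=None, rng=random):
    """One draw of hard trust as in Eqs. (8)-(10): a weighted mix of a random
    historical elimination ratio and the latest one. For privacy (Eq. 9) the
    result is raised to the power R, a random privacy-event duration in periods."""
    base = (1.0 - omega) * rng.choice(historical_elim_ratios) + omega * latest_elim_ratio
    return base ** duration if duration is not None else base

def soft_trust(elim_ratio_now, elim_ratio_prev, gamma):
    """Soft trust of Eqs. (11)-(12): improvements build trust slowly (power gamma),
    declines destroy it sharply (gamma-th root with a negative sign)."""
    d = elim_ratio_now - elim_ratio_prev                          # Eq. (11)
    return d ** gamma if d >= 0 else -(abs(d) ** (1.0 / gamma))   # Eq. (12)

def trust(t_hard, t_soft, alpha):
    """Eq. (7): combine hard and soft trust, apply the reporting penalty, clamp to [0, 1]."""
    t = (t_hard + t_soft) * alpha
    return min(1.0, max(0.0, t))

def residual_risk(r, t):
    """Eq. (1): the part of the risk that the CSP cannot eliminate."""
    return r - r * t
```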
Equation (1) captures the risks for a single service. We extend it to CSMs in (13), (14) and (15), where AS, AP and AG are the expected overall security, privacy and service risks (i.e., the risk that cannot be eliminated by the CSP) for a CSM, respectively. The number of services in a CSM is n, and ak is the number of alternative services available for service k in the inter-cloud (all the CSPs that can be accessed for this service). It is easy to see from (13) and (14) that the more services compose a mashup, the higher the security and privacy risks become. The same relation can also be observed in (15), with one difference: a higher number of alternatives decreases the service risk. We examine these relations in more detail in Section "Experimental results".
$$ {A}_s=1-\prod \limits_{k=1}^n\left(1-{\delta}_{\varepsilon k}\right); $$
$$ {A}_p=1-\prod \limits_{k=1}^n\left(1-{\delta}_{\varphi k}\right); $$
$$ {A}_G=1-\prod \limits_{k=1}^n\left(1-\prod \limits_{m=1}^{a_k}{\delta}_{\rho km}\right). $$
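The aggregation of Eqs. (13)-(15) over a mashup can be sketched as follows; the nested-list representation of per-service alternatives is an assumption made for illustration.

```python
def mashup_security_or_privacy_risk(deltas):
    """Eqs. (13)-(14): the mashup is at risk if at least one of the n services is."""
    prod = 1.0
    for d in deltas:                 # delta_{x k} for each service k in the mashup
        prod *= (1.0 - d)
    return 1.0 - prod

def mashup_service_risk(deltas_per_service):
    """Eq. (15): service k fails only if all of its a_k alternatives fail."""
    prod = 1.0
    for alternatives in deltas_per_service:   # list of delta_{rho k m} per service k
        fail_all = 1.0
        for d in alternatives:
            fail_all *= d
        prod *= (1.0 - fail_all)
    return 1.0 - prod
```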
Since AS, AP and AG are stochastic processes, their results are not deterministic (i.e., they include uncertainty through random variables). Therefore, a TaaS provider using our model first needs to build confidence intervals for AS, AP and AG (i.e., u(AS) < AS < v(AS), u(AP) < AP < v(AP) and u(AG) < AG < v(AG)) according to the confidence level λ given by the CC. For this, static Monte-Carlo simulation can be used. After building the confidence intervals, the TaaS provider recommends the service mashup if and only if v(AS) < τε, v(AP) < τφ and v(AG) < τρ, where τε, τφ and τρ are the security, privacy and service risk thresholds agreed with the CC.
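A rough sketch of this recommendation step: repeated Monte-Carlo draws of AS, AP and AG give samples from which an upper confidence bound v(·) can be estimated and compared with the CC's thresholds. The percentile-based bound below is a simplifying assumption; a TaaS provider may well prefer a parametric confidence interval.

```python
def upper_bound(samples, confidence=0.95):
    """Crude upper confidence bound v(A) from repeated Monte-Carlo evaluations of A."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]

def recommend(security_samples, privacy_samples, service_samples, thresholds):
    """Recommend the CSM only if every upper bound stays below the CC's threshold."""
    tau_sec, tau_priv, tau_srv = thresholds
    return (upper_bound(security_samples) < tau_sec and
            upper_bound(privacy_samples) < tau_priv and
            upper_bound(service_samples) < tau_srv)
```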
We would like to highlight that JRTM analyzes δε, δφ, δρ for a CSP but not for a service. However, when AS, AP or AG are being assessed, the services in the mashup may be coming from different CSP, and JRTM can compute the risk accordingly. Please note that composing a service mashup is not the aim of JRTM. JRTM is not a scheme to compare alternative service mashups, either. The purpose of JRTM is to assess if the risk of a CSM is below the acceptable level for a CC. However, the services of a TaaS provider that employs JRTM can be used by another service for mashup composition [10] and selection, because JRTM does not only assess if the risk of a service is acceptable but also assigns probabilities for security, privacy and service risks. Therefore, JRTM can be integrated into a "multi-criteria decision making with posterior articulation of user preferences [11]" algorithm as follows:
Step 1: The set S = {CSM1, CSM2, …, CSMn} of alternative CSMs for a CC is given.
Step 2: JRTM computes security, privacy and service risks for each CSM in S.
Step 3: The CSMs assessed as too risky (i.e., at least one of the security, privacy or service risk probabilities for the CSM is higher than the thresholds) are removed from S, which creates the set S′ of feasible CSM, i.e., S′ ⊂S.
Step 4: If S′ = ∅ (i.e., S′ is an empty set), the CC is informed that there is no feasible CSM in S and the process ends.
Else if |S′|=1 (i.e., S′ has only one element), the CC is informed about the only feasible CSM and the process ends.
Step 5: If the CC is interested in only one risk domain (i.e., security, privacy or service), the CSMs in S′ are ordered according to that domain, and the CC is informed about the best CSM; there may be more than one if multiple CSMs have the same score, although this is very unlikely.
Else the non-dominated set S″ of feasible CSMs, S″ ⊂ S′, is created. In S″, there is no CSM that is worse in all risk domains compared to another CSM in S″, i.e., there is no CSM dominated by another CSM. After this, the CC is provided with S″ for posterior articulation of CC preferences.
Step 6: The process ends.
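The filtering and non-dominated-set construction in Steps 3 and 5 can be sketched as follows; the dictionary representation and the example risk values are purely illustrative assumptions.

```python
def feasible(csms, thresholds):
    """Step 3: keep only CSMs whose (security, privacy, service) risks are all below the thresholds."""
    return [c for c in csms
            if all(r < t for r, t in zip(c["risks"], thresholds))]

def non_dominated(csms):
    """Step 5 (else branch): drop every CSM that is worse in all three risk domains
    than some other CSM in the set."""
    def dominated(a, b):  # a is dominated by b if b is better in every domain
        return all(rb < ra for ra, rb in zip(a["risks"], b["risks"]))
    return [a for a in csms if not any(dominated(a, b) for b in csms if b is not a)]

# Hypothetical alternatives with (security, privacy, service) risk probabilities.
alternatives = [{"name": "CSM1", "risks": (0.010, 0.020, 0.015)},
                {"name": "CSM2", "risks": (0.012, 0.015, 0.020)},
                {"name": "CSM3", "risks": (0.030, 0.040, 0.050)}]
shortlist = non_dominated(feasible(alternatives, thresholds=(0.05, 0.05, 0.05)))
print([c["name"] for c in shortlist])   # CSM3 is dominated and removed
```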
Due diligence and TaaS recommendation for accepting a service
To be complete, two further questions need to be answered: How can a CC determine τε, τφ and τρ? How can a TaaS provider assign the distributions and statistics for S, P, G, Se, Pe, Ge and R when a CSP is registered for the first time?
JRTM is in essence based on the expected rate of security, privacy and service incidents with some adjustments related to the tendency (i.e., increasing or decreasing incident rates). Therefore, risk thresholds for JRTM are intuitively clear and an experienced TaaS provider can make suggestions for them based on the assets [18] that the CC would like to process or to store in the cloud. Then the CC either agrees with or can ask justifications on them. When the thresholds are agreed, the TaaS provider's recommendation is that the assessed absolute risk is below the risk acceptable by the CC. Hence, the acceptable level of risk is understood and agreed by the correct party, i.e., the CC, and can also be based on a relative risk analysis [32]. For this, CCs do not need to know the details about the technical architectures, their vulnerabilities and threats. Instead they focus on a comprehensive and abstract risk probability given based on practical evidence. Therefore, it is easier for a CC to run a risk assessment based on the consequences and opportunities of the risks taken. A number of recommendations may guide TaaS providers and CCs in the definition of the risk profiles [18, 52].
There are multiple ways to answer the second question. The TaaS provider can initialize S, P, G, Se, Pe, Ge and R with the same distributions and statistics as the average of the other CSPs that have an architecture similar to that of the newly registered CSP. After this, the CCs may be provided with a recommendation that uses wider confidence intervals than the confidence level specified by the CCs, and warned about this fact.
Another difficulty in compiling the statistics is related to temporal and geographic correlations of the risks. For example, a law such as "the data protection act" affects not only one CSP but all CSPs that have a data center in the same country. Therefore, the impact of this law should be reflected not only on the CSP that has direct experience with it but also on all the CSPs that have a data center in the same country. Similarly, when this law changes or is repealed, its effects should be removed from the statistics associated with all the CSPs in that country. None of this changes the essence of JRTM.
Experimental results
We run experiments by using the Monte-Carlo simulation methodology for three purposes: to gain better insight into our models; to examine the relations between independent engineering variables (i.e., freshness (ω), slope (γ) and period length) and dependent variables (i.e., confidence intervals for security S, privacy P and service G risks); and, more importantly, to verify our models. A subset of results from our experiments is depicted and analyzed in this section. For the experiments, we generated random values for S, P, G, Se, Pe, Ge and R. We factored our experiments over the Poisson and Normal distributions and their expected values to analyse the sensitivity of the model to the statistical characteristics of our random variables. Because of the stochastic nature of our model, we repeated each experiment 50 times and constructed the confidence intervals. For the other independent variables, we changed their values according to our design of experiments, which is based on partial factoring. The details about the value ranges for our factoring parameters are clarified below, where we analyze and explain some of the results.
In Fig. 2, the sensitivity of the security AS, privacy AP and service AG risks to changes in the independent engineering variables freshness (ω) and slope (γ) is depicted. S, P and G follow a Poisson distribution with mean 0.02, which means that 2% of users were subject to a security, privacy or service event in every period. In the last period, the CSPs managed to eliminate 95% of these events before they became incidents. In the previous period, this value was 93%. This indicates a 2% improvement in the average performance of the CSPs (i.e., their success in eliminating events before they become incidents), which affects the soft trust according to the slope (γ) value.
Upper bounds of security v(AS), privacy v(AP) and service v(AG) risks for various slope (γ) and freshness (ω) values
As shown in Fig. 2, the effect of changes in the slope value is small, because the slope changes only the soft trust, which should not contribute to the risk perception in a major way when the change in CSP performance is only 2% and positive. Most probably, such a change would go unnoticed. On the other hand, the effect of the freshness (ω) parameter is significant. The reason is the change in the number of events: in our experiments, the event rate in the last period goes down from 0.02 to 0.002. With the effect of soft trust, the model calculates the risks as almost 0, except for privacy, when the risk perception is based only on the events that happened in the last period.
In the privacy risk calculation, the duration of incidents before they are detected is also an important parameter. In the experiments shown in Fig. 2, the duration is Poisson distributed with an average of 3 period lengths. Therefore, the privacy risk P is always above 0.0001 and around 60% higher compared to the security and service risks.
The experiments in Fig. 3 differ from those in Fig. 2 in the change in the performance of the CSPs. In Fig. 2, it is 2% and positive. In Fig. 3, it is again 2%, but this time negative, which means that the CSPs become less successful in eliminating events before they become incidents (i.e., the elimination rate goes down from 95% to 93%). The impact of this is intuitive, and the model captures it very well. First, the soft trust is reduced because this is a negative performance change, and therefore the slope γ becomes more effective in the risk perception. This is more significant when the freshness ω is higher. Nevertheless, the relation between freshness and slope is not direct but indirect: when the risk perception is higher, the effect of soft trust, and therefore of the slope, also becomes higher. Apart from these differences, the other relations between the independent and dependent variables in Fig. 3 are almost the same as in Fig. 2.
Upper bounds of security v(AS), privacy v(AP) and service v(AG) risks for various slope (γ) and freshness (ω)
If the slope value is higher, an improvement in CSP performance is reflected in the risk perception more slowly, whereas a degradation in CSP performance is reflected more aggressively. Therefore, there is always a positive relation between risk and slope γ, which means that the higher the slope value, the higher the perceived risk, independent of the tendency in the CSP performance. This behaviour is exactly what we expect from our model, and it is observable in Figs. 2, 3 and 4.
Upper bounds of v(AS), v(AP) and v(AG) for various slope (γ) and CSP performance tendency (dε = dΦ = d) values
In Fig. 4, we examine the relation between slope and the tendency in CSP performance more closely. We assign 0.85 for the event elimination performance in the period before the last period. Then, we change the event elimination rates between 0.95 and 0.75 for the last period.
When there is no change in the CSP performance, changing the slope value does not affect the risk perception. That is not a surprise, because the slope amplifies the effects of the performance change on the soft trust. When the tendency is positive, which means the performance of the CSP in eliminating events improves, the effect of the slope on the risk perception is smaller than for a negative tendency: trust is gained slowly and lost more quickly. Therefore, we can conclude that our model addresses the soft trust effect as expected and explained in Section "Related work and definitions".
In Fig. 5, the relation between the number of services and risk is depicted for the same values as those used for the experiments in Figs. 2 and 3. As illustrated in Fig. 5, there is a linear relation, and the privacy risk is the most sensitive to the number of services.
Upper bounds of security v(AS), privacy v(AP) and service v(AG) risks for various numbers of services n in a cloud service mashup
We also examine the sensitivity of risk to the number of alternative services (i.e., the services available for the same service type within the inter-cloud). We observe that the number of alternative services does not change the security or privacy risks. However, it impacts the service risk. When there is on average one alternative service for each type of service in a mashup made up of 11 service types, the service risk is calculated as 0.0191. When the average number of alternatives becomes two, the service risk goes down to 0.0001. When it is three on average, the risk becomes almost zero.
Figure 6 shows the sensitivity of the privacy risk to the average duration of events before they are detected. Please note that the security and service risks are not sensitive to the event duration. In our tests, we examine the sensitivity of JRTM not only to the average duration length but also to the shape of the duration length distribution (i.e., Poisson distribution and Normal distribution with various standard deviations). We observe an interesting result for the Normal distribution: when the standard deviation is as large as the average, the risk perception gets higher. This fits the intuition that higher variation means higher uncertainty. Nevertheless, when the average gets higher, a higher standard deviation may reduce the risk because it also implies lower privacy event durations.
Upper bounds of privacy risk v(AP) for various average durations of privacy incidents
Figure 7 depicts the security v(AS), privacy v(AP) and service v(AG) risks for various period lengths. Figure 7 also shows the sensitivity of the model to changes in the event rates, because the longer the period length, the more events are observed in each period.
Upper bounds of security v(AS), privacy v(AP) and service v(AG) risk for various event rates, which also correspond to various period lengths
In Fig. 7, we also analyse the effect of changing the distribution for event occurrence. When we apply the same average rates, the risks calculated for the Normal distribution are higher. Please note that we assign a standard deviation equal to 10% of the average for the Normal distribution. We observe an anomaly in the plots for the privacy risk v(AP) in Fig. 7: the privacy risk decreases at several points when the event rate gets higher. That is because we do not increase the event rate itself but the period length, which means that the event durations, measured in numbers of periods, are reduced when the period lengths are increased. When the privacy event duration is reduced, the privacy risk decreases. Therefore, we observe a decrease in the privacy risk although the event rate increases.
In Fig. 8, the sensitivity of the model to changes in the penalty parameter α is depicted. Recall that the penalty parameter is for encouraging the CSPs to report incidents in a more timely and accurate manner. As expected, there is a linear relation between the risk values and the penalty parameter, and a decrease in the penalty value significantly decreases the trust and increases the risk. This is exactly what the parameter is designed for, and it therefore verifies the model. There is an interesting observation in Fig. 8: the privacy risk is the highest compared to the service and security risks when no penalty is applied (i.e., the penalty value is 1). The higher the penalty (i.e., the lower the penalty value), the smaller the difference between the service and privacy risks becomes. For penalty values 0.1, 0.2 and 0.3, the service risk is higher than the privacy risk. The reason for this is the effect of the trust in the model. The privacy risk does not decrease as much as the other risks when the trust value becomes higher, because the probability that a privacy incident cannot be detected is higher compared to the service and security risks. Therefore, when the penalty reduces the positive effect of the trust in the model, the security and service risks become higher than the privacy risk.
Upper bounds of security v(AS), privacy v(AP) and service v(AG) risk for various penalty (α) values
In Fig. 9, the likelihoods of the privacy, security and service risks are given for 44 CSPs registered in the Security, Trust and Assurance Registry (STAR) [15] by the Cloud Security Alliance (CSA). Almost 300 CSPs have provided information in STAR about how they implement controls for security, privacy and service assurance. The likelihood values are the result of the Cloud Adoption Risk Assessment Model (CARAM) [11], which aggregates the ENISA likelihood assessment [18] for cloud risks into three domains (i.e., security, privacy and service) and applies it to CSPs based on the data in STAR. The scale used for the likelihood is between 0 and 9. The CARAM results are not based on historic data but on the vulnerabilities and threats assessed by ENISA and the controls implemented by the CSPs. Nevertheless, we can observe that the privacy risk is higher than the security and service risks, which are close to each other, for almost every CSP. This supports the JRTM results depicted in Figs. 7 and 8.
The probability values calculated by CARAM for 44 CSPs in STAR
Risk and trust are critical notions for cloud services and are closely related to each other. In the literature, lack of trust is stated as the main barrier for potential subscribers before they embrace cloud services. For the realization of cloud computing, a trust relation between the CC and the CSP has to be established. This requires an in-depth understanding of the risks and of the accountability of the CSP. Cloud service mashups exacerbate the complexity of the accountability, risk and trust relations among the CCs and the CSPs. Therefore, practical services, possibly in the form of TaaS, are required. A TaaS provider may use the data about the reputation of a CSP and the risk constraints of the subscribers to recommend, or not recommend, a specific service to a subscriber.
A joint trust and risk model based on statistical data is introduced for this purpose. The model, i.e., JRTM, addresses not only the security-related risks but also the risks related to privacy and to the performance of the services. It is amenable to automated treatment, allowing the service chain to be represented and risk thresholds to be monitored dynamically according to the profiles established with the CC. JRTM differentiates negative performance from positive performance in the risk assessment based on the CC's preferences. It also takes into account the freshness of the data about the performance of the CSP, again according to the parameters specified by the CC. The model is practical for a TaaS provider and for cloud service mashups. Our initial experimentation verifies that our model is aligned with the perception of risks and trust as explained in the literature.
CAIQ: Cloud Assessment Initiative Questionnaire
CARAM: Cloud Adoption Risk Assessment Model
CC: Cloud Customer
CCM: Cloud Control Matrix
CNIL: The Commission nationale de l'informatique et des libertés
CSA: Cloud Security Alliance
CSIR: Cloud Security Incident Reporting
CSM: Cloud Service Mashup
CSP: Cloud Service Provider
DoS: Denial of Service
ENISA: European Network and Information Security Agency
GoS: Grade of Service
IEC: International Electrotechnical Commission
ISACA: Information Systems Audit and Control Association
JRTM: Joint Risk and Trust Model
PII: Personally Identifiable Information
QoS: Quality of Service
SLA: Service Level Agreement
SLO: Service Level Objectives
STAR: Security, Trust and Assurance Registry
TaaS: Trust as a Service
Amazon CloudWatch, http://aws.amazon.com/cloudwatch/
Andreozzi S, De Bortoli N, Fantinel S, Ghiselli A, Rubini GL, Tortone G, Vistoli MC (2005) Gridice: a monitoring service for grid systems. Future Gener Comput Syst 21(4):559–571
Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M (2010) A view of cloud computing. Commun ACM 53(4):50–58
Asnar A, Zannone N (2008) Perceived risk assessment. In Proceedings of the 4th ACM Workshop on Quality of Protection (QoP), p 59–64
Audun J, Presti SL (2004) Analysing the relationship between risk and trust. In Proceedings of the 2nd International Conference on Trust Management (iTrust), p 135–45
Banerjee S, Mattmann C, Medvidovic N, Golubchik L (2005) Leveraging architectural models to inject trust into software systems. In: Proc. SESS '05. ACM, New York, pp 1–7
Buyya R, Ranjan R, Calheiros RN (2010) InterCloud: Utility-oriented Federation of Cloud Computing Environments for Scaling of Application Services, Proceedings of the 10th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP'10), pp 13–31
Cayirci E (2013) A Joint Trust and Risk Model for MSaaS Mashups. In: Pasupathy R, Kim S-H, Tolk A, Hill R, Kuhl ME (eds) Proceedings of the 2013 Winter Simulation Conference. Institute of Electrical and Electronics Engineers, Inc, Piscataway, New Jersey.
Cayirci E (2013) Modelling and Simulation as a Service: A Survey. In: Pasupathy R, Kim S-H, Tolk A, Hill R, Kuhl ME (eds) Proceedings of the 2013 Winter Simulation Conference. Institute of Electrical and Electronics Engineers, Inc, Piscataway, New Jersey.
Cayirci E (2013) "Configuration Schemes for Modelling and Simulation as a Service Federations," Simulation Transactions of the Society for Modelling and Simulation International. 89(11):1388–1399
Cayirci E, Garaga A, de Oliveira AS, Roudier Y (2016) A risk assessment model for selecting cloud service providers. J Cloud Computing 5:14
CNIL, "Methodology for Privacy Risk Management: How to Implement the Data Protection Act," 2012, https://www.cnil.fr/en/media, June 2014
CSA, "Consensus Assessment Initiative Questionnaire (CAIQ)," https://cloudsecurityalliance.org/research/cai/. Accessed 23 July 2018.
CSA, "The Notorious Nine Cloud Computing Top Threats in 2013," https://downloads.cloudsecurityalliance.org/initiatives/top_threats/The_Notorious_Nine_Cloud_Computing_Top_Threats_in_2013.pdf. Accessed 23 July 2018.
CSAN, "Security, Trust & Assurance Registry (STAR)," https://cloudsecurityalliance.org/star/#_registry. Accessed 23 July 2018.
Costante E, Paci F, Zannone N (2013) Privacy-aware web service composition and ranking. In Proceedings of the 20th IEEE Conference on Web Services (ICWS), p 131–38
DHS. (2008) DHS risk lexicon. Department of Homeland Security
ENISA. (2009). Cloud computing: benefits, risks and recommendation for information security
ENISA. (2013). Cloud security incident reporting: framework for reporting about major cloud security incidents
Ezell BC, Bennet SP, Von Winterfeldt D, Sokolowski J, Collins AJ (2010) Probabilistic risk analysis and terrorism risk. Risk Anal 30(4):575–589
Gadia S (2011) Cloud computing risk assessment: a case study. ISACA Journal 4:1–6
Garg, S.K., S. Versteeg, and R. Buyya. 2011. "SMICloud: a framework for comparing and ranking cloud services." Fourth International Conference on Utility and Cloud Computing
Gupta R, Prasad K, Luan L, Rosu D, Ward C (2009) Multi-dimensional knowledge integration for efficient incident Management in a Services Cloud. In Proceedings of the IEEE International Conference on Services Computing (SCC), p 57–64
Hwang K, Fox G, Dongarra J (2011) Distributed and cloud computing. Morgan Kauffmann Publishers, San Francisco
IEC 62198, "Managing Risk in Projects – Application Guidelines," http://webstore.iec.ch/webstore/webstore.nsf/artnum/048815!opendocument. Accessed 23 July 2018.
ISACA, "COBIT 5: A Business Framework for the Governence and Management of Enterprise IT," http://www.isaca.org/cobit/pages/default.aspx. Accessed 23 July 2018.
ISO 31000, "Risk Management (2009)," http://www.iso.org/iso/home/standards/iso31000.htm. Accessed 23 July 2018.
ISO/IEC 31010, "Risk Management-Risk Assessment Techniques (2009)," https://www.iso.org/obp/ui/#iso:std:iec:31010:ed-1:v1:en. Accessed 23 July 2018.
Jansen W, Grance T (2011) Guidelines on security & privacy, draft special publication 800–144 NIST. Department of Commerce, US
Jøsang A (2001) A logic for uncertain probabilities. International journal of uncertainty, Fuzziness and Knowledge-Based Systems 9(3):279–311
Kandukuri, B.R., R. Paturi, and V.A. Rakshit. 2009. "Cloud security issues." IEEE International Conference on Services Computing
Kaplan S, Garrick BJ (1981) On the quantitative definition of risk. Risk Anal 1(1):11–27
Khan, K. and Malluhi, Q, 2013. "Trust in Cloud Services: Providing More Controls to Clients," IEEE Computer, July
Li W, Ping L (2009) Trust model to enhance security and interoperability of cloud environment. In Proceedings of the 1st IEEE Cloud Computing Conference (CloudCom), p 69–79
Marsh S (1994) Formalising trust as a computational concept. Doctoral dissertation, University of Stirling, Scotland UK
Massacci F, Mylopoulos J, Zannone N (2006) Hierarchical hippocratic databases with minimal disclosure for virtual organizations. VLDB J 15(4):370–387
Massie ML, Chun BN, Culler DE (2004) The ganglia distributed monitoring system: design, implementation and experience. Parallel Computing 30:817–840. Elsevier
Maximilien M, Singh MP (2004) Toward autonomic web services trust and selection. In: Proceedings of the 2nd international conference on service oriented computing (ICSOC '04). ACM, New York, pp 212–221
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20(3):709–734
Nagios, http://www.nagios.org/. Accessed 23 July 2018.
Newman HB, Legrand IC, Galvez P, Voicu R, Cirstoiu C (2003) Monalisa : a distributed monitoring service architecture. In: Proceedings of CHEP03, LaJolla, USA
Osterwalder D (2001) Trust through evaluation and certification. Soc Sci Comput Rev 19(1):32–46
Paradesi S, Doshi P, Swaika S (2009) Integrating behavioral Trust in web Service Compositions. In Proceedings of the IEEE International Conference on Web Services, p 453–60
Paraleap AzureWatch, https://www.paraleap.com/AzureWatch. Accessed 23 July 2018.
Pearson S (2012) Privacy, security and Trust in Cloud Computing. In: Pearson S, Yee G (eds) Privacy and security for cloud computing, computer communications and networks. Springer-Verlag, New York, pp 3–42
Pearson S, Charlesworth A (2009) Accountability as a way forward for privacy protection in the cloud. In: Jaatun MG, Zhao G, Rong C (eds) Proceedings of the 2009 CloudCom. Springer-Verlag, New York, pp 131–144
Pearson S, Tountopoulos V, Catteddu D, Südholt M, Molva R, Reich C, Fischer-Hübner S, Millard C, Lotz V, Jaatun MG, Leenes R, Rong C, Lopez J (2012) Accountability for cloud and other future internet services. In Proceedings of the 4th IEEE Cloud Computing Conference (CloudCom), p 629–32
RackSpace Cloud Monitor, http://www.rackspace.com/cloud/monitoring/
Rashidi A, Movahhedinia N (2012) A model for user trust in cloud computing. International Journal on Cloud Computing: Services and Architecture (IJCCSA) 2(2):1–8
Rousseau D, Sitkin S, Burt R, Camerer C (1998) Not so different after all: a cross-discipline view of trust. Acad Manag Rev 23(3):393–404
Ryan KLK, Jagadpramana P, Mowbray M, Pearson S, Kirchberg M, Liang Q, Lee BS (2011) TrustCloud: a framework for accountability and Trust in Cloud Computing. In: 2nd IEEE cloud forum for practitioners (ICFP)
Simmonds, P., Chris Rezek, & Archiee Reed. (2011). Security Guidance for Critical Areas of Focus in Cloud Computing V3.0 (No. 3.0) (p. 177). Cloud Security Alliance. Retrieved from http://www.cloudsecurityalliance.org/guidance/
Singh S, Morley C (2009) Young Australians' privacy, security and Trust in Internet Banking. In: Proceedings of the 21st Annual Conference of the Australian Computer-Human interaction Special interest Group: Design: Open 24/7
Singhal M, Chandrasekhar S, Tingjian G, Sandhu RS, Krishnan R, Ahn G-J, Bertino E (2013) Collaboration in multicloud computing environments: framework and security issues. IEEE Computer Magazine 46(2):76–84
Toosi AN, Calheiros RN, Thulasiram RK, Buyya R (2011) Resource provisioning policies to increase IaaS Provider's profit in a federated cloud environment. In the Proceedings of the IEEE International Conference on High Performance Computing and Communications (HPCC), p 279-87
Tran WX, Tsuji H (2008) QoS based ranking for web services: fuzzy approaches. In: Proceedings of the 2008 4th international conference on next generation web services practices (NWESP '08). IEEE computer society, Washington, DC, USA, pp 77–82
Vu L-H, Hauswirth M, Aberer K (2005) QoS-based service selection and ranking with trust and reputation management. In: Proceedings of the 2005 confederated international conference on on the move to meaningful internet systems (OTM'05). Springer-Verlag, Berlin, Heidelberg, pp 466–483
Wang P, Chao K-M, Lo C-C, Huang C-L, Li Y (2006) A fuzzy model for selection of QoS-aware web services. In: Proceedings of the IEEE international conference on e-business engineering (ICEBE '06). IEEE computer society, Washington, DC, USA, pp 585–593
Wang P, Chao K-M, Lo C-C, Farmer R, Kuo P-T (2009) A reputation-based service selection scheme, e-Business Engineering. In: IEEE International Conference on ICEBE '09, pp 501–506
Wang Y, Lin K-J (2008) Reputation-oriented trustworthy computing in E-commerce environments. Internet Computing 12(4):55–55
Xu Z, Martin P, Powley W, Zulkernine F (2007) Reputation-enhanced QoS-based web services discovery. In: IEEE International Conference on web services. ICWS 2007, pp 249–256
Asnar Y, Giorgini P, Massacci F, Zannone N (2007) From trust to dependability through risk analysis. In Proceedings of the 2nd International Conference on Availability, Reliability and Security (ARES), p 19–26
Zabbix, http://www.zabbix.com/. Accessed 23 July 2018.
This work was conducted as part of the EU-funded FP7 project titled "Accountability for Cloud and Other Future Internet Services" (A4Cloud), under grant agreement number 31755, which introduces an accountability-based approach for risk and trust management in cloud ecosystems.
University of Stavanger funds the publication of this paper.
No data are available specifically for this paper. Our research benefited from other datasets such as CSA STAR.
Electrical Engineering and Computer Science Department, University of Stavanger, Stavanger, Norway
Erdal Cayirci
SAP Security Research, Mougins, France
Anderson Santana de Oliveira
Search for Erdal Cayirci in:
Search for Anderson Santana de Oliveira in:
The paper is the common effort of the both authors in all sections. Both authors read and approved the final manuscript.
Correspondence to Erdal Cayirci.
Erdal Cayirci, PhD, is a professor at University of Stavanger in Norway and the CEO of DataUniTor AS. His research interests include modelling and simulation, data analytics, security, privacy, cloud computing, blockchain and mobile communications.
Anderson Santana de Oliveira, PhD, is a Researcher in the Product Security Research team, working on Data Protection, Privacy and security topics. His research interests include cloud computing, data protection, privacy, accountability, trust and risk assessment.
Cayirci, E., de Oliveira, A.S. Modelling trust and risk for cloud services. J Cloud Comp 7, 14 (2018) doi:10.1186/s13677-018-0114-7
Risk model
Service mashup
Inter-cloud
Cloud federation
Upper risk bounds in internal factor models with constrained specification sets
Jonathan Ansari &
Ludger Rüschendorf
Probability, Uncertainty and Quantitative Risk volume 5, Article number: 3 (2020)
For the class of (partially specified) internal risk factor models we establish strongly simplified supermodular ordering results in comparison to the case of general risk factor models. This allows us to derive meaningful and improved risk bounds for the joint portfolio in risk factor models with dependence information given by constrained specification sets for the copulas of the risk components and the systemic risk factor. The proof of our main comparison result is not standard. It is based on grid copula approximation of upper products of copulas and on the theory of mass transfers. An application to real market data shows considerable improvement over the standard method.
In order to reduce the standard upper risk bounds for a portfolio \(S=\sum _{i=1}^{d} X_{i}\) based on marginal information, a promising approach to including structural and dependence information is the use of partially specified risk factor models, see Bernard et al. (2017b). In this approach, the risk vector X=(Xi)1≤i≤d is described by a factor model
$$\begin{array}{@{}rcl@{}} X_{i}=f_{i}(Z,\varepsilon_{i})\,,~~~1\leq i \leq d\,, \end{array} $$
with functions fi, systemic risk factor Z, and individual risk factors εi. It is assumed that the distributions Hi of (Xi,Z), 1≤i≤d, and thus also the marginal distributions Fi of Xi and G of Z are known. The joint distribution of (εi)1≤i≤d and Z, however, is not specified in contrast to the usual independence assumptions in factor models. It has been shown in Bernard et al. (2017b) that in the partially specified risk factor model a sharp upper bound in convex order of the joint portfolio is given by the conditionally comonotonic sum, i.e., it holds
$$\begin{array}{@{}rcl@{}} S =\sum_{i=1}^{d} X_{i} \leq_{cx} S_{Z}^{c}=\sum_{i=1}^{d} F_{X_{i}|Z}^{-1}(U) \end{array} $$
for some U∼U(0,1) independent of Z. Furthermore, \(S_{Z}^{c}\) is an improvement over the comonotonic sum, i.e.,
$$\begin{array}{@{}rcl@{}} S_{Z}^{c}\leq_{cx} S^{c}=\sum_{i=1}^{d} F_{i}^{-1}(U)\,. \end{array} $$
For a law-invariant convex risk measure \(\Psi \colon L^{1}(\Omega,\mathcal {A},P) \to \mathbb {R}\) that has the Fatou property, Ψ is consistent with respect to the convex order, which yields that
$$\begin{array}{@{}rcl@{}} \Psi(S)\leq \Psi\left(S_{Z}^{c}\right)\leq \Psi(S^{c})\,, \end{array} $$
assuming generally that Xi∈L1(P) are integrable and defined on a non-atomic probability space \((\Omega,\mathcal {A},P)\,,\) see (Bäuerle and Müller (2006), Theorem 4.3).
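As an illustration of the bounds in (2)-(4), the following sketch simulates a simple Gaussian factor model and estimates the Tail-Value-at-Risk of the portfolio sum S, of the conditionally comonotonic sum \(S_{Z}^{c}\), and of the comonotonic sum Sc. The Gaussian specification, the correlations, and the TVaR level are illustrative assumptions only and not part of the general result; consistency with \(S\leq_{cx}S_{Z}^{c}\leq_{cx}S^{c}\) shows up as increasing TVaR estimates.

```python
# Illustrative check of S <=_cx S_Z^c <=_cx S^c for an assumed Gaussian factor model;
# the TVaR (expected shortfall) estimates should be increasing from left to right.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = np.array([0.3, 0.5, 0.7])      # assumed correlations of X_i with the risk factor Z
n = 200_000

Z = rng.standard_normal(n)
eps = rng.standard_normal((n, rho.size))
X = rho * Z[:, None] + np.sqrt(1 - rho**2) * eps      # X_i = f_i(Z, eps_i), each X_i ~ N(0,1)

U = rng.uniform(size=n)                                # U ~ U(0,1), independent of Z
S = X.sum(axis=1)                                      # joint portfolio S
S_cond = (rho * Z[:, None]
          + np.sqrt(1 - rho**2) * norm.ppf(U)[:, None]).sum(axis=1)  # S_Z^c via F_{X_i|Z}^{-1}(U)
S_com = rho.size * norm.ppf(U)                         # comonotonic sum S^c = sum_i F_i^{-1}(U)

def tvar(s, alpha=0.95):
    q = np.quantile(s, alpha)
    return s[s >= q].mean()

print(tvar(S), tvar(S_cond), tvar(S_com))
```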
We assume that Z is real-valued. Then, the improved upper risk bound depends only on the marginals Fi, the distribution G of Z, and on the bivariate copulas \(C^{i}=C_{X_{i},Z}\) specifying the dependence structure of (Xi,Z). An interesting question is how the worst case dependence structure and the corresponding risk bounds depend on the specifications Ci, 1≤i≤d. More generally, for some subclasses \({\mathcal {S}}^{i}\subset \mathcal {C}_{2}\) of the class of two-dimensional copulas \(\mathcal {C}_{2}\,,\) the problem arises how to obtain (sharp) risk bounds given the information that \(C^{i}\in {\mathcal {S}}^{i}\,,\) 1≤i≤d. More precisely, for univariate distribution functions Fi,G, we aim to solve the constrained maximization problem
$$\begin{array}{@{}rcl@{}} \max\left\{\sum_{i=1}^{d} X_{i}~|~X_{i}\sim F_{i}\,,~Z\sim G\,,~C_{X_{i},Z}\in {\mathcal{S}}^{i}\right\}~~~\text{w.r.t.} \leq_{cx} \end{array} $$
for some suitable dependence specification sets \({\mathcal {S}}^{i}\,.\) As an extension of (5), we also determine solutions of the constrained maximization problem
$$\begin{array}{@{}rcl@{}} \max \left\{\sum_{i=1}^{d} X_{i} ~| ~F_{i}\in {\mathcal{F}}_{i}\,, ~G\in {\mathcal{F}}_{0}\,,~ C_{X_{i},Z}\in {\mathcal{S}}^{i}\right\}~~~\text{w.r.t.} \leq_{cx} \end{array} $$
with dependence specification sets \({\mathcal {S}}^{i}\) and marginal specification sets \({\mathcal {F}}_{i}\subset {\mathcal {F}}^{1}\,,\) where \({\mathcal {F}}^{1}\) denotes the set of univariate distribution functions.
A main aim of this paper is to solve the constrained supermodular maximization problem
$$\begin{array}{@{}rcl@{}} \max\left\{(X_{1},\ldots,X_{d})~|~X_{i}\sim F_{i}\,,~Z\sim G\,,~C_{X_{i},Z}\in {\mathcal{S}}^{i}\right\}~~~\text{w.r.t.} \leq_{sm} \end{array} $$
for \(F_{i}\in {\mathcal {F}}_{i}\) and \(G\in {\mathcal {F}}_{0}\,.\) A solution of this stronger maximization problem allows more general applications. In particular, it holds that
$$\begin{array}{@{}rcl@{}} (\xi_{i})_{i}\leq_{sm} (\zeta_{i})_{i} ~~~\Longrightarrow ~~~ \sum_{i} \xi_{i} \leq_{cx} \sum_{i} \zeta_{i}\,, \end{array} $$
and thus a solution of (7) also yields a solution of (5).
Note that solutions of the maximization problems do not necessarily exist because both the convex ordering of the constrained sums and the supermodular ordering are partial orders on the underlying classes of distributions that do not form a lattice, see Müller and Scarsini (2006). In general, the existence of solutions also depends on the marginal constraints Fi and G. In this paper, we determine solutions of the maximization problems for large classes \({\mathcal {F}}_{i}\subset {\mathcal {F}}^{1}\) of marginal constraints under some specific dependence constraints \({\mathcal {S}}^{i}\,.\)
In Ansari and Rüschendorf (2016), some results on the supermodular maximization problem are given for normal and Kotz-type distributional models for the risk vector X. Some general supermodular ordering results for conditionally comonotonic random vectors are established in Ansari and Rüschendorf (2018). Therein, as a useful tool, the upper product \(\bigvee _{i=1}^{d} D^{i}\) of bivariate copulas \(D^{i}\in \mathcal {C}_{2}\) is introduced by
$$\begin{array}{@{}rcl@{}} \bigvee_{i=1}^{d} D^{i} (u):=D^{1} \vee \cdots \vee D^{d} (u):=\int_{0}^{1} \min\limits_{1\leq i \leq d}\{\partial_{2} D^{i}(u_{i},t)\} \, {\mathrm{d}} t \end{array} $$
for u=(u1,…,ud)∈[0,1]d, where ∂2 denotes the partial derivative operator w.r.t. the second variable. (Note that we superscribe copulas with upper indices in this paper which should not be confused with exponents.) If the risk factor distribution G is continuous, then \(\bigvee _{i=1}^{d} C^{i}\) is the copula of the conditionally comonotonic risk vector \(\left (F_{X_{i}|Z}^{-1}(U)\right)_{1\leq i \leq d}\) with specifications \(C_{X_{i},Z}=C^{i}\,,\) see Ansari and Rüschendorf (2018), Proposition 2.4. Thus, ordering the dependencies of conditionally comonotonic random vectors is based on ordering the corresponding upper products. In particular, a strong dependence ordering condition on the copulas \(A^{i},B^{i}\in \mathcal {C}_{2}\) (based on the sign sequence ordering) allows us to infer inequalities of the form
$$\begin{array}{@{}rcl@{}} A^{1}\vee \cdots \vee A^{d}\vee B^{1}\leq_{sm} A^{1}\vee \cdots \vee A^{d}\vee B^{2}\,, \end{array} $$
see (Ansari and Rüschendorf 2018), Theorem 3.10. In this paper, we characterize upper product inequalities of the type
$$\begin{array}{@{}rcl@{}} M^{2}\vee D^{2}\vee \cdots \vee D^{d}\leq_{sm} M^{2}\vee \underbrace{E\vee \cdots \vee E}_{(d-1)\text{-times}} \end{array} $$
for copulas \(D^{2},\ldots,D^{d},E\in \mathcal {C}_{2}\,,\) where M2 denotes the upper Fréchet copula in the bivariate case. These inequalities are based on simple lower orthant ordering conditions on the sets \({\mathcal {S}}^{i}\) under which solutions of the maximization problems (5)–(7) exist and can be determined.
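As a small numerical illustration of the upper product defined above, the following sketch approximates \(\bigvee_{i=1}^{d}D^{i}(u)\) by a Riemann sum over t with a finite-difference approximation of \(\partial_{2} D^{i}\). The example copulas (upper Fréchet, independence, Clayton) and the grid size are arbitrary choices made here for illustration.

```python
# Numerical sketch of the upper product: the integral over t is approximated by a
# Riemann sum and the partial derivative d2 D^i by a forward finite difference.
# The example copulas and grid size are illustrative choices only.
import numpy as np

def M2(u, v):                 # upper Frechet copula M^2
    return np.minimum(u, v)

def Pi(u, v):                 # independence copula
    return u * v

def clayton(u, v, theta=2.0):
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

def upper_product(copulas, u, n=2000, h=1e-6):
    """Approximate (D^1 v ... v D^d)(u) for u in [0,1]^d."""
    t = (np.arange(n) + 0.5) / n          # midpoints of [0,1], so t + h stays below 1
    d2 = np.array([(D(ui, t + h) - D(ui, t)) / h for D, ui in zip(copulas, u)])
    return float(np.mean(np.min(d2, axis=0)))

# Example: (M^2 v Pi v Clayton) evaluated at u = (0.5, 0.7, 0.3).
print(upper_product([M2, Pi, clayton], u=(0.5, 0.7, 0.3)))
```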
The problem of finding risk bounds for the Value-at-Risk (VaR) or other risk measures of a portfolio under the assumption of partial knowledge of the marginals and the dependence structure is a central problem in risk management. Bounds for the VaR (or the closely related distributional bounds), resp., for the Tail-Value-at-Risk (TVaR) based on some moment information have been studied extensively in the insurance literature by authors such as Kaas and Goovaerts (1986), Denuit et al. (1999), de Schepper and Heijnen (2010), Hürlimann (2002); Hürlimann (2008), Goovaerts et al. (2011), Bernard et al. (2017a); Bernard et al. (2018), Tian (2008), and Cornilly et al. (2018). Hürlimann (2002) derived analytical bounds for VaR and TVaR under knowledge of the mean, variance, skewness, and kurtosis.
The more recent literature has focused on the problem of finding risk bounds under the assumption that all marginal distributions are known but the dependence structure of the portfolio is either unknown or only partially known. Risk bounds with pure marginal information have been studied intensively but are often too wide to be useful in practice (see Embrechts and Puccetti (2006); Embrechts et al. (2013); Embrechts et al. (2014)). Related aggregation-robustness and model uncertainty for risk measures are also investigated in Embrechts et al. (2015). Several approaches to adding some dependence information to the marginal information have been discussed in an extensive literature (see Puccetti and Rüschendorf (2012a); Puccetti and Rüschendorf (2012b); Puccetti and Rüschendorf (2013); Bernard and Vanduffel (2015), Bernard et al. (2017a); Bernard et al. (2017b), Bignozzi et al. (2015); Rüschendorf and Witting (2017); Puccetti et al. (2017)). For some surveys on these developments, see Rueschendorf (2017a, b).
A relevant form of dependence and structural information leading to a considerable reduction of the risk bounds is given by the partially specified risk factor model introduced in Bernard et al. (2017b). In this paper, we show that for a large and relevant class of partially specified risk factor models, the internal risk factor models, simpler sufficient conditions for the supermodular ordering of the upper products, and thus for the conditionally comonotonic risk vectors, can be obtained from simple lower orthant ordering conditions on the dependence specifications. These simplified conditions allow easy applications to ordering results for risk bounds with the subset specification sets \({\mathcal {S}}^{i}\) described above. We give an illuminating application to real market data which clearly shows the potential usefulness of the comparison results. For some further details, we refer to the dissertation of Ansari (2019).
Internal risk factor models
A simplified supermodular ordering result for conditionally comonotonic random vectors can be obtained in the case that the risk factor Z is itself a component of these risk vectors. As a slight generalization, we define the notion of an internal risk factor model.
(Internal risk factor model)
A (partially specified) internal risk factor model with internal risk factor Z is a (partially specified) risk factor model (Xi)1≤i≤d, Xi=fi(Z,εi), such that Xj=gj(Z) holds for some j∈{1,…,d} and a non-decreasing function gj.
Without loss of generality, the distribution function of the internal risk factor can be chosen continuous, i.e., \(Z\sim G\in {\mathcal {F}}_{c}^{1}\,.\) Thus, the not necessarily uniquely determined copula of (Xj,Z) can be chosen as the upper Fréchet copula M2. This means that Xj and Z are comonotonic and Z can be considered as a component of the risk vector X which explains the denomination of Z as an internal risk factor.
In partially specified risk factor models, the dependence structure of the worst case conditionally comonotonic vector is represented by the upper product of the dependence specifications if \(G\in {\mathcal {F}}_{c}^{1}\,,\) i.e.,
$$\begin{array}{@{}rcl@{}} \left(F_{X_{i}|Z}^{-1}(U)\right)_{1\leq i \leq d} \sim \bigvee_{i=1}^{d} C^{i} (F_{1},\ldots,F_{d})\,. \end{array} $$
Thus, assuming w.l.o.g. that j=1, our aim is to derive supermodular ordering results for the upper product M2∨D2∨⋯∨Dd with respect to the dependence specifications Di.
For a function \(f\colon \mathbb {R}^{d}\to \mathbb {R}\,,\) let \(\triangle _{i}^{\varepsilon } f(x):=f(x+\varepsilon e_{i})-f(x)\) be the difference operator, where ε>0 and ei denotes the i-th unit vector w.r.t. the canonical base in \(\mathbb {R}^{d}\,.\) Then, f is said to be supermodular, resp., directionally convex if \(\triangle _{i}^{\varepsilon _{i}} \triangle _{j}^{\varepsilon _{j}} f\geq 0\) for all 1≤i<j≤d, resp., 1≤i≤j≤d. For d-dimensional random vectors ξ,ξ′, the supermodular ordering ξ≤smξ′, resp., the directionally convex ordering ξ≤dcxξ′ is defined via \({\mathbb {E}} f(\xi) \leq {\mathbb {E}} f(\xi ^{\prime })\) for all supermodular, resp., directionally convex functions f for which the expectations exist. The lower, resp., upper orthant ordering ξ≤loξ′, resp., ξ≤uoξ′ is defined by the pointwise comparison of the corresponding distribution, resp., survival functions, i.e., \(F_{\xi }(x)\leq F_{\xi ^{\prime }}(x)\,,\) resp., \(\overline {F}_{\xi }(x)\leq \overline {F}_{\xi ^{\prime }}(x)\) for all \(x\in \mathbb {R}^{d}\,.\) Remember that the convex ordering ζ≤cxζ′ for real-valued random variables ζ,ζ′ is defined via \({\mathbb {E}} \varphi (\zeta)\leq {\mathbb {E}} \varphi (\zeta ^{\prime })\) for all convex functions φ for which the expectation exists. Note that these orderings depend only on the distributions and, thus, are also defined for the corresponding distribution functions. For an overview of stochastic orderings, see Müller and Stoyan (2002), Shaked and Shantikumar (2007), and Rüschendorf (2013).
The following theorem is a main result of this paper. It characterizes the upper product inequality (9) concerning partially specified internal risk factor models.
(Supermodular ordering of upper products)
Let \(D^{2}\ldots,D^{d},E\in \mathcal {C}_{2}\,.\) Then, the following statements are equivalent:
Di≤loE for all 2≤i≤d.
\(M^{2}\vee D^{2}\vee \cdots \vee D^{d} \leq _{lo} M^{2}\vee \underbrace {E \vee \cdots \vee E}_{(d-1)\text {-times}}\,.\)
\(M^{2}\vee D^{2}\vee \cdots \vee D^{d} \leq _{sm} M^{2}\vee \underbrace {E \vee \cdots \vee E}_{(d-1)\text {-times}}\,.\)
The proof of the equivalence of (i) and (ii) is not difficult, whereas the equivalence w.r.t. the supermodular ordering in (iii) which we derive in Section 3 requires some effort. Its proof is based on the mass transfer theory for discrete approximations of the upper products and, further, on a conditioning argument using extensions of the standard orderings ≤lo, ≤uo, ≤sm as well as of the comonotonicity notion to the frame of signed measures.
(Proof of '(i) ⇔ (ii)':) Assume that Di≤loE. Then, for u=(u1,…,ud)∈[0,1]d, we obtain from the definition of the upper product that
$$ {}\begin{aligned} M^{2} \vee D^{2}\vee \cdots \vee D^{d} (u)&=\int_{0}^{u_{1}}\min\limits_{2\leq i \leq d}\left\{\partial_{2} D^{i}(u_{i},t)\right\} \, {\mathrm{d}} t\\ &\leq \min\limits_{2\leq i \leq d}\left\{\int_{0}^{u_{1}}\partial_{2} D^{i}(u_{i},t)\, {\mathrm{d}} t\right\}\\ &=\min\limits_{2\leq i \leq d} \left\{D^{i}(u_{i},u_{1})\right\}\\ &\leq \min\limits_{2\leq i \leq d} \{E(u_{i},u_{1})\}\\ &=\int_{0}^{u_{1}} \min\limits_{2\leq i \leq d}\left\{\partial_{2} E(u_{i},t)\right\} \, {\mathrm{d}} t\\ &=M^{2}\vee \underbrace{E \vee \cdots \vee E}_{(d-1)\text{-times}}(u)\,, \end{aligned} $$
using that \(\phantom {\dot {i}\!}\partial _{2} M^{2}(u_{1},t)={\mathbb{1}}_{\{u_{1}\geq t\}}\) almost surely.
The reverse direction follows from the closures of the upper product (see Ansari and Rüschendorf (2018), Proposition 2.4.(iv)) and of the lower orthant ordering under marginalization.
The proof of '(i) ⇔ (iii)' is given in Section 3. □
As a consequence of the above supermodular ordering theorem for upper products, we obtain improved bounds in partially specified internal risk factor models in comparison to the standard bounds based on marginal information.
(Improved bounds in internal risk factor models)
For \(F_{j}\in {\mathcal {F}}^{1}\,,\) let Xj∼Fj, 1≤j≤d, be real-valued random variables such that \(C^{i}=C_{X_{i},X_{1}}\leq _{lo}E\) for all 2≤i≤d. Then, for Y1,…,Yd with Xj=dYj for all 1≤j≤d and \(C_{Y_{i},Y_{1}}=E\) for all 2≤i≤d, it holds that
$$\begin{array}{@{}rcl@{}} (X_{1},\ldots,X_{d})\leq_{sm} \left(Y_{1},F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U)\right) \end{array} $$
for U∼U(0,1) independent of Y1. In particular, this implies
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{d} X_{i} \leq_{cx} Y_{1}+\sum_{i=2}^{d} F_{Y_{i}|Y_{1}}^{-1}(U)\,. \end{array} $$
Without loss of generality, let Xi∼U(0,1). Then, (X1,…,Xd) follows a partially specified internal risk factor model with internal risk factor Z=X1 and dependence constraints \(C_{X_{i},Z}=C^{i}\,,\) 2≤i≤d. We obtain
$$\begin{array}{@{}rcl@{}} F_{X_{1},\ldots,X_{d}}\leq_{sm} M^{2} \vee C^{2}\vee \cdots \vee C^{d}\leq_{sm} M^{2}\vee \underbrace{E \vee \cdots \vee E}_{(d-1)\text{-times}}\,, \end{array} $$
where the first inequality follows from Ansari and Rüschendorf (2018), Proposition 2.4.(i) and the second inequality holds due to Theorem 1. Thus, (12) follows from the representation in (10). The statement in (13) is a consequence of (8) and (12). □
The upper bound in (12) is comonotonic conditionally on Y1. Further, the vector \(\left (F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U)\right)\) is comonotonic because all copulas \(C_{Y_{i},Y_{1}}=E\) coincide, 2≤i≤d, cf. Ansari and Rüschendorf (2018), Proposition 2.4(v).
For d=2, (12) reduces to (X1,X2)≤sm(Y1,Y2) and the upper product in Theorem 1 simplifies to M2∨D2=(D2)T, resp., M2∨E=ET, where the copula CT is the transposed copula of \(C\in \mathcal {C}_{2}\,,\) i.e., CT(u,v)=C(v,u). In this case, the statements of Theorems 1 and 2 are known from the literature, see, e.g., (Müller (1997), Theorem 2.7). Further, for d>2, the result in Theorem 2 cannot be obtained by a simple supermodular mixing argument because, in the general case, a supermodular ordering of all conditional distributions is not possible, i.e., there exists a z outside a null set such that
$$(X_{1},\ldots,X_{d})|X_{1}=z~\not \leq_{sm} ~(Y_{1},F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U))|Y_{1}=z\,,$$
unless \(C_{X_{i},X_{1}}=E\) for all i, see (Ansari (2019), Proposition 3.18).
If (Xi,X1) are negatively lower orthant dependent for all 2≤i≤d, i.e., \(C_{X_{i},X_{1}}(u,v)\leq \Pi ^{2}(u,v)=uv\) for all (u,v)∈[0,1]2, then Theorem 2 simplifies to
$$\begin{aligned} (X_{1},\ldots,X_{d})&\leq_{sm} \left(X_{1},F_{X_{2}}^{-1}(U),\ldots,F_{X_{d}}^{-1}(U)\right)\\ \text{and}~~~~\sum_{i=1}^{d} X_{i} &\leq_{cx} X_{1}+\sum\limits_{i=2}^{d} F_{X_{i}}^{-1}(U)\,, \end{aligned} $$
where U∼U(0,1) is independent of X1.
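The simplified bound in the preceding display is easy to check by simulation. In the following sketch, the marginals are taken to be exponential purely for illustration and the negative lower orthant dependence of (Xi,X1) is simply assumed; the TVaR of the improved bound \(X_{1}+\sum_{i\geq 2}F_{X_{i}}^{-1}(U)\) is compared with that of the comonotonic bound Sc.

```python
# Simulation sketch for the display above: the improved bound couples X_1 independently
# of the comonotonic sum of the remaining marginals, so its TVaR should be smaller than
# that of the fully comonotonic bound. Exponential marginals are an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
means = np.array([1.0, 2.0, 0.5, 1.5])                 # assumed Exp marginals (given by their means)

X1 = rng.exponential(means[0], size=n)
U = rng.uniform(size=n)                                 # U ~ U(0,1), independent of X1
quantile = lambda u, m: -m * np.log1p(-u)               # quantile function of Exp(mean=m)

S_improved = X1 + sum(quantile(U, m) for m in means[1:])   # X_1 + sum_{i>=2} F_{X_i}^{-1}(U)
S_comon = sum(quantile(U, m) for m in means)               # comonotonic bound S^c

def tvar(s, alpha=0.95):
    q = np.quantile(s, alpha)
    return s[s >= q].mean()

print(tvar(S_improved), tvar(S_comon))                  # the improved bound is smaller
```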
For \(G\in {\mathcal {F}}^{1}_{c},\) the right side in (12), resp., (13) solves the constrained maximization problem (7), resp., (5) for the dependence specification sets
$$\begin{array}{@{}rcl@{}} {\mathcal{S}}^{1}&=&\{M^{2}\}\,,~~~\text{and}\\ {\mathcal{S}}^{i}&=&\{C\in \mathcal{C}^{2}\,|\,C\leq_{lo} E\}\,,~~2\leq i \leq d\,. \end{array} $$
As a consequence of Theorem 2, we also obtain improved upper bounds under some correlation information. For a bivariate random vector \((V_{1},V_{2})\sim C\in \mathcal {C}_{2}\,,\) denote Spearman's ρ, resp., Kendall's τ of (V1,V2) by ρS(V1,V2)=ρS(C), resp., τ(V1,V2)=τ(C).
For r∈[−1,1], define \(C^{r}(u,v):=\sup \{C(u,v)~|~C\in \mathcal {C}_{2}\,,~\rho _{S}(C)=r\}\,,\) (u,v)∈[0,1]2. Then, Cr is a bivariate copula and is given by
$$C^{r}(u,v)=\min\left\{u,v,\tfrac {u+v-1}2+\phi(u+v-1,1+r)\right\}\,,$$
where \(\phi (a,b)=\frac 1 6 \left [(9b+3\sqrt {9b^{2}-3a^{6}})^{1/3}+(9b-3\sqrt {9b^{2}-3a^{6}})^{1/3}\right ]\,,\) see Nelsen et al. (2001) [Theorem 4].
For t∈[−1,1], define \(D^{t}(u,v):=\sup \{C(u,v)~|~C\in \mathcal {C}_{2}\,,~\tau (C)=t\}\,,\) (u,v)∈[0,1]2. Then, Dt is a bivariate copula and given by
$$D^{t}(u,v)=\min\left\{u,v,\frac 1 2\left[(u+v-1)+\sqrt{(u+v-1)^{2}+1+t}\right]\right\}\,,$$
see Nelsen et al. (2001) [Theorem 2].
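Both bounds can be evaluated directly from the formulas quoted above. The following sketch implements \(C^{r}\) and \(D^{t}\), with r=0.3 and t=-0.2 as example values; for \(C^{r}\) it assumes \(9b^{2}-3a^{6}\geq 0\) (which holds, e.g., for r≥0), so that the cube roots in φ remain real.

```python
# Direct implementations of the pointwise bounds C^r and D^t from Nelsen et al. (2001),
# as quoted above; r and t below are example values. The sketch for C^r assumes a
# non-negative discriminant 9 b^2 - 3 a^6, so the cube roots stay real.
import numpy as np

def phi(a, b):
    disc = np.sqrt(np.maximum(9 * b**2 - 3 * a**6, 0.0))
    return (np.cbrt(9 * b + 3 * disc) + np.cbrt(9 * b - 3 * disc)) / 6.0

def C_r(u, v, r):
    a = u + v - 1.0
    return np.minimum(np.minimum(u, v), a / 2.0 + phi(a, 1.0 + r))

def D_t(u, v, t):
    a = u + v - 1.0
    return np.minimum(np.minimum(u, v), 0.5 * (a + np.sqrt(a**2 + 1.0 + t)))

print(C_r(0.6, 0.7, r=0.3), D_t(0.6, 0.7, t=-0.2))
```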
The risk bounds can be improved under correlation bounds as follows.
(Improved bounds based on correlations)
Let X1,…,Xd be real-valued random variables such that either
ρS(X1,Xi)<0.5 for all 2≤i≤d, or
τ(X1,Xi)<0 for all 2≤i≤d.
Let r:= max2≤i≤d{ρS(X1,Xi)}, resp., t:= max2≤i≤d{τ(X1,Xi)}. Then, for Y1,…,Yd with Yj=dXj, 1≤j≤d, and \(C_{Y_{i},Y_{1}}=C^{r}\,,\) resp., \(C_{Y_{i},Y_{1}}=D^{t}\) for all 2≤i≤d, it holds true that
$$ \begin{aligned} (X_{1},\ldots,X_{d})&\leq_{sm} \left(Y_{1},F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U)\right)\\ &<_{sm} \left(F_{Y_{1}}^{-1}(U),\ldots,F_{Y_{d}}^{-1}(U)\right)\,, \end{aligned} $$
where U∼U(0,1) is independent of Y1.
The result follows from Theorem 2 using the monotonicity of the distributional bound Cr in r, resp., Dt in t w.r.t. the lower orthant ordering, see Nelsen et al. (2001) [Corollary 5 (a),(b),(e), resp., Corollary 3 (a),(b),(e)]. □
For r∈(−1,0.5) and t∈(−1,0), it holds that ρS(Cr)>r and τ(Dt)>t, see Nelsen et al. (2001) [Corollary 3(h), resp., 5(h)]. Thus, for \(F_{i}\in {\mathcal {F}}^{1},\) 1≤i≤d,\(G\in {\mathcal {F}}_{c}^{1}\) and
$$\begin{aligned} {\mathcal{S}}^{1}&=\{M^{2}\}\,,~~~\text{and}\\ {\mathcal{S}}^{i}&=\{C\in \mathcal{C}_{2}~|~\rho_{S}(C)\leq r\}~~~\text{resp.}~~~{\mathcal{S}}^{i}=\{C\in \mathcal{C}_{2}~|~\tau(C)\leq t\}\,, \end{aligned} $$
2≤i≤d, only an improved upper bound in supermodular ordering for the constrained risk vectors but not a solution of maximization problem (5), resp., (7) can be achieved.
To also allow a comparison of the univariate marginal distributions, remember that a bivariate copula D is conditionally increasing (CI) if there exists a bivariate random vector (U1,U2)∼D such that U1|U2=u2 is stochastically increasing in u2 and U2|U1=u1 is stochastically increasing in u1. Equivalently, ∂2D(u,v) is almost surely decreasing in v for all u∈[0,1] and ∂1D(u,v) is almost surely decreasing in u for all v∈[0,1].
If the upper bound E in Theorem 2 is conditionally increasing, then the case of increasing marginals in convex order can also be handled.
(Improved bounds in ≤dcx-order) Let X1,…,Xd be real-valued random variables with \(C_{X_{i},X_{1}}\leq _{lo}E\) for all 2≤i≤d. Assume that E is conditionally increasing. Then, for Y1,…,Yd with Xj≤cxYj for all 1≤j≤d and \(C_{Y_{i},Y_{1}}=E\) for all 2≤i≤d, it holds that
$$\begin{array}{@{}rcl@{}} (X_{1},\ldots,X_{d})\leq_{dcx} \left(Y_{1},F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U)\right)\,, \end{array} $$
where U∼U(0,1) is independent of Y1. This implies
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{d} X_{i} \leq_{cx} Y_{1}+\sum_{i=2}^{d} F_{Y_{i}|Y_{1}}^{-1}(U)\,. \end{array} $$
Let Y1′,…,Yd′ with Xj=dYj′ for all 1≤j≤d and \(C_{Y_{i}',Y_{1}'}=E\) for all 2≤i≤d. Then, we obtain from (12) that
$$\begin{array}{@{}rcl@{}} (X_{1},\ldots,X_{d})\leq_{sm} \left(Y_{1}',F_{Y_{2}',Y_{1}'}^{-1}(V),\ldots,F_{Y_{d}',Y_{1}'}^{-1}(V)\right) \end{array} $$
for V∼U(0,1) independent of Y1′. Since both \(\left (Y_{1}',F_{Y_{2}',Y_{1}'}^{-1}(V),\ldots,F_{Y_{d}',Y_{1}'}^{-1}(V)\right)\) and \(\left (Y_{1},F_{Y_{2}|Y_{1}}^{-1}(U),\ldots,F_{Y_{d}|Y_{1}}^{-1}(U)\right)\) have the same copula \(M^{2}\vee \underbrace {E\vee \cdots \vee E}_{(d-1)\text {-times}}\,,\) which is easily shown to be CI, the statement follows from Müller and Scarsini (2001), Theorem 4.5 using Yi′≤cxYi. □
For \(F_{1},\ldots,F_{d}\in {\mathcal {F}}^{1},\) consider the sets \({\mathcal {F}}_{i}':=\{F\in {\mathcal {F}}^{1}|F\leq _{cx} F_{i}\}\,.\) Let the sets \({\mathcal {S}}^{i}\) of dependence specifications be given as in (14). If \(E=C_{Y_{i},Y_{1}}\) is CI and Yi∼Fi, then the upper bound in (16) solves maximization problem (6) with marginal specification sets \({\mathcal {F}}_{0}={\mathcal {F}}_{c}^{1}\) and \({\mathcal {F}}_{i}={\mathcal {F}}_{i}'\) for 1≤i≤d.
For a generalization of Theorem 2, we need an extension of (8) as follows.
Let \(X=\left (X_{k}^{i}\right)_{1\leq i \leq d, 1\leq k \leq m}\) and \(Y=\left (Y_{k}^{i}\right)_{1\leq i \leq d, 1\leq k \leq m}\) be (d×m)-matrices of real random variables with independent columns.
If \(\left (X_{k}^{i}\right)_{1\leq i \leq d}\leq _{sm} \left (Y_{k}^{i}\right)_{1\leq i \leq d}\) for all 1≤k≤m, then it holds true that
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(X_{k}^{i}\right)\right) \leq_{cx} \sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(Y_{k}^{i}\right)\right) \end{array} $$
for all increasing convex functions ψi and increasing functions \(f_{k}^{i}\,.\)
By straightforward calculations, it can be shown that the function \(h\colon (\mathbb {R}^{m})^{d} \to \mathbb {R}\) given by
$$\begin{array}{@{}rcl@{}} h(x)=\varphi\left(\sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} x_{k}^{i}\right)\right) \end{array} $$
is supermodular for all increasing convex functions φ. Then, the invariance under increasing transformations and the concatenation property of the supermodular order (see, e.g., Shaked and Shantikumar (2007) [Theorem 9.A.9(a),(b)]) imply that
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(X_{k}^{i}\right)\right)\leq_{icx} \sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(Y_{k}^{i}\right)\right)\,, \end{array} $$
where ≤icx denotes the increasing convex order. Since it holds for 1≤i≤d that \(\sum _{k=1}^{m} f_{k}^{i}\left (X_{k}^{i}\right) {\stackrel {\mathrm {d}}=} \sum _{k=1}^{m} f_{k}^{i}\left (Y_{k}^{i}\right)\,,\) we obtain
$$\begin{array}{@{}rcl@{}} {\mathbb{E}} \left[\sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(X_{k}^{i}\right)\right)\right]={\mathbb{E}} \left[\sum_{i=1}^{d} \psi_{i}\left(\sum_{k=1}^{m} f_{k}^{i}\left(Y_{k}^{i}\right)\right)\right]\,. \end{array} $$
Hence, the assertion follows from Shaked and Shantikumar (2007) [Theorem 4.A.35]. □
The application to improved portfolio TVaR bounds in Section 4 is based on the following generalization of Theorem 2.
(Concatenation of upper bounds)
For \(F_{i}^{k}\in {\mathcal {F}}^{1}\,,\) let \(\left (X_{1}^{k},\ldots,X_{d}^{k}\right)\,,\) 1≤k≤m, be independent random vectors with \(X_{i}^{k}\sim F_{i}^{k}\,.\) Assume that \(C_{X_{i}^{k},X_{1}^{k}}\leq _{lo} E^{k}\) for \(E^{k}\in \mathcal {C}_{2}\) for all 2≤i≤d, 1≤k≤m. Then, for independent vectors \(\left (Y_{1}^{k},\ldots,Y_{d}^{k}\right)\) with \(Y_{i}^{k}{\stackrel {\mathrm {d}}=} X_{i}^{k}\) and \(C_{Y_{i}^{k},Y_{1}^{k}}=E^{k}\) for all 2≤i≤d, 1≤k≤m, it holds that
$$\begin{array}{@{}rcl@{}} \left(X_{1}^{k},\ldots,X_{d}^{k}\right)_{1\leq k \leq m}\leq_{sm} \left(Y_{1}^{k},F_{Y_{2}^{k}|Y_{1}^{k}}^{-1}(U^{k}),\ldots,F_{Y_{d}^{k}|Y_{1}^{k}}^{-1}(U^{k})\right)_{1\leq k \leq m}\,, \end{array} $$
where U1,…,Um∼U(0,1) are i.i.d. and independent of \(Y_{1}^{k}\) for all k. This implies
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{d} \varphi_{i}\left(\sum_{k=1}^{m} X_{i}^{k}\right)\leq_{cx} \varphi_{1}\left(\sum_{k=1}^{m} Y_{1}^{k}\right) + \sum_{i=2}^{d}\varphi_{i}\left(\sum_{k=1}^{m} F_{Y_{i}^{k}|Y_{1}^{k}}^{-1}(U^{k})\right) \end{array} $$
for all increasing convex functions φ1,…,φd.
Statement (17) follows from Theorem 2 with the concatenation property of the supermodular ordering. Statement (18) is a consequence of Theorem 2 and Lemma 1. □
Under the assumptions of Theorem 4, the right hand side in (18) solves maximization problem (5) for
$$\begin{array}{@{}rcl@{}} {\mathcal{S}}^{1}&:=&\left\{M^{2}\right\}\,,\\ {\mathcal{S}}^{i}&:=&\left\{C_{\varphi_{i}\left(\sum_{k} X_{i}^{k}\right),\varphi_{1}(\sum_{k} X_{i}^{k})}~|~C_{X_{i}^{k},X_{1}^{k}}\leq_{lo} E^{k} ~\text{f.a.} i,k\right\}\,,~~~2\leq i \leq d\,, \end{array} $$
where \(F_{i}=F_{\varphi _{i}\left (\sum _{k} X_{i}^{k}\right)}\,,\) 1≤i≤d and \(G\in {\mathcal {F}}_{c}^{1}\,.\)
Proof of the supermodular ordering in Theorem 1
In this section, we prove the equivalence of (i) and (iii) in Theorem 1. This requires some preparations. We approximate the upper products by discrete upper products based on grid copula approximations. Then, we show that these discrete upper products can be supermodularly ordered using a conditioning argument and mass transfer theory from Müller (2013). This, however, requires an extension of the orderings ≤lo, ≤uo, ≤sm, and of the notion of comonotonicity to the frame of signed measures.
Extensions of ≤lo, ≤uo, and ≤sm to signed measures
For a Borel-measurable subset \(\Xi \subset \mathbb {R}^{d}\,,\) denote by \({\mathcal {B}}(\Xi)\) the Borel- σ-algebra on Ξ. Denote by \({\mathcal {M}}_{d}^{1}\) the set of probability measures on \({\mathcal {B}}(\Xi)\,.\) A signed measure on \({\mathcal {B}}(\Xi)\) is a σ-additive mapping \(\mu \colon {\mathcal {B}}(\Xi) \to \mathbb {R}\) such that μ(∅)=0. Let \({\mathbb {M}}^{0}_{d}={\mathbb {M}}_{d}^{0}(\Xi)\,,\) resp., \({\mathbb {M}}^{1}_{d}={\mathbb {M}}_{d}^{1}(\Xi)\) be the set of all signed measures μ on \({\mathcal {B}}(\Xi)\) with μ(Ξ)=0, resp., μ(Ξ)=1 and finite variation norm ∥μ∥=μ+(Ξ)+μ−(Ξ)<∞, where μ+,μ− are the unique measures obtained from the Hahn–Jordan decomposition of μ=μ+−μ−. Then, the definition of the orderings ≤lo, ≤uo, and ≤sm can be extended to signed distributions using this decomposition.
Let \(P,Q\in {\mathbb {M}}^{1}_{d}\) be signed measures. Then, define
the lower orthant orderP≤loQ if P((−∞,x])≤Q((−∞,x]) holds for all \(x\in \mathbb {R}^{d}\,,\)
the upper orthant orderP≤uoQ if P((x,∞))≤Q((x,∞)) holds for all \(x\in \mathbb {R}^{d}\,,\)
the supermodular orderP≤smQ if \(\int f(x)\, {\mathrm {d}} P(x)\leq \int f(x) \, {\mathrm {d}} Q(x)\) holds for all supermodular integrable functions f.
We generalize the concept of comonotonicity to signed measures as follows.
Quasi-comonotonicity
We say that a probability distribution Q, resp., a distribution function F is comonotonic if there exists a comonotonic random vector ξ such that ξ∼Q, resp., Fξ=F.
For a signed measure \(P\in {\mathbb {M}}_{d}^{1}\,,\) we define the associated measure generating function F=FP by F(x)=P((−∞,x]) and its univariate marginal measure generating functions Fi by \(F_{i}(x_{i})=P(\mathbb {R}\times \cdots \times \mathbb {R}\times (-\infty,x_{i}]\times \mathbb {R}\times \cdots \times \mathbb {R})\) for \(x=(x_{1},\ldots,x_{d})\in \mathbb {R}^{d}\) and 1≤i≤d. We define the notion of quasi-comonotonicity as follows.
(Quasi-comonotonicity) We denote P, resp., F as quasi-comonotonic if \(F(x)=\min \limits _{1\leq i \leq d}\left \{F_{i}(x_{i})\right \}\) for all \(x=(x_{1},\ldots,x_{d})\in \mathbb {R}^{d}.\)
Obviously, if \(P\in {\mathcal {M}}^{1}_{d}\,,\) then the quasi-comonotonicity and comonotonicity of P are equivalent.
The following lemma characterizes the lower orthant ordering of (quasi-) comonotonic distributions in terms of the upper orthant order.
Let \(P\in {\mathbb {M}}_{d}^{1}\) be a signed distribution with univariate marginal distribution functions Fi, 1≤i≤d. Let \(Q\in {\mathcal {M}}_{d}^{1}\) be a probability distribution. Assume that Fi(t)≤1 for all \(t\in \mathbb {R}\,,\) 1≤i≤d. If P is quasi-comonotonic and Q is comonotonic, then it holds that
$$\begin{array}{@{}rcl@{}} P\leq_{lo} Q ~~~\Longleftrightarrow ~~~ P\geq_{uo} Q\,. \end{array} $$
Let \(A_{i}=\{(y_{1},\ldots,y_{d})\in \mathbb {R}^{d}\,|\, y_{i}\in (x_{i},\infty ]\},\phantom {\dot {i}\!}\) 1≤i≤d, and let \(a_{j}:=F_{i_{j}}(x_{i_{j}})\phantom {\dot {i}\!}\) for i1,…,id∈{1,…,d} such that a1≥…≥ad. Then, the survival function \(\overline {F}\) corresponding to F is calculated by
$$\begin{array}{@{}rcl@{}} \overline{F}(x)&=&P\left(\bigcap_{i=1}^{d} A_{i}\right)=1- P \left(\bigcup_{i=1}^{d} A_{i}^{c}\right)\\ &=&1-\sum_{\emptyset\ne J\subset\{1,\ldots,d\}} (-1)^{|J|+1}P\left(\bigcap_{j\in J} A_{j}^{c}\right)\\ &=& 1- \sum_{\emptyset\ne J\subset\{1,\ldots,d\}} (-1)^{|J|+1} \min\limits_{j\in J}\{F_{j}(x_{j})\}\\ &=& 1- \sum_{k=1}^{d} \sum_{j=1}^{k} (-1)^{k-j+2} \binom{k-1}{k-j}\, a_{k}\\ &=& 1-a_{1} =1-\max_{1\leq i\leq d}\{F_{i}(x_{i})\}\,, \end{array} $$
where the fourth equality holds true because P is quasi-comonotonic, Fi≤1 and Fi(∞)=1 for all i. The fifth equality follows since there are \(\binom {k-1}{k-j}\) subsets of {1,…,k} with k−j+1 elements such that k is the maximum element. The sixth equality holds due to the symmetry of the binomial coefficient.
Let G be the distribution function corresponding to Q with univariate margins Gi. Then, it holds analogously that \(\overline {G}(x)=Q((x,\infty))=1-\max _{i}\left \{G_{i}(x_{i})\right \}\) for \(x=(x_{1},\ldots,x_{d})\in \mathbb {R}^{d}\,.\) We obtain that
$$\begin{aligned} P&\leq_{{lo}} Q\\ \Longleftrightarrow\;\;\;\;\;\;\; \min\limits_{i} \left\{F_{i}(x_{i})\right\}\quad &\leq \min\limits_{i}\left\{G_{i}(x_{i})\right\} ~~~\forall ~(x_{1},\ldots,x_{d})\in \mathbb{R}^{d}\\ \Longleftrightarrow\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; F_{i}(t)&\leq G_{i}(t) ~~~\forall ~t\in \mathbb{R}~\forall ~1\leq i \leq d\\ \Longleftrightarrow\;\;\;\; 1-\max_{i} \left\{F_{i}(x_{i})\right\}&\geq 1-\max_{i}\left\{G_{i}(x_{i})\right\} ~~~\forall~(x_{1},\ldots,x_{d})\in \mathbb{R}^{d}\\ \Longleftrightarrow\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; P&\geq_{uo} Q\,, \end{aligned} $$
where we use for the second equivalence that Fi,Gi≤1 and Fi(∞)=Gi(∞)=1 for all 1≤i≤d. The third equivalence holds true because Gi≥0 and Gi(−∞)=0 for all i. □
Grid copula approximation
In this subsection, we consider the approximation of the upper product by grid copulas. In the proof of the supermodular ordering in Theorem 1, we make essential use of the property that this approximation is done by distributions with finite support.
For \(n\in {\mathbb {N}}\) and d≥1, denote by
$$\begin{aligned} {\mathbb{G}}_{n}^{d}:&= \left\{\left(\tfrac {i_{1}} n,\ldots,\tfrac {i_{d}} n\right)|i_{k}\in \{1,\ldots,n\},k\in \{1,\ldots,d\}\right\}\,, \text{ resp.,}\\ {\mathbb{G}}_{n,0}^{d}:&= \left\{\left(\tfrac {i_{1}} n,\ldots,\tfrac {i_{d}} n\right)|i_{k}\in \{0,\ldots,n\},k\in \{1,\ldots,d\}\right\} \end{aligned} $$
the (extended) uniform unit n-grid of dimension d with edge length \(\tfrac 1 n\,.\)
The following notion of an n-grid d-copula is related to a d-subcopula with domain \({\mathbb {G}}_{n}^{d}\,,\) see, e.g., Nelsen (2006), Definition 2.10.5. For our purpose, we also need a signed version. Denote by ⌊·⌋ the componentwise floor function.
(Grid copula) For \(d\in {\mathbb {N}}\,,\) a (signed) n-grid d-copula (briefly grid copula) \(D\colon [0,1]^{d} \to \mathbb {R}\) is the (signed) measure generating function of a (signed) measure \(\mu \in {\mathbb {M}}_{d}^{1}({\mathbb {G}}_{n,0}^{d})\) with uniform univariate margins, i.e., it holds that
\(D(u)=D\left (\frac {\lfloor n u \rfloor }{n}\right)=\mu \left (\left [0,\frac {\lfloor n u \rfloor }{n}\right ]\right)\) for all u∈[0,1]d, and
for all i=1,…,d, it holds that \(D(u)=\tfrac k n\) for all k=0,…,n, if \(u_{i}=\tfrac k n\) and uj=1 for all j≠i.
Denote by \(\mathcal {C}_{d,n}\,,\) resp., \(\mathcal {C}_{d,n}^{s}\) the set of all (signed) n-grid d-copulas.
An \(\tfrac 1 n\)-scaled doubly stochastic matrix or, if the dimension of the matrix is clear, a mass matrix is defined as an n×n-matrix with non-negative entries and row, resp., column sums equal to \(\tfrac 1 n\,.\) By a signed \(\tfrac 1 n\)-scaled doubly stochastic matrix or also signed mass matrix, we mean an \(\tfrac 1 n\)-scaled doubly stochastic matrix where negative entries are also allowed.
Obviously, there is a one-to-one correspondence between the set of (signed) n-grid 2-copulas and the set of (signed) \(\tfrac 1 n\)-scaled doubly stochastic matrices.
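For illustration only (not part of the original text; plain NumPy is assumed and the independence copula is used as an arbitrary input), the following sketch builds the mass matrix of the checkerboard discretization of a bivariate copula and verifies that it is \(\tfrac 1 n\)-scaled doubly stochastic. Note that rows are indexed here in increasing order of the first argument; the mass-matrix convention introduced below reverses the row order.

```python
# Sketch: checkerboard (n-grid) mass matrix of a bivariate copula D and the
# doubly stochastic property (all row and column sums equal 1/n).
import numpy as np

def grid_mass_matrix(D, n):
    """Cell masses of the canonical n-grid discretization of D."""
    g = np.arange(n + 1) / n
    Dg = np.array([[D(u, v) for v in g] for u in g])   # D on the extended grid
    return Dg[1:, 1:] - Dg[:-1, 1:] - Dg[1:, :-1] + Dg[:-1, :-1]

indep = lambda u, v: u * v                             # independence copula
m = grid_mass_matrix(indep, n=4)
print(np.allclose(m.sum(axis=0), 1 / 4), np.allclose(m.sum(axis=1), 1 / 4))
```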
For a bivariate (signed) n-grid copula \(E\in \mathcal {C}_{2,n}\) (\(\in \mathcal {C}_{2,n}^{s}\)), the associated (signed) probability mass function e is defined by
$$e(u,v):=\Delta_{n}^{1} \Delta_{n}^{2} E(u,v)\,,~~~(u,v)\in {\mathbb{G}}_{n}^{2}\,,$$
where \(\Delta _{n}^{i}\) (distinct from \(\triangle _{i}^{\varepsilon _{i}}\)) denotes the difference operator of length \(\tfrac 1 n\) with respect to the i-th variable, i.e.,
$$\Delta_{n}^{i} g(u):= g(u)-g((u-\tfrac 1 n e_{i})\vee 0)$$
for \(u\in {\mathbb {G}}_{n,0}^{d}\) and the i-th unit vector ei. Further, define its associated (signed) mass matrix (ekl)1≤k,l≤n by
$$\begin{array}{@{}rcl@{}} e_{kl}=e\left(1-\tfrac {k-1} n,\tfrac l n\right)\,. \end{array} $$
For every d-copula \(D\in \mathcal {C}_{d}\,,\) denote by \({\mathbb {G}}_{n}(D)\) its canonical n-grid d-copula given by
$$\begin{array}{@{}rcl@{}} {\mathbb{G}}_{n}(D)(u):=D\left(\tfrac{\lfloor nu \rfloor}{n}\right)\,,~~~u\in [0,1]^{d}\,. \end{array} $$
Define the upper product \(\bigvee \colon (\mathcal {C}_{2,n})^{d} \to \mathcal {C}_{d,n}\) for grid copulas \(D_{n}^{1},\ldots,D_{n}^{d}\in \mathcal {C}_{2,n}\) by
$$ \begin{aligned} \bigvee_{i=1}^{d} D_{n}^{i} (u_{1},\ldots,u_{d}):&=\sum_{k=1}^{n} \min\limits_{1\leq i \leq d}\left\{\Delta_{n}^{2} D_{n}^{i}\left(u_{i},\tfrac k n\right)\right\}\\ &=\frac 1 n\sum_{k=1}^{n} \min\limits_{1\leq i \leq d}\left\{n\Delta_{n}^{2} D_{n}^{i}\left(u_{i},\tfrac k n\right)\right\} \end{aligned} $$
for (u1,…,ud)∈[0,1]d. A version for signed grid copulas is defined analogously.
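The upper product of bivariate grid copulas is directly computable from the column increments \(\Delta_n^2\). The following sketch (again not part of the original text; it assumes NumPy and uses the row convention of the mass matrices introduced above, i.e., row 1 corresponds to u=1) evaluates the upper product on the grid and checks, for the comonotonicity copula and the independence copula, that \(M^{2}\vee \Pi\) reproduces \(\Pi\) on the grid, consistently with the transposition property of the upper product used later in the proof of Theorem 1:

```python
# Sketch: upper product of bivariate n-grid copulas evaluated on the grid from
# their mass matrices (row 1 of a mass matrix corresponds to u = 1).
import numpy as np

def delta2(mass, j, k):
    """Delta_n^2 D(j/n, k/n): mass of [0, j/n] x ((k-1)/n, k/n]."""
    n = mass.shape[0]
    return mass[n - j:, k - 1].sum()            # rows with u-coordinate <= j/n

def upper_product(masses, idx):
    """Upper product at the grid point (idx[0]/n, ..., idx[-1]/n)."""
    n = masses[0].shape[0]
    return sum(min(delta2(m, j, k) for m, j in zip(masses, idx))
               for k in range(1, n + 1))

n = 4
pi = np.full((n, n), 1 / n**2)                  # mass matrix of the independence copula
M2 = np.fliplr(np.eye(n)) / n                   # mass matrix of the comonotonicity copula

vals = [[upper_product([M2, pi], (j, l)) for l in range(1, n + 1)]
        for j in range(1, n + 1)]
target = np.outer(np.arange(1, n + 1), np.arange(1, n + 1)) / n**2
print(np.allclose(vals, target))                # M^2 v Pi coincides with Pi on the grid
```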
The following result gives a sufficient supermodular ordering criterion for the upper product based on the approximations by grid copulas, see Ansari and Rüschendorf (2018), Proposition 3.7.
Let \(D^{i},E^{i}\in \mathcal {C}_{2}\) be bivariate copulas for 1≤i≤d. Then, it holds true that
$$\begin{array}{@{}rcl@{}} \bigvee_{i=1}^{d} {\mathbb{G}}_{n}(D^{i}) \leq_{sm} \bigvee_{i=1}^{d} {\mathbb{G}}_{n}(E^{i}) ~~~\forall n\in {\mathbb{N}}~~~\Longrightarrow~~~\bigvee_{i=1}^{d} D^{i} \leq_{sm} \bigvee_{i=1}^{d} E^{i}\,. \end{array} $$
We make use of the above ordering criterion because the approximation is done by distributions with finite support. But the supermodular ordering of distributions with finite support enjoys a dual characterization by mass transfers as follows.
Mass transfer theory
This section and the notation herein is based on the mass transfer theory as developed in Müller (2013).
For signed measures \(P,Q\in {\mathbb {M}}^{1}_{d}\) with finite support, denote the signed measure Q−P a transfer from P to Q. To indicate this transfer, write
$$\begin{array}{@{}rcl@{}} \sum_{i=1}^{n} \alpha_{i} \delta_{x_{i}} \to \sum_{i=1}^{m} \beta_{i} \delta_{y_{i}}\,, \end{array} $$
where \((Q-P)^{-}=\sum _{i=1}^{n} \alpha _{i} \delta _{x_{i}}\) and \((Q-P)^{+}=\sum _{i=1}^{m} \beta _{i} \delta _{y_{i}}\) for αi,βj>0 and \(x_{i},y_{j}\in \mathbb {R}^{d}\,,\) 1≤i≤n, 1≤j≤m. A reverse transfer from P to Q is a transfer from Q to P.
Since \(Q=P+(Q-P)=P-\sum _{i=1}^{n} \alpha _{i} \delta _{x_{i}} + \sum _{i=1}^{m} \beta _{i} \delta _{y_{i}}\,,\) the mapping in (21) illustrates the mass that is transferred from P to Q. By definition, it holds that \(Q-P\in {\mathbb {M}}_{d}^{0}\,.\) Thus, mass is only shifted and, in total, neither created nor lost.
For a set \(M\subset {\mathbb {M}}^{0}_{d}\) of transfers, one is interested in the class \({\mathcal {F}}\) of continuous functions \(f\colon S\to \mathbb {R}\) such that
$$\begin{array}{@{}rcl@{}} \sum_{j=1}^{m} \beta_{j} f(y_{j}) \geq \sum_{i=1}^{n} \alpha_{i} f(x_{i}) \end{array} $$
whenever μ∈M, where \(\mu :=\sum _{j=1}^{m} \beta _{j} \delta _{y_{j}}-\sum _{i=1}^{n} \alpha _{i} \delta _{x_{i}}\,,\)αi,βj>0. Then, \({\mathcal {F}}\) is said to be induced from M.
We focus on the set of Δ-monotone, resp., Δ-antitone, resp., supermodular transfers. These sets induce the classes \({\mathcal {F}}_{\Delta }\) of Δ-monotone, resp., \({\mathcal {F}}_{\Delta }^{-}\) of Δ-antitone, resp., \({\mathcal {F}}_{sm}\) of supermodular functions on S.
Let η>0. Let x≤y with strict inequality xi<yi for k indices i1,…,ik for some k∈{1,…,d}. Denote by \({\mathcal {V}}_{o}(x,y)\,,\) resp., \({\mathcal {V}}_{e}(x,y)\) the set of all vertices z of the k-dimensional hyperbox [x,y] such that the number of components with zi=xi, i∈{i1,…,ik} is odd, resp., even.
A transfer indicated by
$$\begin{array}{@{}rcl@{}} \eta\left(\sum_{z\in {\mathcal{V}}_{o}(x,y)} \delta_{z} \right)\to \eta\left(\sum_{z\in {\mathcal{V}}_{e}(x,y)} \delta_{z} \right) \end{array} $$
is called a (k-dimensional) Δ-monotone transfer.
A transfer indicated by
$$\begin{aligned} \eta \left(\sum_{z\in {\mathcal{V}}_{o}(x,y)} \delta_{z} \right) &\to \eta\left(\sum_{z\in {\mathcal{V}}_{e}(x,y)} \delta_{z}\right)~~~\text{if }k \text{ is even, and}\\ \eta \left(\sum_{z\in {\mathcal{V}}_{e}(x,y)} \delta_{z} \right) &\to \eta\left(\sum_{z\in {\mathcal{V}}_{o}(x,y)} \delta_{z}\right)~~~\text{if }k \text{ is odd} \end{aligned} $$
is called a (k-dimensional) Δ-antitone transfer.
For \(v,w\in \mathbb {R}^{d}\,,\) a transfer indicated by
$$\eta(\delta_{v} + \delta_{w}) \to \eta (\delta_{v\wedge w}+\delta_{v\vee w})$$
is called a supermodular transfer, where ∧, resp., ∨ denotes the component-wise minimum, resp., maximum.
The characterizations of the orderings ≤uo, ≤lo, resp., ≤sm by mass transfers due to (Müller (2013), Theorems 2.5.7 and 2.5.4) also hold in the case of signed measures because the proof makes only a statement on transfers, i.e., on the difference of measures.
For signed measures \(P,Q\in {\mathbb {M}}^{1}_{d}\) with finite support holds:
P≤uoQ if and only if Q can be obtained from P by a finite number of Δ-monotone transfers.
P≤loQ if and only if Q can be obtained from P by a finite number of Δ-antitone transfers.
P≤smQ if and only if Q can be obtained from P by a finite number of supermodular transfers.
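To make the last equivalence tangible, the following small sketch (illustrative only; the points, weights, and the test function are chosen arbitrarily) applies a single supermodular transfer to a two-point distribution and verifies that the expectation of the supermodular function f(x,y)=xy increases:

```python
# Sketch: one supermodular transfer and its effect on E[f] for f(x, y) = x*y.
P = {(1.0, 0.0): 0.5, (0.0, 1.0): 0.5}          # initial two-point distribution

def supermodular_transfer(dist, v, w, eta):
    """Move mass eta from v and w onto the componentwise minimum and maximum."""
    out = dict(dist)
    lo = tuple(min(a, b) for a, b in zip(v, w))
    hi = tuple(max(a, b) for a, b in zip(v, w))
    for point, sign in [(v, -eta), (w, -eta), (lo, +eta), (hi, +eta)]:
        out[point] = out.get(point, 0.0) + sign
    return out

Q = supermodular_transfer(P, (1.0, 0.0), (0.0, 1.0), 0.5)
Ef = lambda d: sum(mass * p[0] * p[1] for p, mass in d.items())
print(Ef(P), Ef(Q))                              # 0.0 -> 0.5, so P <=_sm Q here
# The univariate margins are unchanged, as supermodular transfers always preserve them.
```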
From Definition 5, we obtain that exactly the one-dimensional Δ-monotone, resp., Δ-antitone transfers affect the univariate marginal distributions. Hence, for measures \(P,Q\in {\mathcal {M}}^{1}_{d}(\Xi)\) with equal univariate distributions, i.e., \(\phantom {\dot {i}\!}P^{\pi _{i}}=Q^{\pi _{i}}\,,\)πi the i-th projection, for all 1≤i≤d, holds that P≤uoQ, resp., P≤loQ if and only if Q can be obtained from P by a finite number of at least 2-dimensional Δ-monotone, resp., Δ-antitone transfers. But note that also the one-dimensional Δ-monotone, resp., Δ-antitone transfers can affect the copula, resp., dependence structure.
Now, we are able to give the proof of the main ordering result of this paper.
Proof of '(i) ⇔ (iii)' in Theorem 1
Assume that (iii) holds. Then, the closures of the upper product and the supermodular ordering under marginalization imply \((D^{i})^{T}=M^{2}\vee D^{i}\leq _{sm} M^{2}\vee E=E^{T}\,.\) But this means that \(D^{i}\leq _{lo} E\,.\)
For the reverse direction, assume that Di≤loE for all 2≤i≤d. Consider the discretized grid copulas \(D_{n}^{i}:={\mathbb {G}}_{n}(D^{i}), M_{n}^{2}:={\mathbb {G}}_{n}(M^{2}),\) and \(E_{n}:={\mathbb {G}}_{n}(E)\,,\) 2≤i≤d, and denote by \(d_{n}^{i}\,,\) resp., \(e_{n}\) the associated mass matrices of \(D_{n}^{i}\,,\) resp., \(E_{n}\,.\) We prove for the upper products of grid copulas, defined in (20), that
$$\begin{array}{@{}rcl@{}} C_{n}:=M_{n}^{2} \vee D_{n}^{2} \vee \cdots \vee D_{n}^{d} ~\leq_{sm}~ M_{n}^{2} \vee \underbrace{E_{n} \vee\cdots \vee E_{n}}_{(d-1)\text{-times}}=:B_{n}\,, \end{array} $$
showing that there exists a finite number of supermodular transfers that transfer Cn to Bn. This yields (iii) applying Propositions 2 (iii) and 1.
To show (22), consider for 2≤i≤d the signed grid copulas \((D_{n,k}^{i})_{1\leq k \leq n}\) on \({\mathbb {G}}_{n}^{2}\) defined through the signed mass matrices \((d_{n,k}^{i})_{1\leq k \leq n}\) given by
$$\begin{array}{@{}rcl@{}} {} d_{n,1}^{i}:&=&d_{n}^{i}\,,~~~\text{and}\\ {}d_{n,k+1}^{i}(u_{i},t)&=&\!\!\left\{\begin{array}{ll} e_{n}(u_{i},t) &\text{ if } t=\tfrac k n\,,\\ d_{n,k}^{i}(u_{i},t)+d_{n,k}^{i}\left(u_{i},t-\tfrac 1 n\right)-e_{n}\left(u_{i},t-\tfrac 1 n\right) & \text{ if }t=\tfrac {k+1}n\,,\\ d_{n,k}^{i}(u_{i},t) & \text{ if }t\ne \tfrac k n,\tfrac {k+1} n \end{array}\right. \end{array} $$
for 1≤k≤n−1.
For all 2≤i≤d and for all \(n\in {\mathbb {N}}\,,\) the sequence \((d_{n,k}^{i})_{1\leq k \leq n}\) of signed mass matrices adjusts the signed mass matrix \(d_{n}^{i}\) column by column to the signed mass matrix en. It holds that \(d_{n,n}^{i}=e_{n}\) for all i and n.
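The column-by-column adjustment (23) is easy to carry out mechanically. The following sketch (not part of the proof; it assumes NumPy, and the matrices are the ones from the worked example given further below) produces the sequence of signed mass matrices and confirms that the final matrix equals \(e_{n}\) and that all row sums stay equal to \(\tfrac 1 n\):

```python
# Sketch: the column-by-column adjustment of a mass matrix d to a target mass
# matrix e, as in (23); columns are indexed by t = l/n.
import numpy as np

def adjust_columns(d, e):
    """Yield d_{n,1}, d_{n,2}, ..., d_{n,n}; the last element equals e."""
    n = d.shape[1]
    cur = d.astype(float).copy()
    yield cur.copy()
    for k in range(n - 1):
        nxt = cur.copy()
        nxt[:, k + 1] = cur[:, k + 1] + cur[:, k] - e[:, k]   # balance column k+1
        nxt[:, k] = e[:, k]                                   # adjust column k to e
        yield nxt
        cur = nxt

d4 = np.array([[3, 1, 0, 0], [0, 2, 1, 1], [0, 1, 2, 1], [1, 0, 1, 2]]) / 16
e4 = np.array([[0, 1, 1, 2], [1, 1, 2, 0], [1, 0, 1, 2], [2, 2, 0, 0]]) / 16
steps = list(adjust_columns(d4, e4))
print(np.allclose(steps[-1], e4))                             # d_{4,4} = e_4
print(all(np.allclose(s.sum(axis=1), 1 / 4) for s in steps))  # row sums stay 1/n
```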
For \(C_{n,k}:=M^{2}_{n}\vee D_{n,k}^{2} \vee \cdots \vee D_{n,k}^{d}\,,\) 1≤k≤n, we show that
$$\begin{array}{@{}rcl@{}} C_{n,k}\leq_{sm} C_{n,k+1} \end{array} $$
for all 1≤k≤n−1. Then, transitivity of the supermodular ordering implies (22) because Cn,1=Cn and Cn,n=Bn.
We observe that Di≤loE yields \(D_{n}^{i}\leq _{lo} E_{n}\) and also
$$\begin{array}{@{}rcl@{}} D_{n,k}^{i}\leq_{lo} D_{n,k+1}^{i}\leq_{lo} E_{n} \end{array} $$
for all 1≤k≤n−1. Further, we observe that Cn,k and Cn,k+1 are (signed) grid copulas with uniform univariate marginals, i.e.,
$$\begin{array}{@{}rcl@{}} C_{n,k}(1,\ldots,1,u_{j},1,\ldots,1)=u_{j}=C_{n,k+1}(1,\ldots,1,u_{j},1,\ldots,1) \end{array} $$
for all \(u_{j}\in {\mathbb {G}}_{n,0}^{1}\) and 1≤j≤d. This holds because \(\Delta _{n}^{2} D_{n,k}^{i}(u_{i},t)\leq \tfrac 1 n\) for all \((u_{i},t)\in {\mathbb {G}}_{n,0}^{2}\) and for all i and k, even if \(d_{n,k}^{i}\) can get negative for \(t=\tfrac k n\) and some ui<1.
By construction of \((D_{n,k}^{i})_{1\leq k \leq n}\,,\) it holds that
$$\begin{array}{@{}rcl@{}} \Delta_{n}^{2} D_{n,k+1}^{i}(u_{i},t)=\Delta_{n}^{2} E_{n}(u_{i},t)~~~\text{for }t\leq \tfrac k n\,, \end{array} $$
for all 1≤k≤n−1 and for all \(u_{i}\in {\mathbb {G}}_{n,0}^{1}\,,\) 2≤i≤d.
To show (24), fix column k∈{1,…,n−1} of the signed mass matrices. Conditioning under \(u_{1}\in {\mathbb {G}}_{n}^{1}\,,\) consider the conditional (signed) measure generating functions
$$\begin{array}{@{}rcl@{}} C_{n,l}^{u_{1}}:= n \Delta_{n}^{1} C_{n,l}(u_{1},\cdot)=n\left(C_{n,l}(u_{1},\cdot)-C_{n,l}\left(u_{1}-\tfrac 1 n,\cdot\right)\right) \end{array} $$
for l=1,…,n, where
$$\begin{aligned} C_{n,l}(u)&=\sum\limits_{z\in {\mathbb{G}}_{n}^{1}} \min\left\{\tfrac 1 n{\mathbbm{1}}_{\{u_{1}\geq z\}},\min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D_{n,l}^{i}(u_{i},z)\}\right\}\\ &=\sum\limits_{z\leq u_{1}\atop z\in {\mathbb{G}}_{n}^{1}} \min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D^{i}_{n,l}(u_{i},z)\} \end{aligned} $$
is the upper product of the (signed) grid copulas \(M_{n},D_{n,l}^{2},\ldots,D_{n,l}^{d}\) for \(u\in {\mathbb {G}}_{n,0}^{d}\,.\) Hence, it holds for the conditional (signed) measure generating function that
$$\begin{array}{@{}rcl@{}} C_{n,l}^{u_{1}}(u_{-1})&=n \min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D_{n,l}^{i}(u_{i},u_{1})\}\,, \end{array} $$
and for its corresponding (signed) survival function that
$$\begin{array}{@{}rcl@{}} \overline{C_{n,l}^{u_{1}}}(u_{-1})&=1-n\max\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D_{n,l}^{i}(u_{i},u_{1})\}\,, \end{array} $$
where \(u_{-1}=(u_{2},\ldots,u_{d})\in {\mathbb {G}}_{n,0}^{d-1}\,.\)
By the construction of \((D_{n,l}^{i})_{1\leq l \leq n}\,,\) it holds that
$$\begin{array}{@{}rcl@{}} C_{n,k}^{x}=C_{n,k+1}^{x}~~~\text{for all }x\in {\mathbb{G}}_{n}^{1}\setminus \{\tfrac k n,\tfrac {k+1} n\}\,. \end{array} $$
We show that
$$\begin{array}{@{}rcl@{}} &C_{n,k}^{x} \geq_{uo} C_{n,k+1}^{x}&\text{for }x=\tfrac k n, \end{array} $$
$$\begin{array}{@{}rcl@{}}& C_{n,k}^{x} \leq_{uo} C_{n,k+1}^{x}&\text{for }x=\tfrac {k+1} n\,, ~~~\text{and} \end{array} $$
$$\begin{array}{@{}rcl@{}} &P^{U_{1}}(\cdot)\times P_{C_{n,k}^{\cdot}} \!\!\leq_{sm} P^{U_{1}}(\cdot)\times P_{C_{n,k+1}^{\cdot}} &\text{for }U_{1}\sim U(\{\tfrac k n,\tfrac{k+1} n\})\,, \end{array} $$
where \(P^{U_{1}}(\cdot)\times P_{C_{n,k}^{\cdot }}\,,\) resp., \(P^{U_{1}}(\cdot)\times P_{C_{n,k+1}^{\cdot }}\) is the conditional measure generating function of \(P_{C_{n,k}}\,,\) resp., \(P_{C_{n,k+1}}\) given the set \(\{\tfrac k n,\tfrac {k+1} n\}\times {\mathbb {G}}_{n,0}^{d-1}\,.\) Then, (28) and (31) imply (24) using (a slightly generalized version of) the closure of the supermodular ordering under mixtures given by Shaked and Shantikumar (2007) [Theorem 9.A.9.(d)].
To show (29), let us fix \(u_{1}=\tfrac k n\,.\) Then, we calculate
$$\begin{array}{@{}rcl@{}} &\min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D^{i}_{n,k}\left(u_{i},\tfrac k n\right)\}+\sum\limits_{m=1}^{k-1} \min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} E_{n}\left(u_{i},\tfrac m n\right)\}\\ &= \sum\limits_{m=1}^{k} \min\limits_{2\leq i \leq d} \{\Delta_{n}^{2} D^{i}_{n,k}(u_{i},\tfrac m n)\}\\ &\leq \min\limits_{2\leq i \leq d}\{D^{i}_{n,k}(u_{i},u_{1})\}\\ &\leq \min\limits_{2\leq i \leq d}\{E_{n}(u_{i},u_{1})\} \end{array} $$
$$\begin{array}{@{}rcl@{}} &= E_{n}(\min\limits_{2\leq i \leq d}\{u_{i}\},u_{1})\\ &=\sum\limits_{m=1}^{k} \Delta_{n}^{2} E_{n}(\min\limits_{2\leq i \leq d}\{u_{i}\},\tfrac m n) \end{array} $$
$$\begin{array}{@{}rcl@{}}&= \sum\limits_{m=1}^{k} \min\limits_{2\leq i \leq d} \{\Delta_{n}^{2} E_{n}\left(u_{i},\tfrac m n\right)\}\,, \end{array} $$
where the first equality follows from (27), the first inequality holds since a sum of minima is dominated by the minimum of the sums, and the second inequality is due to (25). Equality (33) holds because En is a grid copula and does not depend on i, the third equality holds by definition of \(\Delta _{n}^{2}\,,\) and the last equality is true because En is a grid copula, thus 2-increasing, and hence \(\Delta _{n}^{2} E_{n}(\cdot,t)\) is increasing for all \(t\in {\mathbb {G}}_{n}^{1}\,.\)
Then, from (32) and (34) it follows for the k-th columns of the matrices that
$$\min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D^{i}_{n,k}(u_{i},u_{1})\}\leq \min\limits_{2\leq i \leq d} \{\Delta_{n}^{2} E_{n}(u_{i},u_{1})\}=\min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D_{n,k+1}^{i}(u_{i},u_{1})\}\,,$$
where the equality holds true due to (27). This means that
$$\begin{array}{@{}rcl@{}} C_{n,k}^{u_{1}} \leq_{lo} C_{n,k+1}^{u_{1}} \end{array} $$
holds. Further, \(C_{n,k}^{u_{1}}\) corresponds to a quasi-comonotonic signed measure in \({\mathbb {M}}_{d}^{1}\) with univariate marginals given by \(n\Delta _{n}^{2} D_{n,k}^{i}(\cdot,u_{1})\leq 1\,,\) and \(C_{n,k+1}^{u_{1}}\) corresponds to a comonotonic probability distribution. Thus, we obtain from Lemma 2 that (29) holds.
Next, we show (30). Due to (29) and Proposition 2, there exists a finite number of reverse Δ-monotone transfers that transfer \(C_{n,k}^{u_{1}}\) to \(C_{n,k+1}^{u_{1}}\,,\) i.e., there exist \(m\in {\mathbb {N}}\) and a finite sequence \(\left (P_{l}^{u_{1}}\right)_{1\leq l \leq m}\) of signed measures on \({\mathbb {G}}_{n}^{d-1}\) such that
$$\begin{aligned} P_{1}^{u_{1}}&= P_{C_{n,k}^{u_{1}}}\,,~~~P_{m}^{u_{1}}=P_{C_{n,k+1}^{u_{1}}}\,,~~~\text{and}\\ \mu_{l}^{u_{1}}&:=P_{l+1}^{u_{1}}-P_{l}^{u_{1}}~~~\text{is a reverse }\Delta\text{-monotone transfer f.a. }1\leq l \leq m-1\,. \end{aligned} $$
Since the univariate margins of \(C_{n,k}^{u_{1}}\) and \(C_{n,k+1}^{u_{1}}\) do not coincide, some of the transfers \(\left (\mu _{l}^{u_{1}}\right)_{l}\) must be one-dimensional, see Remark 5. Each one-dimensional transfer \(\mu _{l}^{u_{1}}\) transports mass from one point \(u^{l}=\left (u_{2}^{l},\ldots,u_{d}^{l}\right)\in {\mathbb {G}}_{n}^{d-1}\) to another point \(v^{l}=\left (v_{2}^{l},\ldots,v_{d}^{l}\right)\in {\mathbb {G}}_{n}^{d-1}\) such that \(v_{\iota }^{l}< u_{\iota }^{l}\) for an ι∈{2,…,d} and \(u_{j}^{l}=v_{j}^{l}\) for all j≠ι, i.e., \(\mu _{l}^{u_{1}}=\eta ^{l} \left (\delta _{v^{l}}-\delta _{u^{l}}\right)\) is indicated by
$$\begin{array}{@{}rcl@{}} \eta^{l} \delta_{u^{l}} \to \eta^{l} \delta_{v^{l}} \end{array} $$
for some ηl>0. Since applying mass transfers is commutative, we first choose to apply all of these one-dimensional reverse Δ-monotone transfers. Because δ-dimensional Δ-monotone transfers leave the univariate marginals unchanged for δ≥2, see Remark 5, the univariate margins of \(C_{n,k}^{u_{1}}\) must be adjusted to the univariate margins of \(C_{n,k+1}^{u_{1}}\) having applied all of these one-dimensional reverse Δ-monotone transfers.
Then, since the grid copula of \(C_{n,k+1}^{u_{1}}\) is the upper Fréchet bound and hence the greatest element in the ≤uo-ordering, no further reverse Δ-monotone transfer is possible. Thus, \(C_{n,k+1}^{u_{1}}\) is reached from above having applied only one-dimensional Δ-monotone transfers \(\mu _{l}^{u_{1}}\,,\) 1≤l≤m−1, on \(P_{C_{n,k}^{u_{1}}}\,,\) i.e.,
$$\begin{array}{@{}rcl@{}} P_{C_{n,k}^{u_{1}}}+\sum_{l=1}^{m-1} \mu_{l}^{u_{1}} = P_{C_{n,k}^{u_{1}}}+\sum_{l=1}^{m-1} \eta^{l} (\delta_{v^{l}}-\delta_{u^{l}})= P_{C_{n,k+1}^{u_{1}}}\,. \end{array} $$
For all reverse Δ-monotone transfers \(\mu _{l}^{u_{1}},\phantom {\dot {i}\!}\) consider its corresponding reverse transfer \(\mu _{l}^{u_{1}+1/n}:=-\mu _{l}^{u_{1}}\phantom {\dot {i}\!}\) on \({\mathbb {G}}_{n}^{d-1}\) indicated by \(\phantom {\dot {i}\!}\eta ^{l} \delta _{v^{l}}\to \eta ^{l} \delta _{u^{l}}\,.\) Define
$$\begin{array}{@{}rcl@{}} P_{1}^{u_{1}+1/n}&:=&P_{C_{n,k}^{u_{1}+1/n}}\,,~~~\text{and}\\ P_{l+1}^{u_{1}+1/n}&:=& P_{l}^{u_{1}+1/n}+\mu_{l}^{u_{1}+1/n}=P_{l}^{u_{1}+1/n}-\mu_{l}^{u_{1}}~~~\text{for }l=1,\ldots,m-1\,. \end{array} $$
The transfers \(\phantom {\dot {i}\!}\left (\mu _{l}^{u_{1}+1/n}\right)_{l}\) are one-dimensional Δ-monotone transfers. Then, it holds true that they adjust the univariate marginals of \(\phantom {\dot {i}\!}P_{C_{n,k}^{u_{1}+1/n}}\) to the univariate marginals of \(P_{C_{n,k+1}^{u_{1}+1/n}}.\phantom {\dot {i}\!}\) This can be seen because only two entries (in column k) of matrix ι are changed by the mass transfer \(\mu _{l}^{u_{1}}.\phantom {\dot {i}\!}\) All other columns and matrices j≠ι are unaffected by this transfer. From (28) follows that exactly the reverse transfers \(\mu _{l}^{u_{1}+1/n}\phantom {\dot {i}\!}\) applied simultaneously on the corresponding entries in column k+1 of mass matrix ι guarantee the uniform margin condition (26) to stay fulfilled. Having applied all transfers μl, then each column j≠k+1 of the mass matrix \(d_{n,k}^{i}\) is adjusted to column j of the mass matrix \(d_{n,k+1}^{i}\) for all 2≤i≤d. But this also means that column k+1 of the mass matrix \(d_{n,k}^{i}\) must be adjusted to column k+1 of \(d_{n,k+1}^{i}\) due to the uniform margin condition.
Since applying the one-dimensional transfers \(\phantom {\dot {i}\!}\mu _{l}^{u_{1}+1/n}\) on \(\phantom {\dot {i}\!}P_{C_{n,k}^{u_{1}+1/n}}\) (which is comonotonic) can change the dependence structure, the signed measure \(P_{m}^{u_{1}+1/n}\) is not necessarily quasi-comonotonic, i.e., \(P_{m}^{u_{1}+1/n}\phantom {\dot {i}\!}\) does not necessarily coincide with \(P_{C_{n,k+1}^{u_{1}+1/n}}\) (which is quasi-comonotonic). We show that
$$\begin{array}{@{}rcl@{}} P_{m}^{u_{1}+1/n}=P_{C_{n,k+1}^{u_{1}+1/n}}\,. \end{array} $$
Since \(C_{n,k}^{u_{1}}\leq _{lo} C_{n,k+1}^{u_{1}}\,,\) see (35), it also holds that
$$\begin{array}{@{}rcl@{}} \Delta_{n}^{2} D_{n,k}^{i}\left(u_{i},\tfrac k n\right)\leq \Delta_{n}^{2} D_{n,k+1}^{i}\left(u_{i},\tfrac k n\right)~~~\forall u_{i}\in {\mathbb{G}}_{n}^{1}~ \forall i \in \{2,\ldots,d\}\,, \end{array} $$
where we use that
$$\begin{aligned} \Delta_{n}^{2} D_{n,k}^{i}\left(\cdot,\tfrac k n\right),\Delta_{n}^{2} D_{n,k+1}^{i}\left(\cdot,\tfrac k n\right)&\leq \tfrac 1 n~~~\text{and}\\ \Delta_{n}^{2} D_{n,k}^{i}\left(1,\tfrac k n\right)=\Delta_{n}^{2} D_{n,k+1}^{i}\left(1,\tfrac k n\right)&=\tfrac 1 n \end{aligned} $$
for all 2≤i≤d. By construction of \(\left (d^{i}_{n,l}\right)_{1\leq l\leq n}\,,\) it follows that
$$\begin{array}{@{}rcl@{}} \Delta_{n}^{2} D_{n,k}^{i}\left(u_{i},\tfrac {k+1} n\right)\geq \Delta_{n}^{2} D_{n,k+1}^{i}\left(u_{i},\tfrac {k+1} n\right)~~~\forall u_{i}\in {\mathbb{G}}_{n}^{1}~ \forall i \in \{2,\ldots,d\}\,. \end{array} $$
This implies
$$\begin{array}{@{}rcl@{}} \min\limits_{2\leq i \leq d} \{\Delta_{n}^{2} D_{n,k}^{i}\left(u_{i},\tfrac {k+1}n\right)\} \geq \min\limits_{2\leq i \leq d}\{\Delta_{n}^{2} D_{n,k+1}^{i}\left(u_{i},\tfrac {k+1} n\right)\}~\,\forall u_{i}\in {\mathbb{G}}_{n}^{1}\,, \,2\leq i \leq d\,. \end{array} $$
But this means that \(C_{n,k}^{u_{1}+1/n}\geq _{lo} C_{n,k+1}^{u_{1}+1/n}\,.\) Due to (23), it holds that \(C_{n,k}^{u_{1}+1/n}\) is comonotonic and \(C_{n,k+1}^{u_{1}+1/n}\) is quasi-comonotonic with univariate marginal measure generating functions \(n\Delta _{n}^{2} D_{n,k+1}^{i}\left (\cdot,\tfrac {k+1}n\right)\leq 1\,.\) Thus, Proposition 2 yields (30).
Further, (30) and Proposition 2 imply that there exist \(m'\in {\mathbb {N}}\) and a finite number of reverse Δ-monotone transfers \(\phantom {\dot {i}\!}(\gamma _{l})_{1\leq l \leq m'}\) that adjust \(P_{C_{n,k+1}^{u_{1}+1/n}}\phantom {\dot {i}\!}\) to \(\phantom {\dot {i}\!}P_{C_{n,k}^{u_{1}+1/n}}\,.\) With the same argument as above, these transfers are one-dimensional. Further, the reverse transfers \(\phantom {\dot {i}\!}(\gamma _{l}^{r})_{1\leq l \leq m'}\,,\) where \(\gamma _{l}^{r}=-\gamma _{l}\,,\) correspond to the Δ-monotone transfers \(\phantom {\dot {i}\!}\left (\mu _{l}^{u_{1}+1/n}\right)_{1\leq l \leq m}\) that adjust the margins of \(C_{n,k}^{u_{1}+1/n}\phantom {\dot {i}\!}\) to the margins of \(\phantom {\dot {i}\!}C_{n,k+1}^{u_{1}+1/n}\,.\) This yields m=m′,\(\sum _{l=1}^{m-1} \mu _{l}^{u_{1}+1/n} =\sum _{l=1}^{m'-1} \gamma _{l}^{r}\phantom {\dot {i}\!}\) and thus \(P_{m}^{u_{1}+1/n}=P_{C_{n,k+1}^{u_{1}+1/n}}\,,\) which proves (39). Hence, (38) yields
$$\begin{array}{@{}rcl@{}} P_{C_{n,k}^{u_{1}+1/n}}+\sum_{l=1}^{m-1} \mu_{l}^{u_{1}+1/n}=P_{C_{n,k+1}^{u_{1}+1/n}}\,. \end{array} $$
It remains to show (31). Each transfer \(\mu _{l}^{u_{1}}\,,\) resp., \(\mu _{l}^{u_{1}+1/n}\) on \({\mathbb {G}}_{n}^{d-1}\) can be extended to a reverse Δ-monotone, resp., Δ-monotone transfer μl,r, resp., μl on \(\{u_{1}\}\times {\mathbb {G}}_{n}^{d-1}\,,\) resp., \(\{u_{1}+\tfrac 1 n\}\times {\mathbb {G}}_{n}^{d-1}\,,\) indicated by
$$\begin{array}{@{}rcl@{}} \eta^{l} \delta_{\left(u_{1},u^{l}\right)} \to \eta^{l} \delta_{\left(u_{1},v^{l}\right)}~~~\text{resp.}~~~ \eta^{l} \delta_{\left(u_{1}+1/n,v^{l}\right)} \to \eta^{l} \delta_{\left(u_{1}+1/n,u^{l}\right)}\,. \end{array} $$
Then, for each l∈{1,…,m−1}, applying the transfers μl,r and μl in (41) simultaneously yields exactly a transfer νl on \(\{u_{1},u_{1}+\tfrac 1 n\}\times {\mathbb {G}}_{n}^{d-1}\) between (u1,ul) and \(\left (u_{1}+\tfrac 1 n,v^{l}\right)\,,\) indicated by
$$\begin{array}{@{}rcl@{}} \eta^{l} \left(\delta_{\left(u_{1},u^{l}\right)}+\delta_{\left(u_{1}+1/n,v^{l}\right)}\right) \to \eta^{l} \left(\delta_{\left(u_{1},v^{l}\right)}+\delta_{\left(u_{1}+1/n,u^{l}\right)}\right)\,. \end{array} $$
Each transfer νl is a supermodular transfer. Denote by ε{x} the one-point probability measure in x. Then, finally, we obtain
$$\begin{array}{@{}rcl@{}} P^{U_{1}}&&(\cdot)\times P_{C_{n,k+1}^{\cdot}}=\frac{1}{2}\left(\varepsilon_{\{u_{1}\}}\times P_{C_{n,k+1}^{u_{1}}}+\varepsilon_{\{u_{1}+\tfrac 1 n\}}\times P_{C_{n,k+1}^{u_{1}+1/n}}\right)\\ &&=~\frac{1}{2}\left(\varepsilon_{\{u_{1}\}}\times \left(P_{C_{n,k}^{u_{1}}}+\sum_{l=1}^{m-1} \mu_{l}^{u_{1}}\right)+\varepsilon_{\{u_{1}+\tfrac 1 n\}}\times \left(P_{C_{n,k}^{u_{1}+1/n}}+\sum_{l=1}^{m-1} \mu_{l}^{u_{1}+1/n}\right)\right)\\ &&=~\frac{1}{2}\left(\varepsilon_{\{u_{1}\}}\times P_{C_{n,k}^{u_{1}}}+\sum_{l=1}^{m-1} \mu_{l,r}+\varepsilon_{\{u_{1}+\tfrac 1 n\}}\times P_{C_{n,k}^{u_{1}+1/n}}+\sum_{l=1}^{m-1} \mu_{l}\right)\\ &&=~ \frac{1}{2}\left(\varepsilon_{\{u_{1}\}}\times P_{C_{n,k}^{u_{1}}}+~\varepsilon_{\{u_{1}+\tfrac 1 n\}}\times P_{C_{n,k}^{u_{1}+1/n}}+\sum_{l=1}^{m-1} \nu_{l} \right)\\ &&=~ P^{U_{1}}(\cdot)\times P_{C_{n,k}^{\cdot}} + \frac 1 2 \sum_{l=1}^{m-1} \nu_{l} \end{array} $$
which implies (31) using Proposition 2. The first and last equality hold due to the definition of the measures. The second equality is given by (37) and (40), the third equality holds by the definition of μl,r, resp., μl, and the fourth equality holds true by the definition of νl.\(\square \)
The proof is based on an approximation by finite sequences of signed grid copulas that fulfill the conditioning argument in (28)–(31). Further, we use the necessary condition that the lower orthant ordering holds true (indeed, (32) and (34) yield Cn,k≤loCn,k+1) in order to show that the supermodular ordering is also fulfilled.
The condition that the upper bound E for Di is a joint upper bound, i.e., it does not depend on i, is crucial for the proof. Otherwise, Eq. (33) can fail, see also (11). In general, it holds that
$$\begin{array}{@{}rcl@{}} D^{i}\leq_{lo} E^{i} ~\forall i ~\not \Longrightarrow ~M^{2} \vee D^{1} \vee \cdots \vee D^{d} \leq_{lo} M^{2} \vee E^{1} \vee \cdots \vee E^{d}\,. \end{array} $$
For a counterexample assume that D1=D2<loE1<loE2. Then, it holds
$$M^{2}\vee D^{1} \vee D^{2} (1,\cdot,\cdot)=M^{2} >_{lo} E^{1}\vee E^{2} = M^{2}\vee E^{1}\vee E^{2} (1,\cdot,\cdot)\,,$$
using the marginalization and the maximality property of the upper product, see Ansari and Rüschendorf (2018), Proposition 2.4, which yields a contradiction to M2∨D1∨D2≤loM2∨E1∨E2.
While in the proof the \(D_{n,k}^{i}\,,\) 1≤k≤n, can be signed grid copulas with \(\Delta _{n}^{2} D_{n,k}^{i}(u_{i},t)\leq \tfrac 1 n\) for all \((u_{i},t)\in {\mathbb {G}}_{n,0}^{2}\,,\) it is necessary that En is a grid copula and not only a signed grid copula. Otherwise, both monotonicity properties in (33) and (34) can fail.
We illustrate the idea of the proof with an example for n=4 and d=3:
Let \(D^{2}_{4},D^{3}_{4},E_{4}\) be 4-grid copulas given through the mass matrices \(d^{2}_{4},d^{3}_{4}\,,\) resp., e4 by
$$\begin{array}{@{}rcl@{}} d^{2}_{4}= \tfrac 1 {16} \left(\begin{array}{llll} 3 & 1 & 0 & 0\\ 0 & 2 & 1 & 1\\ 0 & 1 & 2 & 1\\ 1 & 0 & 1 & 2 \end{array}\right)\,,~ d^{3}_{4}= \tfrac 1 {16} \left(\begin{array}{llll} 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1\\ 0 & 1 & 2 & 1\\ 2 & 1 & 0 & 1 \end{array}\right)\,,~ e_{4}= \tfrac 1 {16} \left(\begin{array}{llll} 0 & 1 & 1 & 2\\ 1 & 1 & 2 & 0\\ 1 & 0 & 1 & 2\\ 2 & 2 & 0 & 0 \end{array}\right)\,. \end{array} $$
Then, we observe that \(D^{2}_{4},D^{3}_{4}\leq _{lo} E_{4}\,.\) Consider the signed 4-grid copulas \(D_{4,l}^{i}\,,\) 1≤l≤4, i=2,3, in Fig. 1 constructed by (23). The conditional distribution of \(M^{2}_{4}\vee D^{2}_{4,1}\vee D^{3}_{4,1}\) under \(u_{1}=\tfrac 1 4\) is given by \(4\, \Delta _{4}^{2} \, M^{2}_{4}\vee D^{2}_{4,1}\vee D^{3}_{4,1}\left (\tfrac 1 4,\cdot,\cdot \right)=4 \min \limits _{i}\{\Delta _{4}^{2} D^{i}\left (\cdot,\tfrac 1 4\right)\}\,,\) where the arguments of the min-function correspond to the distributions given through the first columns of \(d_{4}^{2}\,,\) resp., \(d_{4}^{3}\,.\)
Fig. 1: This figure illustrates the mass transfers in Example 1

Since
$$C_{4,1}^{u_{1}}=4\cdot M^{2}_{4}\vee D^{2}_{4,1}\vee D^{3}_{4,1}\left(\tfrac 1 4,\cdot,\cdot\right)\geq_{uo} 4\cdot M^{2}_{4}\vee E_{4}\vee E_{4}\left(\tfrac 1 4,\cdot,\cdot\right)=C_{4,2}^{u_{1}}\,,$$
the solid-marked reverse Δ-monotone transfers can be applied to adjust the first column of \(d^{2}_{4,1}\,,\) resp., \(d^{3}_{4,1}\) to the first column of e4. These transfers are balanced by the dashed-marked Δ-monotone transfers in the second columns which guarantee that the new matrices \(d_{4,2}^{2}\,,\) resp., \(d_{4,2}^{3}\) are still (signed) copula mass matrices. This procedure is repeated column by column until \(d_{4,4}^{2}=d_{4,4}^{3}=e_{4}\,.\)
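The lower orthant ordering claimed for this example can be verified mechanically from the mass matrices (an illustrative NumPy sketch, not part of the original example; the row convention is the one used throughout, row 1 corresponding to u=1):

```python
# Sketch: check D_4^2, D_4^3 <=_lo E_4 for the mass matrices of the example.
import numpy as np

def mgf_on_grid(mass):
    """Measure generating function D(j/n, l/n) on the grid as an (n, n) array."""
    flipped = mass[::-1, :]                 # row j now corresponds to u = j/n
    return flipped.cumsum(axis=0).cumsum(axis=1)

d2 = np.array([[3, 1, 0, 0], [0, 2, 1, 1], [0, 1, 2, 1], [1, 0, 1, 2]]) / 16
d3 = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 2, 1], [2, 1, 0, 1]]) / 16
e4 = np.array([[0, 1, 1, 2], [1, 1, 2, 0], [1, 0, 1, 2], [2, 2, 0, 0]]) / 16

E = mgf_on_grid(e4)
print(np.all(mgf_on_grid(d2) <= E + 1e-12), np.all(mgf_on_grid(d3) <= E + 1e-12))
```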
Application to improved portfolio TVaR bounds
In this section, we determine improved Tail-Value-at-Risk bounds for a portfolio \(\Sigma _{t}=\sum _{i=1}^{8} Y_{t}^{i}\,,\)t≥0, t in trading days, of d=8 (derivatives on) assets \(S_{t}^{i}\) applying Theorem 4 about internal risk factor models. More specifically, let \(Y_{t}^{i}=S_{t}^{i}\) for i=1,…,6 and \(Y_{t}^{i}=(S_{t}^{i}-K^{i})_{+}\) for i=7,8, where K7=70 and K8=10. In this application, \((S_{t}^{i})_{t\geq 0}\) denotes the asset price process of Audi (i=1), Allianz (i=2), Daimler (i=3), Siemens (i=4), Adidas (i=5), Volkswagen (i=6), SAP (i=7), resp., Deutsche Bank (i=8).
We aim to determine improved TVaR bounds for ΣT for T=1 year=254 trading days, resp., T=2 years=508 trading days. The underlying process \(S_{t}=(S_{t}^{1},\ldots,S_{t}^{8})\) is modeled by an integrable exponential process St=S0 exp(Lt) under the following assumptions:
Let \(m\in {\mathbb {N}}\) and 0=t0<t1<⋯<tm=T with \(t_{i}-t_{i-1}=\tfrac T m\) for 1≤i≤m.
(I) The component processes \((L_{t}^{i})_{t\geq 0}\) are Lévy processes for all i.
(II) The increments \((\xi _{k}^{1},\ldots,\xi _{k}^{d}):=(L_{t_{k}}^{1}-L_{t_{k-1}}^{1},\ldots,L_{t_{k}}^{d}-L_{t_{k-1}}^{d})\,,\) 1≤k≤m, are independent in k (but not necessarily stationary).
(III) For all k, there exists a bivariate copula \(E^{k}\in \mathcal {C}_{2}\) such that \(C_{\xi _{k}^{i},\xi _{k}^{1}}\leq _{lo} E^{k}\) for all 2≤i≤d.
Assumptions (I)–(III) are consistent. Assumption (I) is a standard assumption on the log-increments of \((S_{t}^{i})_{t\geq 0}\) while Assumption (II) generalizes the dependence assumptions for multivariate Lévy models because neither multivariate stationarity nor independence for all increments is assumed. Assumption (III) reduces the dependence structure between the k-th log-increment of the i-th component and the k-th log-increment of the first component (which is the internal risk factor) by a subclass \({\mathcal {S}}_{k}^{i}=\{C\in \mathcal {C}_{2}|C\leq _{lo} E^{k}\}\) of bivariate copulas.
Then, Theorem 4 yields improved bounds in convex order for the portfolio ΣT if the claims \(Y_{T}^{i}\) are of the form \(Y_{T}^{i}=\psi _{i}(S_{T}^{i})=\psi _{i}(\exp (L_{T}^{i}))=\psi _{i}\left (\exp \left (\sum _{k=1}^{m} \xi _{k}^{i}\right)\right)\) with ψi increasing convex.
For the estimation of the distribution of \(S_{T}^{i}\,,\) we make the following specification of Assumption (I):
Each \(\left (S_{t}^{i}\right)_{t\geq 0}\,,\)i=1,…,8, follows an exponential NIG process, i.e.,
$$\begin{array}{@{}rcl@{}} S_{t}^{i}=S_{0}^{i}\exp\left(L_{t}^{i}\right)\,~~t\geq 0\,, \end{array} $$
where \(S_{0}^{i}>0\) and where each \(\left (L_{t}^{i}\right)_{t\geq 0}\) is an NIG process with parameters αi,βi,δi,νi.
For the estimation of upper bounds in supermodular order for the increments \(\left (\xi _{k}^{1},\ldots,\xi _{k}^{8}\right)\), we specify Assumption (III) as follows:
For fixed ν∈(2,∞], the copula Ek in Assumption (III) is given by a t-copula with some correlation parameter ρk∈[−1,1] (which we specify later) and ν degrees of freedom, i.e., \(E^{k}=C_{\nu }^{\rho _{k}}\,.\)
We make use of the relation between the (pseudo-)correlation parameter ρ of elliptical copulas and Kendall's τ given by \(\rho (\tau)=\sin \left (\tfrac \pi 2 \tau \right)\,,\) see McNeil et al. (2015) [Proposition 5.37], because Kendall's rank correlation does not depend on the specified univariate marginal distributions in contrast to Pearson's correlation. Thus, in order to determine a reasonable value for ρk, we estimate an upper bound for \(\tau _{k}:=\max _{2\leq i \leq 8}\{\tau _{k}^{i}\}\,,\) where \(\tau _{k}^{i}:=\tau \left (C_{\xi _{k}^{i},\xi _{k}^{1}}\right)\,.\) Since it is not possible to determine the dependence structure of each increment from a single observation, we estimate \(\tau _{k}^{i}\) from a sample of past observations. To do so, we assume that the dependence structure of \(\left (\xi _{k}^{i},\xi _{k}^{1}\right)\) does not jump too rapidly to strong positive dependence in a short period of time as follows:
(IV) For \(n\in {\mathbb {N}}\,,\) define the averaged correlations over the past n time points at time k by \(\tau _{k,n}^{i}:=\tfrac 1 n \sum _{j=0}^{n-1}\tau _{k-n+j}^{i}\,,\) for k>n. Then, we assume that
$$\begin{array}{@{}rcl@{}} \tau_{k}=\max_{2\leq i \leq d}\{\tau_{k}^{i}\}\leq \max_{2\leq i \leq d}\{\tau_{k,n}^{i}\}+\epsilon_{k} \end{array} $$
for some error εk≥0 (which we fix later).
The above assumptions include the basic assumptions of multivariate exponential Lévy models because the stationarity condition in Assumption (II) yields (42). Further, ρk=1 yields Ek=M2 which means that Assumption (III) is trivially fulfilled in this case. Note that in this application the dependence constraints are allowed to come from quite a big subclass of copulas (see Remark 4).
Under the Assumptions (I)–(III), the dependence structure of \(\left (Y_{T}^{i},Y_{T}^{1}\right)\) is not uniquely determined for i=2,…,8. Thus, we need to solve the constrained maximization problem (5) to obtain improved upper bounds compared to applying (2) for partially specified risk factor models.
We mention that the structure of this section and the underlying data are similar to Ansari and Rüschendorf (2018), Section 4. But, there, the risk factor (which is the "DAX") is an external risk factor which is not part of the portfolio, whereas in our application the internal risk factor "AUDI" is part of the underlying portfolio. This allows use of the simplified ordering conditions established in this paper. Further, the improved TVaR-bounds in this application are based on large sets of dependence specifications of the daily log-returns (see Assumption (III) and Remark 4), whereas in Ansari and Rüschendorf (2018) all the dependence constraints on the time- T log-returns are assumed to come from a one-parametric family of copulas.
Application to real market data
As data set, we take the daily adjusted close data from "Yahoo! Finance" from 23/04/2008 to 20/04/2018. It contains the values of 2540 trading days for 8 assets (with some missing data) which we denote by \(\left (s_{k}^{1},\ldots,s_{k}^{8}\right)_{1\leq k \leq 2540}\,.\) More precisely, \(\left (s_{k}^{1}\right)_{k}\) are the adjusted close data of "AUDI AG (NSU.DE)", \(\left (s_{k}^{2}\right)_{k}\) of "Allianz SE (ALV.DE)", \(\left (s_{k}^{3}\right)_{k}\) of "Daimler AG (DAI.DE)", \(\left (s_{k}^{4}\right)_{k}\) of "Siemens Aktiengesellschaft (SIE.DE)", \(\left (s_{k}^{5}\right)_{k}\) of "adidas AG (ADS.DE)", \(\left (s_{k}^{6}\right)_{k}\) of "Volkswagen AG (VOW.DE)", \(\left (s_{k}^{7}\right)_{k}\) of "SAP SE (SAP.DE)" and \(\left (s_{k}^{8}\right)_{k}\) of "Deutsche Bank Aktiengesellschaft (DBK.DE)".
We choose \(\hat {\tau }_{k,n}^{i}:=\hat {\tau }\left (\left (x_{k-n+j}^{i},x_{k-n+j}^{1}\right)_{0\leq j< n}\right)\) as an estimator for \(\tau _{k,n}^{i}\) in Assumption (IV), where \(\hat {\tau }\) denotes Kendall's rank correlation coefficient (see, e.g., (McNeil et al. (2015), equation (5.50))) and \(\left (x_{k}^{i}\right)_{2\leq k \leq 2540}\) are the historical log-returns of the i-th component, i.e., \(x_{k}^{i}:=\log s_{k}^{i}-\log s_{k-1}^{i}\) for 2≤k≤2540. Further, we choose n=30 and εk=ε=0.05 in (42).
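For readers who want to reproduce this step, the following sketch (assuming NumPy/SciPy; `logret` is a hypothetical array of daily log-returns with the internal risk factor in column 0, here replaced by synthetic data rather than the Yahoo! Finance data) computes the rolling Kendall estimates and the resulting correlation bound \(\overline{\hat\rho}_k=\sin\left(\tfrac\pi 2(\max_i\hat\tau^i_{k,n}+\epsilon)\right)\):

```python
# Sketch: rolling Kendall's tau against the internal risk factor (column 0) and
# the correlation bound rho = sin(pi/2 * (max tau + eps)), capped at 1.
import numpy as np
from scipy.stats import kendalltau

def rho_upper_bound(logret, n=30, eps=0.05):
    T, d = logret.shape
    rho_bar = np.full(T, np.nan)
    for k in range(n, T):
        window = logret[k - n:k]
        taus = [kendalltau(window[:, i], window[:, 0])[0] for i in range(1, d)]
        rho_bar[k] = np.sin(np.pi / 2 * min(max(taus) + eps, 1.0))
    return rho_bar

rng = np.random.default_rng(1)
logret = 0.02 * rng.standard_normal((500, 8))   # synthetic stand-in for the data
rho_bar = rho_upper_bound(logret)
```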
In Fig. 2, the historical estimates \(\hat {\tau }_{k,30}^{i}\) are illustrated for 31≤k≤2540 and for i=2,…,8. Further, the plot at the bottom-right shows the maximum of the historical estimates \(\hat {\tau }_{k,30}=\max _{2\leq i \leq 8}\{\hat {\tau }_{k,30}^{i}\}\) (solid graph) as an estimator for τk, and it also shows the estimated historical upper bound \(\overline {\hat {\rho }_{k}}:= \max _{2\leq i\leq 8}\{\rho \left (\hat {\tau }_{k,n}^{i}+\epsilon _{k}\right)\}\) (dotted graph) with error εk for ρk, 31≤k≤2540, see Assumption (IV).
Plots of estimated Kendall's rank correlation coefficients \(\hat {\tau }_{k,n}^{i}\,,\) for i=2,…,8, for n=30, and for different k; at the bottom right: \(\hat {\tau }_{k,30}=\max _{2\leq i \leq 8}\{\hat {\tau }_{k,30}^{i}\}\) (solid) and \(\overline {\hat {\rho }}_{k}=\max _{2\leq i \leq 8}\{\rho (\hat {\tau }_{k,30}^{i}+\epsilon _{k})\}\) (dotted) for different k and εk=ε=0.05.
As we observe from Fig. 2 there is no strong correlation between the log-returns \(\left (x_{k}^{1}\right)_{k}\) of "AUDI" and the log-returns \(\left (x_{k}^{i}\right)_{k}\,,\)i≠1, of the other assets. We use this property to apply Theorem 4 as follows.
For the prediction of an improved worst-case upper bound for ΣT w.r.t. convex order for T=1 year, resp., T=2 years, we choose the worst-case period of the historical estimates \(\overline {\hat {\rho }}_{k}\) for ρk with a length of m=254 trading days, resp., m=508 trading days. We identify visually that \((\overline {\hat {\rho }}_{k})_{k}\) takes the historically largest values in a period of length m=254, resp., m=508 for 1797≤k≤2050, resp., 1543≤k≤2050, see the plot at the bottom right in Fig. 2. Thus, we decide on \((\overline {\hat {\rho }}_{k})_{1797\leq k \leq 2050}\,,\) resp., \((\overline {\hat {\rho }}_{k})_{1543\leq k \leq 2050}\) as the worst-case estimate for (ρk)1≤k≤254, resp., (ρk)1≤k≤508 with error εk=0.05 in (42).
Then, we obtain from Theorem 4 that
$$ \begin{aligned} {{\Sigma}_{T}}&=\sum_{i=1}^{8} Y_{T}^{i} = \sum_{i=1}^{6} S_{T}^{i} +\sum_{i=7}^{8} (S_{T}^{i}-K^{i})_{+}\\ &=\sum_{i=1}^{6} \exp\left(\sum_{k=1}^ m \xi_{k}^{i}\right) +\sum_{i=7}^{8} \left(\exp\left(\sum_{k=1}^{m} \xi_{k}^{i}\right)-K^{i}\right)_{+}\\ &\leq_{cx} \exp\left(\sum_{k=1}^{m} \zeta_{k}^{1}\right) +\sum_{i=2}^{6} \exp\left(\sum_{k=1}^{m} F_{\zeta_{k}^{i}|\zeta_{k}^{1}}^{-1}(U^{k})\right)\\ &~~~~~~~~~~~~~~~~~~~~~~~~+\sum_{i=7}^{8} \left(\exp\left(\sum_{k=1}^{m} F_{\zeta_{k}^{i}|\zeta_{k}^{1}}^{-1}(U^{k})\right)-K^{i}\right)_{+}\\ &=:{{\Sigma}_{T,(\rho_{k}),\nu}^{c}} \\ &{\leq}_{cx} \sum_{i=1}^{8} F_{Y_{T}^{i}}^{-1}(U)=:{{\Sigma}_{T}^{c}}\,, \end{aligned} $$
where \(\zeta _{k}^{i}\sim \xi _{k}^{i}\) for 1≤i≤8,\(C_{\zeta _{k}^{i},\zeta _{k}^{1}}=E^{k}=C_{\nu }^{\rho _{k}}\) for \(\rho _{k}=\overline {\hat {\rho }}_{2050-m+k+1}\) and 2≤i≤8, Uk∼U(0,1) and \(U^{k},\zeta _{l}^{1}\) independent for all 1≤k,l≤m. Denote by \(\tau _{\zeta _{k}^{1}}\) the distributional transform of \(\zeta _{k}^{1}\,,\) see Rüschendorf (2009), and let tν be the distribution function of the t-distribution with ν degrees of freedom. Then, it holds that
$$\begin{array}{@{}rcl@{}} F^{-1}_{\zeta_{k}^{i}|\zeta_{k}^{1}}(U^{k}) = F_{\zeta_{k}^{i}}^{-1}\left(f\left(\rho_{k},\nu,\tau_{\zeta_{k}^{1}},U^{k}\right)\right)\,, \end{array} $$
where f is given by
$$\begin{array}{@{}rcl@{}} f(r,\nu,z,e):= t_{\nu}\left(r \,t_{\nu}^{-1}(z)+\sqrt{\frac{\left(\nu+t_{\nu}^{-1}(z)^{2}\right)(1-r^{2})}{\nu+1}}~ t_{\nu+1}^{-1}(e)\right)\,. \end{array} $$
Note that the distribution function of (f(r,ν,Z,ε),Z), Z,ε∼U(0,1) independent, is the t-copula with correlation r and ν degrees of freedom, see Aas et al. (2009).
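A direct implementation of f is straightforward with SciPy's Student-t distribution functions. The sketch below (illustrative, not taken from the paper) also checks via simulation that the implied Kendall's τ of the pair (f(r,ν,Z,ε),Z) is close to \(\tfrac 2\pi\arcsin(r)\), consistently with the relation ρ(τ) used above:

```python
# Sketch: the conditional t-copula quantile transform f(r, nu, z, e) from above.
import numpy as np
from scipy.stats import t as student_t, kendalltau

def f(r, nu, z, e):
    """(f(r, nu, Z, eps), Z) with independent uniforms Z, eps has a t-copula
    with correlation r and nu degrees of freedom."""
    tz = student_t.ppf(z, df=nu)
    scale = np.sqrt((nu + tz**2) * (1.0 - r**2) / (nu + 1.0))
    return student_t.cdf(r * tz + scale * student_t.ppf(e, df=nu + 1), df=nu)

rng = np.random.default_rng(2)
Z, Eps = rng.uniform(size=(2, 20_000))
U = f(0.5, 5, Z, Eps)
print(kendalltau(U, Z)[0], 2 / np.pi * np.arcsin(0.5))   # both close to 1/3
```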
The Tail-Value-at-Risk at level λ (also known as Expected Shortfall) is defined by
$$\begin{array}{@{}rcl@{}} {\text{TVaR}}_{\lambda}(\zeta):=\frac{1}{1-\lambda}\int_{\lambda}^{1} F_{\zeta}^{-1}(t) \, {\mathrm{d}} t\,,~~~\lambda\in (0,1)\,, \end{array} $$
for a real-valued random variable ζ. If ζ is integrable, then TVaRλ is a convex law-invariant risk measure, see, e.g., Föllmer and Schied (2010), which satisfies the Fatou-property. As a consequence of (4) and (43) we obtain
$$\begin{array}{@{}rcl@{}} {\text{TVaR}}_{\lambda}(\Sigma_{T})\leq {\text{TVaR}}_{\lambda}\left(\Sigma_{T,(\rho_{k}),\nu}^{c}\right) \leq {\text{TVaR}}_{\lambda}\left(\Sigma_{T}^{c}\right)\,. \end{array} $$
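In the Monte Carlo comparison below, TVaR is evaluated on simulated samples. A simple empirical estimator (a sketch only; more refined estimators interpolate at the λ-quantile) averages the worst (1−λ)-fraction of the simulated outcomes:

```python
# Sketch: empirical TVaR_lambda of a simulated portfolio sample.
import numpy as np

def tvar(sample, lam):
    """Average of the order statistics above the empirical lambda-quantile."""
    x = np.sort(np.asarray(sample))
    k = int(np.ceil(lam * len(x)))
    return x[k:].mean()

rng = np.random.default_rng(3)
sample = np.exp(rng.standard_normal(1_000_000))   # stand-in for simulated Sigma_T
print(tvar(sample, 0.95), tvar(sample, 0.99))
```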
Empirical results and conclusion
The improved risk bounds \({\text {TVaR}}_{\lambda }\left (\Sigma _{T,(\rho _{k}),\nu }^{c}\right)\) for TVaRλ(ΣT) are compared in Table 1 with the standard comonotonic risk bound \({\text {TVaR}}_{\lambda }\left (\Sigma _{T}^{c}\right)\) (5 million simulated points) for different values of λ and ν and for T=1 year (=254 trading days), resp., T=2 years (=508 trading days).
Table 1 Comparison of the improved risk bound \({\text {TVaR}}_{\lambda }\left (\Sigma _{T,(\rho _{k}),\nu }^{c}\right)\) with the standard comonotonic risk bound \({\text {TVaR}}_{\lambda }\left (\Sigma _{T}^{c}\right)\) for TVaRλ(ΣT) for T=1 year, resp., T=2 years, for different levels λ and for different ν.
As observed from Table 1, there is a substantial improvement of the risk bounds of up to 20% for T=1 year and of about 20% for T=2 years for all degrees of freedom ν of the t-copulas \(C_{\nu }^{\rho _{k}}\) and high levels of λ. For T=2 years, the improvement is even better because the two-year worst-case period for \(\overline {\hat {\rho }}_{k}\) also contains the one-year worst-case period, in which \(\overline {\hat {\rho }}_{k}\) attains higher values.
We see that the improvement is larger for higher values of ν. This can be explained by the fact that \(C_{\nu }^{\rho }\) has a higher tail-dependence for smaller values of ν, see, e.g., Demarta and McNeil (2005). Thus, for small ν, extreme events (= realizations of the log-increments) occur simultaneously more often, which adds up to a higher risk.
The results of this application clearly indicate the potential usefulness and flexibility of the comparison results for the supermodular ordering for improving the standard risk bounds.
Aas, K., Czado, C., Frigessi, A., Bakken, H.: Pair-copula constructions of multiple dependence. Insur. Math. Econ. 44(2), 182–198 (2009).
Ansari, J.: Ordering risk bounds in partially specified factor models. University of Freiburg, Dissertation (2019).
Ansari, J., Rüschendorf, L.: Ordering results for risk bounds and cost-efficient payoffs in partially specified risk factor models. Methodol. Comput. Appl. Probab., 1–22 (2016).
Ansari, J., Rüschendorf, L.: Ordering risk bounds in factor models. Depend. Model. 6(1), 259–287 (2018).
Bäuerle, N., Müller, A.: Stochastic orders and risk measures: consistency and bounds. Insur. Math. Econ. 38(1), 132–148 (2006).
Bernard, C., Vanduffel, S.: A new approach to assessing model risk in high dimensions. J. Bank. Financ. 58, 166–178 (2015).
Bernard, C., Rüschendorf, L., Vanduffel, S.: Value-at-Risk bounds with variance constraints. J. Risk. Insur. 84(3), 923–959 (2017a).
Bernard, C., Rüschendorf, L., Vanduffel, S., Wang, R.: Risk bounds for factor models. Financ. Stoch. 21(3), 631–659 (2017b).
Bernard, C., Denuit, M., Vanduffel, S.: Measuring portfolio risk under partial dependence information. J. Risk Insur. 85(3), 843–863 (2018).
Bignozzi, V., Puccetti, G., Rüschendorf, L: Reducing model risk via positive and negative dependence assumptions. Insur. Math. Econ. 61, 17–26 (2015).
Cornilly, D., Rüschendorf, L., Vanduffel, S.: Upper bounds for strictly concave distortion risk measures on moment spaces. Insur Math Econ. 82, 141–151 (2018).
de Schepper, A., Heijnen, B.: How to estimate the Value at Risk under incomplete information. J. Comput. Appl. Math. 233(9), 2213–2226 (2010).
Demarta, S., McNeil, A. J.: The t copula and related copulas. Int. Stat. Rev. 73(1), 111–129 (2005).
Denuit, M., Genest, C., Marceau, E.: Stochastic bounds on sums of dependent risks. Insur. Math. Econ. 25(1), 85–104 (1999).
Embrechts, P., Puccetti, G.: Bounds for functions of dependent risks. Financ. Stoch. 10(3), 341–352 (2006).
Embrechts, P., Puccetti, G., Rüschendorf, L.: Model uncertainty and VaR aggregation. J. Banking Financ. 37(8), 2750–2764 (2013).
Embrechts, P., Puccetti, G., Rüschendorf, L., Wang, R., Beleraj, A.: An academic response to basel 3.5. Risks. 2(1), 25–48 (2014).
Embrechts, P., Wang, B., Wang, R.: Aggregation-robustness and model uncertainty of regulatory risk measures. Financ. Stoch. 19(4), 763–790 (2015).
Föllmer, H., Schied, A.: Convex and coherent risk measures. Encycl. Quant. Financ., 355–363 (2010).
Goovaerts, M. J., Kaas, R., Laeven, R. J. A.: Worst case risk measurement: back to the future? Insur. Math. Econ. 49(3), 380–392 (2011).
Hürlimann, W.: Analytical bounds for two Value-at-Risk functionals. ASTIN Bull. 32(2), 235–265 (2002).
Hürlimann, W.: Extremal moment methods and stochastic orders. Bol. Asoc. Mat. Venez. 15(2), 153–301 (2008).
Kaas, R., Goovaerts, M. J.: Best bounds for positive distributions with fixed moments. Insur. Math. Econ. 5, 87–95 (1986).
McNeil, A. J., Frey, R., Embrechts, P.: Quantitative Risk Management. Concepts, Techniques and Tools., second edn. Princeton University Press, Princeton (2015).
Müller, A.: Stop-loss order for portfolios of dependent risks. Insur. Math. Econ. 21(3), 219–223 (1997).
Müller, A.: Duality theory and transfers for stochastic order relations. In: Stochastic orders in reliability and risk. Springer, New York (2013).
Müller, A., Scarsini, M.: Stochastic comparison of random vectors with a common copula. Math. Oper. Res. 26(4), 723–740 (2001).
Müller, A., Scarsini, M.: Stochastic order relations and lattices of probability measures. SIAM J. Optim. 16(4), 1024–1043 (2006).
Müller, A., Stoyan, D.: Comparison Methods for Stochastic Models and Risks. Wiley, Chichester (2002).
Nelsen, R. B.: An introduction to copulas. 2nd ed. Springer, New York (2006).
Nelsen, R. B., Quesada-Molina, J. J., Rodríguez-Lallena, J. A., Úbeda-Flores, M.: Bounds on bivariate distribution functions with given margins and measures of association. Commun. Stat. Theory Methods. 30(6), 1155–1162 (2001).
Puccetti, G., Rüschendorf, L.: Bounds for joint portfolios of dependent risks. Stat. Risk. Model. Appl. Financ. Insur. 29(2), 107–132 (2012a).
Puccetti, G., Rüschendorf, L.: Computation of sharp bounds on the distribution of a function of dependent risks. J. Comput. Appl. Math. 236(7), 1833–1840 (2012b).
Puccetti, G., Rüschendorf, L.: Sharp bounds for sums of dependent risks. J. Appl. Probab. 50(1), 42–53 (2013).
Puccetti, G., Rüschendorf, L., Small, D., Vanduffel, S.: Reduction of Value-at-Risk bounds via independence and variance information. Scand. Actuar. J. 2017(3), 245–266 (2017).
Rüschendorf, L.: On the distributional transform, Sklar's theorem, and the empirical copula process. J. Stat. Plann. Inference. 139(11), 3921–3927 (2009).
Rüschendorf, L.: Mathematical Risk Analysis. Springer, New York (2013).
Rüschendorf, L.: Improved Hoeffding–Fréchet bounds and applications to VaR estimates. In: Copulas and Dependence Models with Applications. Contributions in Honor of Roger B. Nelsen. In: Úbeda Flores, M., de Amo Artero, E., Durante, F., Fernández Sánchez, J. (eds.), pp. 181–202. Springer, Cham (2017a). https://doi.org/10.1007/978-3-319-64221-5_12.
Rüschendorf, L.: Risk bounds and partial dependence information. In: From Statistics to Mathematical Finance, pp. 345–366. Springer, Festschrift in honour of Winfried Stute, Cham (2017b).
Rüschendorf, L., Witting, J.: VaR bounds in models with partial dependence information on subgroups. Depend. Model. 5, 59–74 (2017).
Shaked, M., Shantikumar, J. G.: Stochastic Orders. Springer, New York (2007).
Tian, R.: Moment problems with applications to Value-at-Risk and portfolio management. Georgia State University, Dissertation (2008).
We thank the reviewers for their comments that greatly improved the manuscript.
Jonathan Ansari and Ludger Rüschendorf contributed equally to this work.
Department of Quantitative Finance, Albert-Ludwigs University of Freiburg, Platz der Alten Synagoge 1, KG II, Freiburg i. Br., 79098, Germany
Jonathan Ansari
Department of Mathematical Stochastics, Albert-Ludwigs University of Freiburg, Ernst-Zermelo-Straße 1, Freiburg, 79104, Germany
Ludger Rüschendorf
Both authors read and approved the final manuscript.
Correspondence to Jonathan Ansari.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ansari, J., Rüschendorf, L. Upper risk bounds in internal factor models with constrained specification sets. Probab Uncertain Quant Risk 5, 3 (2020). https://doi.org/10.1186/s41546-020-00045-y
Accepted: 13 March 2020
Risk bounds
Risk factor model
Supermodular order
Convex order
Convex risk measure
Upper product of bivariate copulas
Comonotonicity
The toric code
This article is part of a series on topological quantum computing.
Storing or manipulating information with a real physical system is naturally subject to errors. To obtain a reliable outcome from a quantum computation one needs to be certain that the processed information remains resilient to errors at all times. Overcoming errors means to detect and correct them continuously. The error detection process is based on an active monitoring of the system and the possibility of identifying errors without destroying the encoded information. Error correction employs the error detection outcome and performs the appropriate steps to correct it, thus reconstructing the original information.
Because cloning does not work on a quantum level (remember the quantum no-cloning theorem) one cannot use redundancy as one would on a classical level. So, the principle of quantum error correction is to encode information in a sophisticated way that gives the ability to monitor and correct errors. More concretely, the encoding is performed nonlocally such that errors, assumed to act in a local way, can be identified and then corrected without accessing the non-local information. Much like other quantum algorithms one makes use of entanglement but the toric code below goes a step further and uses topological information. The toric code is in fact
a fun toy model demonstrating how anyons show up as excitations of a concrete (loop) ground state
how the fusion rules we explained earlier correspond to a physical process
how loops and Wilson lines (integrals) can encode information
how quantum fluctuations can be corrected as a symptom detected by Hamiltonian dynamics.
The toric code is a particular example of what is known as quantum double models. Quantum double models are particular lattice realisations of topological systems. They are based on a finite group, G, that acts on spin states, defined on the links of the lattice. Based on these groups, stabiliser Hamiltonians can be defined consistently that have analytically tractable spectra. It can be shown that the ground states of these Hamiltonians behave like error correcting codes. Anyons are associated with properties of the spin states around each vertex or plaquette of the lattice. The fusion and braiding behaviour of the anyons depends on the property of the employed group, G. For example, an Abelian group leads to Abelian anyons and a non-Abelian group leads to non-Abelian anyons. All the properties of the anyons emerge from the mathematical structure of the quantum double.
Before going into some details of the toric code I'd like to point out the cool QTop Python project, which allows you to simulate and visualize topological quantum codes. With just a few lines one can instantiate a toric as well as a color code. A lot of fun.
Let's consider a system of localized electrons that live on the edges of a square lattice. We'll define the dynamics via the Hamiltonian
$$H = -A \sum_+ \prod_+\sigma_z - B \sum_\Box \prod_\Box\sigma_x$$
where the product in the first part is over the edges around a vertex and in the second part the edges of a plaquette.
All the terms in this Hamiltonian commute with each other. The only terms that you might suspect not to commute are a plaquette term and a vertex term that share some bonds. But you can convince yourself easily (by looking at the figure) that such terms always share an even number of spins. This means that the commutation picks up an even number of minus signs and so these terms commute as well.
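You can check the commutation claim numerically on a tiny example. The sketch below (my own labeling of the edges of a 2×2 torus with 8 spins; plain NumPy, nothing specific to QTop) builds all vertex and plaquette operators as explicit matrices and verifies that every pair commutes:

```python
# Sketch: vertex (sigma_z) and plaquette (sigma_x) operators of the toric code
# on a 2x2 torus (8 edge spins); every vertex term commutes with every plaquette term.
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

L = 2
N = 2 * L * L                                   # number of edge spins
def edge(kind, i, j):                           # 'h'orizontal or 'v'ertical edge at site (i, j)
    return (0 if kind == 'h' else L * L) + (i % L) * L + (j % L)

def op(pauli, edges):
    return reduce(np.kron, [pauli if q in edges else I2 for q in range(N)])

stars = [op(Z, {edge('h', i, j), edge('h', i - 1, j), edge('v', i, j), edge('v', i, j - 1)})
         for i in range(L) for j in range(L)]
plaqs = [op(X, {edge('h', i, j), edge('h', i, j + 1), edge('v', i, j), edge('v', i + 1, j)})
         for i in range(L) for j in range(L)]

print(all(np.allclose(A @ B, B @ A) for A in stars for B in plaqs))   # True
```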
Since the Hamiltonian is a sum of commuting terms, we can calculate the ground state $|\xi\rangle$, satisfying
$$\prod_+\sigma_z\,|\xi\rangle = |\xi\rangle\,,\qquad \prod_\Box\sigma_x\,|\xi\rangle = |\xi\rangle\,,$$
as the simultaneous ground state for all the terms. Let us first look at the vertex terms proportional to $A$. If we draw a red line through bond connecting neighboring spins with $\sigma_z=−1$ on our lattice (as shown below), then we find that each vertex in the ground state configuration has an even number of red lines coming in. Thus, we can think of the red lines forming loops that can never be open ended. This allows us to view the ground state of the toric code as a loop gas. You can also describe the ground state as one where both the vertex and plaquette operators give +1.
The loop gas can be divided into four sectors, corresponding to the parities of the two winding numbers coming from the homotopy
$$\pi_1(T^2) = \mathbb{Z}\times\mathbb{Z}$$
of the torus. If one uses the loops to store information it means that this four-dimensional vector space can encode two qubits. One calls the ground state space the stabilizer space of the code and the vertex/plaquette operators are called stabilizers.
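The fourfold degeneracy can also be seen by brute force on the same 2×2 torus: diagonalizing the Hamiltonian (here with A=B=1; again just a NumPy sketch with my own edge labeling) gives a ground energy of −8 with exactly four ground states, i.e., the two encoded qubits:

```python
# Sketch: diagonalize the 2x2-torus toric code Hamiltonian (8 spins, A = B = 1)
# and count the ground states.
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

L, N = 2, 8
h = lambda i, j: (i % L) * L + (j % L)          # horizontal edge index
v = lambda i, j: L * L + (i % L) * L + (j % L)  # vertical edge index
op = lambda P, E: reduce(np.kron, [P if q in E else I2 for q in range(N)])

H = np.zeros((2**N, 2**N))
for i in range(L):
    for j in range(L):
        H -= op(Z, {h(i, j), h(i - 1, j), v(i, j), v(i, j - 1)})    # vertex term
        H -= op(X, {h(i, j), h(i, j + 1), v(i, j), v(i + 1, j)})    # plaquette term

evals = np.linalg.eigvalsh(H)
print(evals[0], int(np.sum(np.isclose(evals, evals[0]))))           # -8.0, degeneracy 4
```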
One can get excitations of the vertex Hamiltonian by breaking loops. We can think of the end points of the loops as excitations, since the plaquette terms proportional to $B$ make the loops fluctuate but leave the end points in place. These particles (that you see in the figure below) are called the electric defects, which we label 'e'. As shown below, analogous defects in the $\sigma_x$-loops on the dual lattice are referred to as magnetic defects, which we will label 'm'.
The Wilson operators for these loops $l_e$ and $l_m$ are
$$W_e = \prod_{l_e}\sigma_z,\;W_m = \prod_{l_m}\sigma_x.$$
Hence $W_e$ and $W_m$ are conserved 'flux' operators that measure the parity of the number of electric and magnetic defects inside the loops $l_e, l_m$ respectively. Thus, the values $W_{e,m}=-1$ can also be used to define what it means to have a localized 'e' or 'm' excitation respectively. These defects describe the localized excitations of the toric code. In fact in this model, this excitation on the ground states are localized to exactly one lattice site and may be viewed as point-like particles in a vacuum.
Since electrical vertex defects are produced in pairs and can be brought back together and annihilated in pairs we have
$$e\times e = 1$$
and similarly for the magnetic defects: $m\times m = 1$. We might then wonder what happens if we bring together a vertex and a plaquette defect. They certainly do not annihilate, so we define another particle type, called $f$, which is the fusion of the two
$$e\times m = f$$
and one has $f\times f = 1$ because of associativity and commutativity
$$f\times f = (e\times m)\times (e\times m) = (e\times e)\times (m\times m) = 1.$$
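These fusion rules say that the four anyon types $\{1, e, m, f\}$ form the group $\mathbb{Z}_2\times\mathbb{Z}_2$ under fusion. A tiny sketch (my own addition) that encodes each type by its electric and magnetic parities and checks the rules:

```python
# Anyon types encoded as (electric parity, magnetic parity); fusion adds parities mod 2,
# so the four types {1, e, m, f} form the group Z2 x Z2.
ANYONS = {'1': (0, 0), 'e': (1, 0), 'm': (0, 1), 'f': (1, 1)}
NAMES = {v: k for k, v in ANYONS.items()}

def fuse(a, b):
    (ea, ma), (eb, mb) = ANYONS[a], ANYONS[b]
    return NAMES[((ea + eb) % 2, (ma + mb) % 2)]

assert fuse('e', 'e') == '1' and fuse('m', 'm') == '1'
assert fuse('e', 'm') == 'f' and fuse('f', 'f') == '1'
print({a + ' x ' + b: fuse(a, b) for a in ANYONS for b in ANYONS})
```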
Let us first consider the e particles. These are both created and moved around by applying $\sigma_x$ operators. All of the $\sigma_x$ operators commute with each other, so there should be no difference in what order we create, move, and annihilate the e particles. This necessarily implies that the e particles are bosons. There are several "experiments" we can do to show this fact. For example, we can create a pair of e's, move one around in a circle and re-annihilate the pair, then compare this to what happens if we put another e inside the loop before the experiment. We see that the presence of another e inside the loop does not alter the phase of moving the e around in a circle. The same logic applies to the magnetic excitations. In short, the electric and magnetic particles are bosons.
The proof that f on the other hand is a fermion is both beautifully simple and deep. It's also a proof which combines a lot of what we tried to explain in previous articles:
Conditional distribution for Exponential family
We have a random variable $X$ that belongs to the exponential family with p.d.f.
$$ P_X(x|\boldsymbol \theta) = h(x) \exp\left(\eta({\boldsymbol \theta}) . T(x) - A({\boldsymbol \theta}) \right) $$
where ${\boldsymbol \theta} = \left(\theta_1, \theta_2, \cdots, \theta_s \right )^T$ is the parameter vector and $\mathbf{T}(x)= \left(T_1(x), T_2(x), \cdots,T_s(x) \right)^T$ is the joint sufficient statistic and $A({\boldsymbol \theta}) = \log \int_x h(x)\exp( \eta(\boldsymbol \theta).T(x))dx$
(The notation is following the Wikipedia page on the exponential family of distributions)
Let the data be given labels such that the joint distribution is now associated with $(x, y) \in \mathcal{X}\times\mathcal{Y}$.
EDIT The sufficient statistics for this joint distribution is given by $\mathbf{T}(x, y)$
I am unable to derive the following expression for the exponential form of the conditional distribution of labels given data (ignoring the reference measure $h(x)$)
$$ P(y | x; \theta) = \exp\left(\eta({\boldsymbol \theta}) . T(x,y) - A({\boldsymbol \theta | x}) \right) $$
with $A({\boldsymbol \theta|x}) = \log \int_{\mathcal{Y}} \exp( \eta(\boldsymbol\theta).T(x,y))dy$
This expression is used in a paper on missing variables that I am trying to implement. I have tried writing out the conditional in terms of the joint probability but did not get any clean decomposition of terms.
Is there any standard proof or text that derives the expression for conditional probability of an exponential family distribution? Any hints or references would be great. Thanks.
machine-learning predictive-models conditional-probability exponential-family
AruniRC
$\begingroup$ $h(x)$ is not the natural parameter but the reference measure. $\endgroup$ – Xi'an Feb 5 '15 at 18:08
$\begingroup$ @Xi'an You're right, I am sorry - missed out on adding the joint sufficient statistics. Does it make sense now? Thanks. $\endgroup$ – AruniRC Feb 6 '15 at 0:41
$\begingroup$ Defining the "joint sufficient statistics" does not tell about the joint distribution. Is it still an exponential family with natural sufficient statistics $T(x,y)$? $\endgroup$ – Xi'an Feb 6 '15 at 7:39
$\begingroup$ Yes, the joint distribution is still an exponential family with the natural sufficient statistics $T(x,y)$ $\endgroup$ – AruniRC Feb 6 '15 at 23:02
I have answered my own question. It turned out to be a rather obvious application of Bayes Rule only after making a somewhat arbitrary assumption. My question was not very clear, mostly due to my own tenuous understanding at that time.
However, this result is used quite a lot in machine learning literature involving integrating out missing variables. I am including the proof in case others find it helpful when seeing the result.
$$ P(x, y|\boldsymbol \theta) = h(x) \exp\left(\eta({\boldsymbol \theta}) . T(x, y) - A({\boldsymbol \theta}) \right) $$
By Bayes Rule,
$$ P(y|x, \theta) = \frac{ P(x|y, \theta)\, P(y|\theta)}{ \int_{y^{'}} P(x|{y^{'}}, \theta) P(y^{'}|\theta)d{y^{'}}} = \frac{ P(x, y| \theta)}{ \int_{y^{'}} P(x,{y^{'}}| \theta) d{y^{'}}} = \frac{h(x) \exp (\eta (\theta) . T(x,y) - A(\theta))}{ \int_{y^{'}} h(x) \exp (\eta (\theta) . T(x,y^{'}) - A(\theta))dy^{'}} $$
We assumed the base reference measure $h(x)$ to be a function of $x$ only, so that it cancels between the numerator and denominator in the last step above, giving
$$ \frac{\exp ( \eta(\theta).T(x,y))}{\int_{y^{'}} \exp ( \eta(\theta).T(x,y^{'}))dy^{'}} = \exp ( \eta(\theta).T(x,y) - \log(\int_{y^{'}} \exp ( \eta(\theta).T(x,y^{'}))dy^{'}) ) = \exp ( \eta(\theta).T(x,y) - A(\theta|x) ) $$
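As a numerical sanity check of this result (my own addition, not from the original thread): take a bivariate Gaussian, which is an exponential family with sufficient statistic $T(x,y)=(x,y,x^2,xy,y^2)$, normalize $\exp(\eta\cdot T(x,y))$ over $y$ for a fixed $x$, and compare with the textbook Gaussian conditional.

```python
import numpy as np

# Joint (x, y) ~ bivariate normal written in exponential-family form with
# sufficient statistic T(x, y) = (x, y, x^2, x*y, y^2).
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
P = np.linalg.inv(Sigma)                      # precision matrix
eta = np.array([P[0, 0] * mu[0] + P[0, 1] * mu[1],
                P[0, 1] * mu[0] + P[1, 1] * mu[1],
                -0.5 * P[0, 0], -P[0, 1], -0.5 * P[1, 1]])

def T(x, y):
    return np.array([x, y, x**2, x * y, y**2])

x0 = 0.3                                       # condition on this x
ys = np.linspace(-10, 10, 4001)
dy = ys[1] - ys[0]

# p(y | x) = exp(eta.T(x, y) - A(eta | x)),  A(eta | x) = log ∫ exp(eta.T(x, y)) dy
log_unnorm = np.array([eta @ T(x0, y) for y in ys])
A_given_x = np.log(np.sum(np.exp(log_unnorm - log_unnorm.max())) * dy) + log_unnorm.max()
cond = np.exp(log_unnorm - A_given_x)

# Closed-form Gaussian conditional for comparison
m = mu[1] + Sigma[0, 1] / Sigma[0, 0] * (x0 - mu[0])
v = Sigma[1, 1] - Sigma[0, 1]**2 / Sigma[0, 0]
cond_exact = np.exp(-(ys - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

print(np.max(np.abs(cond - cond_exact)))      # small (limited only by the grid): the densities agree
```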
1 – 50 out of 100,521 research outputs (Vancouver format)
García-Mauriño SM, Díaz-Quintana A, Rivero-Rodríguez F, Cruz-Gallardo I, Grüttner C, Hernández-Vellisca M et al. A putative RNA binding protein from Plasmodium vivax apicoplast. FEBS Open Bio. 2017 Dec 31. https://doi.org/10.1002/2211-5463.12351
Ropkins K, DeFries TH, Pope F, Green DC, Kemper J, Kishan S et al. Evaluation of EDAR vehicle emissions remote sensing technology. Science of the Total Environment. 2017 Dec 31;609:1464-1474. https://doi.org/10.1016/j.scitotenv.2017.07.137
Vergu C. Twistor Parametrization of Locally BPS Super-Wilson Loops. Advances in Mathematical Physics. 2017 Dec 31;2017. 4852015. https://doi.org/10.1155/2017/4852015
Roach RC, Lee H. "Three words you must never say": Hermione Lee interviewed about Interviewing. Biography-An Interdisciplinary Quarterly. 2017 Dec 31.
Babiloni C, Del Percio C, Lizio R, Noce G, Lopez S, Soricelli A et al. Abnormalities of Resting State Functional Cortical Connectivity In Patients With Dementia Due To Alzheimer's and Lewy Body Diseases: An Eeg Study. Neurobiology of Aging. 2017 Dec 30. https://doi.org/10.1016/j.neurobiolaging.2017.12.023
Shen H, Li F, Yan H, Karimi HR, Lam HK. Finite-time event-triggered $\mathcal{H}_\infty$ control for T-S fuzzy Markov jump systems. IEEE Transactions on Fuzzy Systems. 2017 Dec 30. https://doi.org/10.1109/TFUZZ.2017.2788891
Schmidt T. What Kind of Work Is This? Performance and Materialisms in the Gallery. Performance Paradigm. 2017 Dec 30;13:7-28.
Capristo E, Panunzi S, De Gaetano A, Raffaelli M, Guidone C, Iaconelli A et al. Intensive lifestyle modifications with or without liraglutide 3mg vs sleeve gastrectomy: A three-arm non-randomized, controlled, pilot study. DIABETES AND METABOLISM. 2017 Dec 29. https://doi.org/10.1016/j.diabet.2017.12.007
Lumyongsatien J, Yangsakul W, Bunnag C, Hopkins C, Tantilipikorn P. Reliability and validity study of Sino-nasal outcome test 22 (Thai version) in chronic rhinosinusitis. BMC Ear, Nose & Throat Disorders. 2017 Dec 29;17(1). https://doi.org/10.1186/s12901-017-0047-7
Tzouvara V, Papadopoulos C, Randhawa G. Self-Stigma Experiences Among Older Adults with Mental Health Problems Residing in Long-Term Care Facilities: A Qualitative Study. Issues in Mental Health Nursing. 2017 Dec 29;1-8. https://doi.org/10.1080/01612840.2017.1383540
Petzke F, Jensen KB, Kosek E, Choy E, Carville S, Fransson P et al. Using fMRI to evaluate the effects of milnacipran on central pain processing in patients with fibromyalgia. Scandinavian Journal of Pain. 2017 Dec 29;4(2):65-74. https://doi.org/10.1016/j.sjpain.2012.10.002
de Jonge P, Wardenaar K, Evans-Lacko S, Kovess-Masfety V, Aguilar-Gaxiola S, Al-Hamzawi A et al. Complementary and Alternative Medicine Contacts by Persons with Mental Disorders in 25 Countries: Results from the World Mental Health Surveys. Epidemiology And Psychiatric Sciences. 2017 Dec 28. https://doi.org/10.1017/S2045796017000774
McCutcheon R, Beck K, Jauhar S, Howes OD. Defining the Locus of Dopaminergic Dysfunction in Schizophrenia: A Meta-analysis and Test of the Mesolimbic Hypothesis. Schizophrenia Bulletin. 2017 Dec 28. https://doi.org/10.1093/schbul/sbx180
Jurcevic S, Juif PE, Hamid C, Greenlaw R, D'Ambrosio D, Dingemanse J. Effects of multiple-dose ponesimod, a selective S1P1 receptor modulator, on lymphocyte subsets in healthy humans. Drug Design, Development and Therapy. 2017 Dec 28;11:123-131. https://doi.org/10.2147/DDDT.S120399
Duignan M, Everett S, Walsh L, Cade N. Leveraging physical and digital liminoidal spaces: the case of the #EATCambridge festival. Tourism Geographies. 2017 Dec 28;1-22. https://doi.org/10.1080/14616688.2017.1417472
Zaric M, Becker PD, Hervouet C, Kalcheva P, Yus BI, Cocita C et al. Long-lived tissue resident HIV-1 specific memory CD8 + T cells are generated by skin immunisation with live virus vectored microneedle arrays. JOURNAL OF CONTROLLED RELEASE. 2017 Dec 28;268:166-175. https://doi.org/10.1016/j.jconrel.2017.10.026
Gaughran F, Stahl D, Ismail K, Greenwood K, Atakan Z, Gardner-Sood P et al. Randomised control trial of the effectiveness of an integrated psychosocial health promotion intervention aimed at improving health and reducing substance use in established psychosis (IMPaCT). BMC Psychiatry. 2017 Dec 28;17(1). 413. https://doi.org/10.1186/s12888-017-1571-0
Pels A, Kenny LC, Alfirevic Z, Baker PN, von Dadelszen P, Gluud C et al. STRIDER (Sildenafil TheRapy in dismal prognosis early onset fetal growth restriction): an international consortium of randomised placebo-controlled trials. BMC Pregnancy and Childbirth. 2017 Dec 28;17(1):440. https://doi.org/10.1186/s12884-017-1594-z
Lund C, Cois A. Simultaneous social causation and social drift: Longitudinal analysis of depression and poverty in South Africa. Journal of Affective Disorders. 2017 Dec 28. https://doi.org/10.1016/j.jad.2017.12.050
Mathie K, Lainer J, Spreng S, Dawid C, Andersson DA, Bevan SJ et al. Structure-Pungency Relationships and TRP Channel Activation of Drimane Sesquiterpenes in Tasmanian Pepper (Tasmannia lanceolata). JOURNAL OF AGRICULTURAL AND FOOD CHEMISTRY. 2017 Dec 28;65(28):5700-5712. https://doi.org/10.1021/acs.jafc.7b02356
Clark LL, Merrick I, O'Driscoll F, Lycett H, White R, White A. Template for action for patients with intellectual disabilities in mental health services. British Journal of Mental Health Nursing. 2017 Dec 28;6(6):279-285. https://doi.org/10.12968/bjmh.2017.6.6.279
Kalk NJ, Robertson JR, Kidd B, Day E, Kelleher MJ, Gilvarry E et al. Treatment and Intervention for Opiate Dependence in the United Kingdom: Lessons from Triumph and Failure. European Journal on Criminal Policy and Research. 2017 Dec 28;1-18. https://doi.org/10.1007/s10610-017-9364-z
Jutten RJ, Harrison J, Lee Meeuw Kjoe PR, Opmeer EM, Schoonenboom NSM, de Jong FJ et al. A novel cognitive-functional composite measure to detect changes in early Alzheimer's disease: Test–retest reliability and feasibility. Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring. 2017 Dec 27;10:153-160. https://doi.org/10.1016/j.dadm.2017.12.002
Archer S, Hull L, Soukup T, Mayer E, Athanasiou T, Sevdalis N et al. Development of a theoretical framework of factors affecting patient safety incident reporting: A theoretical review of the literature. BMJ Open. 2017 Dec 27. https://doi.org/10.1136/bmjopen-2017-017155
Hellings PW, Borrelli D, Pietikainen S, Agache I, Akdis C, Bachert C et al. European Summit on the Prevention and Self-Management of Chronic Respiratory Diseases: Report of the European Union Parliament Summit (29 March 2017). Clinical and translational allergy. 2017 Dec 27;7(1). https://doi.org/10.1186/s13601-017-0186-3
Sadik CD, Bischof J, Van Beek N, Dieterich A, Benoit S, Sárdy M et al. Genomewide association study identifies GALC as susceptibility gene for mucous membrane pemphigoid. Experimental Dermatology. 2017 Dec 27;26(12):1214-1220. https://doi.org/10.1111/exd.2017.26.issue-12
Rahim A, Meskas J, Drissler S, Yue A, Lorenc A, Laing A et al. High Throughput Automated Analysis of Big Flow Cytometry Data. Methods. 2017 Dec 27. https://doi.org/10.1016/j.ymeth.2017.12.015
Mueller C, Perera G, Rajkumar AP, Bhattarai M, Price A, O'Brien JT et al. Hospitalization in people with dementia with Lewy bodies: Frequency, duration, and cost implications. Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring. 2017 Dec 27. https://doi.org/10.1016/j.dadm.2017.12.001
van Zoest RA, Underwood J, De Francesco D, Sabin CA, Cole JH, Wit FW et al. Structural Brain Abnormalities in Successfully Treated HIV Infection: Associations With Disease and Cerebrospinal Fluid Biomarkers. Journal of Infectious Diseases. 2017 Dec 27;217(1):69-81. https://doi.org/10.1093/infdis/jix553
Swords C, Patel A, Smith ME, Williams RJ, Kuhn I, Hopkins C. Surgical and interventional radiological management of adult epistaxis: Systematic review. Journal of Laryngology and Otology. 2017 Dec 27;131(12):1108-1130. https://doi.org/10.1017/S0022215117002079
Lauder K, Toscani A, Scalacci N, Castagnolo D. Synthesis and Reactivity of Propargylamines in Organic Chemistry. Chemical Reviews. 2017 Dec 27;117(24):14091-14200. https://doi.org/10.1021/acs.chemrev.7b00343
INTEGRATE (THE NATIONAL ENT TRAINEE RESEARCH NETWORK), Ellis M, Hall A, Hardman J, Mehta N, Nankiwell P et al. The British Rhinological Society multidisciplinary consensus recommendations on the hospital management of epistaxis. Journal of Laryngology and Otology. 2017 Dec 27;131(12):1142-1156. https://doi.org/10.1017/S0022215117002018
Wood JDG. The integrating role of private homeownership and mortgage credit in British neoliberalism. HOUSING STUDIES. 2017 Dec 27. https://doi.org/10.1080/02673037.2017.1414159
Salter JP. The multiple accountabilities of the European Banking Authority. Journal of Economic Policy Reform. 2017 Dec 27;1-16. https://doi.org/10.1080/17487870.2017.1400436
Hazra NC, Gulliford MC, Rudisill C. 'Fair innings' in the face of ageing and demographic change. Health economics policy and law. 2017 Dec 26. https://doi.org/10.1017/S1744133117000329
Randall TS, Yip YY, Wallock-Richards DJ, Pfisterer K, Sanger A, Ficek W et al. A small-molecule activator of kinesin-1 drives remodeling of the microtubule network. Proceedings of the National Academy of Sciences of the United States of America. 2017 Dec 26;114(52):13738-13743. https://doi.org/10.1073/pnas.1715115115
Blower JE, Cousin SF, Gee AD. Convergent synthesis of 13N-labelled Peptidic structures using aqueous [13N]NH3. EJNMMI Radiopharmacy and Chemistry. 2017 Dec 26. https://doi.org/10.1186/s41181-017-0035-7
Mukherjee RK, Roujol S, Chubb H, Harrison J, Williams S, Whitaker J et al. Epicardial electroanatomical mapping, radiofrequency ablation, and lesion imaging in the porcine left ventricle under real-time magnetic resonance imaging guidance —an in vivo feasibility study. EUROPACE. 2017 Dec 26. https://doi.org/10.1093/europace/eux341
Krishnan ML, Wang Z, Aljabar P, Ball G, Mirza G, Saxena A et al. Machine learning shows association between genetic variability in PPARG and cerebral connectivity in preterm infants. Proceedings of the National Academy of Sciences of the United States of America. 2017 Dec 26;114(52):13744-13749. https://doi.org/10.1073/pnas.1704907114
Quednow BB, Ejebe K, Wagner M, Giakoumaki SG, Bitsios P, Kumari V et al. Meta-analysis on the association between genetic polymorphisms and prepulse inhibition of the acoustic startle response. Schizophrenia Research. 2017 Dec 26. https://doi.org/10.1016/j.schres.2017.12.011
Paternoster V, Rajkumar AP, Nyengaard JR, Børglum AD, Grove J, Christensen JH. THE IMPORTANCE OF DATA STRUCTURE IN STATISTICAL ANALYSIS OF DENDRITIC SPINE MORPHOLOGY. Journal of Neuroscience Methods. 2017 Dec 26. https://doi.org/10.1016/j.jneumeth.2017.12.022
Boniface S, Malet-Lambert I, Coleman R, Deluca P, Donoghue K, Drummond C et al. The Effect of Brief Interventions for Alcohol Among People with Comorbid Mental Health Conditions: A Systematic Review of Randomized Trials and Narrative Synthesis. Alcohol and Alcoholism. 2017 Dec 26;53(3):282-293.
Heath C, Luff P. The Naturalistic Experiment: Video and Organizational Interaction . ORGANIZATIONAL RESEARCH METHODS. 2017 Dec 26. https://doi.org/10.1177/1094428117747688
Simoncelli S, Makarova M, Wardley W, Owen DM. Toward an Axial Nanoscale Ruler for Fluorescence Microscopy. ACS Nano. 2017 Dec 26;11(12):11762-11767. https://doi.org/10.1021/acsnano.7b07133
Zheng X, Morrell J, Watts K. A quantitative longitudinal study to explore factors which influence maternal self-efficacy among Chinese primiparous women during the initial postpartum period. MIDWIFERY. 2017 Dec 25. https://doi.org/10.1016/j.midw.2017.12.022
Arora S, Chun B, Ahlawat RK, Abaza R, Adshead J, Porter JR et al. Conversion of Robot Assisted Partial Nephrectomy to Radical Nephrectomy; a Prospective Multi-Institutional Study. Urology. 2017 Dec 25. https://doi.org/10.1016/j.urology.2017.11.046
Nousias S, Chadebecq F, Pichat J, Keane P, Ourselin S, Bergeles C. Corner-based geometric calibration of multi-focus plenoptic cameras. 2017 IEEE International Conference on Computer Vision (ICCV). 2017 Dec 25;2017-October:957-965. https://doi.org/10.1109/ICCV.2017.109
Thornton C, Jones A, Nair S, Aabdien A, Mallard C, Hagberg H. Mitochondrial dynamics, mitophagy and biogenesis in neonatal hypoxic-ischaemic brain injury. FEBS Letters. 2017 Dec 25.
Mongue-Din H, Patel AS, Looi YH, Grieve DJ, Anilkumar N, Sirker A et al. NADPH Oxidase-4 Driven Cardiac Macrophage Polarization Protects Against Myocardial Infarction–Induced Remodeling. JACC: Basic to Translational Science. 2017 Dec 25;2(6):688-698. https://doi.org/10.1016/j.jacbts.2017.06.006
Rucker JJH, Iliff J, Nutt DJ. Psychiatry & the psychedelic drugs. Past, present & future. Neuropharmacology. 2017 Dec 25. https://doi.org/10.1016/j.neuropharm.2017.12.040
Relationship between formal system and formal languages
In a computer science course it is common to study the hierarchy of formal languages, grammars, automata and Turing machines. I wonder what the relationship of these objects is with formal systems.
For example, lambda calculus is said to be a formal system. Would its grammar also be considered a formal system?
formal-languages formal-grammars
Rafael Castro
$\begingroup$ I'm asking if a formal grammar can be categorized as a formal system. English is not my usual language, so probably my sentence could be better written. Feel free to rewrite it, if you want to. $\endgroup$ – Rafael Castro May 14 '15 at 12:58
$\begingroup$ Well, I would have edited if it was clear to me what you meant. But OK, I tried editing with one possible guess at what you might have meant. Does the edit represent what you were trying to ask? $\endgroup$ – D.W.♦ May 15 '15 at 6:05
In my opinion, a formal system should have
A well defined set of symbols.
A well defined grammar, which tells how well-formed formulas are constructed out of the symbols.
One or more well defined inference calculi, which might work similar to the inference calculus associated with a grammar.
One or more semantics, allowing to assign meaning to the formulas, propositions and statements of the formal system.
Even though the last point might be contested, it is the one which is responsible for the significant difference between a formal system and a grammar or a formal language. The inference calculus of a formal system might indeed coincide with the calculus of some grammar, even though it won't normally coincide with the calculus of the grammar of the formal system itself.
(Even a grammar can have multiple inference calculi, but using multiple inference calculi for one formal system is more common in logic, where you want to prove things like cut elimination, or use one formal system as basis for a hierarchy of formal systems.)
A formal language is associated to both a grammar and a formal system. For a formal system, both the set of well-formed formulas, and the set of valid well-formed formulas are formal languages. The formal language is a sort of equivalence resulting from ignoring additional structure like the semantics of the inference calculus, or the finer parts of a grammar. It is one obvious link between formal systems and grammars, but formal systems and grammars can be closer related than expressible by a formal language alone (i.e. they can have an equivalent inference calculus).
Thomas Klimpel
$\begingroup$ Is there any reference on this subject? I would appreciate reading a text that discusses this. $\endgroup$ – Rafael Castro May 12 '15 at 0:40
$\begingroup$ I like the Stanford Encyclopedia of Philosophy, for example plato.stanford.edu/entries/goedel-incompleteness, plato.stanford.edu/entries/hilbert-program or plato.stanford.edu/entries/frege. But you could also lookup "formal system" in a normal encyclopedia, like wikipedia or Encyclopædia Britannica. Usage of formal systems will come up in universal algebra (equational logic, and quasi-equational logic), logic (propositional calculus, predicate calculus, modal logic), in computer science (automata and formal languages), ... $\endgroup$ – Thomas Klimpel May 13 '15 at 16:23
Generally speaking, a formal system is comprised of
a language distinguishing its well-formed formulas from those strings over the alphabet that are not well-formed
some kind of semantics that says which formulas are true and which are not
axioms and inference rules which attempt to generate, computationally, exactly those formulas which are true given the semantics.
We can specify the language of well-formed formulas using a grammar, but a grammar is not itself a formal system.
Ray Toal
A formal language $\mathcal{L}$ is composed of:
an alphabet of symbols, that is a set of symbols with the particularity that each of those symbols can be specified without reference to any interpretation. The alphabet of a formal language $\mathcal{L}$ is often referred as $\Sigma$.
a grammar that determines which sequences of symbols in $\Sigma$ are well-formed formulas (often called wffs) in $\mathcal{L}$.
A formal system $\mathcal{S}$ is:
A formal language $\mathcal{L}$
A deductive apparatus.
What is a deductive apparatus you may ask?
An arbitrary set of wffs in $\mathcal{L}$ that are axioms
A set of inference rules that determines which wffs have a relation of "immediate consequence" between them.
Here are some excellent references, whether you want to get started or dive into more meaty stuff:
Metalogic: An Introduction to the metatheory of Standard First Order Logic by Geoffrey Hunter.
Automata, Formal Languages and Algebraic Systems by Masami Ito
Erwan Aaron
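To make the distinction drawn in the answers above concrete, here is a minimal Python sketch (an illustrative addition) of a classic toy formal system, Hofstadter's MIU system: the alphabet and the well-formedness predicate play the role of the grammar and formal language, the axiom plus rewrite rules are the deductive apparatus, and the set of derivable theorems is a formal language contained in the wffs.

```python
# A toy formal system: Hofstadter's MIU system.
#   alphabet : {'M', 'I', 'U'}
#   grammar  : a wff is a string over the alphabet that starts with a single 'M'
#   axiom    : 'MI'
#   rules    : four rewrite (inference) rules; the derivable theorems form a formal language.
ALPHABET = set("MIU")

def is_wff(s):
    return s.startswith("M") and "M" not in s[1:] and set(s) <= ALPHABET

def consequences(s):
    out = set()
    if s.endswith("I"):                      # rule 1: xI  -> xIU
        out.add(s + "U")
    out.add("M" + s[1:] * 2)                 # rule 2: Mx  -> Mxx
    for i in range(1, len(s) - 2):           # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(1, len(s) - 1):           # rule 4: UU  -> (nothing)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def theorems(max_len=10):
    seen, frontier = {"MI"}, {"MI"}
    while frontier:
        frontier = {t for s in frontier for t in consequences(s)
                    if len(t) <= max_len and t not in seen}
        seen = seen | frontier
    return seen

thms = theorems()
assert all(is_wff(t) for t in thms)          # the deductive apparatus stays inside the language
print(sorted(thms, key=len)[:12])
```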
What you see in an introductory course in formal languages is just a small sample of the multitude of models used to define and study languages (or functions, numbers, etc.) formally, i.e., regarding their syntactical structure - that has a dynamic component, which we call "computation" -, setting aside what could be considered their content, to be "reattached" to these structures a posteriori, or not at all. These different models come from different contexts and have different applications, theoretical as well as practical.
The language used to describe these models can also be formalized, so you have grammars both as formal systems and as languages. This kind of circularity is problematic, but unavoidable, and in many cases desirable.
The notion of formal system as it is used in computer science has to do historically with the investigation of formal properties of mathematical logic systems. The standard definitions you will see in other answers come from this common origin. There are other kinds of formal systems in other fields of research, the term is polysemic. The distinction between concepts of form and content depends on where you're coming from.
André Souza Lemos
I'm looking into the same question, and it seems there is no clarity. I hope someone can provide some.
See, for instance, what Levelt states in his book ("An Introduction to the Theory of Formal Languages and Automata", Chapter 1, p. 1):
From a mathematical point of view, grammars are FORMAL SYSTEMS, like Turing machines, computer programs, prepositional logic, theories of inference, neural nets, and so forth. [1]
Despite this, what the other answers point out here really matters, because formal grammars do seem to lack some fundamental attributes (i.e. a deductive apparatus) required to count as formal systems.
However, if a Turing machine is a formal system [2], and formal languages (and their representations, grammars) are equivalent to them, why are grammars not formal systems too?
I hope this consideration, even though it is not conclusive, may prompt an improved answer to this problem.
Gabrer
Show that $\int_0^x f(t)dt$ is compact and injective
Consider $T:L^2([0,1])\to L^2([0,1])$ given by $(Tf)(x)=\int_0^x f(t)dt$ with $x\in[0,1]$. Show that $T$ is compact and injective.
Is my solution correct? My solution:
By Arzelà-Ascoli it suffices to show that $T$ is closed, bounded and equicontinuous.
Boundedness: $\Vert Tf \Vert\leq (\int_0^x 1 dt)^{1/2}(\int_0^x f^2(t)dt)^{1/2}\leq \Vert f\Vert_2$ so that $\Vert T\Vert\leq 1$.
T is closed: let $(f_n,Tf_n)$ be a sequence in the Graph of T s.t. $f_n\to f$ and $Tf_n\to g$. So we have to show that $g=Tf$. Thus $\forall \varepsilon$ $\exists N$ such that $\Vert Tf_n-g\Vert_2\leq \varepsilon$ for all $n>N$. \begin{equation} \Vert Tf_n-g\Vert_2^2=\int_0^1\left(\int_0^xf_n(t)dt-g(x)\right)^2dx=\int_0^1\left(\int_0^x\vert f_n(t)-g'(t)\vert dt\right)^2dx \end{equation} and so by uniqueness of the limit $g'=f$. Thus $Tf=Tg'=g$.
Equicontinuity: Fix $\varepsilon>0$; then there exists $\delta =\varepsilon$ s.t. if $f,g\in L^2([0,1])$ are s.t. $\Vert f-g\Vert_2\leq \delta$ then $\Vert Tf-Tg\Vert_2\leq \Vert T\Vert\Vert f-g\Vert_2\leq \varepsilon$
We see that $T$ is injective, since if $Tf(x)=0$ then it implies that $f(t)=0$ $\forall t\in [0,1]\backslash x$. But since $f$ is continuous $f(x)=0$ and thus $f\equiv 0$.
Please help me check whether my solution is correct, especially boundedness and closedness.
functional-analysis compact-operators arzela-ascoli
MorganeMaPh
$\begingroup$ This is not correct usage of Arzela-Ascoli. The AA theorem tells you exactly when a subset of the continuous functions is compact in the uniform norm, not the $L^2$ norm. Secondly, why would $f$ be continuous? Not every $L^2$ function is continuous. $\endgroup$ – Shalop Aug 7 '16 at 8:43
$\begingroup$ Ok, actually in the exercise, it is not precise with which norm we must work. But the hint is given: use AA. $\endgroup$ – MorganeMaPh Aug 7 '16 at 8:44
$\begingroup$ If you're working on $L^2$ then you're obviously using the $L^2$ norm. I will say that Arzela Ascoli can indeed be used to solve this problem correctly. First, by noting that $T$ maps $L^2$ into $C[0,1]$, and then by noting that $|Tf(x)-Tf(y)| \leq \| f \cdot 1_{[x,y]} \|_1 \leq \|f\|_2 \cdot \|1_{[x,y]}\|_2 = \|f\|_2 \cdot |x-y|^{1/2}$. This shows that $T(B)$ is equicontinuous for any ball $B \subset L^2$. Then use the fact that the $L^2$ norm is weaker than the uniform norm on $[0,1]$. $\endgroup$ – Shalop Aug 7 '16 at 8:47
$\begingroup$ All right! Thank you @shalop , and what about the closeness? $\endgroup$ – MorganeMaPh Aug 7 '16 at 8:49
$\begingroup$ @Shalop does your last sentence mean: $\Vert \Vert_2\leq \Vert \Vert$ with the later being the uniform norm? $\endgroup$ – MorganeMaPh Aug 7 '16 at 8:55
Let's first prove that $T$ is injective. We need to show that $Tf = 0 \implies f=0$ almost surely. Note that if $\int_0^x f=0$ for all $x$, then $f$ must be almost everywhere zero (because any finite Borel measure on $[0,1]$ is determined by its values on sets of the form $[0,x)$ with $x>0$; in this case our Borel measure is $A \mapsto \int_A f$).
Secondly, let's show that $T$ is compact. This means that if $\{f_n\}$ is a sequence in $L^2$ with $\|f_n\| \leq 1$ for all $n$, then it must be true that $Tf_n$ converges along a subsequence (w.r.t. the $L^2$ norm).
So let $f_n$ be such a sequence in $L^2$. Then we note that for all $0 \leq x<y \leq 1$, Cauchy Schwarz gives $$|Tf_n(y)-Tf_n(x)| = \bigg| \int f_n \cdot 1_{[x,y]} \bigg| \leq \|f_n \cdot 1_{[x,y]} \|_1 \leq \|f_n\|_2 \cdot \|1_{[x,y]}\|_2 \leq 1 \cdot |x-y|^{1/2}$$ This proves that $\{Tf_n\}$ is equi-Hölder(1/2), and hence is equicontinuous and pointwise bounded. By the Arzela-Ascoli Theorem ,we can conclude that the $\{Tf_n\}$ converge uniformly along a subsequence $\{Tf_{n_k}\}$. Since the uniform norm dominates the $L^2$ norm on $[0,1]$, it follows that the same subsequence also converges in $L^2$. This proves that $T$ is compact.
Shalop
$\begingroup$ also do you think we can show that $T$ is compact by showing that it is a finit-rank operator? Does it? $\endgroup$ – MorganeMaPh Aug 9 '16 at 15:06
$\begingroup$ It's not a finite rank operator. Can you tell me why? $\endgroup$ – Shalop Aug 10 '16 at 7:11
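As a purely numerical illustration of the answer and the comments above (my own addition, not a proof): discretizing $T$ on a uniform grid gives a lower-triangular matrix whose singular values decay towards zero, which is the hallmark of compactness, while none of them vanish, consistent with injectivity and with $T$ not being a finite-rank operator.

```python
import numpy as np

# Discretize (Tf)(x) = ∫_0^x f(t) dt on a uniform grid: a lower-triangular matrix.
N = 400
h = 1.0 / N
T = np.tril(np.ones((N, N))) * h   # invertible (diagonal entries h), so no zero singular values

s = np.linalg.svd(T, compute_uv=False)
print(s[:5])        # largest singular values: they approximate those of the continuous operator
print(s[-5:])       # smallest ones: tiny but nonzero, so the discretization is not finite rank
# For comparison, the singular values of the continuous Volterra operator are 2/((2k-1)*pi):
print([2 / ((2 * k - 1) * np.pi) for k in range(1, 6)])
```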
Is there a way one can combine two correlated hash outputs to maximize the collision resistance?
While practically diving into collision resistance, I am trying to first write a non-cryptographic hash (to learn from it) which is limited to arithmetic operations on 32-bit integers only. It produces two 32-bit results similar to MurmurHash2_x86_64 (aka MurmurHash64B), but each have differing initial states and are mixed together more thoroughly (MurmurHash2_x86_64 only mixes at the end). Essentially I aim to produce a fast, well-distributed and collision-resistant 64-bit hash using two 32-bit states.
Trying to learn some things by looking at MurmurHash2_x86_64, I noticed that at first glance, it produces a 64-bit digest. But the author had this to say on the SMHasher wiki:
MurmurHash2_x86_64 computes two 32-bit results in parallel and mixes them at the end, which is fast but means that collision resistance is only as good as a 32-bit hash. I suggest avoiding this variant.
Then I saw this exchange on the subject:
Hacker News comment by martincmartin:
Computing two 32-bit results in parallel and mixing them at the end does NOT mean collision resistance is only as good as a 32-bit hash. For that, you need to compute ONE 32-bit result, then transform it into a 64-bit result.
Hacker News comment (reply) by finnw
Depends whether the two 32-bit hashes are correlated with each other. If there is no correlation then a pair of 32-bit hashes is no more likely to collide than a single 64-bit hash. But this is difficult to achieve, and you should not assume (for example) running the same algorithm twice with different initial states will produce uncorrelated hashes.
This made me naively test MurmurHash2_x86_64 to find collisions in one 32-bit result, and I found that the second result had no collision. This confused me, because that seemed to be the very definition of collision resistance greater than what a 32-bit hash can provide… and it contradicted what Appleby said about his function.
This makes me ask:
How can two 32-bit results of a hash function correlate, or otherwise increase probability of collisions? Can the outputs from multiple functions correlate at all? And if, how could this weaken the collision resistance of their combination?
What can be done to avoid this situation, and to achieve the collision resistance of a 64-bit hash (or more) using multiple 32-bit results? Is there a way one can combine two correlated hash outputs to maximize the collision resistance?
hash algorithm-design collision-resistance
bryc
Collision resistance is a property of a family of hash functions $H_k$ for a key $k$, measured by the probability that a random algorithm can achieve at finding a collision given a random key $k$. Specifically, it is measured by $$\Pr[x \ne y, H_k(x) = H_k(y)]$$ where $k$ is a uniform random key, and $(x, y) = A(k)$ are chosen by a random collision-finding algorithm $A$. There are generic algorithms $A$, such as Pollard's $\rho$, achieving probability near 1 with modest memory cost requiring $2^{n/2}$ evaluations of the function, where $n$ is the number of bits in the hashes, and there are various area-time tradeoffs available. If this probability is negligible for any algorithm $A$ limited to area*time cost below $O(2^{n/2})$, then we call $H_k$ collision-resistant.
Concerning MurmurHash:
First, 64 bits is too small for any useful notion of collision resistance, because it generically takes only about four billion evaluations of a hash function to find a collision, and that cost is affordable on your laptop today—as in, before the end of the day.
Second, MurmurHash2 and MurmurHash3 aren't even collision-resistant in the formal sense, because there are better-than-generic algorithms for computing collisions under any prescribed key. Actually, it's worse than that: there are pairs of messages that collide under every key in MurmurHash. For example, 390b2e1717dfba58 and b9e9ba6497bd47a6 collide under every key in MurmurHash2. The same applies to MurmurHash3 (and CityHash64), and can be extended into $m$-way multicollisions for any $m$ you want. You can create more of them nearly instantaneously in the privacy of your own living room.
To address the letter of your questions, simply concatenating hash values derived by any iterated hash functions (which is how ~all hash functions are made to work on arbitrary-length inputs, MurmurHash included) fails to achieve collision resistance much better than the individual hash functions alone. Concatenating $H_k$ and $H_{k'}$ with independent keys $k$ and $k'$ doesn't necessarily give worse than 32-bit collision resistance (which would mean a cost of less than a few tens of thousands of evaluations of the function to find a collision), but it doesn't give much better collision resistance either—it certainly won't even reach 64-bit collision resistance.
But since you're asking about 64-bit non-cryptographic hash functions, I can't imagine you actually want collision resistance. Here are a couple of alternatives that you might actually want:
You might want a universal hash family, or a $\varepsilon$-almost universal hash family. This means that for any fixed messages $x$ and $y$, and for uniform random $k$, the collision probability is bounded: $$\Pr[H_k(x) = H_k(y)] \leq \varepsilon,$$ a property which we can often prove of specific constructions. Note that this implies nothing about the difficulty of finding collisions if you know the key: it is only a bound on the probability of collisions under uniform random unknown key.
This prevents an adversary from furnishing you a data set that will, when you load it into a hash table, have many collisions with nonnegligible probability. On the other hand, it doesn't prevent an adaptive adversary from measuring timing of hash table operations to learn about the key $k$ and then furnishing in more collisions. For that you will need a PRF—see below.
You can get smaller collision probabilities by concatenating independent universal hash functions. Pick keys $k_0$ and $k_1$ independently; if $H_k$ was an $\varepsilon$-universal hash family, then $x \mapsto H_{k_0}(x) \mathbin\| H_{k_1}(x)$ is an $\varepsilon^2$-universal hash family. Proof: A collision $x \ne y$ means $H_{k_0}(x) = H_{k_0}(y)$ and $H_{k_1}(x) = H_{k_1}(y)$, which, since $H_{k_0}$ and $H_{k_1}$ are independent, has probability \begin{align*} \Pr&[H_{k_0}(x) = H_{k_0}(y), H_{k_1}(x) = H_{k_1}] \\ &= \Pr[H_{k_0}(x) = H_{k_0}(y)]\cdot\Pr[H_{k_1}(x) = H_{k_1}(y)] \\ &\leq \varepsilon^2. \end{align*}
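To see the squaring effect concretely, here is a small Monte Carlo sketch (my own addition, using the standard $h_{a,b}(x) = ((ax+b) \bmod p) \bmod m$ family rather than anything from the question): it estimates the collision probability of one fixed pair of inputs under a single hash and under the concatenation of two independently keyed hashes.

```python
import random

# Empirical sanity check: concatenating two independently keyed universal hashes
# roughly squares the collision probability for one fixed pair of inputs.
p = (1 << 61) - 1            # Mersenne prime used as the field size
m = 64                       # number of output buckets per hash

def h(key, x):
    a, b = key
    return ((a * x + b) % p) % m

def rand_key():
    return (random.randrange(1, p), random.randrange(p))

x, y = 123456789, 987654321  # any fixed distinct inputs
trials = 500_000
single = concat = 0
for _ in range(trials):
    k0, k1 = rand_key(), rand_key()
    c0 = h(k0, x) == h(k0, y)
    c1 = h(k1, x) == h(k1, y)
    single += c0
    concat += c0 and c1

print(single / trials, 1 / m)        # roughly 1/m
print(concat / trials, 1 / m ** 2)   # roughly 1/m^2
```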
A variation on this theme is relevant in cryptography for message authentication codes: an $\varepsilon$-almost xor-universal hash family, where $$\Pr[H_k(x) \oplus H_k(y) = \delta] \leq \varepsilon$$ for any $x$, $y$, and $\delta$, and random $k$, makes a good one-time secret-key authenticator. For example, Poly1305 and GHASH are $\varepsilon$-almost xor-universal hash families with $\varepsilon \lll \lceil\ell/128\rceil/2^{100}$ where $\ell$ is the message length. AES-GCM uses GHASH in Carter–Wegman's construction to authenticate many messages; NaCl crypto_secretbox_xsalsa20poly1305 uses Poly1305 in a different construction to authenticate many messages.
There are other variations on the theme, like $\varepsilon$-almost pairwise independent hash families, where $$\Pr[H_k(x) = u, H_k(y) = v] \leq \varepsilon^2$$ for any fixed messages $x$ and $y$, any fixed hash values $u$ and $v$, and uniform random $k$. Zobrist hashes are a popular choice of pairwise independent hash families outside cryptography.
You might want a pseudorandom function family, or PRF. This means that no cost-limited adversary given oracle access to a uniformly randomly chosen member $H_k$ of the family can distinguish from a uniform random function with the same domain and codomain: $$|\Pr[A(H_k) = 1] - \Pr[A(U) = 1]|$$ is bounded by a small constant, for any cost-limited random distinguishing algorithm $A$, where $U$ is a uniform random function.
Note that the adversary isn't given $k$ here—they are only given $H_k$. Obviously, they could try to guess $k$ and use the oracle to confirm their guess, but this succeeds with negligible probability as long as the family, i.e. the number of distinct keys, is reasonably large, say at least $2^{128}$.
This implies that even if the adversary knows $H_k(x_1)$, $H_k(x_2)$, $\ldots$, $H_k(x_{m-1})$, that doesn't help them to guess $H_k(x_m)$ for a distinct message $x_m$. That means that even timing information about hash table operations—revealing when collisions happened—won't let them guess what other keys will also collide. It also means that a PRF makes a good message authentication code.
Concatenating independent PRFs, $x \mapsto H_{k_0}(x) \mathbin\| H_{k_1}(x)$, does not confer better PRF-security than any one of them alone. Why? An adversary need only guess one of the independent keys to win the PRF game and distinguish it from a uniform random function, so the generic security attainable by this construction is at most half the generic security attainable by a PRF with a double-size key. That doesn't mean concatenating independent PRFs damages security—just that it never provides higher security than one of them alone.
We don't know a way to prove that a hash family is a PRF, but typical examples that we conjecture are PRFs are HMAC-SHA256, keyed BLAKE2, and KMAC. For short messages and short, ≤64-bit outputs, SipHash is another popular example, particularly intended to replace MurmurHash where it failed to provide security.
P.S. The comments you quoted on the orange site are, unsurprisingly, meaningless nonsense, compounded by the fact that they are responding to a meandering answer to a confused question on another stackexchange, none of which questions, answers, or responses are grounded by, say, articulation of actual goals in a system.
Squeamish Ossifrage
Effective construction of classifiers with the k-NN method supported by a concept ontology
Jan Bazan1,
Stanisława Bazan-Socha2,
Marcin Ochab1,
Sylwia Buregwa-Czuma1,
Tomasz Nowakowski3 &
Mirosław Woźniak4
Knowledge and Information Systems (2019)
In analysing sensor data, it usually proves beneficial to use domain knowledge in the classification process in order to narrow down the search space of relevant features. However, it is often not effective when decision trees or the k-NN method is used. Therefore, the authors herein propose to build an appropriate concept ontology based on expert knowledge. The use of an ontology-based metric enables mutual similarity to be determined between objects covered by respective concept ontology, taking into consideration interrelations of features at various levels of abstraction. Using a set of medical data collected with the Holter method, it is shown that predicting coronary disease with the use of the approach proposed is much more accurate than in the case of not only the k-NN method using classical metrics, but also most other known classifiers. It is also proved in this paper that the expert determination of appropriate structure of ontology is of key importance, while subsequent selection of appropriate weights can be automated.
The approximation of complex concepts using exclusively sensor data sets often proves difficult, owing to the intricate nature of real-world processes, presence of direct and indirect relations and interactions between objects involved in those processes. Numerous concepts have been developed of using domain knowledge in classifier construction with a view to taking these phenomena into account. Domain knowledge is predominantly used to narrow down the search space and facilitate interpretation of results. Such knowledge is thus used mainly in data preparation in order to eliminate irrelevant features, select the most valuable ones or develop new derivative features. The literature records a material favourable effect of such use of domain knowledge on the performance of certain data exploration methods. For instance, in Sinha and Zhao [29] and Zhao, Sinha and Ge [36] the effect is analysed of using the knowledge on the efficiency of the following classifiers: logistic regression, artificial neural networks, the k-NN (k-nearest neighbour) method, naive Bayes classifiers, decision trees and the SVM (support vector machine) method. It has been observed, though, that the use of domain knowledge for selection purposes proved least efficient in the event of decision trees and the k-NN method.
For such methods as k-NN, it is of key importance to evaluate the distance between—or in other words—similarity of two objects (e.g. patients). This requires data to be analysed at numerous levels of abstraction. However, given the semantic distance of complex concepts from sensor data, this is not feasible for classic modelling methods based on features being measured. Thus, the definition of such metrics (distance measures) remains a major challenge in data exploration. There do exist methods of defining similarity relation by way of building a metric (distance function) based on simple strategies of aggregating local similarities of the objects being compared (see Bazan [4] for more details). A chosen distance formula is there optimized by tuning local similarity features and parameters used to aggregate them. However, the main challenge in approximating the metric is the selection of such local similarities and a way of aggregation thereof, while domain knowledge shows that there usually are numerous various aspects of similarity of elements being compared. Each aspect should be examined specifically, in line with the domain knowledge. Further, also the aggregation of various aspects into a global similarity or distance should be done based on the knowledge. Therefore, the authors propose to define a semantic metric (for measuring the distance between objects) founded on a concept ontology (based on the domain knowledge) and to use it for the k-NN classifier. Ontology is understood as a finite set of concepts arranged in a hierarchy equipped with relations between concepts from different hierarchy levels.
For a review of existing approaches to measuring distance between concepts, the reader is referred to Pedersen et al. [25] and Taieb et al. [31]. Measures of a semantic similarity and a kinship were divided there into such types as: based on paths in a concept ontology, based on information content and context vectors. Rada et al. [27] define the notion of the semantic distance as the length of the shortest path connecting two concepts in the ontology. The longer the path, the more semantically the concepts are away. The measure of the semantic similarity between concepts based on the length and depth of the path was proposed by Wu and Palmer [35]. This approach uses the number of "is-a" edges from concepts to the nearest common LCS (lowest common subsumer) and the number of edges to the root of taxonomy. Leacock and Chodorow [21] proposed a measure of semantic similarity based on the shortest path in the lexical WordNet database [34]. The path length is scaled using the maximum taxonomy depth to a value between 0 and 1, and the similarity is calculated as the negative logarithm of this value. A measure of similarity based on the concept of information content (IC) was presented by Resnik [28]. IC, which is a measure of the specificity of a concept, is calculated for each concept in the hierarchy based on the frequency of occurrence of this concept in a broader context. Using the concept of IC, Resnik proposes a measure in which the semantic similarity of two concepts is proportional to the amount of information they share. Lin et al. [22] proposed extending Resnik's work by scaling the information content of the superior concept of LCS by the information content of individual concepts. Hsu et al. [20] even provided a representation of such distance in the form of distance hierarchy enhancing concept classification by assigning weights to inter-concept links (edges). The distance between two values of a (categorical or numeric) feature is there measured as the total weight of edges along the path connecting two nodes (concepts), with the weights defined by an expert based on the domain knowledge. All methods of measuring semantic distance with the use of domain knowledge, described in the above-mentioned articles, relate to the comparison of concepts or values of features, which makes them useful in, for instance, discretization of features. The author-proposed method of constructing an ontology-based metric uses another approach. It is designed to determine similarity of objects covered by respective denotations of concepts, and not of concepts themselves or their features.
Construction of a classifier
The similarity function proposed for the purposes of exploration of a set of actual medical data enables patients to be compared in terms of the acuteness of coronary disease and thus to be evaluated for the risk of health- and life-threatening consequences. The more acute the disease, the greater the risk of heart incidents [dangerous rhythm disturbances, acute myocardial ischaemia or sudden cardiac death (SCD)]. Experimental data were provided by the Second Department of Internal Medicine of the Jagiellonian University Medical College. Two data sets were collected containing ECGs recorded with the Holter method and supplemented with clinical data of patients suffering from stable myocardial ischaemia (with sinus rhythm). From the first set (HOLTER_I), 19 features of 70 patients tested in 2006–2009 with the use of Aspel's three-channel HolCARD 24W system were used. From the second set (HOLTER_II), 20 features of 200 patients tested in 2015–2016 with the use of 12-channel R12 monitor of the BTL CardioPoint-Holter H600 v2-23 system were used. Table 1 presents the key profile and angiographic data of both sets. Our research was designed to develop an efficient k-NN classifier with the use of the proposed similarity measure as the metric. The occurrence and non-occurrence of stable coronary disease (binary decision) were chosen as decision classes.
Table 1 Clinical profile of tested populations (the HOLTER_I and HOLTER_II sets)
Fig. 1 CHD ontology with expert-assigned weights for the HOLTER_I set (for comparison, the values in parentheses show weights determined with the Monte Carlo method)
In the first stage of similarity function construction, a hierarchical ontology was defined containing concepts referring to stable myocardial ischaemia. At the bottom level, sensor features (sourced directly from the data set) were placed. They were selected from the entire data set so as to correspond to the recognized SCD prognostic factors [10]. Then, at each level of the ontology, by assigning an appropriate weight, the materiality of a given concept with respect to the higher-level concept was defined. A domain expert chose all the weights arbitrarily as a number from the (0,1) interval. The thus developed ontology is presented in Fig. 1. To benchmark prognosis efficiency, an ontology of the same structure, but with weights determined with a Monte Carlo method, was also used at the experimental stage.
The next step consisted in defining an algorithm for computing values of the function measuring similarity of objects with the use of the defined ontology with weights assigned.
The ultimate stage was the construction of a k-NN classifier using the developed metric of semantic similarity of patients.
Construction of ontology
Table 2 Prognostic SCD factors in anamnesis
Table 3 Prognostic SCD factors in supplementary tests
Determination of ontology-based distance requires predefining a concept ontology covered by the term which defines the decision problem. In line with the construction plan for such ontology, proposed by Noy and McGuinness [24], medical sciences were chosen as the domain and cardiology as the field. Then concepts were identified indicating the advancement of myocardial ischaemia, such as: alterations in the anamnesis, alterations in supplementary tests, epidemiological risks, coexisting diseases, alterations in electrophysiological tests or deviations in laboratory tests. These notions served a basis for defining the following ontology concepts: CHD (coronary heart disease), anamnesis, supplementary tests, epidemiology, coexisting diseases, electrophysiological tests, laboratory tests, ECG, HRV, QT, tachycardia and ST. Then, using top-down approach, the concepts were arranged hierarchically into a tree-like structure, starting from the most general concept (at the top), down to the most special ones (at the bottom). Each concept was assigned a property in the form of a weight, a number in the (0,1) interval, reflecting the concept materiality with respect to the concept preceding on the tree (one level up), with the proviso that the sum of weights assigned to all successors (one level down along the tree paths) of a given concept is 1. The last stage of the construction consisted in defining instances of individual concept in the form of recognized SCD prognostic factors [1, 12, 14, 26] corresponding to appropriate data set features. Selected risks are presented in Tables 2 and 3. In the CHD ontology thus developed (see Fig. 1), 19 risks were used, to which experts assigned weights in proportion to their relative importance in the denotation of the respective concept. At the bottom level, there are concept instances directly from the data set. The ontology proposed includes only selected concepts, present in the data sets. It may though be easily extended to include further elements. We should mention here that the literature does not specify the required number of risks: the larger the number of risk factors, the greater the risk of heart incidents [dangerous rhythm disturbances, acute myocardial ischaemia or sudden cardiac death (SCD)]. The OWL technology [3] was used to record and store the ontology developed.
It should be noted that the threshold values shown in the "Description" column of the aforementioned tables represent current medical knowledge, but they were not used for determining the values of any symbolic attributes in the constructed ontology. The only parameters that take symbolic values are HA, MI, DM, stimulants and gender. Each of them represents a simple fact: the presence of a disease, the use of a stimulant, or the patient's gender. Such facts are inherently dichotomous, so no thresholding of these values is required. All other parameters are numeric and, when determining the similarity between patients, they were compared using the formulas given below; therefore, no thresholding mechanism involving arbitrarily chosen threshold values was needed either.
Determination of the ontology-based distance value
Based on the ontology thus built, we can measure the distance between objects within the denotation of the concepts described by the ontology. Each concept of the ontology describes a differentiation among the objects considered, in this case patients. The metric (distance) proposed hereinafter has been designed to help answer the question: "how similar (or dissimilar) are two patients diagnosed with myocardial ischaemia?"
Standard metric-based techniques use such metrics as the Euclidean distance or, more generally, the Minkowski distance (p-norm), defined by formulas (1) and (2), respectively.
$$\begin{aligned} d_\mathrm{Euclides}(x,y) = \sqrt{\sum _{i=1}^{m}(x_i-y_i)^2}, \end{aligned}$$
$$\begin{aligned} d_\mathrm{Minkowski}(x,y) = \sqrt[p]{\sum _{i=1}^{m}|x_i-y_i|^p}, \end{aligned}$$
where m is the number of conditional features in the decision table, while \(x = [x_{1}, x_{2},\ldots , x_{m}]\) and \(y = [y_{1}, y_{2},\ldots , y_{m}]\) are the values of those m features for two objects. The parameter p is a positive integer.
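As a quick illustration (not from the paper; the feature vectors are arbitrary), these baseline distances can be computed directly with NumPy:

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance (p-norm) between two feature vectors; p=2 gives the Euclidean distance."""
    return np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** p) ** (1.0 / p)

x = [63.0, 1.2, 140.0]     # arbitrary example feature values
y = [55.0, 0.9, 120.0]
print(minkowski(x, y, 2))  # Euclidean distance
print(minkowski(x, y, 1))  # Manhattan distance
```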
However, these metrics take into consideration exclusively the data collected by sensors, with no regard to interrelations among concepts at higher levels of abstraction. Moreover, they can process numeric features only. The ontology-based metric proposed herein, on the other hand, can handle the hierarchies and meanings of the concepts described and is free from such limitations. Its computation is a multi-stage process. In the first stage, distances are computed between feature values from sensor readings, that is, at the bottom level of the ontology. In subsequent stages, at a given ontology level, the distance between two objects in the denotation of a given concept is defined using the distances between the respective objects in the denotations of the concepts subordinate to that concept (one level down) and the respective weights measuring their impact on the higher-level concept.
The similarity function measuring distance between two objects \(u_{i}\) and \(u_{j}\) with respect to a numeric sensor-monitored feature a at the bottom ontology level is defined by formula (3) [33]:
$$\begin{aligned} d_\mathrm{num}(u_i,u_j,a)=\frac{|a(u_i)-a(u_j)|}{R_a} \quad \text { for }i,j \in \{1,\ldots ,n\}, \end{aligned}$$
where n stands for the number of objects and \(R_{a}\) is the range of the feature values. The range may be defined as the difference between the greatest and the least values of the feature in a given data set or it may be known from the domain knowledge. Given a lack of accurately determined extreme values for certain SCD risks, the former approach is used herein. The similarity function measuring distance with respect to a symbolic (non-numeric) sensor-monitored feature (attribute) a is defined with the use of the value difference metric (VDM) method [30], in accordance with formula (4):
$$\begin{aligned} d_\mathrm{symb}(u_i,u_j,a)=\sum _{d_c \in D} |P(dec=d_c|a(u_i)=v)-P(dec=d_c|a(u_j)=w)|, \end{aligned}$$
where D stands for the set of decision classes, P is the probability distribution on the set of decision values estimated as in formula (5), and \(v = a(u_i)\) and \(w = a(u_j)\) are values from \(V_{a}\), the domain of the feature a.
$$\begin{aligned} P(dec=d_c|a(u)=v)=\frac{|\{u\in U:dec(u)=d_c \wedge a(u)=v\}|}{|\{u \in U:a(u)=v\}|}, \end{aligned}$$
where U is a non-empty finite set (the "universe"), whose elements are called objects: \(U = u_{1}, u_{2}, \ldots , u_{n}\), and dec(u) is the value of the decision feature for an object u.
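A minimal sketch of the bottom-level distances (formulas 3–5) is given below, assuming a small toy decision table held in Python dictionaries; the column names and values are illustrative only and are not taken from the HOLTER data sets.

```python
# Toy decision table: each object maps feature names to values; 'dec' is the decision class.
objects = [
    {"age": 63, "gender": "M", "LDL": 3.4, "HRV": 25, "dec": "stenosis"},
    {"age": 55, "gender": "F", "LDL": 2.1, "HRV": 42, "dec": "no_stenosis"},
    {"age": 71, "gender": "M", "LDL": 3.9, "HRV": 20, "dec": "stenosis"},
    {"age": 48, "gender": "F", "LDL": 2.5, "HRV": 55, "dec": "no_stenosis"},
]
decisions = {o["dec"] for o in objects}

def d_num(ui, uj, a):
    """Formula (3): normalized absolute difference of a numeric feature."""
    values = [o[a] for o in objects]
    r = max(values) - min(values)        # range taken from the data set
    return abs(ui[a] - uj[a]) / r

def cond_prob(dc, a, v):
    """Formula (5): P(dec = dc | a(u) = v), estimated from the data set."""
    with_v = [o for o in objects if o[a] == v]
    return sum(o["dec"] == dc for o in with_v) / len(with_v)

def d_symb(ui, uj, a):
    """Formula (4): value difference metric (VDM) for a symbolic feature."""
    v, w = ui[a], uj[a]
    return sum(abs(cond_prob(dc, a, v) - cond_prob(dc, a, w)) for dc in decisions)

print(d_num(objects[0], objects[1], "age"))
print(d_symb(objects[0], objects[1], "gender"))
```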
Finally, the similarity function defining the distance between two objects \(u_{i}\) and \(u_{j}\) with respect to a concept C arranged at a higher ontology level is defined in accordance with formula (6):
$$\begin{aligned} d_\mathrm{onto}(u_i,u_j,C) = {\left\{ \begin{array}{ll} \sum \limits _{s \in S} w_s \cdot d_\mathrm{onto}(u_i,u_j,C_s) &{} \text {for subordinate concepts } C_s \\ \sum \limits _{a \in A} w_s \cdot d_\mathrm{num}(u_i,u_j,a) &{} \text {for numeric attributes } a \\ \sum \limits _{a \in A} w_s \cdot d_\mathrm{symb}(u_i,u_j,a) &{} \text {for symbolic attributes } a \end{array}\right. }, \end{aligned}$$
where S stands for the set of subordinate concepts lying in the denotation of the concept C (unless sensor-monitored features lie one level down), A stands for the set of sensor-monitored features directly subordinate to C (at the bottom level), \(w_{s}\) stands for the weight of a given subordinate concept s or of a feature (at the bottom level), and \(C_{s}\) represents a concept subordinate to C, one level down (at the bottom level, it is a feature a).
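The recursive structure of formula (6) maps naturally onto the concept tree sketched earlier. The following hedged Python fragment (again purely illustrative, reusing the hypothetical Concept class and the d_num/d_symb helpers from the sketches above) shows the intended computation.

```python
def d_onto(ui, uj, concept):
    """Formula (6): weighted recursive distance over the concept tree (illustrative sketch)."""
    total = 0.0
    for child in concept.children:
        if child.is_feature:
            # Bottom level: compare sensor features directly.
            if isinstance(ui[child.name], str):
                total += child.weight * d_symb(ui, uj, child.name)
            else:
                total += child.weight * d_num(ui, uj, child.name)
        else:
            # Higher level: recurse into the subordinate concept.
            total += child.weight * d_onto(ui, uj, child)
    return total

# Distance between two patients with respect to the whole CHD concept:
# dist = d_onto(patient_a, patient_b, chd)
```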
It is easy to prove that the proposed similarity function satisfies the three classic conditions known as the metric axioms (the identity and symmetry axioms follow directly from the properties of the absolute value; the triangle inequality may be proved by induction) [10]. Thus, this function can be used as a distance measure in the k-NN method.
Using the proposed ontology-based metric, experiments were performed with k-NN classifiers. For the HOLTER_I data set, the myocardial ischaemia ontology presented in Fig. 1 was used. For testing the HOLTER_II data set, the ontology was slightly modified to adapt the concepts to the features available in that set (see Fig. 3). The modification was necessary because two different ECG monitors were used to collect the data, generating slightly different parameters. The SOFA (Simple Ontology Framework API) Java library [2] was used to represent the ontology models.
To compare the efficiency of the classic k-NN classifier with the one using the proposed similarity metric, four types of tests were run: E1, E2, E3 and E4, described in Tables 4 and 5 for the data sets HOLTER_I and HOLTER_II, respectively. In the experiments, the implementation of k-NN was supported by the WEKA system [15] with the authors' adaptation to the ontology-based metric. The parameter k (the number of neighbours taken into consideration) was set to 3 for the HOLTER_I set and 5 for the larger HOLTER_II set. These values were chosen experimentally to give the best results. However, taking into account that k should be an odd number and that a widely known rule of thumb [17] suggests \(k = \sqrt{n}\) (where n is the number of samples) as a reasonable value, the search for the optimal value was started from \(k=7\) for the HOLTER_I set and \(k=13\) for the HOLTER_II set.
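The experiments used WEKA with the authors' custom metric adaptation. As a rough illustration only (this is not the authors' setup), the same idea can be expressed with scikit-learn's KNeighborsClassifier and a callable metric; the wrapper below assumes the d_onto and chd sketches above, with patients encoded as dictionaries looked up through an index array.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: reuse the toy decision table from the earlier sketch.
patients = objects
labels = [o["dec"] for o in patients]

def onto_metric(i, j):
    """Callable metric over patient indices, delegating to the ontology distance sketch."""
    return d_onto(patients[int(i[0])], patients[int(j[0])], chd)

X = np.arange(len(patients)).reshape(-1, 1)   # each 'sample' is just an index into 'patients'
knn = KNeighborsClassifier(n_neighbors=3, metric=onto_metric, algorithm="brute")
knn.fit(X, labels)
print(knn.predict(X[:1]))
```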
Diagram of the nested cross-validation
The individual tests were differentiated in terms of the metric used (the Euclidean distance or the metric based on the ontologies from Figs. 1 and 3, comprising 31 concepts) and of the method of determining the ontology concept weights (defined by an expert or randomly generated with a Monte Carlo method). Given the low number of items in the HOLTER_I set, the classification quality was evaluated with the n-fold cross-validation known as leave-one-out (LOO), where the number of iterations equals the total number of objects [16, 18]. For the larger HOLTER_II set, the standard tenfold cross-validation (10-CV) [9] was used, except in the last experiment E4, where nested cross-validation (nested CV) [32] was used. With the nested technique, external validation was performed with the LOO method (for HOLTER_I) and 10-CV (for HOLTER_II). In each training set, 100 ontology models with randomly defined weights were generated; subsequently, the highest-accuracy (ACC) model was selected with the 10-CV technique for external testing. The final result is the average of all tests. Figure 2 presents a diagram of the nested cross-validation performed. The structure and results of the experiments are set forth in Tables 4 and 5 for the data sets HOLTER_I and HOLTER_II, respectively.
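Experiment E4 selects, within each training fold, the best of 100 randomly weighted ontologies before external testing. A heavily simplified, schematic version of that nested loop is sketched below; it reuses the hypothetical Concept tree from the earlier sketch and assumes placeholder helpers (evaluate, snapshot, test, outer_folds) standing in for the inner 10-CV evaluation and the outer validation, which are not shown here.

```python
import random

def random_weights(concept):
    """Redraw the weights of all direct successors so that they remain positive and sum to 1."""
    if concept.children:
        draws = [random.random() for _ in concept.children]
        total = sum(draws)
        for child, d in zip(concept.children, draws):
            child.weight = d / total
            random_weights(child)

# Schematic nested cross-validation (outer loop: LOO or 10-CV; inner loop: 10-CV on the train part).
# results = []
# for train_idx, test_idx in outer_folds:
#     candidates = []
#     for _ in range(100):                   # 100 randomly weighted ontology models
#         random_weights(chd)
#         acc = evaluate(chd, train_idx)     # inner 10-CV accuracy on the training fold
#         candidates.append((acc, snapshot(chd)))
#     best = max(candidates)[1]              # highest-accuracy model selected
#     results.append(test(best, train_idx, test_idx))
# final_accuracy = sum(results) / len(results)
```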
Table 4 Results of experiments run with the use of the proposed ontology-based similarity metric for the prediction of coronary stenosis in CHD—the HOLTER_I data set
Table 5 Results of experiments run with the use of the proposed ontology-based similarity metric for the prediction of coronary stenosis in CHD—the HOLTER_II data set
For both data sets, the k-NN method supported by the ontology-based metric gives significantly higher accuracy than the same method supported by the Euclidean metric. For the HOLTER_I set and the ontology-based metric, it is interesting to note the minor difference in accuracy between the procedure with expert-defined weights and that with randomly generated ones, which suggests that the domain-knowledge-based selection of the concept ontology is much more important than the selection of the weights assigned to the concepts. On the other hand, the results of experiment E4 (where weights are repeatedly selected randomly and only the best ones are used in the model) indicate that, apart from the appropriate selection of concepts, appropriate weight allocation may additionally improve the accuracy of classification. The superiority of the automated (random) weight allocation is most probably attributable to the difficulty a human expert faces when trying to numerically evaluate, simultaneously and with high accuracy, the weights for such a large number of ontology concepts (here 31). The exact values of, and differences between, the weights determined by the expert and those generated automatically are shown in Figs. 1 and 3. Moreover, arbitrarily assuming 0.3 as a threshold for the difference in weight values, it can be observed that for both sets the "electrophysiological tests" and HRV concepts were overestimated, while the "laboratory tests", ST and "stimulants" concepts were underestimated by the expert. The logical conclusion is therefore to reduce the weight of "electrophysiological tests" in favour of "laboratory tests" and the weight of HRV in favour of ST. As seen in Tables 4 and 5, for the larger data set the accuracy of the proposed classifier is 24% higher than that of the classic k-NN classifier. Moreover, when compared with the other classification methods examined by the authors [10], the method proposed herein is the most effective in the prognosis of the occurrence of material coronary stenosis in myocardial ischaemia (see Table 6).
CHD ontology with expert-assigned weights for the HOLTER_II set (values in parentheses show, for comparison, the weights determined with the Monte Carlo method)
Table 6 Classification accuracy comparison for selected methods and the data sets HOLTER_I and HOLTER_II [10]
Given the "k-nearest neighbour" method's relatively high computational complexity, its use supported by the ontology-based metric is only feasible if classification is based on a low number of objects. However, when compared with the classic approach (using, for instance, the Euclidean distance) it proves better owing to a significant reduction in the number of features. Namely, the ontology developed by a domain expert enabled the number of features to be reduced from 595 available in the set to just 20, thus materially shortening the computation time. Apart from computational complexity and memory requirements, another shortcoming of the method proposed herein is, as for now, a lack of a mechanism for verifying ontology quality. An interesting direction for further research also appears to be the use of the ontology-based semantic metric proposed to solving grouping problems with such tools as the c-means or hierarchical method.
Because machine learning methods, and especially the latest deep learning approaches, lack the desired feature of explainability [13, 19], we think that the presented concept ontology could also prove useful in the process of building self-explanatory artificial neural networks supported by domain knowledge.
Another interesting idea would be to use a fuzzy ontology [11]. In that way, domain knowledge based on threshold values (as in Tables 2 and 3) could be safely introduced into the model. Those values could be used to divide the numeric attributes into intervals, creating groups of symbolic values. Without the fuzzy approach, this could lead to classification errors on samples with values close to the interval borders; with a fuzzy ontology there is no such risk. For example (see Table 2), a patient who is 64 years old would be treated by the model differently from someone who is 40, even though both would be labelled with the same symbolic value of low risk. Such an approach seems worth attention because it can increase the scope of the domain knowledge used and thus further increase the accuracy of the prediction.
Al-Khatib SM, Yancy CW, Solis P, Becker L, Benjamin EJ, Carrillo RG, Ezekowitz JA, Fonarow GC, Kantharia BK, Kleinman M, Nichol G, Varosy PD (2017) 2016 AHA/ACC clinical performance and quality measures for prevention of sudden cardiac death: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. Circul Cardiovasc Qual Outcomes 10(2):e000022
Alishevskikh A, Subbiah G (n.d.) Sofa: simple ontology framework API. http://sofa.projects.semwebcentral.org
Antoniou G, Van Harmelen F (2009) Web ontology language: owl. Handbook on ontologies. Springer, Berlin, pp 91–110
Bazan JG (2008) Hierarchical classifiers for complex spatio-temporal concepts. In: Peters JF, Skowron A, Rybinski H (eds) Transactions on rough sets IX, vol 5390. LNCS. Springer, Berlin, pp 474–750
Bazan JG, Bazan-Socha S, Buregwa-Czuma S, Pardel PW, Sokolowska B (2012) Prediction of coronary arteriosclerosis in stable coronary heart disease. In: International conference on information processing and management of uncertainty in knowledge-based systems. Springer, pp 550–559
Bazan JG, Buregwa-Czuma S, Pardel PW, Bazan-Socha S, Sokołowska B, Dziedzina S (2015) Predicting the presence of serious coronary artery disease based on 24 hour holter ecg monitoring. In: Transactions on rough sets XIX. Springer, pp 95–113
Bazan JG, Bazan-Socha S, Buregwa-Czuma S, Dydo L, Rzasa W, Skowron A (2016) A classifier based on a decision tree with verifying cuts. Fundamenta Informaticae 143(1–2):1–18
Bazan JG, Szczuka M (2005) The rough set exploration system. In: Peters JF, Skowron A (eds) Transactions on rough sets III. Springer, Berlin, pp 37–56
Bishop CM, Mitchell TM (2014) Pattern recognition and machine learning. Springer, Berlin
Buregwa-Czuma S (2017) Methods of applying domain knowledge to improve the quality of classifiers (In Polish), PhD thesis, University of Silesia in Katowice, Faculty of Computer Science and Materials Science, Katowice, Poland
Calegari S, Ciucci D (2007) Fuzzy ontology, fuzzy description logics and fuzzy-owl. In: Masulli F, Mitra S, Pasi G (eds) International workshop on fuzzy logic and applications. Springer, Berlin, pp 118–126
Ford ES, Giles WH, Mokdad AH (2004) The distribution of 10-year risk for coronary heart disease among U.S. adults. J Am Coll Cardiol 43(10):1791–1796
Goebel R, Chander A, Holzinger K, Lecue F, Akata Z, Stumpf S, Kieseberg P, Holzinger A (2018) Explainable AI: the new 42?, In: International cross-domain conference for machine learning and knowledge extraction. Springer, Berlin, pp. 295–303
Goff DC, Lloyd-Jones DM, Bennett G, Coady S, D'Agostino RB, Gibbons R, Greenland P, Lackland DT, Levy D, O'Donnell CJ, Robinson JG, Schwartz JS, Shero ST, Smith SC, Sorlie P, Stone NJ, Wilson PWF (2014) ACC/AHA guideline on the assessment of cardiovascular risk. Circulation 129(25 Suppl 2):S49–S73
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The WEKA data mining software: an update. SIGKDD Explor 11(1):10–18
Han J, Pei J, Kamber M (2011) Data mining: concepts and techniques. Elsevier, New York
Hassanat AB, Abbadi MA, Altarawneh GA, Alhasanat AA (2014) Solving the problem of the k parameter in the k-NN classifier using an ensemble learning approach, arXiv preprint arXiv:1409.0919
Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer, New York
Holzinger A, Kieseberg P, Weippl E, Tjoa AM (2018) Current advances, trends and challenges of machine learning and knowledge extraction: From machine learning to explainable AI. In: International cross-domain conference for machine learning and knowledge extraction. Springer, Berlin, pp 1–8
Hsu C-C, Chen C-L, Su Y-W (2007) Hierarchical clustering of mixed data based on distance hierarchy. Inf Sci 177(20):4474–4492
Leacock C, Chodorow M (1998) Combining local context and wordnet similarity for word sense identification. WordNet Electr Lex Database 49(2):265–283
Lin D et al (1998) An information-theoretic definition of similarity. In: ICML, vol 98, Citeseer, pp 296–304
Napierała K, Stefanowski J (2010) Argument based generalization of modlem rule induction algorithm. In: International conference on rough sets and current trends in computing. Springer, pp 138–147
Noy NF, McGuinness DL (2001) Ontology development 101: a guide to creating your first ontology, Technical report, Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880
Pedersen T, Pakhomov SV, Patwardhan S, Chute CG (2007) Measures of semantic similarity and relatedness in the biomedical domain. J Biomed Inform 40(3):288–299
Priori SG, Aliot E, Blomstrom-Lundqvist C, Bossaert L, Breithardt G, Brugada P, Camm AJ, Cappato R, Cobbe SM, Mario CD, Maron BJ, McKenna WJ, Pedersen AK, Ravens U, Schwartz PJ, Trusz-Gluza M, Vardas P, Wellens HJJ, Zipes DP (2001) Task force on sudden Cardiac death of the European Society of Cardiology, Technical report, European Heart Journal
Rada R, Mili H, Bicknell E, Blettner M (1989) Development and application of a metric on semantic nets. IEEE Trans Syst Man Cybern 19(1):17–30
Resnik P (1995) Using information content to evaluate semantic similarity in a taxonomy. arXiv preprint arXiv:cmp-lg/9511007
Sinha AP, Zhao H (2008) Incorporating domain knowledge into data mining classifiers: an application in indirect lending. Decis Support Syst 46(1):287–299
Stanfill C, Waltz D (1986) Toward memory-based reasoning. Commun ACM 29:1213–1228
Taieb MAH, Aouicha MB, Hamadou AB (2014) Ontology-based approach for measuring semantic similarity. Eng Appl Artif Intell 36:238–261
Varma S, Simon R (2006) Bias in error estimation when using cross-validation for model selection. BMC Bioinform 7(1):91
Wilson D, Martinez T (1997) Improved heterogeneous distance functions. J Artif Intell Res 6(1):1–34
WordNet: lexical database of English (n.d.). http://wordnet.princeton.edu/
Wu Z, Palmer M (1994) Verbs semantics and lexical selection, In: Proceedings of the 32nd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, pp 133–138
Zhao H, Sinha AP, Ge W (2009) Effects of feature construction on classification performance: an empirical study in bank failure prediction. Expert Syst Appl 36(2):2633–2644
We thank anonymous reviewers for their very useful comments and suggestions. This work was partially supported by the Centre for Innovation and Transfer of Natural Sciences and Engineering Knowledge of University of Rzeszow, Poland.
Interdisciplinary Centre for Computational Modelling, University of Rzeszow, Pigonia 1, 35-310, Rzeszow, Poland
Jan Bazan, Marcin Ochab & Sylwia Buregwa-Czuma
Department of Internal Medicine, Faculty of Medicine, Jagiellonian University Medical College, Skawinska 8, 31-066, Kraków, Poland
Stanisława Bazan-Socha
Department of Angiology, Jagiellonian University Medical College, Skawinska 8, 31-066, Kraków, Poland
Tomasz Nowakowski
Angiomed Private Medical Centre, Skawinska 8, 31-066, Kraków, Poland
Mirosław Woźniak
Correspondence to Jan Bazan.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Bazan, J., Bazan-Socha, S., Ochab, M. et al. Effective construction of classifiers with the k-NN method supported by a concept ontology. Knowl Inf Syst (2019). https://doi.org/10.1007/s10115-019-01391-w
Revised: 03 August 2019
Accepted: 05 August 2019
k-nearest neighbour algorithm
Ontology similarity metrics
Holter measurement | CommonCrawl |
15 Jul 2019, 04:00 → 19 Jul 2019, 14:00 America/Toronto
Maxime Brodeur (Notre Dame), Thomas Brunner
The SMI-2019 will be co-hosted by the University of Notre Dame and McGill University. The workshop continues the series of meetings that started in 1986 in Konnevesi, Finland. The scope of the workshop includes techniques related to the stopping of energetic ions in noble gases and the manipulation of ion beams, mostly in research involving unstable nuclides. These include the ion-guide isotope separation on-line (IG-ISOL) technique, the use of large-volume helium-filled gas cells to thermalize radioactive ion beams, laser ionization in gas cells and in helium jets, radio-frequency multipole devices, and multi-reflection time-of-flight spectrometers for beam purification. The SMI-2019 Workshop aims at providing a status of the field as well as guidance for future developments.
Maxime Brodeur, Chair (Notre Dame)
Thomas Brunner, Chair (McGill University)
Janet Weikel (Notre Dame)
International Advisory Committee:
Juha Aysto (University of Jyväskylä)
Michael Block (GSI)
Georg Bollen (FRIB)
Jens Dilling (TRIUMF)
Frank Herfurth (GSI)
Wenxue Huang (IMP/CAS)
Ari Jokinen (University of Jyväskylä)
Wolfgang Plass (GSI)
Guy Savard (ANL)
Michiharu Wada (KEK)
This conference is sponsored by McGill University, and the Physics Department and College of Science of University of Notre Dame.
Monday, 15 July
18:00 → 18:30
Conference Check In 30m Thomson House Restaurant
Thomson House Restaurant
Welcome Reception 1h 30m Thomson House Restaurant
Tuesday, 16 July
Conference Check In 30m Ball Room
Welcome Address 10m Ball Room
Reviewing the success of radioactive-ion manipulation with RFQ traps and recalling the contributions from McGill 20m Ball Room
Ion cooling and trapping techniques have opened new vistas in the physics associated with exotic (short-lived) nuclides and helped cure the ills of isobaric contamination. The ability of condensing ion-beam phase space using soothing cold buffer gas accompanied by electromagnetic confinement has created a new paradigm: beam preparation. The main player in this field is the so-called RFQ cooler-buncher, a segmented linear Paul trap that can capture exotic nuclides hot off the target, reducing emittance by grouping ions into a tight bunch.
Inspired by a 1982 sabbatical leave in Mainz with the group of H.-J. Kluge developing Penning traps for ISOLDE, the late R.B. Moore initiated the first ion-catching developments at McGill, with many of the details elaborated (at the bar) in Thomson House, the SMI2019 conference venue.
Bunchers are now used for mass measurements and collinear laser spectroscopy of exotic nuclides. Further evolutions have them preceding dipole mass separators to increase resolving power, and even placed inside ISOL target modules combined with laser ionization for beam purification. RFQ bunchers are also necessary preparatory devices for the fabulous multi-reflection time-of-flight (MRToF) mass spectrometers that are now pervading the radioactive-ion scene.
In this presentation, the (local) history will be briefly told and the rich evolution of cooler-bunchers will be illustrated as exhaustively as time will permit.
Speaker: David Lunney (CSNSM/IN2P3 Orsay)
Actinide beams by light-ion induced fusion-evaporation for mass-, decay- and optical spectroscopy at IGISOL 30m Ball Room
The production of actinide ion beams has become a focus of recent efforts at the IGISOL facility of the Accelerator Laboratory, University of Jyväskylä, especially aimed at the measurement of nuclear properties of heavy elements using high-resolution optical spectroscopy [1]. The first successful proof-of-principle on-line experiment for the production of actinides from a light-ion fusion-evaporation reaction has recently been performed with protons on $^{232}$Th targets. Several alpha-active reaction products were detected, reaching as neutron deficient as $^{224}$Pa through the $^{232}$Th(p, 9n)$^{224}$Pa with a 60 MeV primary beam. By detection of gamma-rays in coincidence with the alpha-decay, new information on the decay radiation has been obtained on nuclei including $^{226}$Pa.
Direct detection of long-lived actinides such as $^{229}$Th, which is of special interest due to its extremely low-energy isomer [2], was not possible owing to low alpha-activity as well as low $Q_{EC/\beta^-}$ values, rendering the separation of isotopes ineffective even with high-resolution Ramsey cleaning with the Penning trap. Therefore, the novel Phase-Imaging Ion Cyclotron Resonance (PI-ICR) method [3] at JYFLTRAP is to be used for a direct yield determination of long-lived isotopes in an upcoming experiment. This will also allow direct high-precision mass measurements creating new anchor points in the mass network calculations, which currently rely on long chains of alpha decays in the actinide region of the nuclear chart.
An important aspect of these developments has been related to target manufacturing. In addition to metallic thorium targets, several new $^{232}$Th targets manufactured by a novel Drop-on-Demand inkjet printing method [4] were successfully tested. These targets were provided by the Nuclear Chemistry Institute of Johannes Gutenberg-Universität Mainz who will now provide several new targets from other more exotic actinides such as $^{233}$U or $^{237}$Np. With these new targets we expect to access several new isotopes in the neutron-deficient actinide region for decay and optical spectroscopy as well as for mass measurements.
[1] A. Voss et al., Phys. Rev. A, 95 (2017) 032506.
[2] L. von der Wense et al., Nature, 533 (2016) 47.
[3] D. Nesterenko et al., Eur. Phys. J. A, 54 (2018) 154.
[4] R. Haas. et al., Nucl. Instr. Meth. A, 874 (2017) 43.
Speaker: Ilkka Pohjalainen (University of Jyväskylä)
The ELI-IGISOL radioactive ion beam facility at ELI-NP 30m Ball Room
The Extreme Light Infrastructure for Nuclear Physics (ELI-NP) facility will make available in the near future two new photon installations: a high-power laser system and a high-brilliance gamma beam system, which can be used together or separately.
The ELI-IGISOL project [1] will use the primary gamma beam to generate a Radioactive Ion Beam (RIB) via photofission in a stack of Uranium targets placed at the center of a gas cell [2]. The particular technology used for this gas cell is the High Areal Density with Orthogonal extraction Cryogenic Stopping Cell (HADO-CSC) [3] featuring ion extraction orthogonal to the primary beamline. The gas cell is coupled to a radio-frequency quadrupole for beam formation. The exotic neutron-rich nuclei will be separated, and their mass measured, by a high-resolution Multiple-Reflection Time-of-Flight (MR-ToF) mass spectrometer. The isomerically pure RIBs [4] obtained with the MR-ToF will be further measured by a β-decay tape station and a collinear laser spectroscopy station.
The latest developments in the simulation and design of the gas cell are presented. We report benchmark calculations of the production rates and of the extraction time and efficiency from the gas cell. Starting from these studies, the optimal design of the cell and its state-of-the-art technologies is discussed. Various testing units for the HADO-CSC components that are being developed at ELI-NP will be presented.
D.L. Balabanski et al., "Photofission Experiments at ELI-NP", Rom. Rep. Phys. 68, S621 (2016).
P. Constantin et al., "Design of the gas cell for the IGISOL facility at ELI-NP", Nucl. Inst. Meth. B 397, 1 (2017).
T. Dickel et al., "Conceptual design of a novel next-generation cryogenic stopping cell for the Low-Energy Branch of the Super-FRS", Nucl. Inst. Meth. B 376, 216 (2016).
T. Dickel et al., "First spatial separation of a heavy ion isomeric beam with a multiple-reflection time-of-flight mass spectrometer", Phys. Lett. B 744, 137 (2015).
Speaker: Dr Paul Constantin (ELI-NP (Romania))
PaulConstantin_SMI2019.pdf
Coffee Break 30m Ball Room
Barium Tagging in High Pressure Xenon Gas 30m Ball Room
The identification of a single barium ion in coincidence with an energy deposit measured with a precision of 1% in xenon is widely recognized as an unambiguous signature of neutrinoless double beta decay. The detection of single ions in tons of gas or liquid xenon, however, is a major experimental challenge. In this talk I will discuss barium tagging methodologies based on single molecule fluorescence imaging adapted to high pressure xenon gas time projection chambers. Recent advances in ion sensing chemistry and gas phase microscopy will be presented, followed by a discussion of the subsequent R&D steps planned by the NEXT collaboration to enable an ultra-low background, barium tagging neutrinoless double beta decay technology.
Speaker: Dr Ben Jones (UTA)
Barium Ion Transport in High Pressure Xenon Gas using RF Carpets 30m Ball Room
A background-free measurement of neutrinoless double beta decay can be achieved with the detection of the daughter nucleus. Methods to image the daughter barium ion in the decay of xenon-136 are being developed for use in high pressure gas time projection chambers by the NEXT collaboration. A major remaining challenge is the transport of the barium ion to a small imaging region within the detector. In this talk I will discuss the plans for testing RF carpet performance in high pressure gas, early simulation results, and experimental tests of RF high voltage behavior in high pressure systems. I will also discuss our studies of ion drift properties in DC fields in high pressure gases.
Speaker: Katherine Woodruff
Lunch 2h Ball Room
Laser Resonance Chromatography (LRC): A new methodology in superheavy element research 20m Ball Room
Optical spectroscopy constitutes the historical path to accumulate basic knowledge on the atom and its structure. Former work based on fluorescence and resonance ionization spectroscopy enabled identifying optical spectral lines up to element 102, nobelium [1, 2]. Beyond nobelium, solely predictions of the atom's structure exist, which in general are far from sufficient to reliably identify atoms from spectral lines. One of the major difficulties in atomic model calculations arises from the complicated interaction between the numerous electrons in atomic shells, which necessitates conducting experiments on such exotic quantum systems. The experiments, however, face the challenging refractory nature of the elements which lie ahead, coupled with shorter half-lives and decreasing production yields.
In this contribution, a new concept of laser spectroscopy of the superheavy elements is proposed. To overcome the need for detecting fluorescence light or for neutralization of the fusion products, which were employed up to date when lacking tabulated spectral lines, the new concept foresees resonant optical excitations to alter the ratio of ions in excited metastable states to ions in the ground state. The excitation process shall be readily measurable using electronic-state chromatography techniques [3, 4] as the ions exhibit distinct ion mobilities at proper conditions and thus drift at different speeds through the apparatus to the detector. The concept offers unparalleled access to laser spectroscopy of many mono-atomic ions across the periodic table of elements, in particular, the transition metals including the high-temperature refractory metals and the elusive superheavy elements like rutherfordium and dubnium at the extremes of nuclear existence.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (No. 819957)
1. J. Reader A. Kramida, Yu. Ralchenko and NIST ASD Team (2018), 2019.
2. M. Laatiaoui et al., Nature, 538 (2016) 495.
3. P. R. Kemper and M. T. Bowers, J. Phys. Chem., 95 (1991) 5134.
4. M. J. Manard and P. R. Kemper, Int. J. Mass Spectrom., 407 (2016) 69.
Speaker: Dr Mustapha Laatiaoui (JGU Mainz)
A compact gas filled linear Paul trap for CRIS experiments. 20m Ball Room
The CRIS technique (Collinear Resonance Ionisation Spectroscopy) has been shown to be an efficient method for accessing fundamental nuclear properties of exotic isotopes [1]. The technique can be applied to stable ion beams produced via laser ablation [2] which are pulsed due to the method of production. However, with radioactive cases produced at the ISOLDE (Isotope separator On-line) facility at CERN, a gas filled linear Paul trap is required for creating ion bunches. Currently, radioactive ion beams are produced via proton impact with a suitable target at ISOLDE. The resulting beam is then trapped, cooled, and bunched using the ISCOOL device following mass separation. The ion bunches are then directed to the CRIS setup where they are prepared for laser spectroscopy experiments. The technique has been shown to reveal properties such as nuclear spins, magnetic and electric quadrupole moments, and isotopic variations in the nuclear mean square charge radii. Measurement of these properties is made possible with ion beams that have been bunched with reduced emittance. The CRIS method has so far measured fundamental nuclear properties of neutron deficient Francium [3], and neutron rich radium [4] isotopes, among others. We envisage significant improvements to the CRIS technique following the installation of an independent gas filled linear Paul trap at ISOLDE as an alternative to the ISCOOL device. This would reduce set up times prior to time constrained experiments at the ISOLDE facility. It would enable constant optimisation of beam transport and quality. It would also trivialise switching from a radioactive beam to a stable reference isotope from our independent offline ion source. We provide an overview of the work completed since the first prototype was constructed and installed at the University of Manchester [5], where tests utilising a Ga ion source are ongoing. These tests include ion transport and gas attenuation within the device. Spatial limitations require that the new device is compact (<80 cm in length). SIMION calculations estimate that a prototype device with a 20 cm rod length could achieve a trapping efficiency of up to ~ 40% with a mean energy spread of ~ 4 eV.
[1]: T.E Cocolios et al. Nucl, Inst, Methods in Phys Res B, 317 (2013) [2]: R. F. Garcia Ruiz et al. Phys. Rev. X 8, 041005 (2018) [3]: K.T. Flanagan et al. Phys rev lett 111, 212501 (2013) [4]: K. M. Lynch et al. Phys. Rev. C 97, 024309 (2018) [5]: B. S. Cooper et al. Hyperfine Interact, 240:52 (2019)
Speaker: Dr Ben Cooper (University of Manchester)
Development of offline ion source for collinear laser spectroscopy at the SLOWRI facility in RIKEN 20m Ball Room
We have prepared an offline ion source mainly for a planned collinear laser spectroscopy of RI beams at the SLOWRI facility in RIKEN. It was designed to provide low-emittance ion beams of elements including refractory ones such as Zr, by combining laser ablation of a solid target in He gas with an RF ion guide system [1]. We have connected the ion source to a test beamline and observed about $10^7$ singly charged ions per laser pulse ($\le 10$ Hz) extracted at 10 keV. The current status, including tests to evaluate the performance, will be presented.
[1] M. Wada et al., Nucl. Instrum. Methods Phys. Res. B 204, 570 (2003).
Speaker: Ms Minori TAJIMA (RIKEN Nishina Center)
The CISe project 30m Ball Room
Gas catchers are widely used in experimental nuclear physics to slow down energetic ions for precision measurements. Chemical reactions of the ions with impurities in the gas can affect the extraction efficiency, so considerable effort is devoted to keeping the gas inside the catcher as clean as possible.
Our aim is to explore the potential of chemical reactions for Chemical Isobaric Separation (CISe). We are currently building a new setup consisting of a gas catcher and a commercial quadrupole time-of-flight mass spectrometer. First studies in a hexapole collision cell have been performed to investigate the ion chemistry of tin, indium, cadmium and silver.
In this contribution, an overview of the project will be presented.
Speaker: Julia Even (University of Groningen)
Single Barium Atom Detection in Solid Xenon for the nEXO Experiment 20m Ball Room
The proposed nEXO experiment is a tonne-scale liquid xenon time projection chamber, designed to search for neutrinoless double beta decay in xenon-136 [1]. A critical concern for any rare decay search is reducing or eliminating backgrounds that will interfere with the signal [2]. A powerful background discrimination technique is the positive identification ("tagging") of the decay daughter, in this case barium.
A technique being developed in the nEXO collaboration is the trapping and extraction of the Ba daughter ion in solid xenon on a cryogenic probe, then using fluorescence spectroscopy to tag, i.e., identify the barium atom. Individual barium atoms, implanted into Xe ice as Ba ions, have been imaged in solid xenon, and the 619 nm emission of atomic barium in solid xenon has been assigned to single vacancy trapping sites [3].
Al Kharusi et al. (nEXO Collaboration), arXiv:1805.11142 [physics.ins-det] (2018).
Albert et al. (nEXO Collaboration), Phys. Rev. C 97, 065503 (2018).
Chambers et al. (nEXO Collaboration), Nature 569, 203-207 (2019).
Speaker: Dr Christopher Chambers (McGill University)
Group Photo 20m Ball Room
Adjourn 10m
First application of mass selective re-trapping enables mass measurements of neutron-deficient Yb and Tm isotopes despite strong isobaric background 20m Ball Room
TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) [1] located at the Isotope Separator and Accelerator (ISAC) facility, TRIUMF, Vancouver, Canada is a multiple ion trap system specialized in performing high-precision mass measurements and in-trap decay spectroscopy of short-lived radioactive ions. Although ISAC can deliver high yields for some of the most exotic species, many measurements suffer from strong isobaric background. In order to overcome this limitation an isobar separator based on the Multiple-Reflection Time-Of-Flight Mass Spectrometry (MR-TOF-MS) technique has been developed and installed at TITAN [2].
Mass selection is achieved using dynamic re-trapping of the ions of interest after a time-of-flight analysis in an electrostatic isochronous reflector system [3]. Re-using the injection trap of the device for the mass-selective re-trapping, the TITAN MR-TOF-MS can operate as its very own high-resolution isobar separator prior to mass measurements within the same device. This combination of operation modes boosts the dynamic range and background-handling capabilities of the device, enabling high-precision mass measurements with ion-of-interest to contaminant ratios of 1:10^6.
We will discuss the technical aspects of re-trapping and recent results of mass measurements of neutron-deficient Yb and Tm isotopes investigating the persistence of the N=82 neutron shell closure far from stability, made possible by employing online mass-selective re-trapping for the first time to suppress strong isobaric background.
[1] J. Dilling et al., NIM B 204, 2003, 492–496
[2] C. Jesch et al., , Hyperfine Interact. 235 (1-3), 2015, 97–106
[3] T. Dickel et al. J. Am. Soc. Mass Spectrom. (2017) 28: 1079
Speaker: Dr Moritz Pascal Reiter (JLU Giessen, TRIUMF)
Design, optimization and commission of a multi-reflection time-of-flight mass analyzer at IMP/CAS 30m Ball Room
A multi-reflection time-of-flight mass analyzer is being constructed for isobaric separation and mass measurement at IMP/CAS (Institute of Modern Physics, Chinese Academy of Sciences). A new method comprising two sub-procedures, global search and local refinement, has been developed for the design of the MRTOF mass analyzer. The method can be used to optimize the parameters of an MRTOF-MS operating either in mirror-switching mode or in in-trap-lift mode. Using this method, an MRTOF mass analyzer has been designed in which each mirror consists of five cylindrical electrodes. In the mirror-switching mode, a maximal mass resolving power of 1.3 × 10$^5$ has been achieved with a total time-of-flight of 6.5 ms for the ion species $^{40}$Ar$^{1+}$ [1], and in the in-trap-lift mode, 1.6 × 10$^5$ with a total time-of-flight of 6.4 ms [2]. The simulations also reveal the relationships between the resolving power and the potentials applied to the mirror electrodes, the lens electrode and the drift tube.
This MRTOF-MS has been constructed and is now being commissioned. Preliminary test results show that it works [2].
At this conference, we will present the design details, the optimization method and the test results obtained.
[1] Y.L. Tian, Y.S. Wang, J.Y. Wang, et al., Int. J. Mass Spectrom. 408, 28–32 (2016).
[2] Jun-Ying Wang, Yu-Lin Tian, Yong-Sheng Wang, et al., Nucl. Instrum. Meth. B, (2019).
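As a generic illustration of the two-stage "global search plus local refinement" idea described in this abstract (not the authors' actual code; the objective function below is a placeholder standing in for an ion-optical simulation of the mirror and lens potentials), the pattern can be sketched as follows.

```python
import numpy as np
from scipy.optimize import minimize

def resolving_power(potentials):
    """Placeholder objective: in practice this would be an ion-optical simulation
    returning the mass resolving power for a given set of electrode potentials."""
    target = np.array([1.2, 0.8, 0.5, 0.3, 0.1])
    return 1e5 * np.exp(-np.sum((potentials - target) ** 2))

# Stage 1: global search over a coarse random sample of the parameter space.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 2.0, size=(1000, 5))
best = max(candidates, key=resolving_power)

# Stage 2: local refinement around the best candidate (maximize = minimize the negative).
result = minimize(lambda p: -resolving_power(p), best, method="Nelder-Mead")
print("optimal potentials:", result.x, "resolving power:", -result.fun)
```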
Speaker: Mr Yongsheng Wang (Institute of Modern Physics, Chinese Academy Science)
MIRACLS: A Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy 30m Ball Room
Speaker: Simon Sels (CERN)
Status of St. Benedict at the Nuclear Science Laboratory 20m Ball Room
Speaker: Daniel Burdette (University of Notre Dame)
MORA project and optimization of transparent ion trap geometry 20m Ball Room
The MORA (Matter's Origin from the RadioActivity of trapped and oriented ions) project [1] is part of the research on CP violation that could explain the matter-antimatter asymmetry observed in the universe, through the measurement of the so-called D correlation. MORA uses an innovative in-trap orientation method which combines the high trapping efficiency of a transparent Paul trap with laser orientation techniques. The MORA setup will permit reaching a precision on D down to a few $10^{-5}$, which allows probing Final State Interaction (FSI) effects for the first time.
Within the framework of this project, a three-dimensional Paul trap (MORATrap) geometry has been optimized to broaden the quadrupolar region, where the contribution of higher-order harmonics is reduced. MORATrap is composed of three conic ring pairs with a mid-plane symmetry; its geometry is inspired by the existing transparent Paul trap, LPCTrap [2]. Our trap optimization was carried out by minimizing high-order harmonics and maximizing the quadrupolar term in the spherical harmonics expansion of the potential generated at the trap center. Our simulation is based on solving Laplace's equation with the AXIELECTROBEM software developed at LPC Caen, coupled with a $\chi^2$ minimization.
[1] P. Delahaye et al., arXiv:1812.02970, proceedings of the TCP 2018 conference, to appear in Hyp. Int.
[2] P. Delahaye et al., arXiv:1810.09246 [physics.ins-det], submitted to EPJA.
Speaker: Ms Meriem BENALI (LPC Caen, France )
Afternoon Free 20m
Thursday, 18 July
Recent results from the FRS Ion Catcher 20m Ball Room
The FRS Ion Catcher setup [1] is used for thermalization and high-resolution measurements of exotic nuclei produced at relativistic energies of up to 1 GeV/u at the fragment separator (FRS) at GSI. It consists of a cryogenic gas-filled stopping cell (CSC), an RFQ beamline and a multiple-reflection time-of-flight mass-spectrometer (MR-TOF-MS), which can be used for mass measurements with mass accuracies down to $6\cdot10^{-8}$ [2] and for the production of isobarically and isomerically clean beams.
Over the last few years, several technical improvements and upgrades were implemented in the setup. New techniques for enhancing the selectivity of ion transport, based on ion mobility and on the dissociation of molecular contaminants, were developed. The RFQ beamline was expanded and upgraded with improved differential pumping, a mass filter and a laser ablation carbon cluster ion source. The areal density of the CSC was increased to 10 mg/cm$^2$. A novel method for half-life and branching-ratio measurements [3], using the CSC as an ion trap for controlled storage of ions, was developed and demonstrated.
In addition, the progress on the technical design of the CSC for the Low-Energy Branch of the Super-FRS at FAIR will be reported.
[1] W. R. Plass et al., Nucl. Instrum. Methods B, 317 (2013)
[2] S. Ayet et al., accepted to Phys. Rev. C, arXiv:1901.11278 (2019)
[3] I. Miskun et al., submitted to Eur. Phys. Journal A, arXiv:1902.11195 (2019)
Speaker: Mr Ivan Miskun (1 II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392, Gießen, Germany 2 GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291, Darmstadt, Germany)
The Advanced Cryogenic Gas Stopper at NSCL – Progress towards Operations 30m Ball Room
The Advanced Cryogenic Gas Stopper (ACGS) has successfully delivered its first rare isotope beam for experiments at the National Superconducting Cyclotron Laboratory (NSCL). The ACGS has shown increased extraction efficiency, reduced transport time, reduced molecular contamination of the isotope of interest, and the ability to minimize space charge effects. This is achieved by a novel 4-phase radio-frequency wire carpet which generates a traveling electrical wave for fast and efficient ion transport; cryogenic cooling of the helium gas chamber reduces unwanted molecular formation, and the new planar geometry, with the wire carpet in the mid-plane of the stopper, alleviates space charge effects. Offline testing of the ACGS has shown wire-carpet transport efficiencies greater than 95% and transport speeds up to 100 m/s. Operating at a temperature near 80 K, the ACGS delivered argon-44 to the ReA3 system reliably for over a week with a beam rate up to twice that advertised on the ReA3 Beam List. This presentation will show the most recent online and offline performance of the ACGS and discuss advancements made regarding extraction from the gas stopper.
Speaker: Dr Kasey Lund (The National Superconducting Cyclotron Laboratory )
Particle-in-Cell Simulations for Studies of Space Charge Effects in Ion Trap and Ion Transport Devices 30m Ball Room
One of the least intuitive phenomena in ion trap or ion transport devices is the effect of large numbers of charged particles, also known as space charge, on the performance of the device. Space charge can shield applied DC and RF fields, leading to poor transport efficiencies and increased spatial and energy distributions. Robust simulation methods must be employed in order to mitigate these effects and to gain a better understanding of the device in the presence of space charge. However, standard ion optics software, such as SIMION [1], have limited ability to handle space charge, or are not optimized to efficiently study the system of interest. Therefore, other, more specialized, techniques must be used.
The particle-in-cell (PIC) method has been used to study plasmas and gravitational systems for decades, typically employing 2D or 3D coordinate systems. Thorough treatments of the subject can be found in [2, 3]. Modern desktop computing hardware make 3D PIC simulations with millions of super particles possible in a reasonable amount of time without requiring a high-performance computing cluster. The 3DCylPIC package [4] was developed to study devices at FRIB/NSCL, such as RF carpets, gas cells, radiofrequency quadrupole cooler/bunchers, MR-TOFs, etc., that need to operate effectively in the presence of large amounts of space charge. In this talk I will describe how 3DCylPIC operates and present the results of simulations of devices that are currently in use, making comparisons to measurements where possible.
[1] D. A. Dahl, Int. J. Mass Spectrom. 200 (2000) 3–25.
[2] C. K. Birdsall, A. B. Langdon, Plasma physics via computer simulation, McGraw-Hill, New York, 1985.
[3] R. W. Hockney, J. W. Eastwood, Computer simulation using particles, A. Hilger, Bristol [England] ; Philadelphia, 1988.
[4] R. Ringle, Int. J. Mass Spectrom. 303 (2011) 42-50.
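For readers unfamiliar with the method, a minimal one-dimensional electrostatic particle-in-cell step is sketched below (purely illustrative and unrelated to the 3DCylPIC code itself): charge is deposited on a grid, the field is obtained from Poisson's equation, and particles are advanced with forces interpolated back from the grid.

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch (arbitrary normalized units, periodic domain).
n_cells, length, dt = 64, 1.0, 1e-3
dx = length / n_cells
pos = np.random.default_rng(1).uniform(0, length, 10000)   # super-particle positions
vel = np.zeros_like(pos)                                    # super-particle velocities
charge, mass = -1.0 / pos.size, 1.0 / pos.size              # charge/mass per super-particle

for step in range(100):
    # 1) Deposit charge onto the grid (nearest-grid-point weighting).
    cells = (pos / dx).astype(int) % n_cells
    rho = np.bincount(cells, minlength=n_cells) * charge / dx
    rho += 1.0                                               # neutralizing background

    # 2) Solve Poisson's equation in Fourier space for the electric field.
    k = 2 * np.pi * np.fft.fftfreq(n_cells, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.where(k != 0, -1j * rho_k / k, 0.0)             # phi_k = rho_k / k^2, E = -d(phi)/dx
    E = np.real(np.fft.ifft(E_k))

    # 3) Gather the field at particle positions and push the particles.
    vel += (charge / mass) * E[cells] * dt
    pos = (pos + vel * dt) % length
```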
Speaker: Ryan Ringle (NSCL/FRIB)
Efficient Ion Thermalization and Mass Spectrometry of (Super-)Heavy Elements at SHIPTRAP 30m Ball Room
The quest for the island of stability, a region of nuclides with enhanced stability around proton and neutron numbers $Z\approx 114-126$ and $N\approx 184$, respectively, is at the forefront of nuclear physics. The survival of superheavy elements is intimately linked to nuclear shell effects, which can be experimentally probed by mass measurements. Experiments around this region are hampered by extremely low production rates of down to a few ions per month. Nonetheless, the Penning-trap mass spectrometer SHIPTRAP, located at GSI in Darmstadt, Germany, has shown that direct high-precision measurements of atomic masses of $_{102}$No and $_{103}$Lr isotopes around the deformed shell closure $N=152$ are feasible and provide indispensable knowledge on binding energies and shell effects, and yield important anchor points on $\alpha$-decay chains, affecting absolute mass values up to the heaviest elements.
To continue this groundbreaking program and to proceed towards heavier and more exotic nuclides, the drop in production rate has to be accommodated by several improvements. The Penning-trap system was recently relocated, allowing the integration of a second-generation gas-stopping cell operating at cryogenic temperatures. Its stopping efficiency was optimized using the SRIM simulation software, and its purity was recently investigated using recoil-ion sources. In addition, the Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) technique was developed, increasing the sensitivity of mass measurements. Fully exploiting its enhanced mass resolving power required improving the temporal stability of the electric and magnetic fields. Furthermore, its applicability in low-rate measurements, accumulating only a few ions in total, still had to be proven.
In the SHIPTRAP experimental campaign in summer 2018, we extended direct high-precision Penning-trap mass spectrometry into the region of the heaviest elements using the PI-ICR technique. For the first time, direct mass measurements of $^{251}$No, $^{254}$Lr and the superheavy nuclide $^{257}$Rf were performed with rates down to one detected ion per day. Despite these lowest rates, the PI-ICR technique allowed resolving the isomeric states $^{251m,254m}$No and $^{254m,255m}$Lr from their respective ground states with mass resolving powers of up to 10,000,000 and accurately determining their excitation energies, which had previously been derived only indirectly via decay spectroscopy.
In this contribution an overview of the technical developments and the recent results will be given.
Speaker: Mr Oliver Kaleja (MPIK Heidelberg, JGU Mainz, GSI Darmstadt)
Addressing the systematics in phase-imaging ion-cyclotron-resonance measurements at the Canadian Penning Trap mass spectrometer 30m Ball Room
Phase-imaging ion-cyclotron-resonance (PI-ICR) is a novel technique for determining the cyclotron frequency ($\nu_{c}$) of an ion trapped in a Penning trap. First developed by the SHIPTRAP group at GSI [1], this technique relies on measuring the radial phase a trapped ion accumulates over a period of time. At the Canadian Penning Trap mass spectrometer (CPT) in Argonne National Laboratory (ANL), PI-ICR is currently employed [2,3]. The measurement campaigns and extensive tests over the last few years have revealed a number of systematics relating to the alignment between the magnetic field and ejection optics, the stability of the Penning trap electric field, and the initial magnetron motion of the ions [4]. These systematics and the efforts to address them will be presented.
This work is supported by Natural Sciences and Engineering Research Council (NSERC, Canada) under Application Number SAPPJ-2018-00028, U.S. Department of Energy (DOE), Office of Nuclear Physics under Contract Number DE-AC02-06CH11357(ANL), and Facility for Rare Isotope Beams - China Scholarship Council (FRIB-CSC) Fellowship under Grant Number 201704910964.
[1] S. Eliseev et al., Appl. Phys. B 114 (2014) 107.
[2] R. Orford, N. Vassh et al., Phys. Rev. Lett. 120 (2018) 262702.
[3] D.J. Hartley, F.G. Kondev, R. Orford et al., Phys. Rev. Lett. 120 (2018) 182502.
[4] R. Orford, PhD thesis, McGill University, Canada (2018).
Speaker: Dwaipayan Ray (University of Manitoba)
Characterization of supersonic jets for in-gas-jet laser ionization spectroscopy at the IGLIS laboratory and of gas flow inside the ion guide at the IGISOL-4 facility 20m Ball Room
Noble gases such as argon and helium are utilized within the In-Gas Laser Ionization and Spectroscopy (IGLIS) [1] and Ion Guide Isotope Separation On-Line (IGISOL) [2] techniques to thermalize and transport nuclear reaction products, which often have short lifetimes and small production yields. To facilitate the spectroscopic studies of the properties of nuclear reaction products, thorough understanding and characterization of utilized gas flows are essential. Characterization was performed experimentally at both the IGLIS and IGISOL-4 laboratories and numerically using the Computational Fluid Dynamics (CFD) Module of COMSOL Multiphysics.
With the in-gas-jet method, an extension of the IGLIS technique, the spectral resolution is improved by more than one order of magnitude in comparison to in-gas-cell laser ionization spectroscopy [3], while maintaining a high efficiency. This allows the determination of nuclear properties with higher precision. The flow parameters of such supersonic gas jets were characterized at the IGLIS laboratory at KU Leuven using Planar Laser Induced Fluorescence (PLIF) and will be discussed in the first part of this talk. The projected temperature associated (Doppler) broadening, which can be attained with an upgraded in-gas-jet method, was estimated to be about 140 MHz for the No isotopes. Moreover, the numerical calculations were performed to obtain temperature, velocity and Mach number profiles of supersonic jets formed by a de Laval nozzle. The experimental and numerical in-gas-jet results agreed reasonably well for a range of coordinates after the nozzle's exit [4].
Extraction efficiencies and delay times of subsonic helium and argon flows inside a fission ion guide are being characterized at the IGISOL-4 facility at the University of Jyvaskyla using a radioactive $^{223}$Ra α-recoil source ($T_{1/2}$ = 11.4 d). The status of these measurements will be discussed in the second part of this talk. This characterization defines lower limits of production yields and lifetimes of the nuclear reaction products to be studied using gas cells.
[1] Yu. Kudryavtsev et al., Beams of short lived nuclei produced by selective laser ionization in a gas cell, Nucl. Instrum. Meth. Phys. Res. B, 114, 350 (1996)
[2] I. D. Moore, P. Dendooven, and J. Ärje, The IGISOL technique—three decades of developments. In: Äystö J., Eronen T., Jokinen A., Kankainen A., Moore I.D., Penttilä H., Three decades of research using IGISOL technique at the University of Jyväskylä. Springer, Dordrecht (2013)
[3] R. Ferrer et al., Towards high-resolution laser ionization spectroscopy of the heaviest elements in supersonic gas jet expansion, Nat. Commun. 8, 14520 (2017)
[4] A. Zadvornaya et al., Characterization of Supersonic Gas Jets for High Resolution Laser Ionization Spectroscopy of Heavy Elements, Phys. Rev. X, 8, 041008 (2018)
Speaker: ALEXANDRA ZADVORNAYA (University of Jyväskylä)
Development of a New Laser Ablation Ion Source 20m Ball Room
A new laser ablation ion source is under development at the Institute for Nuclear Physics, TU Darmstadt for high-precision collinear laser spectroscopy. The design will combine the versatility of laser ablation ion production with non-conservative cooling in helium buffer gas, to produce low-emittance ion beams of a wide range of elements. It is based on the original idea of an RF-only ion funnel [1] using only the gas jet to transport the ablated ions, which are radially confined by RF electrodes. Additionally, this design will contain a new feature that will allow further cooling and bunching of the ion beam. For this purpose, an additional RF electrode stack is placed in the next pumping stage, superimposed by a DC gradient towards the exit [2]. The last electrode can be connected to a positive voltage to create a potential barrier and stop the ions to produce a narrow ion bunch. Detailed computer simulations have shown that this ion source [3] will allow us to produce various high-quality continuous and pulsed ion beams, with low transverse and longitudinal emittance. We will present the current status and first results of this project development.
[1] Victor Varentsov, A new Approach to the Extraction System Design, SHIPTRAP Collaboration Meeting, 19 March, 2001, DOI: https://doi.org/10.13140/RG.2.2.30119.55200
[2] Victor Varentsov, Proposal for a new Laser ablation ion source for LaSpec and MATS testing, NUSTAR Collaboration Meeting, 1 March, 2016, DOI: https://doi.org/10.13140/RG.2.2.10904.39686
[3] T. Ratajczyk, V. Varentsov and W. Nörtershäuser, Status of a new laser ablation ion beam source for LASPEC, GSI-FAIR SCIENTIFIC REPORT 2017, DOI: https://doi.org/10.15120/GR-2018-1
Speaker: Tim Ratajczyk (TU Darmstadt, Institut für Kernphysik, Darmstadt, Germany)
Improvement of a dc-to-pulse conversion efficiency of FRAC 20m Ball Room
At the SCRIT electron scattering facility at RIKEN [1,2], we aim to realize the world's first electron scattering experiment on unstable nuclei, after succeeding in a proof-of-principle experiment with the stable nuclide $^{132}$Xe [3].
In order to perform electron scattering on unstable nuclei with small production rates, it is important to accumulate ions and inject them efficiently into the SCRIT device. For this purpose, it is necessary to convert the continuous ion beam from the ISOL-type ion separator ERIS [4] into a pulsed beam with a pulse duration of 300~500 μs.
We developed a dc-to-pulse converter, called FRAC [5], based on an RFQ linear ion trap, and attained a dc-to-pulse conversion efficiency of 5.6%.
We modified FRAC to further improve the efficiency, enabling cooling of the trapped ions by Xe gas at ~10$^{-3}$ Pa. An electric field gradient was then applied in the longitudinal direction of FRAC.
As a result, the conversion efficiency was improved by more than 10 times compared to that before modification. Details of the modification and its latest performance will be presented.
[1] M. Wakasugi et al., Nucl. Instrum. Meth. B317, 668 (2013).
[2] T. Ohnishi et al., Physica Scripta T166, 014071 (2015).
[3] K. Tsukada et al., Phys. Rev. Lett. 118, 262501 (2017).
[4] T. Ohnishi et al., Nucl. Instrum. Meth. B317, 357 (2013).
[5] M. Wakasugi et al., Rev. Sci. Instrum. 89, 095107 (2018).
Speaker: Mr So Sato (Department of Physics, Rikkyo University, Toshima, Tokyo 171-8501, Japan)
Status of the radiofrequency quadrupole cooler/buncher at TRIUMF-CANREB 20m Ball Room
The Canadian Rare-isotope facility with Electron Beam ion source (CANREB) is currently being commissioned at TRIUMF in Vancouver, Canada. CANREB will accept rare isotope beams from the Isotope Separator and Accelerator (ISAC) or Advanced Rare Isotope Laboratory (ARIEL) facilities. The ions will be charge bred using an electron beam ion source (EBIS) to 3 ≤ m/q ≤ 7 for post-acceleration to medium- and high-energy experiments. For injection into the EBIS, continuous ion beams from the source will be cooled and bunched using a radiofrequency quadrupole (RFQ) cooler/buncher. Results from initial RFQ commissioning tests, as well as an overall status of CANREB, will be presented.
Speaker: Dr Brad Schultz (TRIUMF)
SIMULATION VS. PERFORMANCE OF THE TRIUMF CANREB RFQ COOLER-BUNCHER 20m Ball Room
The CANadian Rare-isotope laboratory with Electron Beam ion source (CANREB) project at TRIUMF [1] produces a large variety of rare radioactive and stable isotope beams for fundamental research. Essential to CANREB is a new radiofrequency quadrupole (RFQ) cooler-buncher [2] operating in grade 5.0 helium gas at 3 MHz, 1.2 kV$_{pp}$ (q $\sim$ 0.2) with 60-70 W input RF power. The RFQ is designed to (A) accept beams with <100 pA currents at <60 keV energies, and (B) deliver cooled and bunched beams of <10$^6$ ions/bunch at 100 Hz with >90$\%$ efficiency, <10 eV energy spread, and a short <1 μs time spread. Commissioning tests with picoamp beams of 30 keV $^{133}$Cs$^{+1}$ (r $\sim$ 5 mm, angular spread $\sim$ 10 mrad) in $\sim$ 5 mtorr helium yield >90$\%$ transmission through the RFQ with >80$\%$ bunching efficiency. The simulations agree with the measured $^{133}$Cs$^{+1}$ performance characteristics. Here we compare simulations of beam properties in the RFQ, obtained with SIMION, to the actual performance for $^{133}$Cs$^{+1}$, $^{85}$Rb$^{+1}$ and other isotopes of interest, over a range of energies. Preliminary results indicate q-values for RFQ operation with >90$\%$ transmission occur for 60 keV: $^{133}$Cs$^{+1}$ = 0.10-0.25, $^{85}$Rb$^{+1}$ = 0.09, and $^{133}$Cs$^{+1}$ (18.5 keV) = 0.14-0.30, $^{85}$Rb$^{+1}$ (29 keV) = 0.12-0.16.
[1] The CANREB project for charge state breeding at TRIUMF. F. Ames, R. Baartman, B. Barquest, C. Barquest, M. Blessenohl, J. R. Crespo López-Urrutia, J. Dilling, S. Dobrodey, L. Graham, R. Kanungo, M. Marchetto, M. R. Pearson, and S. Saminathan. Proceedings of the "17th International Conference on Ion Sources", Oct. 15-20, 2017, Geneva Switzerland, AIP Conf. Proc. 2011, 070010-1–070010-3; (2018).
[2] B.R. Barquest, J.C. Bale, J. Dilling, G. Gwinner, R. Kanungo, R. Krucken, M.R. Pearson. Development of a new RFQ beam cooler and buncher for the CANREB project at TRIUMF. NIMB 376 (2016), 207-210.
e-mail: [email protected]
Speaker: Dr Chris Charles (TRIUMF)
Conference Dinner 2h L'Auberge Saint-Gabriel
426 St Gabriel St. Montreal, QC H2Y 2Z9
Recent experimental results of KEK Isotope Separation System (KISS) 20m Ball Room
The KEK Isotope Separation System (KISS) is a laser ion source with an argon gas cell, which we have been developing at the RIKEN RIBF facility [1,2]. The KISS project is motivated by the systematic nuclear spectroscopy of neutron-rich nuclei in the north-east part of the nuclear chart, that is, from around the neutron magic number 126 to the trans-uranium region. Systematic studies of lifetimes, masses, beta-gamma spectroscopy and laser spectroscopy of those nuclei will provide information on nuclear structure, which is a crucial input to theoretical predictions of the nuclear parameters included in simulations of r-process nucleosynthesis, whose astrophysical environment remains unidentified.
KISS has an argon gas cell which is optimized to efficiently collect and extract the nuclear products of multi-nucleon transfer (MNT) reactions, which are considered to be an appropriate mechanism for producing the neutron-rich nuclei of interest [3,4]. The employment of a doughnut-shaped gas cell together with a high-vacuum condition of the primary beam line improved the extraction efficiency [5]. The laser resonance ionization technique is used to selectively ionize the element of interest. The photo-ions are transported by RF ion guides through the differential pumping area and are finally accelerated by a high voltage; one species of isotopes is then selected with a mass separator. Both in-gas-cell and in-gas-jet laser ionization are utilized at KISS.
In this presentation, we will report the present status, the recent experimental results and the future plan of KISS.
[1] Y. Hirayama et al., Nucl. Instrum. and Methods B 353 (2015) 4.
[2] Y. Hirayama et al., Nucl. Instrum. and Methods B 376 (2016) 52.
[3] Y.H. Kim et al., EPJ Web of conferences 66 (2014) 03044.
[4] Y.X. Watanabe et al., Phys. Rev. Lett. 115 (2015) 172503.
Speaker: Yutaka Watanabe (KEK WNSC)
Present status and future plans for slow and stopped beams in RIKEN 30m Ball Room
The accelerator complex at RIKEN's Nishina Center for Accelerator Based Science offers presently unparalleled intensity and variety of radioactive ion beams. The accelerator complex employs multiple facilities utilizing in-flight fission and fragmentation, fusion, and multi-nucleon transfer reactions to provide radioactive ion beams spanning the table of isotopes from $^{6}$He to $^{294}$Og. Making these beams viable for low-energy experimental techniques (e.g. ion traps) requires the use of high-pressure gas cells. Several such systems are in various states of readiness.
The SHE-mass gas cell, located after the gas-filled recoil ion separator GARIS-II has been successfully operated since 2016. Recent modifications of the SHE-mass system will be discussed and select results presented.
A medium-size gas cell is nearing construction for use in symbiotic measurements. It will be used as a beam dump for in-beam gamma-ray experiments and in conjunction with a multi-reflection time-of-flight mass spectrograph will enhance the in-beam gamma-ray experiments. The design of the system and its planned usage will be discussed.
To provide access to neutron-rich heavy isotopes which are difficult to access via in-flight fission and fragmentation, the KEK Isotope Separation System (KISS) utilizes multi-nucleon transfer reactions. The transfer products are stopped and neutralized in an argon-filled gas cell. Atoms of a desired element can be selectively re-ionized using a two-color resonance laser ionization scheme. Ions of the selected element are accelerated to 30 keV and isobarically purified via a magnetic dipole prior to being delivered to a measurement station. A new "gas-cell cooler-buncher'' has recently been installed to efficiently convert the 30 keV beam to be compatible with ion traps. The system will be described and its performance reported.
Speaker: Dr Peter Schury
The N=126 factory at Argonne National Laboratory 30m Ball Room
The properties of nuclei near the neutron $N=126$ shell are critical to the understanding of the production of elements via the astrophysical $r$-process pathway, particularly for the $A\sim195$ abundance peak [1]. Unfortunately traditional particle-fragmentation, target-fragmentation, or fission production techniques do not efficiently produce these nuclei. Multi-nucleon transfer (MNT) reactions between two heavy ions, however, can efficiently produce these nuclei [2]. The $N=126$ factory currently under construction at Argonne National Laboratory's ATLAS facility will make use of these reactions to allow for the study of these nuclei [3]. Because of the difficulty collecting MNT reaction products, this new facility will use a large-volume gas catcher, similar to the one currently in use at CARIBU, to convert these reaction products into a low energy beam that will initially be mass separated with a magnetic dipole of resolving power $R\sim10^3$. Subsequently, the beam will pass through an RFQ cooler-buncher and MR-TOF system to provide high mass resolving power ($R\sim10^5$) sufficient to suppress isobaric contaminants. The isotopically separated, bunched low-energy beams will then be available downstream for measurements such as mass measurements using the CPT mass spectrometer or decay studies. The status of the facility under construction will be presented, together with commissioning results of the component devices. This work was supported in part by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357; by NSERC (Canada), Application No. SAPPJ-2018-00028; by the National Science Foundation under Grant No. PHY-1713857; by the University of Notre Dame; and used resources of ANL's ATLAS facility, an Office of Science User Facility.
[1] M. Arnould, S. Goriely, and K. Takahashi, Phys. Rep. 450, 97 (2007)
[2] V. Zagrebaev and W. Greiner, Phys. Rev. Lett. 101, 122701 (2008)
[3] G. Savard, M. Brodeur, J.A. Clark and A.A. Valverde, Nucl. Instr.
Meth. Phys. Res. B Proceedings of EMIS-2018 (in press)
Speaker: Dr Adrian A. Valverde (Argonne National Laboratory)
On the way to a world-competitive fission fragment facility at SARAF 30m Ball Room
Combining an Ion Catcher, based on the cryogenic stopping cell being designed for the Low Energy Branch at the Super-FRS at FAIR [1], with the high-power accelerator SARAF II, currently under construction at Soreq NRC [2], and a liquid lithium target [3] will enable the creation of a research facility for neutron-rich exotic isotopes based on fission induced by high-energy neutrons. I will outline a conceptual design and possible implementation of the Ion Catcher at SARAF, along with rate estimations, which indicate that such a facility will be competitive on a world scale, with neutron-rich isotope production rates higher than those of much larger future facilities such as FRIB.
[1] T. Dickel et al., "Conceptional design of a novel next-generation cryogenic stopping cell for the Low-Energy Branch of the Super-FRS", Nucl. Instr. and Meth. B 376 216-220 (2016)
[2] I. Mardor et al., "The Soreq Applied Research Accelerator Facility (SARAF): Overview, research programs and future plans", Eur. Phys. J. A (2018) 54: 91
[3] S. Halfon et al., "Note: Proton irradiation at kilowatt-power and neutron production from a free-surface liquid-lithium target", Rev. Sci. Inst. 85, 056105 (2014)
email: [email protected]
Speaker: Israel Mardor (Soreq Nuclear Research Center)
Overview of progress at SMI-2019 30m Ball Room
This talk will provide an overview of the field of Stopping and Manipulation of Ions and related topics based on the recent progress presented by the different contributions within SMI-2019. A final focus will aim to look towards the future and the puzzles and possibilities we may face in the coming years which will set the scene for the next conference.
Speaker: Iain Moore (University of Jyväskylä)
Concluding Remarks 10m Ball Room
Conference Ends 10m Ball Room | CommonCrawl |
Global Encyclopedia of Public Administration, Public Policy, and Governance
| Editors: Ali Farazmand
Privatization of State-Owned Enterprises in Indonesian Manufacturing Industry
Maman Setiawan
Ernie Tisnawati Sule
First Online: 19 March 2019
DOI: https://doi.org/10.1007/978-3-319-31816-5_3673-1
Firm ownership; Government control; Private enterprise; Productive efficiency; Public enterprise; Transfer of state ownership
A state-owned enterprise (SOE) is an enterprise in which the government has full, majority, or otherwise significant ownership, enabling the government to control the firm. In Indonesia, the government has full control of an SOE if it holds at least 51% of the ownership share. Privatization is the mechanism by which the government transfers its ownership to the public or to private firms, and it is expected to improve the efficiency of the companies. Efficiency relates to the firm's efforts to reduce waste in the use of resources; technical efficiency can be defined as the ability of the firm to expand its output using the same inputs, or to contract its inputs while producing the same output.
State-owned enterprises (SOEs) in the manufacturing sector contribute significantly to the Indonesian economy, accounting for about 10% of total manufacturing output on average over the period 1980–2015. SOEs in the manufacturing industry also made up about 25% of the total number of SOEs in Indonesia in 2015 (see Setiawan and Tisnawati Sule 2018). Given this significant contribution, the performance of the SOEs can substantially affect Indonesian welfare.
Regarding the performance of the SOEs, three main theories, i.e., property rights theory, agency theory, and public choice theory, may explain the nature of SOE performance (see Bozec et al. 2002). These theories suggest that SOE performance will not always be at the optimum level and tends to be lower than that of private firms. Thus, many jurisdictions have been privatizing their SOEs to avoid larger losses to the economy.
With respect to privatization, the Indonesian government has been privatizing SOEs in the manufacturing sector since the 1980s. In 1987, 7 SOEs operating in manufacturing and nonmanufacturing sectors were privatized, and 52 SOEs with businesses in manufacturing, finance, agriculture, and services were planned to be privatized in 1989. Based on Indonesian Law No. 19 of 2003 on SOEs, privatization can be carried out in three ways: (i) direct stock sales to an investor, (ii) an initial public offering (IPO) through the stock market, and (iii) stock sales to employees or management. Privatization leaves the government with less control over the privatized SOEs; selling more than 50% of the stock results in the loss of government control over the SOE.
Privatization is expected to increase the efficiency of the firms, since privatized SOEs face less political interference than before privatization. Political interference in SOEs usually results in poor choices of inputs, low product quality, and weaker incentives for managers (see Shleifer and Vishny 1994). Therefore, privatization may improve the choice of inputs and the quality of products, increasing the technical efficiency of the privatized SOEs.
Previous research has investigated the effect of privatization on the technical efficiency of state-owned enterprises. Chirwa (2001) investigated privatization and its effect on the technical efficiency of the manufacturing sector in Malawi during the period 1970–1997, estimating technical efficiency with data envelopment analysis, and found that the privatized firms had higher technical efficiency after privatization. Al-Obaidan (2002) investigated the effect of privatization on the efficiency of 45 developing countries in the 1980s, applying a frontier production function to account for efficiency differences between countries with different degrees of private sector contribution to the economy, and found that after privatization the developing countries could increase the utilization of their natural resources by 45%. Shi and Sun (2016) investigated the effect of privatization in China during the period 2001–2010, applying a regression model and a Cobb-Douglas production function to estimate firm performance and productivity, respectively, and found that privatization improved efficiency and productivity in China. Plane (1999) investigated the effect of privatization on productive efficiency at the Cote d'Ivoire electric power company for the period 1959–1995, applying a stochastic production frontier, and found that productive efficiency improved after privatization because of organizational innovations. In addition, Boussofiane et al. (1997) investigated the technical efficiency of nine organizations privatized in the UK in the 1980s, using data envelopment analysis to estimate technical efficiency before and after privatization, and found that the positive effect of privatization on technical efficiency applied only to some of the privatized organizations. Moreover, Yu and Fan (2008) investigated the effect of privatization on the technical efficiency of Taiwan's intercity bus services and found a significant increase in the technical efficiency of the firms after privatization. The technical efficiency of SOEs and its relation to privatization has thus been investigated for other countries, but similar research on the Indonesian economy is still rare. It is therefore important to investigate the technical efficiency of Indonesian SOEs and its changes after privatization.
This research uses data envelopment analysis (DEA) with a bootstrapping approach to estimate the technical efficiency. DEA requires very few assumptions about the properties of the production possibilities set, since it applies mathematical linear programming to measure efficiency (Setiawan et al. 2012). The technical efficiency score estimated by the DEA is based on the transformation of N inputs into M outputs for all I firms. Suppose the column vectors x_i and q_i represent the inputs and outputs of the i-th firm, respectively. The corresponding N×I input matrix, X, and the M×I output matrix, Q, represent the data for all I firms. The output-oriented DEA model (the output-oriented DEA estimates technical inefficiency as the proportional increase in output attainable with the same inputs; the input rigidities faced by the firms make this assumption relevant for the Indonesian economy, see Setiawan et al. 2012) is applied through the following mathematical linear programming model:
$$ \begin{aligned} &\max_{\phi,\lambda}\ \ \phi, \\ &\text{s.t.}\quad -\phi q_i + Q\lambda \ge 0, \\ &\qquad\quad x_i - X\lambda \ge 0, \\ &\qquad\quad I1^{\prime}\lambda = 1, \\ &\qquad\quad \lambda \ge 0, \end{aligned} $$
where ϕ − 1 is the proportional increase in outputs that could be achieved by the i-th firm (SOE) with the same input quantities. This research applies a Farrell approach to avoid negative efficiency scores arising from the high variability of inputs and outputs; under this approach the measured score varies from 1 to infinity. Variable returns to scale (VRS) are imposed in the DEA through the convexity constraint I1′λ = 1, where I1 is an I×1 vector of ones and λ is an I×1 vector of constants. The final technical efficiency score \( \widehat{\delta}\left(x,y\right) \) is then calculated as 1/ϕ, a value in the unit interval, as also applied by Setiawan et al. (2012).
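A minimal computational sketch of this output-oriented VRS DEA model is given below. It is not part of the original entry: it assumes the input matrix X (N×I) and output matrix Q (M×I) are arranged with firms in columns, as described above, and uses SciPy's linear-programming routine; the function name is only a placeholder.

```python
import numpy as np
from scipy.optimize import linprog

def output_oriented_vrs_dea(X, Q):
    """Farrell output-oriented DEA under variable returns to scale.

    X : (N, I) array of inputs  (N inputs,  I firms in columns)
    Q : (M, I) array of outputs (M outputs, I firms in columns)
    Returns the technical efficiency scores 1/phi, each in (0, 1].
    """
    N, I = X.shape
    M = Q.shape[0]
    scores = np.empty(I)
    for i in range(I):
        # Decision variables: [phi, lambda_1, ..., lambda_I]; maximise phi.
        c = np.zeros(1 + I)
        c[0] = -1.0                                   # linprog minimises, so use -phi
        # Output constraints:  phi*q_i - Q @ lam <= 0   (M rows)
        A_out = np.hstack([Q[:, [i]], -Q])
        # Input constraints:   X @ lam <= x_i           (N rows)
        A_in = np.hstack([np.zeros((N, 1)), X])
        A_ub = np.vstack([A_out, A_in])
        b_ub = np.concatenate([np.zeros(M), X[:, i]])
        # VRS convexity constraint: sum(lambda) = 1
        A_eq = np.concatenate([[0.0], np.ones(I)]).reshape(1, -1)
        b_eq = [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (1 + I), method="highs")
        scores[i] = 1.0 / res.x[0]                    # final score = 1/phi
    return scores
```

In this sketch each firm is benchmarked against convex combinations of all firms in the same year and subsector, which is how the VRS model naturally compares firms of similar size.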
The research also applies DEA with the bootstrap technique of Simar and Wilson (1998) to obtain a robust estimate of the technical efficiency score. The bootstrapping approach repeatedly simulates the data-generating process by resampling and applying the original estimator to the resampled data, so that the simulated estimates mimic the sampling distribution of the original estimator. The bias-corrected efficiency score from the bootstrap method is obtained using the formula:
$$ \widehat{\widehat{\delta}}\left(x,y\right)=\widehat{\delta}\left(x,y\right)-{\mathrm{bias}}_B\left[\widehat{\delta}\left(x,y\right)\right] = 2\,\widehat{\delta}\left(x,y\right)-{B}^{-1}\sum_{b=1}^{B}{\widehat{\delta}}_b^{\ast}\left(x,y\right) $$
with the condition that the sample variance of the bootstrap estimates \( {\widehat{\delta}}_b^{\ast}\left(x,y\right) \) is smaller than \( \frac{1}{3}{\left(\widehat{\mathrm{bias}}_B\left[\widehat{\delta}\left(x,y\right)\right]\right)}^2 \).
Here \( \widehat{\delta}\left(x,y\right) \) and \( \widehat{\widehat{\delta}}\left(x,y\right) \) denote the original and bias-corrected efficiency scores, respectively, and \( {\widehat{\delta}}_b^{\ast}\left(x,y\right) \) is the b-th bootstrap estimate of the efficiency score, generated from B bootstrap samples with b = 1,…,B.
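The bias correction itself reduces to a few array operations, as the following illustrative sketch shows. It assumes the bootstrap replicates have already been produced by a Simar–Wilson style resampling of the DEA scores (the computationally demanding part, not shown), and the function name is a placeholder.

```python
import numpy as np

def bias_corrected_scores(delta_hat, delta_boot):
    """Bias-corrected efficiency scores.

    delta_hat  : (I,)   original DEA efficiency estimates
    delta_boot : (B, I) bootstrap replicates, one row per bootstrap sample
    """
    boot_mean = delta_boot.mean(axis=0)
    bias = boot_mean - delta_hat                      # bias_B[delta_hat]
    corrected = 2.0 * delta_hat - boot_mean           # = delta_hat - bias
    # Apply the correction only where the bootstrap sample variance is
    # smaller than one third of the squared bias estimate, as stated above.
    use = delta_boot.var(axis=0, ddof=1) < (bias ** 2) / 3.0
    return np.where(use, corrected, delta_hat)
```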
This research also applies analysis of variance (ANOVA) to test for differences in the average technical efficiency of the SOEs in the subsectors before and after privatization. The ANOVA test compares the variation between the two groups of subsectors with the variation within them: the larger the between-group variation relative to the within-group variation, the more likely the test is to reject the null hypothesis of no difference between the two groups of subsectors. The ANOVA is applied to three groupings: all subsectors, only the subsectors with increasing technical efficiency after privatization, and only the subsectors with decreasing technical efficiency after privatization.
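As an illustration, the before/after comparison can be run with a standard one-way ANOVA routine. In the sketch below the two arrays are assumed to hold one subsector-average efficiency score per subsector, and the function name is only a placeholder.

```python
from scipy.stats import f_oneway

def anova_before_after(te_before, te_after):
    """One-way ANOVA comparing subsector-average technical efficiency
    before privatization with that after privatization."""
    f_stat, p_value = f_oneway(te_before, te_after)
    return f_stat, p_value

# The same call can be repeated for the three groupings used in the entry:
# all subsectors, the subsectors with increasing TE, and those with decreasing TE.
```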
This research uses establishment-level data to estimate the technical efficiency of the firms, pooling both SOEs and non-SOEs. To compare the technical efficiencies of the SOEs in the subsectors before and after privatization, only the technical efficiency scores of the SOEs are retained. The technical efficiency of a subsector is obtained by averaging the technical efficiency of the SOEs in that subsector. The data are sourced from the Annual Manufacturing Survey provided by the Indonesian Bureau of Central Statistics (BPS), covering the period from 1980 to 2015, for which data on the SOEs are available.
There are about 400 subsectors in the Indonesian manufacturing industry, and each subsector may include both SOEs and non-SOEs in the period from 1980 to 2015. Subsectors with fewer than 30 observations are combined into a new subsector at the same 4-digit or 3-digit ISIC level so as to have at least 30 observations. (For example, subsectors 10110, 10120, and 10130 were combined into a new subsector 10100 to have at least 30 observations in every single year.) Subsectors without any SOEs during the period 1980–2015 are also dropped. Finally, this research uses 146 subsectors classified at the 5-digit International Standard Industrial Classification (ISIC) level. This research defines privatization as a change in government ownership of an SOE to less than 51%; with a stock ownership below 51%, the government no longer has dominant control of the SOE.
The firms in the manufacturing industry transform labor, raw material, and fixed capital such as machines and equipment into output (see Setiawan et al. 2012). Production output is defined as the value of gross output produced by the firms, deflated by the Wholesale Price Index (WPI). This research applies labor efficiency units as a proxy for labor. (We define the labor efficiency units as used by Setiawan et al. (2012): L = number of production workers + number of other workers × (\( \frac{\mathrm{Salary\ of\ other\ workers}}{\mathrm{Salary\ of\ production\ workers}} \)).) Raw material is represented by the total costs of domestic and imported materials (the raw materials include not only materials but also other production-related costs such as electricity and fuel) and is deflated by the WPI. Fixed assets measure the fixed capital, which is deflated by the WPI of machinery (excluding electrical products), transport equipment, and residential and nonresidential buildings.
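A small sketch of these variable constructions is given below; it is illustrative only, and the deflation helper assumes a base-year WPI of 100, which is an assumption of the sketch rather than a detail stated in the entry.

```python
def labor_efficiency_units(n_production, n_other, salary_other, salary_production):
    """Labor in efficiency units (Setiawan et al. 2012):
    production workers plus other workers weighted by their relative salary."""
    return n_production + n_other * (salary_other / salary_production)

def deflate(nominal_value, wpi, base_wpi=100.0):
    """Deflate a nominal value by the Wholesale Price Index (WPI)."""
    return nominal_value * base_wpi / wpi
```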
Table 1 shows the descriptive statistics of the input and output variables used for the technical efficiency estimation with DEA, across firms and subsectors for the period 1980–2015. All variables exhibit high variability, as shown by the high standard deviation of every variable. This high variation can be caused by the long 1980–2015 period and by variation across the subsectors. Fixed capital had the highest variation among the input and output variables, with a mean and standard deviation of 1.020×10^7 and 6.300×10^9, respectively. Nevertheless, the high variability of the data does not significantly affect the estimation of technical efficiency, since the estimation is applied for every year in each subsector. Furthermore, the DEA estimation under the VRS approach naturally benchmarks each firm against the best practice of other firms of similar size.
Descriptive statistics of the variables across firms and subsectors, 1980–2015
Output (in thousands Rupiah)
386114.700
7.950×10^8
Labor (person index)
Capital (in thousand Rupiah)
4.25×10^12
Material (in thousand Rupiah)
Source: Authors' calculation
Table 2 shows the number of privatized SOEs in the Indonesian manufacturing industry during the period 1980–2015, based on establishment data. There was a significant increase in the number of privatized SOEs from the interval 1980–1990 to the interval 1991–2000: 45 SOEs were privatized in 1980–1990, and the number increased to 71 in 1991–2000, the largest of any interval. This may indicate that the privatization planned in 1989 proceeded as intended during the 1990s. It might also reflect the fact that privatization was a priority program stated in the letter of intent (LOI) between Indonesia and the International Monetary Fund, in return for IMF support in resolving the Indonesian economic crisis of 1997/1998. The number of privatized firms decreased in the interval 2001–2010 compared to the previous interval, but increased again in 2011–2015. The average number of privatized SOEs was 53 during the period 1980–2015. The fluctuation in the number of privatized firms was also related to the different regulations and different government regimes in Indonesia during the period.
Number of privatized firms, 1980–2015
Average number of privatized firms in Indonesian manufacturing sector
Table 3 shows the average technical efficiency of the privatized and non-privatized SOEs across subsectors during the period 1980–2015. The average technical efficiency of the privatized SOEs was larger than that of the non-privatized SOEs: over 1980–2015 the averages were 0.51 and 0.48, respectively. The average technical efficiency declined for both privatized and non-privatized SOEs over the estimation period. For example, the averages were 0.60 and 0.55 for privatized and non-privatized SOEs in the interval 1980–1990, respectively, and decreased to 0.45 for both groups in the interval 2011–2015.
Technical efficiency of the SOEs before and after privatization across subsectors, 1980–2015
Technical efficiency before privatization
Technical efficiency after privatization
Table 4 shows the percentage of the subsectors in which the privatized SOEs exhibited increasing and decreasing average technical efficiencies after the privatization. Percentage of the subsectors with increasing technical efficiency after privatization was 53%. The percentage of the subsectors with increasing technical efficiency was larger than the percentage of the subsectors with decreasing technical efficiency which reached about 47%. Although more than half of the subsectors experienced increasing technical efficiency after the privatization, the privatization might not always guarantee that privatized SOEs will be more technically efficient. Okten and Arin (2006) suggested that other environmental factors could also influence the effect of the privatization on the efficiency.
Percentage of subsectors with increasing and decreasing technical efficiencies after privatization of the SOEs period 1980–2015
Subsectors
% of privatized firms
Subsectors with increasing technical efficiency after privatization
Subsectors with decreasing technical efficiency after the privatization
Table 5 shows the percentage of subsectors with increasing and decreasing technical efficiency after privatization, classified into low, medium-low, medium-high, and high technology levels. The classification is based on the grouping of the United Nations Industrial Development Organization (UNIDO). Table 5 shows that 54.41% of the subsectors with decreasing technical efficiency after privatization came from subsectors with a low level of technology, whereas only 50% of the subsectors with increasing technical efficiency after privatization were classified as low-technology subsectors.
Percentage of subsectors with increasing and decreasing technical efficiency (TE) after privatization based on the groups of technology level
Technology level
Subsectors with increasing TE (%)
Subsectors with decreasing TE (%)
Low-level
Medium-low
Medium-high
Table 6 lists 20 of the 78 subsectors in which the privatized SOEs exhibited increasing average technical efficiency after privatization; these 20 subsectors had the largest average technical efficiencies after privatization. Of these, four subsectors are classified as medium-high technology, eight as medium-low technology, and eight as low technology. This indicates that 60% of the 20 subsectors exhibiting increasing average technical efficiency came from medium-low and medium-high technology industries.
20 of the 78 subsectors in which the privatized SOEs exhibits increasing average technical efficiency after the privatization
Metallic goods
Medium low
Soybean – tahu
Other basic chemistry goods
Khor and Alchaly
Clay tiles
Paint, printed ink and Lacquer
Iron and steel making
Rope goods
Organic, farm
Finished clothes from leather convection
Rice and seed milling, cleaning
Frozen other aquatic biota
Preparation textile
Table 7 lists 20 of the 68 subsectors in which the privatized SOEs exhibited decreasing average technical efficiency after privatization; these 20 subsectors also had the lowest average technical efficiency scores during the period 1980–2015. Of these, 1 subsector is classified as high technology, 4 as medium-high technology, 3 as medium-low technology, and 12 as low technology. Thus, the 20 subsectors with decreasing technical efficiency after privatization came predominantly from subsectors classified as low technology: about 60% of them belong to the low-technology group.
20 of the 68 subsectors in which the privatized SOEs exhibits decreasing average technical efficiency after the privatization
Wine and other liquor
Embroidery textile
Soap, detergents
Laminated plywood
Paper nec
Furniture, rattan, and others
Porcelain prod, structural
Macaroni, noodles
Gips and gips prod
Cigarettes, clove
Coffee cleaning
Other palm scratch
Plastic prod, structural
Animal feed and concentrates
Electrical motor and generator
Remilled rubber
Leather goods for technical and Industrial use
Finished textiles
Machinery and equipment for other general uses
Other basic chemistry industries
Tables 6 and 7 give an indication that privatization most likely had a positive effect on technical efficiency in industries with a higher level of technology. Privatization of firms in higher-technology industries may boost economies of scale through improved inputs and outputs, thereby raising technical efficiency. Nevertheless, for the privatized SOEs exhibiting decreasing technical efficiency, other environmental factors that can affect technical efficiency should be considered (see Setiawan and Tisnawati Sule 2018).
Table 8 shows ANOVA test to determine whether there is a difference between technical efficiency before privatization of the SOEs and technical efficiency after privatization of the SOEs. The ANOVA test was based on the subsector level comparing between the technical efficiency of the subsectors before the SOEs were privatized and after the SOEs were privatized. The technical efficiency of the subsectors is the average technical efficiency of the SOEs in the respective subsector. The ANOVA test was also applied for both groups of subsectors in which the privatized SOEs exhibited decreasing and increasing average technical efficiency after the privatization.
ANOVA test of the average technical efficiency before and after privatization period 1980–2015
F-statistics
All subsectors
df1 = 1; df2 = 290
Subsectors in which the privatized SOEs exhibits decreasing average technical efficiency after the privatization
11.697***
Subsectors in which the privatized SOEs exhibits increasing average technical efficiency after the privatization
Note: *** denotes significance of the test statistic at the 1% critical level
Table 8 shows that there were no significant differences in average technical efficiency between the periods before and after privatization for all subsectors at the 10% critical level. The results differ, however, when the tests are based only on the subsectors with increasing technical efficiency after privatization or only on the subsectors with decreasing technical efficiency after privatization. Table 8 shows that there were significant differences in technical efficiency before and after privatization for these two groups at the 1% critical level: the subsectors in which the privatized SOEs exhibited decreasing or increasing average technical efficiency after privatization had average technical efficiencies that differed significantly from those before privatization. From Table 8 and the previous tables, it is concluded that more than half of the privatized SOEs significantly improved their efficiencies in their respective subsectors, while the other privatized SOEs significantly worsened theirs.
This research investigates the impact of the privatization on the technical efficiency of the privatized SOEs in the Indonesian manufacturing industry. This research uses the data from the Indonesian Bureau of Central Statistics during the period 1980–2015. The ANOVA test is also applied to test whether there is a difference of the technical efficiency of the subsectors between pre- and post-privatization.
This research found that the technical efficiency of the SOEs in the Indonesian manufacturing sector was low. Furthermore, not all SOEs improved their average technical efficiency after privatization. The ANOVA tests suggest that privatization may significantly increase or decrease the technical efficiency of the SOEs in the subsectors after privatization. These results suggest that the government should carefully consider which SOEs are to be privatized: the selected SOEs should be viable and able to have a positive impact on efficiency, as well as on the market, after privatization.
Cost-Efficiency
Denationalization
Government Ownership
Private Ownership
Productive-Efficiency
Public Enterprises
Al-Obaidan AM (2002) Efficiency effect of privatization in the developing countries. Appl Econ 34(1):111–117
Boussofiane A, Martin S, Parker D (1997) The impact on technical efficiency of the UK privatization programme. Appl Econ 29(3):297–310
Bozec R, Breton G, Cote L (2002) The performance of state owned enterprises revisited. Financ Account Manag 18(4):383–407
Chirwa EW (2001) Privatization and technical efficiency: evidence from the manufacturing sector in Malawi. Afr Dev Rev 13(2):276–307
Okten C, Arin KP (2006) The effects of privatization on efficiency: how does privatization work? World Dev 34(9):1537–1556
Plane P (1999) Privatization, technical efficiency and welfare consequences: the case of the Cote d'Ivoire electricity company (CIE). World Dev 27(2):343–360
Setiawan M, Tisnawati Sule E (2018) Technical efficiency and its determinants of the SOEs in the Indonesian manufacturing sector. Working Papers in Economics and Development Studies (WoPEDS) No. 201802, Department of Economics, Padjadjaran University. Available at https://econpapers.repec.org/paper/unpwpaper/201802.htm
Setiawan M, Emvalomatis G, Oude Lansink A (2012) The relationship between technical efficiency and industrial concentration: evidence from the Indonesian food and beverages industry. J Asian Econ 23(4):466–475
Shi W, Sun J (2016) The impact of privatization on the efficiency and profitability: evidence from Chinese listed firms, 2001–2010. Econ Transit 24(3):393–420
Shleifer A, Vishny R (1994) Politicians and firms. Q J Econ 109:995–1025
Simar L, Wilson PW (1998) Sensitivity analysis of efficiency scores: how to bootstrap in nonparametric frontier models. Manag Sci 44(1):49–61
Yu M-M, Fan C-K (2008) The effects of privatization on return to the dollar: a case study on technical efficiency, and price distortions of Taiwan's intercity bus services. Transp Res A Policy Pract 42(6):935–950
1. Faculty of Economics and Business, Padjadjaran University, Bandung, Indonesia
Setiawan M., Sule E.T. (2019) Privatization of State-Owned Enterprises in Indonesian Manufacturing Industry. In: Farazmand A. (eds) Global Encyclopedia of Public Administration, Public Policy, and Governance. Springer, Cham
"Effect of tungsten carbide (WC) on electrochemical corrosion behavior, hardness, and microstructure of CrFeCoNi high entropy alloy"
A. Hegazy Khallaf1,
M. Bhlol2,
O. M. Dawood2,
I. M. Ghayad3 &
Omayma A. Elkady4
The high-entropy alloy (HEA) CrFeCoNi was reinforced with various weight percentages (5–20 wt.%) of WC particles. The alloy samples were mechanically prepared in a roll ball mill for 25 h with a 10:1 ball-to-powder ratio at 180 rpm. WC was then mixed with the prepared alloys in a high-speed ball mill for 1 h at 350 rpm under a controlled atmosphere. The mixed samples were compacted by a uniaxial press under 700 MPa and then sintered at 1200 °C for 90 min in an air atmosphere. The corrosion behavior of the tested samples in 3.5 wt.% NaCl solution was investigated using electrochemical polarization measurements. The microstructure of the sintered samples, which reached high relative densities, showed three phases, an FCC matrix, a W-rich carbide, and a Cr-rich carbide, homogeneously distributed throughout the alloy matrix. The hardness of the (CrFeCoNi)1-X(WC)X HEAs increased gradually with increasing WC content, from about 336.41 HV up to 632.48 HV at room temperature. The results indicated that the addition of WC improves the corrosion resistance; in particular, the 20 wt.% WC addition remarkably enhanced the overall corrosion resistance and promoted easy passivation of the (CrFeCoNi)1-X(WC)X HEAs. In addition, the wear rate of the 0 wt.% WC HEA (1.70E-04) is approximately 4.5 times higher than that of the 20 wt.% WC HEA (3.81E-05), meaning that the wear resistance is significantly improved with increasing WC content.
Cutting tool steels are commonly used in machining, assembling, and sealing parts. Traditional cutting tool steels are characterized by high hardness, high strength, and good wear resistance. However, they normally contain many elements, and their phase structures are complicated and difficult to manage. Meanwhile, complex-shaped components are hard to manufacture because of the high hardness of cutting tool steels. Cutting tool steels also still need to be improved in terms of corrosion resistance; for example, they are unsuitable for salty and oxidizing environments [1, 2].
High-entropy alloys (HEAs) were developed in recent years; they have at least five principal elements with 5–35 at.% of each element [3]. HEAs containing four or five metallic elements in close-to-equiatomic proportions have recently attracted much attention [4]. HEAs are usually single-phase solid solutions [5] with outstanding properties such as high strength, ductility, and corrosion resistance [6,7,8]. These characteristics also make HEAs suitable for structural applications such as cutting tool steels.
The properties of HEAs can be modified by the addition of an element, a compound, or a second phase. Tungsten (W) has a remarkable effect on the micro-hardness and wear resistance of the CoCrFeNi HEA [9]. Ti addition affected the wear resistance of AlCoCrFeNiTix HEAs, especially the AlCoCrFeNiTi0.5 HEA, which showed superior wear resistance compared to bearing steel [10]. Furthermore, tungsten carbide (WC) is an appealing reinforcement phase that has been used in many alloy structures [2, 11]. Juan Xu et al. and C. Shang et al. studied the microstructure and properties of CoCrFeNi(WC) high-entropy alloy coatings prepared by mechanical alloying and hot consolidation techniques [12, 13].
HEAs have excellent corrosion and wear properties, as well as good thermal stability, making them very promising for the surface modification of metals and alloys with low corrosion and wear resistance [14, 15]. F. S. da Silva et al. studied the morphology (by SEM) and electrochemical behavior after 700 h of exposure to NaCl solution; the results showed that WC-25Co exhibited better corrosion resistance [16]. An AlCoCrFeNi HEA coating was deposited on a 1045 steel substrate using electro-spark deposition technology, which increased the corrosion resistance relative to the substrate [17]. Qiu explored the impact of Co on the corrosion behavior of Al2CrFeCoxCuNiTi HEA coatings in alkaline and salt solutions [18].
In this work, CrFeCoNi HEAs are prepared with additions of 0, 5, 10, 15, and 20 wt.% WC particles by a powder metallurgy technique for use as high-strength cutting tools with good machinability. The microstructure, hardness, corrosion resistance, and wear resistance of the prepared HEAs are systematically investigated. The aim of this work is therefore to synthesize a new generation of cutting tools from (CrFeCoNi)1-X(WC)X (X = 0–20 wt%) HEAs, referred to as the under-investigation alloy in this paper, with high corrosion resistance and high hardness.
The alloying elements used in this investigation were Cr, Fe, Co, Ni, and WC powders of high purity (more than 99.5%). Powders with particle sizes < 70 μm were mechanically alloyed to synthesize the CrFeCoNi HEA powders.
Synthesis of high entropy alloys
The starting elemental powders were mixed and milled in a high-energy roll ball mill (Changsha Tianchuang Powder Technology Co., Ltd.) at 180 rpm, in a dry condition under a protective argon atmosphere to prevent oxidation. A high-performance stainless-steel vial (1500 cc) and balls were utilized, and the ball-to-powder ratio (BPR) was 10:1. The diameters of the milling balls were 3, 6, and 10 mm. The CrFeCoNi HEA powders were milled for a total of 25 h.
To avoid powder overheating, the ball milling process was applied in a cyclic mode with 15-min stop intervals after every hour of milling. The milled powders were then mixed with different weight ratios of WC particles (5, 10, 15, and 20 wt.%) in a PQ-N2 planetary ball mill at 350 rpm for 1 h, to ensure homogenization and to eliminate powder clusters. These parameters were selected based on A. Abu-Oqail et al. and A. M. Sadoun et al. [19, 20].
Figure 1 shows SEM images of the mixed powders after the milling process. It can be seen that the particle size decreased as a result of milling, to an average between 0.8 and 6.2 μm.
SEM micrographs of the (CrFeCoNi)1-x(WC)x mixed powder with a value of x equal (a)0, (b)5, (c)15, and (d)20 wt% WC
Cylindrical specimens of each group, 8 mm in diameter and 12 mm in height, were produced by cold pressing with a universal hydraulic piston at a uniaxial pressure of 700 MPa. The sintering temperature of the HEAs can be calculated by the rule of mixture, as shown in Eq. (1).
$${T}_s=\frac{2}{3}\sum {\left(f\ast {T}_m\right)}_i$$
T_s: sintering temperature
f: weight fraction (percentage) of the alloying element
T_m: melting temperature
i: alloying elements
The sintering temperature calculated for the under-investigation HEA according to the rule of mixture is shown in Table 1. The green (cold-pressed) specimens [21] were then sintered at 1200 °C for 90 min with a heating rate of 3 °C/min under a flowing argon atmosphere in a horizontal tube furnace (type XINKYO).
Table 1 Calculation of sintering temperature of HEA
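A minimal sketch of the rule-of-mixture calculation in Eq. (1) is given below. The composition and melting points in the example are nominal illustrative values assumed for this sketch, not the exact figures of Table 1, so the result will not necessarily reproduce the 1200 °C used in this work.

```python
def sintering_temperature(weight_fractions, melting_points):
    """Rule-of-mixture sintering temperature, Eq. (1): T_s = (2/3) * sum_i f_i * T_m,i."""
    return (2.0 / 3.0) * sum(f * melting_points[el]
                             for el, f in weight_fractions.items())

# Illustrative composition for (CrFeCoNi)0.95(WC)0.05 by weight, with nominal
# melting temperatures in deg C taken as assumptions of this sketch.
composition = {"Cr": 0.2375, "Fe": 0.2375, "Co": 0.2375, "Ni": 0.2375, "WC": 0.05}
melting_points_c = {"Cr": 1907, "Fe": 1538, "Co": 1495, "Ni": 1455, "WC": 2870}
print(f"T_s = {sintering_temperature(composition, melting_points_c):.0f} (same units as T_m)")
```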
Figure 2 shows the heating schedule for the sintering process [22, 23]. The heating rate was 3 °C/min from room temperature up to 850 °C, followed by a 30-min hold to allow liquid-phase formation through cobalt liquefaction. Heating then continued at 5 °C/min up to 1200 °C, followed by a 90-min hold to complete the sintering. Cooling under a controlled atmosphere protected the samples from oxidation.
Heating schedule for the sintering process
Characterization of high-entropy alloys
The densities of the sintered samples were determined by the water immersion technique, using water as the immersion liquid according to the ASTM B962-15 standard [24], and were compared with the theoretical density [25]. The sintered samples were weighed at room temperature in air and in distilled water, and the bulk density was then determined. The test was repeated three times to ensure the repeatability of the results.
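A short sketch of the density bookkeeping is shown below. The water density value and the inverse rule-of-mixture used for the theoretical density are assumptions of this illustration, since the paper itself only cites the standards [24, 25].

```python
def bulk_density(mass_in_air_g, mass_in_water_g, rho_water=0.9975):
    """Water-immersion (Archimedes) bulk density in g/cm^3:
    rho = m_air * rho_water / (m_air - m_water)."""
    return mass_in_air_g * rho_water / (mass_in_air_g - mass_in_water_g)

def theoretical_density(weight_fractions, densities):
    """Inverse rule of mixture for the theoretical density of the powder blend:
    1/rho_th = sum_i (w_i / rho_i)."""
    return 1.0 / sum(w / densities[el] for el, w in weight_fractions.items())

def relative_density_percent(actual, theoretical):
    """Relative density (%) = actual density / theoretical density * 100."""
    return 100.0 * actual / theoretical
```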
For microstructural analysis, the sintered samples were ground using SiC papers of grades 600, 800, 1000, 1200, and 2000 and then polished with 6 μm and 1 μm alumina paste. A field emission scanning electron microscope (FE-SEM) was used to examine the microstructure of the polished samples.
Phase formation and composition changes during the sintering process were identified by X-ray diffraction analysis using an X-ray diffractometer (X'pert PRO PANalytical) with Cu Kα radiation (λ = 1.5418 Å). The scan was performed with a step size of 0.02° over a range of 20–120°; the device was operated at a voltage of 45 kV and a current of 40 mA.
A universal hardness tester INNOVATEST (NEMESIS 9000) was used to determine the Vickers hardness of the consolidated specimens at room temperature, on the polished surface. Each value was the average of five randomly chosen hardness indentations made with a 20 kg load and a holding time of 10 s.
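The hardness numbers follow the standard Vickers definition; the short sketch below converts measured indent diagonals to HV and averages the readings. The formula HV = 1.8544 F/d² (F in kgf, d in mm) is the standard Vickers relation and is not quoted in the paper, so it is included here only as an assumption of the sketch.

```python
def vickers_hardness(load_kgf, diagonal_mm):
    """Standard Vickers hardness from one indentation:
    HV = 1.8544 * F / d^2, with F in kgf and d the mean indent diagonal in mm."""
    return 1.8544 * load_kgf / diagonal_mm ** 2

def average_hv(load_kgf, diagonals_mm):
    """Average HV over several indentations (five were used in this work)."""
    values = [vickers_hardness(load_kgf, d) for d in diagonals_mm]
    return sum(values) / len(values)
```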
The corrosion behavior of the samples was evaluated by electrochemical polarization measurements using an AutoLab PGSTAT302N potentiostat in 3.5 wt.% NaCl solution, applying the potentiodynamic technique in an air atmosphere with a humidity of 38% ± 5%, while the measuring solution was open to air and stirred using a magnetic stirrer.
Electrochemical measurements were performed in a three-electrode setup: a saturated silver-silver chloride (Ag/AgCl) electrode as the reference electrode, a platinum sheet as the counter electrode, and the examined sample as the working electrode. The effective average area of the sample exposed to the corrosive solution was fixed at 0.502 cm². After the open-circuit potential was sufficiently stable, potentiodynamic polarization curves were recorded in air at a potential scan rate of 1 mV/s (about 600 s).
The wear rate was measured according to the ASTM G133-05 standard test method for linearly reciprocating ball-on-flat sliding wear [26] at room temperature. An aluminum oxide ball with a 3 mm radius was used, moving at an average sliding speed of 20 mm/s under an applied load of 5 N. The wear volume loss was calculated as the cross-sectional area of the wear track made by the aluminum oxide ball multiplied by the length of the track.
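A small sketch of the wear bookkeeping is given below. The paper defines only the volume-loss calculation in this passage; normalizing the volume loss by load and total sliding distance to obtain a specific (Archard-type) wear rate is an assumption of the sketch.

```python
def wear_volume(track_cross_section_mm2, track_length_mm):
    """Wear volume loss = cross-sectional area of the wear track x track length."""
    return track_cross_section_mm2 * track_length_mm

def specific_wear_rate(volume_mm3, load_n, sliding_distance_mm):
    """Specific wear rate: volume loss per unit load and unit sliding distance
    (assumed normalization, not stated in the paper)."""
    return volume_mm3 / (load_n * sliding_distance_mm)
```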
The effect of WC additions on the relative density of the sintered samples is shown in Fig. 3. The relative density is the ratio of a sample's actual density to its theoretical density. Two phenomena are apparent in the figure. First, the theoretical and actual densities increased with increasing WC powder content, which may be due to the high density of WC relative to the other elements. Second, the relative density decreased with increasing WC powder content, which may be due to the non-wettability between the ceramic WC particles and the other four elements: the surface energy between them and WC is high, so some agglomeration takes place as the WC percentage increases, and consequently the relative density decreases [27]. In addition, the hard ceramic WC particles act as internal barriers that hinder complete densification, so densification decreases with further WC addition.
Theoretical, actual, and relative densities of HEA vs. WC percentage
The XRD patterns of the HEA samples, taken on the ingot cross section, show that the peaks of the individual elemental constituents have disappeared: no peaks were recorded for Cr, Fe, Co, or Ni individually. Instead, an FCC solid solution phase and two carbide phases appear in the patterns, as shown in Fig. 4. Three main peaks corresponding to the FCC phase are observed; their high intensity is due to the good mutual dissolution of the four elements and good sinterability.
XRD patterns of (CrFeCoNi)1-x(WC)x (x = 0–20 wt%) HEAs
The second set of peaks corresponds to the added WC particles. The third belongs to the chromium carbide phases formed, namely Cr23C6, Cr7C3, and Cr3C2, consistent with previous work [28]. The results are similar to those of the FeCoCrNiW0.3 + 0.5 at.% C alloy and of (FeCoCrNi)1-x(WC)x (x = 3–11 at.%) [2, 8]. Increasing the WC content does not change the phase composition of the (CrFeCoNi)1-x(WC)x alloys, but it does intensify the peaks from the W-rich and Cr-rich carbides.
High-temperature sintering can result in element alloying [29]. The W-rich carbide is formed directly from the original WC particles, while the Cr-rich phases are formed by the diffusion and reaction of Cr and C. The intensities of the WC peaks and chromium carbide ones are increased by increasing the WC percent.
The crystal parameters of the investigated HEAs are given in Table 2. The crystallite size (D) was calculated using the Scherrer formula (Eq. (2)) [13], and the lattice strain (ε) was calculated based on the equation presented by Danilchenko et al. [30] (Eq. (3)), using the X-ray broadening of the peaks after eliminating the instrumental broadening.
$$D=\frac{0.89\lambda }{B\cos \theta }$$
$$\varepsilon =\frac{B}{4\tan \theta }$$
Table 2 Crystallite size (D) and lattice strain (ε) of (CrFeCoNi)1-x(WC)x (x = 0–20 wt%) HEA composites
D: the crystallite size
B: the full width at half maximum (FWHM)
λ: the wavelength of the X-radiation used (λ = 2d sin θ = 0.154187 nm)
d: the interplanar spacing, and θ is the peak position (Bragg angle)
ε: the lattice strain
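A short sketch evaluating Eqs. (2) and (3) from a measured peak is given below. It assumes the FWHM passed in is already corrected for instrumental broadening and is supplied in degrees, which the sketch converts to radians as the formulas require.

```python
import math

CU_KALPHA_NM = 0.154187  # wavelength used in this work, in nm

def crystallite_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=CU_KALPHA_NM):
    """Scherrer crystallite size, Eq. (2): D = 0.89 * lambda / (B * cos(theta))."""
    B = math.radians(fwhm_deg)                # FWHM after instrumental correction
    theta = math.radians(two_theta_deg / 2.0)
    return 0.89 * wavelength_nm / (B * math.cos(theta))

def lattice_strain(fwhm_deg, two_theta_deg):
    """Lattice strain, Eq. (3): epsilon = B / (4 * tan(theta))."""
    B = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return B / (4.0 * math.tan(theta))
```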
The D-values of the (CrFeCoNi)1-x(WC)x HEAs decreased, while the lattice strain increased, with increasing addition of WC particles, which are located at the grain boundaries of the (CrFeCoNi)1-x(WC)x HEAs. The addition of WC particles causes grain refinement of the constituents, as the hard particles act as internal milling balls; this enhances grain boundary cohesion and decreases the crystallite size. The difference in size between the elements forming the HEAs and the increase in dislocation density also contribute [31].
Figure 5 shows SEM micrographs of the investigated HEAs. The particles are fine because cold welding between the elemental particles occurs during the high-speed ball milling, and over a long milling time the welded particles are crushed to a finer particle size. Figure 5a shows the SEM image of the WC-free alloy. Two distinct regions can be seen: the light grey area is the main (CrFeCoNi) HEA, i.e. the FCC solid solution, and the dark grey area consists of Cr-rich phases, as confirmed by energy-dispersive spectrometry (EDS).
SEM micrographs of the (CrFeCoNi)1-x(WC)x HEAs with a value of x equal (a)0, (b)5, (c)10, (d)15, and (e)20 wt% WC
Figure 5b–e shows the samples containing 5, 10, 15, and 20 wt% WC, respectively. Bright phases, attributed to WC particles, begin to appear at 5 wt% WC and become more abundant as the WC content increases. In all samples the WC particles are uniformly distributed within the alloy, which may be due to the effective mixing of the WC powder with the main alloy, producing good homogenisation and eliminating powder clusters.
Although the phase distribution is visible in Fig. 5, the phase details in each sample are not clear; only the good homogenisation between the constituents is apparent. Higher-magnification microstructural examination is therefore needed to investigate the phases formed.
Figure 6 shows magnified SEM images of the alloys under test at 1000×. In Fig. 6 (a-1), Cr-rich phases inside the FCC matrix appear as dark grey areas, while Fig. 6 (a-2), a 4000× magnification of the Cr-rich area in Fig. 6 (a-1), focuses on the Cr-rich phase. Two types of Cr-rich phase are present: a dark grey, submicron-sized phase (Cr content above 80 wt.%) embedded in a light grey, Cr-depleted phase (Cr content below 60 wt.%).
SEM micrographs of the (CrFeCoNi)1-x(WC)x HEAs with a value of x equal (a-1, 2)0, (b)5, (c)10, (d)15, and (e)20 wt% WC
After adding WC to the alloy, new phases appear. According to Fig. 6 (b–e), two zones are present inside the FCC matrix: the bright phase, which is the W-rich carbide with a particle size of about 5–10 μm, and the dark grey area, which is the Cr-rich carbide phase. Both are distributed homogeneously throughout the alloy matrix. Some agglomerates are observed as the WC content increases, which may be due to the poor wettability between WC, a ceramic material, and the four metallic elements [23]; the contact angle between them is very large, so agglomeration occurs. Some black areas also appear in the microstructure owing to holes or pores, which are present because the relative density of the HEAs did not reach 100%.
According to the elemental mapping (Fig. 7) and the EDS analysis (Table 3), the FCC matrix consists mainly of Cr, Fe, Co, and Ni in almost equiatomic ratios. For the HEA with 5 wt.% WC, the elemental mapping (Fig. 7) shows a continuous grey matrix, a small number of fine bright grains (less than 10 μm), and dark grey phases of larger grain size (almost 50 μm). The mapping indicates that Cr, Fe, Ni, and Co are dispersed mainly in the grey matrix, while the bright grains are rich in W and C, showing that they are uniformly distributed WC particles. In the HEA containing 20% WC the bright grains are distributed even more uniformly and densely.
SEM images of the elemental mapping of the (CrFeCoNi)1-x(WC)x HEAs
Regarding the EDS analyses of the investigated HEAs in Table 3, the principal element compositions of the alloys are all close to the nominal compositions. Furthermore, the atomic ratio of W to C in the bright grains remained close to unity, and the W content of the bright grains increased from 57.37 to 60.50 wt.% as the WC content increased (W-rich carbide). The map analysis revealed a homogeneous distribution of all phases formed in the HEAs.
Table 3 Chemical compositions of different phases of (CrFeCoNi)1-x(WC)x HEAs obtained by SEM and EDS analysis
Figure 8 presents the effect of WC additions on the hardness of the alloys under test. The hardness increases gradually with WC addition: the WC-free HEA has a hardness of about 336.41 HV, while the 20 wt.% WC alloy reaches 632.48 HV at room temperature. This can be attributed to the high hardness of the WC particles (about 2400 HV); incorporating a hard ceramic such as WC into relatively soft elements like Cr, Co, and Ni strengthens the prepared samples as a whole. In addition, the WC particles act as internal milling balls that refine the grains, so the hardness increases with WC content through grain refinement, in accordance with the Hall-Petch relation, which predicts higher hardness for smaller grain size [32, 33]. The Cr-carbide phases formed also increase the hardness.
Hardness of the (CrFeCoNi)1-x(WC)x HEAs
The Hall-Petch equation was used to quantify the hardness enhancement and strengthening of the recrystallised HEA specimens due to grain refinement. According to the Hall-Petch relation, the hardness H of the sintered samples increases as the grain size d decreases, as expressed in Eq. (4) [34, 35]. The calculation was applied to the (CrFeCoNi)1.0(WC)0.0 HEA in order to compare the measured and calculated hardness of this sample.
$$H={H}_o+{K}_H{d}^{-0.5}$$
where Ho is the intrinsic hardness of the (CrFeCoNi)1.0(WC)0.0 HEA (123.6 HV), d is the average grain size, measured by the line-intercept method on at least five SEM micrographs and found to be about 2 μm, and KH is the Hall-Petch coefficient. Yoshida et al. calculated KH for the (CrFeCoNi)1.0(WC)0.0 HEA to be 276 [36]. Using these data, the hardness of the (CrFeCoNi)1.0(WC)0.0 HEA calculated from the Hall-Petch equation is 318.76 HV, which is close to the experimental result.
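The estimate can be reproduced with a few lines of code; the Hall-Petch coefficient is assumed here to carry units of HV·μm^0.5 so that it can be combined directly with a grain size in micrometres:

```python
# Hall-Petch estimate of Eq. (4): H = Ho + K_H * d**(-0.5)
Ho = 123.6      # intrinsic hardness, HV
K_H = 276.0     # Hall-Petch coefficient (assumed HV.um^0.5), from [36]
d_um = 2.0      # average grain size, um
H = Ho + K_H * d_um ** -0.5
print(f"Hall-Petch hardness ~ {H:.1f} HV")   # ~318.8 HV, close to the measured 336.41 HV
```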
For the investigation of the electrochemical behavior of the prepared samples, Fig. 9 shows the potentiodynamic polarization curves of the (CrFeCoNi)1-x(WC)x HEAs in 3.5% NaCl solution. Limited regions of passivation can be observed on the curves, which indicates a tendency of the alloys to passivate, and the passivation regions increased by increasing the WC percent.
Potentiodynamic polarization curves of the (CrFeCoNi)1-x(WC)x HEA composites
This may be attributed to the high corrosion resistance of the WC particles, a ceramic material with a protective surface; the chromium carbides formed have a similar effect and also increase the corrosion resistance [37]. Nickel, chromium, and cobalt are among the elements with the lowest corrosion rates and form passive layers on the alloy surface. The corrosion potential (Ecorr) and corrosion current density (Icorr) were obtained using the Tafel extrapolation method [38]. The parameter Icorr can be used to calculate the average corrosion rate (Rcorr), which represents the general corrosion resistance [39]. The corrosion rate is calculated from Eq. (5).
$$Corrosion\ rate\ \left( mm/ year\right)=3.27\times {10}^{-3}\times \frac{i_{corr}}{\rho}\times EW$$
where ρ is the density of the alloy (in g/cm3), Icorr (in μA/cm2) is the corrosion current density, and EW is the equivalent weight of the alloy [40], which can be calculated by (Eq. (6)):
$$EW={\left(\sum \frac{n_i{f}_i}{a_i}\right)}^{-1}$$
where ni is the valence of the ith element of the alloy, fi is the mass fraction of the ith element in the alloy, and ai is the atomic weight of the ith element in the alloy. As an illustration, the calculation for the (CrFeCoNi)1.0(WC)0.0 HEA is given in the following example:
$$EW={\left(\frac{0.25\times 3}{52.00}+\frac{0.25\times 2}{55.845}+\frac{0.25\times 2}{58.93}+\frac{0.25\times 2}{58.69}+\frac{0\times 0}{195.86}\right)}^{-1}=24.76$$
$${r}_{corr}=3.27\times {10}^{-3}\times \frac{72.71}{7.96}\times 24.76=0.740\ \left( mm/ year\right)$$
The equivalent weight calculated from Eq. (6) for the (CrFeCoNi)1.0(WC)0.0 HEA is 24.76, and the density of this alloy is 7.96 g/cm3, as shown in Fig. 3. Consequently, the corrosion rate calculated from Eq. (5) is 0.740 mm/year.
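The worked example above can be checked numerically as follows; the fractions, valences and atomic weights are those quoted in the text:

```python
# Eqs. (5)-(6) applied to the WC-free alloy, reproducing EW ~ 24.76 and ~0.740 mm/year.
elements = [
    # (fraction f_i, valence n_i, atomic weight a_i)
    (0.25, 3, 52.00),    # Cr
    (0.25, 2, 55.845),   # Fe
    (0.25, 2, 58.93),    # Co
    (0.25, 2, 58.69),    # Ni
]
EW = 1.0 / sum(f * n / a for f, n, a in elements)   # equivalent weight, Eq. (6)
i_corr = 72.71                                      # corrosion current density, uA/cm^2
rho = 7.96                                          # alloy density, g/cm^3
rate = 3.27e-3 * i_corr / rho * EW                  # Eq. (5), mm/year
print(f"EW = {EW:.2f}, corrosion rate = {rate:.3f} mm/year")
```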
Table 4 lists the values of Ecorr, Icorr, and Rcorr derived from the potentiodynamic curves using the AutoLab software. The Ecorr of the CrFeCoNi HEA is lower than that of the same alloy reinforced with the various WC contents, while its Icorr and corrosion rate Rcorr are higher than those of the other samples. The Ecorr, Icorr, and Rcorr values of (CrFeCoNi)100(WC)0 were −0.399 V, 7.271 × 10−5 A/cm2, and 0.74 mm/yr, respectively. For (CrFeCoNi)80(WC)20, Ecorr increased to −0.24 V, while Icorr and Rcorr decreased to 1.232 × 10−5 A/cm2 and 0.133 mm/yr, respectively. The addition of WC therefore lowers the corrosion rate and enhances the corrosion resistance.
Table 4 Electrochemical corrosion parameters of the (CrFeCoNi)1-x(WC)x HEA composites
It can also be noticed in Fig. 9 that a passivation behaviour appears with the addition of WC as a result of passive films forming on the HEA surface; these films prevent the inner material from being exposed to the corrosive medium and can slow down the corrosion reaction [12, 41]. A material's ability to resist localised attack on its passive film depends on its passivation region (∆EP = EPit − Ecorr). The ∆EP value increased with WC content, owing to the formation of W-rich and Cr-rich carbides and to the inherently high corrosion resistance provided by the Ni, Cr, and Co elements.
Although the relative density of the samples decreases with increasing WC content, the homogeneously distributed WC particles in the alloy matrix and the Cr-rich carbide phases formed improve the corrosion resistance. The WC addition enhances the passivation of the HEA surface and protects the inner material from exposure to the corrosive medium, which slows down the corrosion reaction. This enhanced passivation can be attributed to the formation of W-rich and Cr-rich carbides.
Because Archard's wear equation (Eq. (7)) [42] states that the wear volume loss (V) is inversely proportional to the hardness (H) of the material and linearly proportional to the sliding distance (L) and normal load (F), an improvement in wear resistance of the HEAs is also expected as the WC content increases.
$$V=K\frac{LF}{H}$$
where K is the wear coefficient, a dimensionless constant. This coefficient is a material-dependent constant determined by other characteristics, such as the elasticity, surface quality, and chemical affinity between the materials of the two contacting surfaces.
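A minimal sketch of Eq. (7) and of the wear-rate definition used in the next paragraph is shown below; the wear coefficient, load and sliding distance are purely illustrative values, and the hardness is converted from HV to Pa assuming 1 HV ≈ 9.807 MPa:

```python
# Archard's wear relation, Eq. (7): V = K * L * F / H
def wear_volume(K, load_N, distance_m, hardness_Pa):
    return K * load_N * distance_m / hardness_Pa

def wear_rate(volume_loss_m3, load_N, distance_m):
    # wear rate = volume loss / (normal load * total sliding distance)
    return volume_loss_m3 / (load_N * distance_m)

K = 1e-3                       # illustrative wear coefficient (dimensionless)
F, L = 5.0, 100.0              # normal load [N], sliding distance [m]
H_Pa = 336.41 * 9.807e6        # ~336 HV expressed in Pa
V = wear_volume(K, F, L, H_Pa)
print(f"V ~ {V:.2e} m^3, wear rate ~ {wear_rate(V, F, L):.2e} m^3/(N.m)")
```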
The wear rate is the wear volume loss divided by the normal load and the total sliding distance; the wear volume loss can also be computed by multiplying the wear area by the ball track distance. Figure 10 shows the wear rate of the (CrFeCoNi)1-x(WC)x (x = 0–20 wt.%) HEAs as a function of WC content. The wear rate decreases gradually with increasing WC content: the wear rate of the (CrFeCoNi)1.0(WC)0.0 HEA (1.70 × 10−4) is approximately 4.5 times that of the (CrFeCoNi)0.8(WC)0.2 HEA (3.81 × 10−5), so the wear resistance is significantly improved as the WC content increases. This is attributed to the uniform dispersion of the WC phase, the formation of chromium carbide phases, and grain refinement.
Wear rate of the (CrFeCoNi)1-x(WC)x HEAs
(CrFeCoNi)1-x(WC)x HEAs were successfully prepared by the PM technique. The relative density reached up to 97.0%, indicating high sintering quality.
The (CrFeCoNi)1-x(WC)x HEAs are composed of an FCC HEA matrix phase, a W-rich carbide, and two main types of Cr-rich carbide. The W-rich carbide has a size of 5–10 μm, while the Cr-rich carbides have complex compositions and include fine, submicron-sized Cr-rich phases.
The hardness of the (CrFeCoNi)1-x(WC)x HEAs steadily improved with increasing WC content, from 336.41 HV for (CrFeCoNi)1.0(WC)0.0 to 632.48 HV for (CrFeCoNi)0.80(WC)0.20. The hard WC particles and the precipitation of Cr-rich carbides are most likely responsible for the strengthening mechanism.
The reinforcement of the HEAs alloy with WC results in the enhancement of the corrosion resistance of the alloy. The highest corrosion resistance, i.e., the lowest Icorr and Rcorr values, was obtained by the alloy reinforced with 20% WC.
The promising mechanical and physical properties of the (CrFeCoNi)1-x(WC)x HEAs provide valuable guidance for expanding their industrial applications.
The wear rate gradually decreases with increasing WC content. The wear rate of the (CrFeCoNi)1.0(WC)0.0 HEA (1.70 × 10−4) is approximately 4.5 times that of the (CrFeCoNi)0.8(WC)0.2 HEA (3.81 × 10−5); this means that the wear resistance is significantly improved as the WC content increases.
The datasets collected and/or analyzed during the current study are available from the corresponding author on request. The corresponding author had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
HEA:
High-entropy alloy
BPR:
Ball to powder ratio
FE-SEM:
Field emission scanning electron microscope
FCC:
Face-centered cubic
XRD:
X-ray diffraction
EDS:
Energy-dispersive spectrometer
HV:
Hardness Vickers
Yilbas BS, Toor IUH, Malik J, Patel F (2014) Laser gas assisted treatment of AISI H12 tool steel and corrosion properties. Opt. Lasers Eng. 54:8–13. https://doi.org/10.1016/j.optlaseng.2013.10.004
Zhou R, Chen G, Liu B, Wang J, Han L, Liu Y (2018) Microstructures and wear behaviour of (FeCoCrNi)1-x(WC)x high entropy alloy composites. Int. J. Refract. Met. Hard Mater. 75:56–62. https://doi.org/10.1016/j.ijrmhm.2018.03.019
Tung CC, Yeh JW, Shun T t, Chen SK, Huang YS, Chen HC (2007) On the elemental effect of AlCoCrCuFeNi high-entropy alloy system. Mater. Lett. 61(1):1–5. https://doi.org/10.1016/j.matlet.2006.03.140
He JY et al (2016) A precipitation-hardened high-entropy alloy with outstanding tensile properties. Acta Mater. 102:187–196. https://doi.org/10.1016/j.actamat.2015.08.076
Cantor B, Chang ITH, Knight P, Vincent AJB (2004) Microstructural development in equiatomic multicomponent alloys. Mater. Sci. Eng. A 375–377:213–218. https://doi.org/10.1016/j.msea.2003.10.257
He F, Wang Z, Wu Q, Li J, Wang J, Liu CT (2017) Phase separation of metastable CoCrFeNi high entropy alloy at intermediate temperatures. Scr. Mater. 126:15–19. https://doi.org/10.1016/j.scriptamat.2016.08.008
Tang Z, Yuan T, Tsai CW, Yeh JW, Lundin CD, Liaw PK (2015) Fatigue behavior of a wrought Al0.5CoCrCuFeNi two-phase high-entropy alloy. Acta Mater. 99:247–258. https://doi.org/10.1016/j.actamat.2015.07.004
Poletti MG, Fiore G, Gili F, Mangherini D, Battezzati L (2017) Development of a new high entropy alloy for wear resistance: FeCoCrNiW0.3 and FeCoCrNiW0.3 + 5 at.% of C. Mater. Des. 115:247–254. https://doi.org/10.1016/j.matdes.2016.11.027
Shang C et al (2017) CoCrFeNi(W1−xMox) high-entropy alloy coatings with excellent mechanical properties and corrosion resistance prepared by mechanical alloying and hot pressing sintering. Mater. Des. 117:193–202. https://doi.org/10.1016/j.matdes.2016.12.076
Löbel M, Lindner T, Mehner T, Lampke T (2018) Influence of titanium on microstructure, phase formation and wear behaviour of AlCoCrFeNiTix high-entropy alloy. Entropy 20(7):1–11. https://doi.org/10.3390/e20070505
Huang G, Hou W, Shen Y (2018) Evaluation of the microstructure and mechanical properties of WC particle reinforced aluminum matrix composites fabricated by friction stir processing. Mater. Charact. 138(January):26–37. https://doi.org/10.1016/j.matchar.2018.01.053
Xu J, Wang S, Shang C, Huang S, Wang Y (2019) Microstructure and properties of CoCrFeNi(WC) high-entropy alloy coatings prepared using mechanical alloying and hot pressing sintering. Coatings 9(1). https://doi.org/10.3390/coatings9010016
Shang C, Axinte E, Ge W, Zhang Z, Wang Y (2017) High-entropy alloy coatings with excellent mechanical, corrosion resistance and magnetic properties prepared by mechanical alloying and hot pressing sintering. Surf Interfaces 9(June):36–43. https://doi.org/10.1016/j.surfin.2017.06.012
Joseph J, Haghdadi N, Shamlaye K, Hodgson P, Barnett M, Fabijanic D (2019) The sliding wear behaviour of CoCrFeMnNi and AlxCoCrFeNi high entropy alloys at elevated temperatures. Wear 428–429(March):32–44. https://doi.org/10.1016/j.wear.2019.03.002
Qiu Y, Thomas S, Fabijanic D, Barlow AJ, Fraser HL, Birbilis N (2019) Microstructural evolution, electrochemical and corrosion properties of Al x CoCrFeNiTi y high entropy alloys. Mater. Des. 170:107698. https://doi.org/10.1016/j.matdes.2019.107698
da Silva FS et al (2018) Corrosion behavior of WC-Co coatings deposited by cold gas spray onto AA 7075-T6. Corros. Sci. 136:231–243. https://doi.org/10.1016/j.corsci.2018.03.010
Li QH, Yue TM, Guo ZN, Lin X (2013) Microstructure and corrosion properties of alcocrfeni high entropy alloy coatings deposited on AISI 1045 steel by the electrospark process. Metall. Mater. Trans. A Phys. Metall. Mater. Sci. 44(4):1767–1778. https://doi.org/10.1007/s11661-012-1535-4
Qiu X w, Wu M j, Liu C g, Zhang Y p, Huang C x (2017) Corrosion performance of Al2CrFeCoxCuNiTi high-entropy alloy coatings in acid liquids. J. Alloys Compd. 708:353–357. https://doi.org/10.1016/j.jallcom.2017.03.054
Abu-Oqail A, Wagih A, Fathy A, Elkady O, Kabeel AM (2019) Effect of high energy ball milling on strengthening of Cu-ZrO2 nanocomposites. Ceram. Int. 45(5):5866–5875. https://doi.org/10.1016/j.ceramint.2018.12.053
Sadoun AM, Mohammed MM, Fathy A, El-Kady OA (2020) Effect of Al2O3 addition on hardness and wear behavior of Cu-Al2O3 electro-less coated Ag nanocomposite. J. Mater. Res. Technol. 9(3):5024–5033. https://doi.org/10.1016/j.jmrt.2020.03.020
Khoei AR, Sameti AR, Mofatteh H (2020) Compaction simulation of crystalline nano-powders under cold compaction process with molecular dynamics analysis. Powder Technol. 373:741–753. https://doi.org/10.1016/j.powtec.2020.06.069
Elkady OAM, Abu-Oqail A, Ewais EMM, El-Sheikh M (2015) Physico-mechanical and tribological properties of Cu/h-BN nanocomposites synthesized by PM route. J. Alloys Compd. 625:309–317. https://doi.org/10.1016/j.jallcom.2014.10.171
Sadoun AM, Mohammed MM, Elsayed EM, Meselhy AF, El-Kady OA (2020) Effect of nano Al2O3 coated Ag addition on the corrosion resistance and electrochemical behavior of Cu-Al2O3 nanocomposites. J. Mater. Res. Technol. 9(3):4485–4493. https://doi.org/10.1016/j.jmrt.2020.02.076
ASTM International (2013) Standard Test methods for density of compacted or sintered powder metallurgy (pm) products using Archimedes' principle. Astm B962-15 i:1–7. https://doi.org/10.1520/B0962-13.2
Luo W, Liu Y, Luo Y, Wu M (2018) Fabrication and characterization of WC-AlCoCrCuFeNi high-entropy alloy composites by spark plasma sintering. J. Alloys Compd. 754:163–170. https://doi.org/10.1016/j.jallcom.2018.04.270
Coeffi RF (2011) Standard test method for linearly reciprocating ball-on-flat sliding wear 1. Lubrication 05(Reapproved 2010):1–10. https://doi.org/10.1520/G0133-05R10.2
Liu B et al (2016) Microstructure and mechanical properties of equimolar FeCoCrNi high entropy alloy prepared via powder extrusion. Intermetallics 75:25–30. https://doi.org/10.1016/j.intermet.2016.05.006
Stepanov ND, Yurchenko NY, Tikhonovsky MA, Salishchev GA (2016) Effect of carbon content and annealing on structure and hardness of the CoCrFeNiMn-based high entropy alloys. J. Alloys Compd. 687:59–71. https://doi.org/10.1016/j.jallcom.2016.06.103
Yeh J (2013) Alloy design strategies and future trends in high-entropy alloys. Miner. Met. Mater. Soc. Alloy 65(12):1759–1771. https://doi.org/10.1007/s11837-013-0761-6
Danilchenko SN, Kukharenko OG, Moseke C, Protsenko IY, Sukhodub LF, Sulkio-Cleff B. Determination of the bone mineral crystallite size and lattice strain from diffraction line broadening. Cryst Res Technol. 2002;37(11):1234–40. https://www.semanticscholar.org/paper/Determination-of-the-Bone-Mineral-Crystallite-Size-Danilchenko-Kukharenko/5823ed75ed7addfa722dc7dc3b38a6a5d1d054e4.
Zhang KB et al (2009) Nanocrystalline CoCrFeNiCuAl high-entropy solid solution synthesized by mechanical alloying. J. Alloys Compd. 485(1–2):34–37. https://doi.org/10.1016/j.jallcom.2009.05.144
Yehia HM, El-Tantawy A, Ghayad IM, Eldesoky AS, El-kady O (2020) Effect of zirconia content and sintering temperature on the density, microstructure, corrosion, and biocompatibility of the Ti–12Mo matrix for dental applications. J. Mater. Res. Technol. 9(4):8820–8833. https://doi.org/10.1016/j.jmrt.2020.05.109
Khallaf AH, Bhlol M, Dawood OM, Elkady OA (2022) Effect of WC addition on the mechanical properties and microstructure of CrFeCoNi high entropy alloy by powder metallurgy technique. Int. J. Mech. Eng. 7(2):1127–1134
Ondicho I, Alunda B, Park N (2021) Intermetallics effect of Fe on the Hall-Petch relationship of ( CoCrMnNi ) 100-x Fe x medium-and high-entropy alloys. Intermetallics 136, no. May:107239. https://doi.org/10.1016/j.intermet.2021.107239
Barnett MR et al (2020) Featured article A scrap-tolerant alloying concept based on high entropy alloys. Acta Mater. 200:735–744. https://doi.org/10.1016/j.actamat.2020.09.027
Yoshida S, Ikeuchi T, Bhattacharjee T, Bai Y (2019) Effect of elemental combination on friction stress and Hall-Petch relationship in face-centered cubic high / medium entropy alloys. Acta Mater. 171:201–215. https://doi.org/10.1016/j.actamat.2019.04.017
Zhao RF, Ren B, Cai B, Liu ZX, Zhang GP, Zhang J j (2019) Corrosion behavior of CoxCrCuFeMnNi high-entropy alloys prepared by hot pressing sintered in 3.5% NaCl solution. Results Phys. 15, no. July:102667. https://doi.org/10.1016/j.rinp.2019.102667
McCafferty E (2005) Validation of corrosion rates measured by the Tafel extrapolation method. Corros. Sci. 47(12):3202–3215. https://doi.org/10.1016/j.corsci.2005.05.046
Shi Y, Yang B, Liaw PK (2017) Corrosion-resistant high-entropy alloys: a review. Metals (Basel). 7(2):1–18. https://doi.org/10.3390/met7020043
Chen YY, Duval T, Hung UD, Yeh JW, Shih HC (2005) Microstructure and electrochemical properties of high entropy alloys-a comparison with type-304 stainless steel. Corros. Sci. 47(9):2257–2279. https://doi.org/10.1016/j.corsci.2004.11.008
Xu B, Zhou Y, Liu Y, Hu S, Zhang G (2022) Effect of different contents of WC on microstructure and properties of CrMnFeCoNi high-entropy alloy-deposited layers prepared by PTA. J. Mater. Res. https://doi.org/10.1557/s43578-021-00452-7
Liu R, Li DY (2001) Modification of Archard's equation by taking account of elastic/pseudoelastic properties of materials. Wear 250–251:956–964. https://doi.org/10.1016/s0043-1648(01)00711-6
The authors acknowledge the financial and technical support of the Central Metallurgical R&D Institute and are greatly indebted to it for providing the materials used in this work and for access to the microstructure characterisation facilities.
All authors declare that there are no funding sources for this research paper.
Mechanical Eng. Dept., Egyptian Academy for Engineering & Advanced Technology (EAE & AT) Affiliated to Ministry of Military Production, Cairo, Egypt
A. Hegazy Khallaf
Mechanical Eng. Dept., Faculty of Engineering, University of Helwan, Helwan, Cairo, 11792, Egypt
M. Bhlol & O. M. Dawood
Central Metallurgical Research and Development Institute, P. O. Box: 87, Helwan, Cairo, Egypt
I. M. Ghayad
Powder Technology Department, Central Metallurgical R&D Institute, P. O. 87, Helwan, Cairo, 11421, Egypt
Omayma A. Elkady
M. Bhlol
O. M. Dawood
AHK and OAE designed the analyses, analyzed the data, and wrote the manuscript. The study was supervised and critically reviewed by OMD and OAE. All authors read and approved the final manuscript.
Correspondence to A. Hegazy Khallaf.
Competing interests
Khallaf, A.H., Bhlol, M., Dawood, O.M. et al. "Effect of tungsten carbide (WC) on electrochemical corrosion behavior, hardness, and microstructure of CrFeCoNi high entropy alloy". J. Eng. Appl. Sci. 69, 43 (2022). https://doi.org/10.1186/s44147-022-00097-1
High entropy alloy
Electrochemical behavior | CommonCrawl |
incompressible fluid approximation and fluid vs sound velocity
Consider the following case: a straight tube with a constant mass flow rate of water, $\dot m_{in}=\dot m_{out}$, and with a linear power input $\dot Q\ [\mathrm{W/m}]$ entering it. The water remains in the liquid phase throughout the tube.
My professor told us that in this case the incompressible fluid approximation is good if the velocity of the water is much less than the velocity of sound. Can you explain why this is a good criterion? In particular, what confuses me is that the density should change depending on the temperature region.
SimoBartz
There is a very similar post on Physics Stack Exchange. For any fluid really, the incompressible assumption can be good if the flow velocity is far below the sound speed of that fluid.
– Carlton
Water is compressible. Just apply sufficient pressure...
In particular what confuses me is that the density should change depending on the temperature region. That may be the case, however it imo does not have anything to do with compressibility. Compressibility refers to whether the fluid's volume changes when subject to e.g. pressure. Changes in density due to a change in temperature are different.
– idkfa
It depends on the velocity.
In particular what confuses me is that the density should change depending on the temperature region.
You stated that water stays liquid along the length of the tube, and if you look at a water properties table at atmospheric pressure in the range 32 to 90 degrees Celsius, the change in density is approximately 3%, so the water is hardly compressed at all.
The mathematical definition of flow incompressibility is that the divergence of the velocity vector is zero: $$ \nabla \cdot \vec{V}= \frac{\partial u_i}{\partial x_i}=0 $$
But this definition can be somewhat confusing. For example, density variations for water at room temperature are negligible, as in our previous example, yet if you pump the same water at velocities close to its speed of sound, the flow becomes compressible.
So, a flow is said to be compressible if its speed exceeds approximately 30% of its sound speed, i.e. if its Mach number exceeds $\text{Ma}_{crit} = 0.3$.
The speed of sound in water at 20 degrees C is approximately $1{,}480$ m/s, and the corresponding velocity at $\text{Ma} = 0.3$ is $v = 444$ m/s, which is not hard to achieve using a water jet.
So, in your problem you can calculate the range of velocities you might have and compare the corresponding Mach numbers against $\text{Ma}_{crit}$ to check whether your flow can be approximated as incompressible.
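A quick way to apply this check in practice (the sound speed of water is taken as roughly 1480 m/s as above; the flow velocity is just an example value):

```python
# Mach-number check of the incompressible-flow rule of thumb (Ma < 0.3).
def mach(velocity_m_s, sound_speed_m_s=1480.0):
    return velocity_m_s / sound_speed_m_s

v = 3.0   # example pipe-flow velocity, m/s
Ma = mach(v)
print(f"Ma = {Ma:.5f} -> {'incompressible approximation OK' if Ma < 0.3 else 'treat as compressible'}")
```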
Note: This answer is based on Rodriguez's discussion of the incompressible approximation in computational fluid dynamics, highly recommended.
Algo
But a 3% change is taken into account on hot water tanks with either venting or pressure relief valves so it must be considered.
@SolarMike I didn't say that it shouldn't :)). If the system is sensitive enough to even smaller changes than that, then it must be considered.
But it was not clear that it should... These things can explode...
The question confuses two separate concepts. One is the idea of incompressible flow, and the other is constant density flow.
The professor is referring to a criterion that allows you to use the incompressible flow equations, without heat addition. When you derive the more general flow equations, using Newton's second law, conservation of mass, and an equation of state, you find that there is an important parameter called the Mach number, M, defined as the fluid velocity divided by the local speed of sound. Moreover, M appears as M^2, and the latter often appears in terms such as (1 - M^2). When you study these equations, you find that if you neglect M^2 compared to unity, there can be no variation in density. Thus, if M^2 << 1, you can use the incompressible flow equations without heat addition. Practically this means flows where approximately M^2 < 0.1, or M < 0.3.
With heat addition, you need to invoke, in addition to the principles mentioned above, the energy equation. These form a much more complicated set, and it is often advantageous to look for less accurate but very useful simplifications, unless it is obvious that changes in density, for whatever reason, are important features of the flow.
ttonon
Pore-filling events in single junction micro-models with corresponding lattice Boltzmann simulations
Published online by Cambridge University Press: 06 July 2017
Ioannis Zacharoudiou,
Emily M. Chapman ,
Edo S. Boek and
Ioannis Zacharoudiou*
Qatar Carbonates and Carbon Storage Research Centre, Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
Emily M. Chapman
Qatar Carbonates and Carbon Storage Research Centre, Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK; Soft Matter Research Group, Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, CB3 0WA, UK
†Email address for correspondence: [email protected]
The aim of this work is to better understand fluid displacement mechanisms at the pore scale in relation to capillary-filling rules. Using specifically designed micro-models we investigate the role of pore body shape on fluid displacement during drainage and imbibition via quasi-static and spontaneous experiments at ambient conditions. The experimental results are directly compared to lattice Boltzmann (LB) simulations. The critical pore-filling pressures for the quasi-static experiments agree well with those predicted by the Young–Laplace equation and follow the expected filling events. However, the spontaneous imbibition experimental results differ from those predicted by the Young–Laplace equation; instead of entering the narrowest available downstream throat the wetting phase enters an adjacent throat first. Thus, pore geometry plays a vital role as it becomes the main deciding factor in the displacement pathways. Current pore network models used to predict displacement at the field scale may need to be revised as they currently use the filling rules proposed by Lenormand et al. (J. Fluid Mech., vol. 135, 1983, pp. 337–353). Energy balance arguments are particularly insightful in understanding the aspects affecting capillary-filling rules. Moreover, simulation results on spontaneous imbibition, in excellent agreement with theoretical predictions, reveal that the capillary number itself is not sufficient to characterise the two phase flow. The Ohnesorge number, which gives the relative importance of viscous forces over inertial and capillary forces, is required to fully describe the fluid flow, along with the viscosity ratio.
JFM classification
Interfacial Flows (free surface): Capillary flows; Interfacial Flows (free surface); Low-Reynolds-number flows: Porous media
Journal of Fluid Mechanics , Volume 824 , 10 August 2017 , pp. 550 - 573
DOI: https://doi.org/10.1017/jfm.2017.363
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© 2017 Cambridge University Press
Understanding the fundamentals of fluid displacement processes in porous media is essential for multiple applications, from fluid uptake in diapers, humidity and fluid uptake in proton exchange membranes to carbon sequestration and enhanced oil recovery. This involves understanding the dynamics of drainage and imbibition, which however is often hard to unravel due to the complexity of the porous medium in terms of pore geometry, surface roughness and wetting properties. Attempts to simplify the problem, in order to determine the fluid displacement pathways and the associated fluid distributions, have been carried out with pore network models (Blunt et al. 2002, 2013). Pore network models decompose the porous medium into an ensemble of geometric shapes and, based on the pioneering work of Lenormand, Zarcone & Sarr (1983), predict the fluid displacement at the field scale in a sequential manner. These models have been successful especially for the drainage case, but do not work so well for the imbibition case. Our aim in this paper is to demonstrate the importance of the pore geometry in determining the displacement pathways, especially for spontaneous imbibition, by studying the displacement dynamics in well-defined single capillaries and pore junctions.
A significant amount of work has been carried out on the subject of imbibition or capillary filling. A fluid penetrates a hydrophilic capillary due to the Laplace pressure across the fluid–fluid interface, or equivalently due to the decrease in the free energy of the system as the hydrophilic liquid wets the walls of the capillary. The system uses the energy liberated from wetting the walls to drive the fluid inside the capillary. Lucas (1918) and Washburn (1921) gave the first account of this phenomenon, but considered only the regime when all influences apart from the driving force and the viscous drag cease to exist. Still, their predictions could describe the experimentally observed time dependency of the filled length of the penetrating fluid, i.e. $l\sim \sqrt{t}$. Several authors have further progressed the subject by considering effects not taken into account by Lucas and Washburn such as inertial (Quéré 1997; Diotallevi et al. 2009) and gravitational effects (Raiskinmäki et al. 2002), deviations from a Poiseuille velocity profile at the inlet of the capillary or at the interface (Levine et al. 1980; Dimitrov, Milchev & Binder 2007; Diotallevi et al. 2009) and variations of the dynamic contact angle (Quéré 1997; Pooley, Kusumaatmaja & Yeomans 2008). The effect of the solid surfaces on the imbibition process, whether this involves the roughness (Stukan et al. 2010) or patterned surfaces (Kusumaatmaja et al. 2008; Mognetti & Yeomans 2009), was also investigated. Finally, as the Lucas–Washburn regime is the asymptotic limit for long times, extensive work was devoted to the initial stages of capillary filling (Siegel 1961; Petrash, Nelson & Otto 1963; Dreyer, Delgado & Path 1994; Ichikawa & Satoda 1994; Quéré 1997; Zhmud, Tiberg & Hallstensson 2000; Zacharoudiou & Boek 2016).
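For reference, the classical Lucas–Washburn prediction for a cylindrical capillary of radius $r$ is $l(t)=\sqrt{\gamma r\cos \theta \,t/(2\eta )}$; a minimal sketch with illustrative, decane-like parameter values (a cylindrical tube, not the slit geometry considered later in this work) is:

```python
import math

# Lucas-Washburn filled length l(t) = sqrt(gamma * r * cos(theta) * t / (2 * eta))
gamma, eta, theta_deg, r = 0.024, 0.85e-3, 26.0, 25e-6   # N/m, Pa.s, deg, m (illustrative)

def filled_length(t_s):
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t_s / (2 * eta))

for t in (1e-3, 1e-2, 1e-1):
    print(f"t = {t:.0e} s  ->  l ~ {filled_length(t)*1e3:.2f} mm")   # grows as sqrt(t)
```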
In order to unravel the fundamentals of fluid displacement processes at the pore scale, experimental research with micro-models has been on-going for the past several decades. This has progressed considerably, from bead packs to complex networks representing rock thin sections (Chatenever & Calhoun Jr 1952; Lenormand et al. 1983; Hornbrook, Castanier & Pettit 1991; Giordano & Cheng 2001; Bico & Quéré 2003; Kavehpour, Ovryn & McKinley 2003; Rangel-German & Kovscek 2006; Kumar Gunda et al. 2011; Karadimitriou & Hassanizadeh 2012). Of particular interest are the displacement mechanisms controlling both primary drainage and imbibition processes (Lenormand et al. 1983; Yu & Wardlaw 1986; Lenormand 1990; Morrow & Mason 2001; Chang et al. 2009), especially with regards to capillary trapping during carbon sequestration (Chalbaud et al. 2007; Taku, Jessen & Orr 2007; Saadatpoor, Bryant & Sepehrnoori 2011; Tokunaga et al. 2013).
Figure 1. Schematic drainage/imbibition capillary pressure curve. During primary drainage the capillary pressure $P_{c}$ systematically increases with increasing non-wetting phase saturation to connate water saturation ( $S_{WC}$ ), whereas the capillary pressure decreases with increasing wetting phase saturation during imbibition.
Lenormand et al. (1983) investigated these fluid displacement mechanisms in resin etched networks of straight throats varying in width, with multiple menisci displacement processes identified. In the case of two immiscible fluids (oil–air), Lenormand et al. (1983) illustrated that the Young–Laplace equation was sufficient to describe all menisci displacement mechanisms. The Young–Laplace equation (Rowlinson & Widom 1982) states
(1.1) $$P_{c}=P_{nw}-P_{w}=\gamma \left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right),$$
where $P_{c}$ is the capillary pressure, $P_{nw}$ and $P_{w}$ are the non-wetting phase (NWP) and wetting phase (WP) pressures respectively, $\gamma$ is the surface tension and $r_{1}$, $r_{2}$ are the principal radii of curvature of the fluid interface. For throats with a rectangular cross-section this becomes $P_{c}=2\gamma \cos \theta (1/d+1/w_{t})$, where $\theta$ is the contact angle and $d$, $w_{t}$ are the depth and width of the throat. For primary drainage the porous medium is initially saturated with WP before NWP is forcibly injected, displacing the WP and thus causing the capillary pressure to systematically increase with increasing NWP saturation, see figure 1. Considering a pore junction with different throat widths, where the NWP enters from one of the throats, the Young–Laplace law (1.1) says that the NWP should select the downstream throat with the lowest capillary entry pressure first, i.e. by entering the widest throat. In the case of $\text{CO}_{2}$ sequestration in a saline aquifer, once injection of the NWP ($\text{CO}_{2}$) (Chiquet & Broseta 2007; Doughty, Freifeld & Trautz 2008; Hesse, Orr & Tchelepi 2008; Chalbaud et al. 2009) has stopped, the WP (brine) re-enters the porous medium via imbibition. During the imbibition process, the capillary pressure systematically decreases with increasing WP saturation, see figure 1. This means that, following (1.1), we expect that the WP will enter the narrowest downstream throat first, before sequentially filling all other throats in order of increasing diameter.
In reality, displacement processes are more complex as they are dependent on the pore geometry along with the WP and NWP location. In order to observe these displacement mechanisms experimentally, we used: (i) micro-fluidic devices as their transparent nature makes visualisation of fluid dynamics relatively simple and (ii) single pore geometries, rather than more complex networks, as single pore geometries allow us to accurately control the displacement processes in each throat individually; see figure 2 for schematic micro-model designs used. Furthermore, the models' characteristics, like pore geometry, surface energy and roughness, can be carefully controlled. The above enabled us to examine real-time primary drainage and imbibition events systematically via both quasi-static and spontaneous methods.
The quasi-static experiments are performed in a sequence of pressure steps, with hydrostatic equilibrium reached before progressing to the next pressure step. These quasi-static experiments are relevant to displacement processes occurring in the far field during $\text{CO}_{2}$ storage and enhanced oil recovery operations, associated with low Reynolds numbers. Displacement processes within pore network models (Øren, Bakke & Arntzen 1998; Øren & Bakke 2003; Valvatne & Blunt 2003; Sorbie & Skauge 2012) are performed in this fashion. The dynamic experiments were conducted as they are relevant to drainage and imbibition processes occurring near the well bore at relatively high Reynolds numbers. Further details on the experimental methods will be provided in the next section.
To further elucidate our experimental findings we compare experiments to lattice Boltzmann simulations in the same geometries. These provide the opportunity for a detailed investigation of the problem by varying the parameters under investigation over a wider range, not easily accessible to the experiments. The overall goal of this study is to challenge the pore-filling rules on which network models are based by comparing the sequence of throat filling in simple geometries for which the Laplace pressures can be calculated exactly and without ambiguity.
This paper is organised as follows; in the next section we provide the details of the experiments and the numerical scheme. We present and discuss our results in § 3. Finally conclusions drawn from this work are discussed in § 4.
Figure 2. Schematic micro-model designs, including throat configuration and connectors. (a) Geometry 1: square pore with equal throats. (b) Geometry 2: square pore with unequal throats. (c) Geometry 3: pore with unequal throats. (d) Top down view of chip including confocal cross-section of a throat and (e) schematic cross-section of micro-model and connectors.
2 Methodology
2.1 Experimental section
2.1.1 Micro-fluidic models
To conduct these experiments we had specifically designed micro-models fabricated in poly(methyl methacrylate) (PMMA), figure 2(a–c), by Epigem (Redcar). The fabrication procedure involves defining the pattern in the base layer of the model by using SU-8 (an epoxy based photoresist) via photolithography, which is achieved in two stages. Initially an under-layer is deposited (spin coated then dried) and fully cured before a secondary layer is deposited in the same way. The pattern is then created by exposing the coated models through a photo-mask and developing it to form the features. The top layer has a partially cured SU-8 layer deposited on the underside (this will form the top of the pattern) before the inlet/outlet holes are drilled into it. Finally, the base and top layer are assembled; figure 2(e) illustrates the cross-section of an assembled model. All the models have been chemically treated by Epigem, in order to increase the hydrophobicity of the surface. Additionally all models have an approximate etch depth of $d=50~\mu \text{m}$. The designs are intended to explore pore geometry and the influence of varying throat diameters, providing a range of different capillary entry pressures.
2.1.2 Experimental set-up
All of the fluid displacement observations were captured via a high-speed video microscope (FastCam MC2.1, Photron) which is housed within a laminar flow cabinet (PurAir-48, Air Science). This is necessary due to the intricate nature of the micro-models, where unnecessary exposure to dust can lead to blockages rendering the micro-models unusable. Due to the hydrophobic nature of the micro-models, n-decane (viscosity $\eta _{w}=0.85~\text{mPa}~\text{s}$ (Dymond & Øye 1994), surface tension $\gamma =0.024~\text{N}~\text{m}^{-1}$ (Kuhn, Försterling & Waldeck 2009), density $\rho =730~\text{kg}~\text{m}^{-3}$, 99 %, Sigma-Aldrich) and air (viscosity $\eta _{nw}=18.2~\mu \text{Pa}~\text{s}$, density $\rho =1.2~\text{kg}~\text{m}^{-3}$) were selected as the WP and NWP respectively. However, due to the unique fabrication of each micro-model, the etch depth/width and contact angle varied for each chip, with the contact angle always measured through the denser phase (Lyons 2009), resulting in different capillary entry pressures. All experiments were conducted at ambient conditions.
2.1.3 Experimental procedure – primary drainage and imbibition
The primary drainage experiments began with the models fully saturated with the WP. The NWP entered the model via either: (i) quasi-static displacement, achieved by gradually decreasing the WP pressure by siphoning the WP into a reservoir located below the model via the narrowest throat, or (ii) dynamic displacement – injection of the NWP at a constant flow rate of $0.5~\mu \text{l}~\text{min}^{-1}$ via a programmable syringe pump (BS-8000, Braintree Scientific Ltd).
For imbibition the models were initially saturated with the NWP. During quasi-static displacement the model was connected to a reservoir of the WP and $P_{w}$ was then gradually increased by raising the height of the reservoir. In contrast, spontaneous displacement was attained by placing a droplet of the WP over an inlet port which then spontaneously penetrated into the model.
2.2 Numerical method
In this section we describe the numerical method we shall use, starting with the thermodynamics in § 2.2.1, the dynamical equations of motion in § 2.2.2 and the lattice Boltzmann implementation in § 2.2.3.
2.2.1 Thermodynamics of the fluid
The equilibrium properties of a binary fluid can be described by a Landau free energy functional (Briant & Yeomans 2004)
(2.1) $$\mathcal{F}=\int _{V}\left(f_{b}+\frac{\kappa _{\phi }}{2}(\partial _{\alpha }\phi )^{2}\right)\,\text{d}V+\int _{S}f_{s}\,\text{d}S.$$
The first term in the integrand is the bulk free energy density given by
(2.2) $$f_{b}=-\frac{A}{2}\phi ^{2}+\frac{A}{4}\phi ^{4}+\frac{c^{2}}{3}\rho \ln \rho ,$$
where $\phi$ is the concentration or order parameter, $\rho$ is the fluid mass density and $c$ is a lattice velocity parameter. $A$ is a constant with dimensions of energy per unit volume. This choice of $f_{b}$ allows binary phase separation into two phases with bulk equilibrium solutions $\phi _{eq}=\pm 1$. The position of the interface is chosen to be the locus $\phi =0$. The term in $\rho$ controls the compressibility of the fluid (Kendon et al. 2001).
The presence of interfaces is accounted for by the gradient term $(\kappa _{\phi }/2)(\partial _{\alpha }\phi )^{2}$, which penalises spatial variations of the order parameter $\phi$. This gives rise to the interface tension $\gamma =\sqrt{8\kappa _{\phi }A/9}$ and to the interface width $\xi =\sqrt{\kappa _{\phi }/A}$ (Briant & Yeomans 2004).
The final term in the free energy functional, equation (2.1), describes the interactions between the fluid and the solid surface. Following Cahn (1977), the surface energy density is taken to be of the form $f_{s}=-h\phi _{s}$, where $\phi _{s}$ is the value of the order parameter at the surface. Minimisation of the free energy gives an equilibrium wetting boundary condition (Briant & Yeomans 2004)
(2.3) $$\kappa _{\phi }\partial _{\bot }\phi =-\frac{\text{d}f_{s}}{\text{d}\phi _{s}}=-h.$$
The value of the parameter $h$ (the surface excess chemical potential) is related to the equilibrium contact angle $\theta ^{eq}$ via (Briant & Yeomans 2004)
(2.4) $$h=\sqrt{2\kappa _{\phi }A}\,\text{sign}\left[\frac{\pi }{2}-\theta ^{eq}\right]\sqrt{\cos \left(\frac{\alpha }{3}\right)\left\{1-\cos \left(\frac{\alpha }{3}\right)\right\}},$$
where $\alpha =\arccos (\sin ^{2}\theta ^{eq})$ and the function sign returns the sign of its argument.
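A small helper sketch of these relations (the surface tension, interface width and the wetting parameter $h$ of (2.4)) is given below; the values of $A$ and $\kappa _{\phi }$ are illustrative lattice units, not the parameters used in the simulations reported here:

```python
import math

def interface_properties(A, kappa):
    # surface tension and interface width of the free-energy binary-fluid model
    gamma = math.sqrt(8.0 * kappa * A / 9.0)
    xi = math.sqrt(kappa / A)
    return gamma, xi

def wetting_h(A, kappa, theta_eq_rad):
    # Eq. (2.4): wetting potential for a prescribed equilibrium contact angle
    alpha = math.acos(math.sin(theta_eq_rad) ** 2)
    diff = math.pi / 2 - theta_eq_rad
    sign = 0.0 if abs(diff) < 1e-12 else math.copysign(1.0, diff)
    c = math.cos(alpha / 3.0)
    return math.sqrt(2.0 * kappa * A) * sign * math.sqrt(c * (1.0 - c))

A, kappa = 0.04, 0.04                       # illustrative lattice-unit values
gamma, xi = interface_properties(A, kappa)
h = wetting_h(A, kappa, math.radians(60.0))  # hydrophilic surface, theta_eq = 60 deg
print(f"gamma = {gamma:.4f}, xi = {xi:.2f}, h = {h:.5f}")
```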
This choice of the free energy leads to the chemical potential
(2.5) $$\mu =\frac{\delta \mathcal{F}}{\delta \phi }=-A\phi +A\phi ^{3}-\kappa _{\phi }\partial _{\gamma \gamma }\phi ,$$
and the pressure tensor (Anderson, McFadden & Wheeler Reference Anderson, McFadden and Wheeler1998)
(2.6) $$P_{\alpha \beta }=\left[p_{b}-\kappa _{\phi }\phi \partial _{\gamma \gamma }\phi -\frac{\kappa _{\phi }}{2}(\partial _{\gamma }\phi )^{2}\right]\delta _{\alpha \beta }+\kappa _{\phi }(\partial _{\alpha }\phi )(\partial _{\beta }\phi ),$$
where $p_{b}=(c^{2}/3)\rho -(A\phi ^{2})/2+(3/4)A\phi ^{4}$ is the bulk pressure.
2.2.2 Equations of motion
The hydrodynamic equations for the system are the continuity, (2.7), and the Navier–Stokes, (2.8), equations for a non-ideal fluid
(2.7) $$\partial _{t}\rho +\partial _{\alpha }(\rho u_{\alpha })=0,$$
(2.8) $$\partial _{t}(\rho u_{\alpha })+\partial _{\beta }(\rho u_{\alpha }u_{\beta })=-\partial _{\beta }P_{\alpha \beta }+\partial _{\beta }[\eta (\partial _{\beta }u_{\alpha }+\partial _{\alpha }u_{\beta })],$$
where $\boldsymbol{u}$, $P_{\alpha \beta }$, $\eta$ are the fluid velocity, pressure tensor and dynamic viscosity respectively. For a binary fluid the equations of motion are coupled with a convection–diffusion equation,
(2.9) $$\partial _{t}\phi +\partial _{\alpha }(\phi u_{\alpha })=M\nabla ^{2}\mu ,$$
that describes the dynamics of the order parameter $\phi$. $M$ is a mobility coefficient.
2.2.3 Lattice Boltzmann method
The equations of motion are solved using a standard free energy lattice Boltzmann algorithm for a binary fluid (Briant & Yeomans 2004). In particular we use a three-dimensional model with 19 discrete velocity vectors (D3Q19) and adopt a multiple relaxation time (MRT) (D'Humières et al. 2002) approach for the evolution of the distribution functions, $f_{i}$, associated with the fluid density $\rho$. Following Pooley, Kusumaatmaja & Yeomans (2009), the relaxation times responsible for generating the viscous terms in the Navier–Stokes equation are set to $\tau _{f}$ (based on the fluid viscosity $\eta =\rho (\tau _{f}-0.5)/3$), those related to conserved quantities to infinity and all the others, which correspond to non-hydrodynamic modes, to unity. A single relaxation time approach is sufficient for the order parameter $\phi$. The relaxation time for the evolution of the distribution functions, $g_{i}$, associated with the concentration $\phi$ is set to $\tau _{g}=1$. As shown by Pooley et al. (2009), this approach suppresses spurious currents at the contact line, while improving significantly the numerical stability of the algorithm as well (Lallemand & Luo 2000). For a detailed description of the lattice Boltzmann (LB) implementation we refer the reader to (Briant & Yeomans 2004; Yeomans 2006; Pooley et al. 2008, 2009).
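For orientation, a schematic single-phase D2Q9 BGK collide-and-stream update is sketched below; it is only meant to illustrate the basic lattice Boltzmann cycle and is far simpler than the binary-fluid D3Q19 MRT scheme coupled to an order-parameter distribution that is actually used here:

```python
import numpy as np

# Minimal single-phase D2Q9 BGK lattice Boltzmann step (periodic domain).
NX, NY, TAU = 64, 32, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

f = equilibrium(np.ones((NX, NY)), np.zeros((NX, NY)), np.zeros((NX, NY)))
for step in range(100):
    rho = f.sum(axis=0)                                   # density moment
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho      # momentum moments
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / TAU             # BGK collision
    for i in range(9):                                     # streaming step
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
print(rho.mean())   # total mass is conserved (~1.0)
```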
Numerically we solve the equations of motion using graphics processing units (NVIDIA Tesla K40 GPU cards), taking advantage of the fact that the LB method is particularly well suited for computations on a parallel architecture (Gray, Cen & Boek 2016). Running the simulations in parallel on 8 Tesla K40 cards reduces the required computational time to a few days for the most computationally intensive simulation.
3 Results
In this section we present our results. We start with primary drainage, which also serves as a validation for our experiments. Then we turn our attention to the imbibition case, where we examine pore-filling events and investigate the role of pore geometry in determining the displacement pathways.
Figure 3. Quasi-static primary drainage images in a micro-model (Geometry 2) with etch depth $d=45~\mu \text{m}$ and throat widths $w_{t}$: 33, 51, 65, $107~\mu \text{m}$ (WP – white, NWP – grey). Contact angle $26^{\circ }$.
Table 1. Critical pressures for drainage in a micro-model (Geometry 2) with etch depth $d=45~\mu \text{m}$ and unequal throat widths $w_{t}$: 33, 51, 65 and $107~\mu \text{m}$, calculated via (1.1) and from the experimental data. Contact angle: $26^{\circ }$, interfacial tension: $0.024~\text{N}~\text{m}^{-1}$. The experimental error of $\pm 7$ Pa stems from the 1 mm accuracy with which the height of the reservoir can be set.
3.1 Primary drainage
The model was initially filled with the WP. For the quasi-static displacement, $P_{w}$ was gradually decreased, allowing the NWP to enter the largest throat first, before sequentially displacing the WP from the throats in decreasing size order, as can be seen in figure 3. This type of displacement is referred to as 'piston-type' motion (Lenormand et al. 1983): the NWP enters a throat filled with the WP only if the capillary pressure is equal to or greater than a given value $P_{p}=P_{nw}-P_{w}=2\gamma \cos \theta ^{eq}(1/d+1/w_{t})$, which we call the critical pressure. The calculated theoretical and experimental critical pressures for drainage within the throats are displayed in table 1. Generally there is good agreement between the two.
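To illustrate how the critical-pressure expression is used, the short sketch below evaluates $P_{p}=2\gamma \cos \theta ^{eq}(1/d+1/w_{t})$ for the four throat widths of Geometry 2, with the parameters quoted in table 1 ($\gamma =0.024~\text{N}~\text{m}^{-1}$, $\theta ^{eq}=26^{\circ }$, $d=45~\mu \text{m}$). This is an illustrative calculation only; the values listed in table 1 are the authors' own.

```python
import math

gamma = 0.024                       # interfacial tension [N/m]
theta_eq = math.radians(26.0)       # equilibrium contact angle
d = 45e-6                           # etch depth [m]

for w_t in (33e-6, 51e-6, 65e-6, 107e-6):   # throat widths [m]
    P_p = 2.0 * gamma * math.cos(theta_eq) * (1.0 / d + 1.0 / w_t)
    print(f"w_t = {w_t * 1e6:5.0f} um  ->  P_p = {P_p:6.0f} Pa")
```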
For the dynamic drainage, the NWP is injected into the chip at a constant flow rate of $0.5~\mu \text{l}~\text{min}^{-1}$. This leads to the displacement of the WP at a constant mean velocity $u$ in each throat, prior to and after the square pore body. The corresponding capillary number for the interface motion in the throat prior to the pore body is $Ca=\eta _{w}u/\gamma \sim 10^{-4}$. Typical values for the ratio of viscous to capillary forces at the pore scale, quantified by the capillary number, are in the range $10^{-3}$–$10^{-10}$, depending on the distance from the injection point in the well bore (Blunt & Scher 1995). Again, as can be seen in figure 4(a), the NWP displaces the WP via the largest downstream throat first, as predicted by (1.1). To validate the displacement sequence, we have carried out LB simulations in exactly the same geometry, with the same value of $Ca$ and contact angle as in the experiment. The fluid flow was driven by applying velocity boundary conditions (Hecht & Harting 2010) at the inlet and outlet of the simulation domain to match the experimental conditions. The results are presented in figure 4(b) and display good agreement with the experimental displacement sequence.
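For orientation, the order of magnitude of the quoted capillary number can be recovered from the injection rate and the throat cross-sections. The sketch below assumes a rectangular cross-section $d\times w_{t}$ and a water-like wetting-phase viscosity of $10^{-3}$ Pa s; the viscosity value is an assumption made for illustration, not a parameter stated in this section.

```python
import math

Q = 0.5e-9 / 60.0        # injection rate: 0.5 ul/min converted to m^3/s
gamma = 0.024            # interfacial tension [N/m]
eta_w = 1.0e-3           # assumed wetting-phase viscosity [Pa s] (water-like; illustration only)
d = 45e-6                # etch depth [m]

for w_t in (33e-6, 51e-6, 65e-6, 107e-6):
    u = Q / (d * w_t)                      # mean velocity in a throat of cross-section d x w_t
    Ca = eta_w * u / gamma
    print(f"w_t = {w_t * 1e6:5.0f} um : u = {u * 1e3:5.2f} mm/s, Ca = {Ca:.1e}")
```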
3.2 Imbibition
3.2.1 Quasi-static imbibition
Pore-filling events depend on the number of throats connected to the pore that are occupied by the NWP, along with their orientation. Generally, pore-filling events are designated $I_{1}$, $I_{2}$, $I_{3}\ldots I_{n}$, with $n$ indicating the number of throats filled with the NWP (Lenormand et al. 1983). Here $I_{1,2,2A,3}$ pore-filling events were investigated via quasi-static imbibition experiments, by gradually increasing $P_{w}$, in a micro-model with equal throat widths (Geometry 1: etch depth $d=56~\mu \text{m}$, throat width $w_{t}=73~\mu \text{m}$, pore width $238~\mu \text{m}$); see figure 5. At a critical capillary pressure the WP spontaneously displaces the NWP via 'piston-type' displacement. In all cases the critical pressure of the pore-filling event calculated from the Young–Laplace equation is very close to the experimentally determined critical pressure, as shown in table 2. Good agreement is achieved because we know the exact pore geometry and can observe the meniscus shape, allowing the critical radii to be well defined, as illustrated for each case in figure 5.
Figure 4. Primary dynamic drainage images: (a) experimental images of forced injection of NWP at $0.5~\mu \text{l}~\text{min}^{-1}$; (b) corresponding LB simulations. The dashed arrow denotes the direction of the flow.
Figure 5. Quasi-static $I_{1}$, $I_{2}$, $I_{2A}$ and $I_{3}$ imbibition experimental results (WP – white, NWP – grey). The critical radii in the plane of the micro-model, $r_{2}=r$, used to calculate the displacement pressures for each case are also illustrated together with $r_{1}=d/2\cos \theta ^{eq}$.
Table 2. Critical pressures for each pore-filling event in a micro-model (Geometry 1) with equal throat widths. Contact angle: $28^{\circ }$, etch depth $d=56~\mu \text{m}$, throat width $w_{t}=73~\mu \text{m}$, pore width: $238~\mu \text{m}$, interfacial tension: $0.024~\text{N}~\text{m}^{-1}$. The critical radii in the plane of the micro-model, $r_{2}=r$, used in the calculation of the critical pressures are shown in figure 5. The principal radii of curvature, equation (1.1), are $r_{1}=d/2\cos \theta ^{eq}$ and $r_{2}=r$. The experimental error of $\pm 7$ Pa stems from the 1 mm accuracy with which the height of the reservoir can be set.
Quasi-static experiments were also conducted in a micro-model (Geometry 3) with unequal throats (etch depth $d=55~\mu \text{m}$, throat widths $w_{t}=27$, 47, 60, $100~\mu \text{m}$, pore: 155, $236~\mu \text{m}$). The predicted displacement sequence for this geometry, using the Young–Laplace equation and a contact angle of $16^{\circ }$, is first snap off ($P_{s}$) in the narrowest throat at 1240 Pa, followed by pore-body filling via 'piston-type' displacement ($P_{p}$) at 869 Pa; see figure 6(a). The snap-off pressure, $P_{s}=2\gamma (\cos \theta ^{eq}-\sin \theta ^{eq})/\min (d,w_{t})$, was estimated from the pressure at which two growing corner menisci meet on the channel wall (Valvatne & Blunt 2004). Snap off is able to occur during the quasi-static experiments because there is time for the advancing WP films to develop ahead of the main meniscus, which swell and become unstable (Bico & Quéré 2003; Kavehpour et al. 2003), see figure 6(b), and because 'piston-type' displacement is not possible for topological reasons (Lenormand et al. 1983). The WP films can be clearly seen in figure 7 as the darker lines outlining the model geometry.
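As a rough check of the snap-off expression, the sketch below evaluates $P_{s}=2\gamma (\cos \theta ^{eq}-\sin \theta ^{eq})/\min (d,w_{t})$ for the narrowest throat of Geometry 3 using the parameters quoted above ($\gamma =0.024~\text{N}~\text{m}^{-1}$, $\theta ^{eq}=16^{\circ }$, $d=55~\mu \text{m}$, $w_{t}=27~\mu \text{m}$). The result is within a few per cent of the quoted 1240 Pa; small differences can arise from rounding of the geometric parameters, so this is an illustrative evaluation rather than a re-derivation of the authors' value.

```python
import math

gamma = 0.024                        # interfacial tension [N/m]
theta_eq = math.radians(16.0)        # equilibrium contact angle
d = 55e-6                            # etch depth [m]
w_t = 27e-6                          # narrowest throat width [m]

P_s = 2.0 * gamma * (math.cos(theta_eq) - math.sin(theta_eq)) / min(d, w_t)
print(f"snap-off pressure P_s ~ {P_s:.0f} Pa")   # ~1.2 kPa, comparable to the quoted 1240 Pa
```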
Figure 6. (a) Predicted displacement sequence for quasi-static $I_{3}$ imbibition (Geometry 3) using the Young–Laplace equation for a contact angle of $16^{\circ }$, with the narrowest throat filling first via snap off ($P_{s}$), followed by the pore body filling through piston-like displacement ($P_{p}$). (b) Schematic illustration of the snap-off mechanism. Swelling of the corner WP films leads to snap off when the menisci meet on the channel wall (dashed line) and become unstable. Top panel: configuration in the plane of the micro-model at the channel wall. Bottom panel: cross-sectional view in the orthogonal plane.
Figure 7. Quasi-static $I_{3}$ imbibition experimental results (WP – white, NWP – grey, wetting films – black). Snap off occurs in the narrowest throat before the meniscus enters the pore, as predicted by the Young–Laplace equation.
Figure 8. Spontaneous $I_{3}$ imbibition experimental results ((a), Geometry 3, throat widths: ($T_{1}$) $w_{t}=27~\mu \text{m}$, ($T_{2}$) $w_{t}=47~\mu \text{m}$, ($T_{3}$) $w_{t}=60~\mu \text{m}$, ($T_{4}$) $w_{t}=100~\mu \text{m}$, etch depth $d=55~\mu \text{m}$), with the corresponding LB simulations (b); here the WP travels down the adjacent throat first. This behaviour is not predicted by the Young–Laplace law (1.1). Equilibrium contact angle $\theta ^{eq}=16^{\circ }$, viscosity ratio $r_{\eta }=\eta _{w}/\eta _{nw}=50$.
3.2.2 Spontaneous imbibition
Extending our research to spontaneous imbibition, we investigated the $I_{3}$ displacement mechanism, which was compared to LB simulations (figure 8). During our spontaneous imbibition experiments we observed that the advancing meniscus travelled down an adjacent throat ($T_{2}$) first, instead of filling the smallest throat, as the WP films do not have time to develop ahead of the advancing meniscus. Indeed, Lenormand (1990) noted that when no wetting films are present along the corners of the models, the imbibing WP should enter an adjacent throat first, but no direct evidence was provided. Here we confirm this hypothesis directly and show our results in a series of snapshots from experiment and corresponding LB simulations in figure 8. Thus, displacement does not occur in the order predicted by the Young–Laplace equation. This is understandable, as all the downstream throats have lower critical entry pressures than the pore body. Therefore, as soon as the critical pore-filling pressure has been exceeded, the WP can enter any of the downstream throats. Hence, in the case of spontaneous imbibition, the pore-body geometry becomes a key factor in determining the order in which the downstream throats fill.
Capillary-filling dynamics in a rectangular throat
A reasonable assumption, then, is that the dynamics of imbibition in the throat prior to the junction ($T_{3}$) may affect the pore-filling sequence and the selection of the displacement pathway. Hence, we turn our attention to the imbibition dynamics. We recently examined the dynamics of capillary filling in two-dimensional channels (Zacharoudiou & Boek 2016), covering both (i) the limit of long times, for both high and low viscosity ratios $r_{\eta }=\eta _{w}/\eta _{nw}$, and (ii) the limit of short times, demonstrating that the free-energy LB method can capture the correct dynamics for the process. We recall that in the limit of high viscosity ratios and long times, when the total time is much larger than the viscous time scale $t_{v}\sim \rho L_{s}^{2}/\eta _{w}$ (Quéré 1997; Stange, Dreyer & Rath 2003), the Lucas–Washburn regime ($l\sim \sqrt{t}$) (Lucas 1918; Washburn 1921) is expected. In the limit of short times two regimes, namely (i) an inertial regime ($l\sim t^{2}$) and (ii) a visco-inertial regime ($l\sim t$), can precede the Lucas–Washburn regime (Dreyer et al. 1994; Stange et al. 2003). The hydraulic diameter $D_{h}=2dw_{t}/(d+w_{t})$ is used as the characteristic length scale $L_{s}$ for evaluating $t_{v}$.
Ichikawa, Hosokawa & Maeda (2004) examined the interface motion driven by capillary action in three-dimensional channels with a rectangular cross-section. Using the analytical solution of Brody et al. (1996) for the velocity profile in a rectangular channel of aspect ratio $\epsilon =d/w_{t}$ and balancing the relevant forces, they estimated the dimensionless relation for the interface position as a function of time
(3.1) $$l^{\ast 2}=\cos \theta ^{eq}(t^{\ast }-1+\text{e}^{-t^{\ast }}).$$
The rescaled time and length are defined as $t^{\ast }=t/t_{c}$ and $l^{\ast }=l/l_{c}$ respectively, where
(3.2a,b) $$t_{c}=\frac{8\epsilon ^{2}\left[1-\displaystyle \frac{2\epsilon }{\pi }\tanh \left(\displaystyle \frac{\pi }{2\epsilon }\right)\right]}{\pi ^{4}}\times t_{v},\quad l_{c}=2t_{c}\sqrt{\frac{(1+\epsilon )\gamma }{\epsilon \rho w_{t}}}.$$
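The rescaling of (3.1)–(3.2) is straightforward to apply in post-processing. The sketch below computes the characteristic scales $t_{c}$ and $l_{c}$ and the predicted interface position; the lattice-unit input parameters are illustrative assumptions, not the exact simulation values.

```python
import numpy as np

def characteristic_scales(d, w_t, rho, eta_w, gamma):
    """Characteristic time and length scales of equation (3.2a,b); the viscous
    time scale t_v uses the hydraulic diameter D_h as the length scale L_s."""
    eps = d / w_t                                # aspect ratio epsilon = d / w_t
    D_h = 2.0 * d * w_t / (d + w_t)              # hydraulic diameter
    t_v = rho * D_h**2 / eta_w                   # viscous time scale
    t_c = (8.0 * eps**2 * (1.0 - (2.0 * eps / np.pi) * np.tanh(np.pi / (2.0 * eps)))
           / np.pi**4) * t_v
    l_c = 2.0 * t_c * np.sqrt((1.0 + eps) * gamma / (eps * rho * w_t))
    return t_c, l_c

def l_star(t_star, theta_eq):
    """Dimensionless interface position, equation (3.1)."""
    return np.sqrt(np.cos(theta_eq) * (t_star - 1.0 + np.exp(-t_star)))

# Illustrative lattice-unit parameters (assumptions, not the values used in the simulations)
t_c, l_c = characteristic_scales(d=30.0, w_t=40.0, rho=1.0, eta_w=0.1, gamma=0.02)
t = np.linspace(0.0, 50.0 * t_c, 200)
l = l_c * l_star(t / t_c, theta_eq=np.radians(30.0))   # dimensional interface position l(t)
```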
Figure 9. (a) Length of the penetrating fluid column in throat $T_{3}$ for simulations with varying viscosity ratio $r_{\eta }=\eta _{w}/\eta _{nw}$ as a function of time in lattice units (l.u.). What appears as a deviation from the $l\sim t^{2}$ regime at early times (first point for $r_{\eta }=5$) is due to the interface initially retracting in the connected reservoir, which can result in interfacial oscillations (Stange et al. 2003; Zacharoudiou & Boek 2016). (b) Length versus time in scaled units according to (3.2). The theoretical prediction of Ichikawa et al. (2004), equation (3.1), is given by the dashed line.
Here, we first compare our results with the theoretical prediction of Ichikawa et al. (2004), equation (3.1), before examining whether the capillary-filling dynamics affects the displacement pathway after the junction. We considered equilibrium contact angles of $\theta ^{eq}=30^{\circ }$ and $16^{\circ }$ (the experimental condition). Starting from a high viscosity ratio $r_{\eta }=\eta _{w}/\eta _{nw}=500$, obtained by choosing relaxation times $\tau _{f,w}=1.5$ and $\tau _{f,nw}=0.502$, we decrease the viscosity ratio to $r_{\eta }=5$ by decreasing $\eta _{w}$ while $\eta _{nw}$ is kept fixed. This choice decreases the rate of viscous dissipation in the WP as the viscosity ratio decreases from $r_{\eta }=500$ to $r_{\eta }=5$, allowing more energy to be available for the interface motion as it enters the junction region.
Figure 9(a) shows the length of the penetrating WP column in throat $T_{3}$ for simulations with varying viscosity ratio $r_{\eta }$ and $\theta ^{eq}=30^{\circ }$. In the limit of long times and high viscosity ratios the Lucas–Washburn regime ($l\sim \sqrt{t}$) (Lucas 1918; Washburn 1921) is clearly observed. The viscous time scale $t_{v}$ is of the order of $10^{4}$ in lattice units (l.u.) for $r_{\eta }=25,50$, thus enabling us also to capture the $l\sim t^{2}$ regime at early times, as the interface accelerates while initially penetrating the throat. This initial acceleration of the interface for $r_{\eta }=5,25$ and 50 is reflected in the increasing dynamic contact angle $\theta _{xz}^{\alpha }$, shown in figure 10(a).
Figure 10. (a) Time variation of the dynamic contact angle in the $xz$ plane (etch depth direction), $\theta _{xz}^{\alpha }$, for the imbibition process in throat $T_{3}$. The behaviour of $\theta _{xy}^{\alpha }$ is the same. The increase observed in $\theta _{xz}^{\alpha }$ at the end of each simulation set is due to the interface reaching the end of throat $T_{3}$ and entering the junction region. The equilibrium value of the contact angle is $\theta ^{eq}=30^{\circ }$ (dashed line). (b) The cosine of $\theta _{xz}^{\alpha }$ plotted against the capillary number $Ca$. Results affected by the interface approaching the junction region were excluded. The solid lines correspond to linear fits of each data set to $Ca$ to enable comparison with the theoretical prediction of Cox (1986) and Sheng & Zhou (1992), equation (3.3). (c) Illustration of the WP column configuration as it imbibes throat $T_{3}$, and the definition of the dynamic contact angle.
Rescaling length and time using (3.2) reveals that our numerical results approach the theoretical prediction of Ichikawa et al. (2004), equation (3.1), in the limit of long times, as shown in figure 9(b). Here we used the equilibrium value of the contact angle, $\theta ^{eq}=30^{\circ }$, although in fact the dynamic contact angle, shown in figure 10(a), varies with time as it depends on the interfacial velocity and $Ca$ (Cox 1986; Sheng & Zhou 1992). Using the dynamic contact angle instead would slightly decrease $l^{\ast }$, improving the agreement further. The dynamic contact angle, in the $xy$ and $xz$ planes, is evaluated by fitting the interface in the central region of the advancing meniscus to a circle, as shown in figure 10(c). The angle of intersection this circle makes with the side walls is $\theta _{xy,xz}^{\alpha }$.
Figure 10(b) depicts the variation of $\cos \theta _{xz}^{\alpha }$ as a function of the capillary number $Ca$. This is in agreement with the theoretically predicted dependence of $\theta _{xz}^{\alpha }$ on $Ca$ (Cox 1986; Sheng & Zhou 1992),
(3.3) $$\cos \theta ^{\alpha }=\cos \theta ^{eq}-Ca\ln (KL_{s}/l_{s}).$$
Here $K$ is a fitting constant, $L_{s}$ is a characteristic length scale of the system and $l_{s}$ is the effective slip length at the contact line. High interfacial velocities translate to high $\theta _{xz}^{\alpha }$, while as the interface slows down $\theta _{xz}^{\alpha }$ approaches the equilibrium value $\theta ^{eq}$. Fitting the results for each simulation set to (3.3) and extrapolating to $Ca=0$ gives $\theta _{Ca=0}^{\alpha }=29.2^{\circ }$, $30.3^{\circ }$, $30.9^{\circ }$ and $29.4^{\circ }$ for $r_{\eta }=5$, 25, 50 and 500 respectively. Hence, this verifies that the dynamic contact angle tends to the correct value for the equilibrium contact angle of $\theta ^{eq}=30^{\circ }$. The deviation increases for the case of $\theta ^{eq}=16^{\circ }$ and is of the order of a few degrees (${\sim}5^{\circ }$). This is expected for very small or large contact angles due to the finite width of the interface, and has been observed for binary and ternary systems as well (Pooley et al. 2009; Semprebon, Krüger & Kusumaatmaja 2016).
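The extrapolation to $Ca=0$ described above amounts to a linear fit of $\cos \theta ^{\alpha }$ against $Ca$, as in (3.3). A minimal sketch of this post-processing step is shown below; the $(Ca,\theta ^{\alpha })$ pairs are synthetic placeholders rather than the simulation data.

```python
import numpy as np

# Synthetic (Ca, dynamic contact angle) measurements -- placeholders standing in for the
# values extracted from one simulation set with an equilibrium angle of about 30 degrees.
Ca = np.array([2e-3, 4e-3, 6e-3, 8e-3, 1e-2])
theta_deg = np.array([31.0, 32.1, 33.0, 34.2, 35.1])

# Equation (3.3): cos(theta^a) = cos(theta^eq) - Ca * ln(K * L_s / l_s) is a straight
# line in Ca, so its intercept gives the Ca -> 0 limit of the contact angle.
slope, intercept = np.polyfit(Ca, np.cos(np.radians(theta_deg)), 1)
theta_Ca0 = np.degrees(np.arccos(intercept))
print(f"extrapolated contact angle at Ca = 0: {theta_Ca0:.1f} deg")
print(f"fitted ln(K*L_s/l_s) = {-slope:.1f}")
```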
We next examined the velocity of the interface front. Decreasing the viscosity ratio by decreasing the viscosity of the WP results in less viscous dissipation in the WP, and hence more energy becomes available for driving the interface in the channel. Figure 11(a) shows the corresponding $Ca=\eta _{w}u/\gamma$. The experimental $Ca$ is of the order of $10^{-2}$ ($r_{\eta }\sim 50$). Ichikawa et al. (2004) estimated the dimensionless velocity
(3.4) $$u^{\ast }=\frac{\text{d}l^{\ast }}{\text{d}t^{\ast }}=\frac{1}{2}\sqrt{\frac{\epsilon \rho w_{t}}{(1+\epsilon )\gamma }}\,u,$$
which, in the limit of long times, approaches
(3.5) $$u^{\ast }=\frac{1}{2}\sqrt{\frac{\cos \theta ^{eq}}{t^{\ast }}}.$$
Although they neglected variations in the advancing dynamic contact angle $\theta ^{\alpha }$, equation (3.5) can still be considered valid in the limit of long times, as the advancing contact angle $\theta ^{\alpha }$ approaches the equilibrium value $\theta ^{eq}$. It is evident that the numerical results demonstrate excellent agreement with the theoretical prediction of Ichikawa et al. (2004) (figure 11b).
Figure 11. (a) The capillary number, $Ca=\eta _{w}u/\gamma$, for the interface front motion in throat $T_{3}$ as a function of time (in l.u.) for varying $r_{\eta }$ and $\theta ^{eq}=30^{\circ }$. At the end of throat $T_{3}$ the velocity decreases significantly, as the interface enters the junction region. (b) Interfacial velocity and time in scaled units. The theoretical prediction of Ichikawa et al. (2004), equation (3.5), is given by the dashed line.
Junction region
Having validated the interface motion in the throat prior to the junction, we next examine the interface motion in the wider pore body. As the interface approaches the end of throat $T_{3}$ and enters the junction region, it decelerates at first, as the driving capillary forces per unit area decrease, and the interface adopts a concave shape in the $xy$-plane. This reduction in the interfacial velocity is evident in figure 11. On the other hand, inertial forces can keep the interface moving in the junction, while the motion is opposed by viscous forces that can damp this forward movement. Hence, an important dimensionless number that can affect the dynamics of the interface entering the junction is the Ohnesorge number
(3.6) $$Oh=\frac{\eta _{w}}{\sqrt{\rho \gamma L_{s}}},$$
which gives the relative importance of viscous forces over inertial and capillary forces. Inertial forces (per unit volume), which scale as ${\sim}\rho u^{2}/L_{s}$, are in the range $10^{-6}$ ($r_{\eta }=5$) to $10^{-9}$ ($r_{\eta }=500$) in l.u. Viscous forces (per unit volume), which scale as ${\sim}\eta _{w}u/L_{s}^{2}$, are of the order of $10^{-8}$ in l.u. for all cases as $r_{\eta }$ increases from 5 to 500. Capillary forces, $F_{cap}=2\gamma \cos \theta ^{\alpha }(d+w_{t})$, are of the order of 1 in l.u. for all cases.
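The sketch below simply evaluates the Ohnesorge number of (3.6) together with the inertial, viscous and capillary force scales quoted above; the lattice-unit input values are illustrative assumptions rather than the exact simulation parameters.

```python
import math

def force_scales(rho, gamma, eta_w, u, d, w_t, theta_a):
    """Ohnesorge number, equation (3.6), and the force scales discussed in the text.
    All quantities are in lattice units; L_s is taken as the hydraulic diameter."""
    L_s = 2.0 * d * w_t / (d + w_t)
    Oh = eta_w / math.sqrt(rho * gamma * L_s)
    inertial = rho * u**2 / L_s                    # ~ rho u^2 / L_s   (per unit volume)
    viscous = eta_w * u / L_s**2                   # ~ eta_w u / L_s^2 (per unit volume)
    capillary = 2.0 * gamma * math.cos(theta_a) * (d + w_t)
    return Oh, inertial, viscous, capillary

# Illustrative lattice-unit values (assumptions for demonstration only)
Oh, f_i, f_v, f_c = force_scales(rho=1.0, gamma=0.02, eta_w=0.1, u=1e-3,
                                 d=30.0, w_t=40.0, theta_a=math.radians(30.0))
print(f"Oh = {Oh:.2e}, inertial ~ {f_i:.1e}, viscous ~ {f_v:.1e}, capillary ~ {f_c:.1e}")
```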
At a second stage, the contact line in the $xy$-plane makes contact with the side walls; see figure 12(a). As this energetically favours a transition from a concave to a convex configuration (from the dashed yellow line to the solid black line configuration), the released surface energy and the transition in interface configuration can lead to interfacial oscillations, which, as demonstrated in figure 12(b), become more pronounced with decreasing $Oh$. Time is normalised by the time $t_{cont}$ at which the interface imbibes throat $T_{2}$. Similar interfacial oscillations have been observed by Ferrari & Lunati (2013), who examined a forced imbibition situation. Here we demonstrate that these oscillations can be generated in a spontaneous imbibition scenario as well, as the interface travels through a narrow throat to a wider pore body, owing to the small $Oh$, with the mechanism behind them being the same. As the interface progresses further into the junction, the kinetic energy that was available is gradually dissipated and the forward movement is due to the action of capillary filling. This becomes clearer when looking at the interface configuration in the junction at a level $z=d/2$ for viscosity ratios $r_{\eta }=5$ ($Oh=6\times 10^{-3}$) and $r_{\eta }=50$ ($Oh=6\times 10^{-2}$), figure 13.
Figure 12. (a) Interface configuration in the $xy$-plane at $z=d/2$ as the interface enters the junction region. When the contact line makes contact with the side walls the interface configuration can change from concave to convex, leading to interfacial oscillations. (b) Velocity of the interface (in l.u.) measured along the dashed line shown in figure 13 for simulations with varying $r_{\eta }$ and Ohnesorge number ($Oh$). Interfacial oscillations are evident, especially as $Oh$ decreases. Results are plotted as a function of dimensionless time $t^{\ast }=t/t_{cont}$, where time is normalised by the time $t_{cont}$ at which the interface imbibes throat $T_{2}$.
Figure 13. Interface configuration in the $xy$-plane ($z=d/2$) in the junction for (a) $r_{\eta }=5$, $Oh=6\times 10^{-3}$ and (b) $r_{\eta }=50$, $Oh=6\times 10^{-2}$.
Figure 14(a) shows the length travelled by the meniscus in the junction, measured along the dashed line in figure 13, for varying $r_{\eta }$ and $Oh$. The similar shape of the curves, with two peaks – labelled (1) and (3) – and two troughs – points (2) and (4) – is due to the transition from a concave to a convex configuration, favoured by the shape of the junction. As expected, increasing the viscosity of the wetting phase (increasing $r_{\eta }$) increases the time it takes for the interface to imbibe into the next downstream throat, $t_{cont}$. For all situations examined here, the fluid imbibes throat $T_{2}$ first, which happens once the interface has progressed a length $l\sim 85$ in the junction region. In other words, geometry dictates the pore-filling sequence, irrespective of the dynamics prior to the selection of the next downstream throat or the filling rules proposed by Lenormand et al. (1983). The same is observed for the simulations reported in figure 14(b), which examines simulations with the same viscosity ratio ($r_{\eta }=50$) and different $Oh$, achieved by varying the viscosity of both phases. We note here that the capillary number is approximately the same in these simulations ($Ca\sim 8\times 10^{-4}$); $t_{cont}$, however, varies significantly, as can be clearly observed, as a consequence of the different rates at which energy is dissipated in the system. Therefore, an important remark here is that $Ca$ by itself is not sufficient to characterise the fluid flow. The dimensionless numbers relevant to the specific type of fluid flow, for example the viscosity ratio $r_{\eta }$ and $Oh$, must be matched in order to characterise the fluid flow dynamics.
Figure 14. Junction region: (a) length versus time (in l.u.) for varying viscosity ratios $r_{\eta }$, measured along the line shown in figure 13(a). The wetting fluid starts imbibing the next downstream throat ($T_{2}$) when $l\sim 85$ and $t=t_{cont}$. (b) Simulation results with $r_{\eta }=50$ and different $Oh$, obtained by varying the viscosities of both phases. Increasing the fluids' viscosities increases $Oh$ and $t_{cont}$. The average $Ca=\eta _{w}\bar{u}/\gamma$ for the interface motion in the junction is $8.1\times 10^{-4}$ ($Oh=6\times 10^{-2}$), $8.4\times 10^{-4}$ ($Oh=2\times 10^{-1}$) and $7.9\times 10^{-4}$ ($Oh=1\times 10^{0}$). The labels (0)–(4) correspond to the snapshots in figure 13(b). Inset: the moment the wetting phase starts imbibing throat $T_{2}$ ($t=t_{cont}$), the interface retracts in the middle of the junction (reduction in $l$).
Examining the imbibition process in the junction in terms of the surface energies, and given that we kept all parameters fixed except for the fluid viscosities, we note that the surface energy released due to wetting, and used to drive the fluid–fluid interface, is the same for all runs. What changes is the rate of viscous dissipation,
(3.7) $$\Phi =2\int _{V}\eta _{i}\,\dot{\epsilon }_{\alpha \beta }\dot{\epsilon }_{\alpha \beta }\,\text{d}V>0,$$
which dictates how much energy is left available as kinetic energy for the fluid motion. Here $\eta _{i}$ ($i=w,nw$) is the local viscosity and $\dot{\epsilon }_{\alpha \beta }=(\partial _{\alpha }u_{\beta }+\partial _{\beta }u_{\alpha })/2$ is the rate-of-strain tensor. For the spontaneous imbibition situation we examine here, the energy balance states (Ferrari & Lunati 2013)
(3.8) $$\frac{\text{d}E_{k}}{\text{d}t}=-\frac{\text{d}F}{\text{d}t}-\Phi ,$$
where $E_{k}$ and $F$ are the kinetic energy and surface free energy respectively. In figure 15(a) the viscous dissipation rate is plotted as a function of time for two of the simulations reported in figure 14(b), covering the interface motion both in throat $T_{3}$ (prior to the junction) and in the junction region. In figure 15(b) the viscous dissipation rate in dimensionless form, $\Phi ^{\ast }=\Phi /(\gamma D_{h}\bar{u})$, is plotted as a function of dimensionless time $t^{\ast }=t/t_{cont}$. As the viscosity of both fluids increases to maintain the same $r_{\eta }$, the total amount of energy dissipated (the area under the curves, i.e. $\int \Phi \,\text{d}t$) increases, and hence the change in kinetic energy decreases. The peak clearly visible for each case corresponds to the time $t=t_{cont}$, when the wetting fluid starts imbibing throat $T_{2}$, resulting in an increase in the fluid velocity.
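As an illustration of how the dissipation rate of (3.7) can be evaluated in post-processing, the sketch below computes $\Phi$ on a uniform grid from a velocity field and a local viscosity field using finite differences. The fields here are random placeholders, not simulation output.

```python
import numpy as np

def dissipation_rate(u, eta, dx=1.0):
    """Viscous dissipation rate, equation (3.7): Phi = 2 * integral_V eta e_ab e_ab dV.
    u   : velocity field of shape (3, nx, ny, nz)
    eta : local viscosity field of shape (nx, ny, nz)
    """
    grads = [np.gradient(u[a], dx) for a in range(3)]    # grads[a][b] = d u_a / d x_b
    phi = 0.0
    for a in range(3):
        for b in range(3):
            e_ab = 0.5 * (grads[a][b] + grads[b][a])     # rate-of-strain tensor component
            phi += 2.0 * np.sum(eta * e_ab * e_ab) * dx**3
    return phi

# Placeholder fields on a small grid, purely for demonstration
rng = np.random.default_rng(0)
u = 1e-3 * rng.standard_normal((3, 16, 16, 16))
eta = np.full((16, 16, 16), 0.1)
print(dissipation_rate(u, eta))
```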
Figure 15. (a) Viscous dissipation rate as a function of time (in l.u.) for the simulations reported in figure 14(b). This covers the fluid–fluid interface motion in throat $T_{3}$ ($t\leqslant t_{1}$) and in the junction region ($t\geqslant t_{1}$). The corresponding values of the viscosities are $\eta _{w}=8.33\times 10^{-2}$, $\eta _{nw}=1.67\times 10^{-3}$ ($Oh=2\times 10^{-1}$) and $\eta _{w}=6.67\times 10^{-1}$, $\eta _{nw}=1.33\times 10^{-2}$ ($Oh=1\times 10^{0}$). (b) Viscous dissipation rate versus time in scaled units.
The change in the total surface energy can be expressed as
(3.9) $$\text{d}F=\gamma \,\text{d}A_{int}+\gamma _{ws}\,\text{d}A_{ws}+\gamma _{ns}\,\text{d}A_{ns},$$
where $\text{d}A_{int}$, $\text{d}A_{ws}$ and $\text{d}A_{ns}$ are the increments of the areas of the fluid–fluid, solid–wetting-phase and solid–non-wetting-phase interfaces respectively, and $\gamma$, $\gamma _{ws}$, $\gamma _{ns}$ are the corresponding surface tensions. Given that the total solid surface area $A_{tot}^{s}=A_{ns}+A_{ws}$ is constant and that $\text{d}A_{ns}=-\text{d}A_{ws}$, then
(3.10) $$\text{d}F=\gamma (\text{d}A_{int}-\cos \theta ^{eq}\,\text{d}A_{ws}).$$
The total surface energy is given by $F=\gamma (A_{int}-\cos \theta ^{eq}A_{ws})+F_{0}$, where $F_{0}=\gamma _{ns}A_{tot}^{s}$ is constant. Plotting $F-F_{0}$ as a function of dimensionless time $t^{\ast }=t/t_{cont}$ in figure 16(a) reveals that the total surface energy decreases monotonically. It also demonstrates that the released energy is approximately the same for all simulations, as expected.
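The fluid–fluid interfacial area entering $F$ (and plotted in figure 16b) can be measured from the simulated order-parameter field with a marching-cubes triangulation, for example via scikit-image as sketched below. The field used here is a synthetic spherical droplet, included only to make the example self-contained.

```python
import numpy as np
from skimage import measure

def surface_energy(phi, A_ws, gamma, gamma_ns, theta_eq, A_tot_s, spacing=(1.0, 1.0, 1.0)):
    """Total surface energy F = gamma*(A_int - cos(theta_eq)*A_ws) + gamma_ns*A_tot_s,
    with the fluid-fluid area A_int measured on the phi = 0 isosurface by marching cubes."""
    verts, faces, _, _ = measure.marching_cubes(phi, level=0.0, spacing=spacing)
    A_int = measure.mesh_surface_area(verts, faces)
    return gamma * (A_int - np.cos(theta_eq) * A_ws) + gamma_ns * A_tot_s

# Synthetic order-parameter field: a droplet of radius 10 in a 40^3 box (demonstration only)
x, y, z = np.mgrid[0:40, 0:40, 0:40]
phi = 10.0 - np.sqrt((x - 20.0)**2 + (y - 20.0)**2 + (z - 20.0)**2)
F = surface_energy(phi, A_ws=0.0, gamma=0.02, gamma_ns=0.01,
                   theta_eq=np.radians(30.0), A_tot_s=6 * 40**2)
print(F)
```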
Figure 16. (a) The total surface energy $F-F_{0}$ (in l.u.) as a function of dimensionless time $t^{\ast }=t/t_{cont}$ in the junction region for varying viscosity ratios $r_{\eta }$. The energy released from wetting the solid surfaces ($\text{d}F<0$) drives the fluid inside the micro-model geometry. (b) The area of the fluid–fluid interface (in l.u.), measured using the marching cubes algorithm.
Figure 17. Wetting-film growth in the side throats, along the dashed lines, for the simulations reported in figures 14(b) and 15 with $r_{\eta }=50$ and (a) $Oh=2\times 10^{-1}$, (b) $Oh=1\times 10^{0}$. A situation with a higher viscous dissipation rate (b) decreases the amount of energy converted to kinetic energy and hence the mean velocity along the $x$-direction. This gives more time for the development of thin wetting films progressing along the corners of the micro-channel. The snapshots correspond to the same length travelled in the $x$-direction and approximately the same normalised time $t^{\ast }=t/t_{cont}$.
Finally, an interesting feature was observed when comparing situations with the same viscosity ratio ($r_{\eta }=50$); see figure 14(b). The longer times $t_{cont}$ observed as viscous dissipation (and $Oh$) increases allow more time for the interface to progress along the corners of the side throats. This is shown in figure 16(b), where we plot the area of the fluid–fluid interface, as well as in figure 17, where we show snapshots of the interface configuration in the junction. In a complex geometry, as in porous media, swelling of these films and their consequent collapse can affect the displacement pathways. Especially in porous media and natural rock formations, the effective diameter varies continuously with length, favouring filling of the narrowest downstream throat as predicted by the filling rules of Lenormand et al. (1983). Here, however, examining a wide range of parameters relevant to spontaneous imbibition dynamics, LB simulations revealed that the pore-filling sequence remained the same, with the pore geometry being the major influencing factor.
It was observed that for both the drainage and quasi-static imbibition experiments the displacement pathways predicted by the Young–Laplace law are obeyed. LB simulations also followed these displacement predictions for drainage. In addition, the theoretical critical pressures for the displacement events were in good agreement with the experimentally determined values. For our spontaneous imbibition experiments, on the other hand, we found that the WP enters an adjacent throat first, owing to the absence of WP film growth. Lenormand (1990) suggested that the imbibing WP should enter an adjacent throat first, but no direct evidence was provided. Here we confirm this hypothesis, for the first time, directly using experiment and corresponding LB simulations. Thus, pore geometry plays a vital role, as it becomes the main deciding factor in the displacement pathways: once the critical pressure of the pore has been exceeded, all downstream throats are able to be filled. This displacement choice was observed both in our spontaneous imbibition experiments and in the corresponding LB simulations for models of the same geometry. The implications of the absence of WP films within the models have not been investigated.
Furthermore, we observe that the displacement of the meniscus in a throat and the scaling of the imbibing fluid column with time can fall in the early-time flow regimes of capillary filling ((i) $l\sim t^{2}$, (ii) $l\sim t$) prior to the Lucas–Washburn regime ((iii) $l\sim \sqrt{t}$). An in-depth investigation of imbibition dynamics using lattice Boltzmann simulations was carried out in Zacharoudiou & Boek (2016). We emphasise here that matching the relevant dimensionless numbers is essential for correctly resolving the multiphase flow dynamics, as $Ca$ by itself is not sufficient to uniquely describe the flow. For example, we need to match the viscosity ratio and the Ohnesorge number, which for fluid flow at the pore scale is typically $Oh\ll 1$. Nevertheless, for the range of parameters examined here, LB simulations revealed that the pore-filling sequence remained the same. Therefore, the main message of the current paper is that the filling order of channels connected to a junction depends primarily on the geometry of the pore body and is largely independent of the details of the dynamic meniscus shape influenced by inertial effects.
In addition, in real porous media microscopic WP films can develop under strong wetting conditions and flow through the crevices and micro-roughness of the pore walls, whereas in our micro-fluidic devices WP films develop at the right-angled wedges. This, however, does not change the main message of the current paper. Several papers in the literature have discussed the development of thin WP films in porous media, including Vizika, Avraam & Payatakes (1994), Constantinides & Payatakes (2000) and Bico & Quéré (2003). Depending on the time scales associated with the general advancement of the fluid–fluid interface ($t_{adv}$) and the time scales for thin films to develop and swell ($t_{film}$), the displacement pathways can be very different. If the fluid flow is fast, as in the spontaneous imbibition situation considered here ($t_{film}>t_{adv}$), then the displacement pathway is mainly determined by the geometry. On the other hand, if the fluid flow is very slow, providing time for thin films to develop and swell, as in the quasi-static imbibition situation considered here ($t_{film}<t_{adv}$), then the displacement pathway and the pore-filling sequence are likely to follow the filling rules proposed by Lenormand et al. (1983), based on the Young–Laplace equation.
The advancement of the invading WP in a real porous medium, under strong wetting conditions, involves two distinct macroscopic fronts (Constantinides & Payatakes 2000; Bico & Quéré 2003). The primary displacement front, which saturates the medium, is due to the piston-type motion of menisci in the main pores. The secondary front is due to the precursor WP films, which propagate ahead of the primary front using the fine structures of the porous material. The distance between the two fronts depends on the capillary number $Ca$ and the viscosity ratio, and may be many times larger than the mean pore length (Constantinides & Payatakes 2000). Under certain conditions the WP films can swell and cause disconnection and entrapment of the NWP; in these situations the displacement pathways can be significantly different from those without WP films. The impact of this phenomenon can be significant: for example, Constantinides & Payatakes (2000) report that these WP films cause a substantial increase of the residual NWP saturation, and Rücker et al. (2015) report on regimes of corner flow and film swelling in real porous media that decrease the connectivity of the NWP. Micro-fluidic devices and simulations can be used to investigate the dynamics and mechanisms involved in two-phase flow at the pore scale and to elucidate the role of wettability, viscosity ratio, $Ca$ or even roughness (using patterned micro-fluidic devices) in the advancement of the displacement fronts.
This work was conducted as part of the Qatar Carbonates and Carbon Storage Research Centre (QCCSRC), jointly funded by Qatar Petroleum, Shell and the Qatar Science and Technology Park. E.M.C. would also like to acknowledge the Engineering and Physical Sciences Research Council (EPSRC) for their funding.
Anderson, D. M., McFadden, G. B. & Wheeler, A. A. 1998 Diffuse-interface methods in fluid mechanics. Annu. Rev. Fluid Mech. 30 (1), 139–165.
Bico, J. & Quéré, D. 2003 Precursors of impregnation. Europhys. Lett. 61 (3), 348–353.
Blunt, M. J., Bijeljic, B., Dong, H., Gharbi, O., Iglauer, S., Mostaghimi, P., Paluszny, A. & Pentland, C. 2013 Pore-scale imaging and modelling. Adv. Water Resour. 51, 197–216.
Blunt, M. J., Jackson, M. D., Piri, M. & Valvatne, P. H. 2002 Detailed physics, predictive capabilities and macroscopic consequences for pore-network models of multiphase flow. Adv. Water Resour. 25 (8–12), 1069–1089.
Blunt, M. J. & Scher, H. 1995 Pore-level modeling of wetting. Phys. Rev. E 52 (6), 6387.
Briant, A. J. & Yeomans, J. M. 2004 Lattice Boltzmann simulations of contact line motion. II. Binary fluids. Phys. Rev. E 69 (3), 031603.
Brody, J. P., Yager, P., Goldstein, R. E. & Austin, R. H. 1996 Biotechnology at low Reynolds numbers. Biophys. J. 71 (6), 3430–3441.
Cahn, J. 1977 Critical-point wetting. J. Chem. Phys. 66, 3367.
Chalbaud, C., Robin, M., Lombard, J.-M., Martin, F., Egermann, P. & Bertin, H. 2009 Interfacial tension measurements and wettability evaluation for geological CO2 storage. Adv. Water Resour. 32 (1), 98–109.
Chalbaud, C. A., Lombard, J.-M. N., Martin, F., Robin, M., Bertin, H. J. & Egermann, P. 2007 Two phase flow properties of brine–CO2 systems in a carbonate core: influence of wettability on Pc and kr. In SPE/EAGE Reservoir Characterization and Simulation Conference. Society of Petroleum Engineers.
Chang, L.-C., Tsai, J.-P., Shan, H.-Y. & Chen, H.-H. 2009 Experimental study on imbibition displacement mechanisms of two-phase fluid using micro model. Environ. Earth Sci. 59 (4), 901–911.
Chatenever, A. & Calhoun, J. C. Jr. 1952 Visual examinations of fluid behavior in porous media – part I. J. Petrol. Tech. 4 (06), 149–156.
Chiquet, P., Broseta, D. & Thibeau, S. 2007 Wettability alteration of caprock minerals by carbon dioxide. Geofluids 7 (2), 112–122.
Constantinides, G. N. & Payatakes, A. C. 2000 Effects of precursor wetting films in immiscible displacement through porous media. Trans. Porous Med. 38 (3), 291–317.
Cox, R. G. 1986 The dynamics of the spreading of liquids on a solid surface. Part 1. Viscous flow. J. Fluid Mech. 168, 169–194.
D'Humières, D., Ginzburg, I., Krafczyk, M., Lallemand, P. & Luo, L.-S. 2002 Multiple-relaxation-time lattice Boltzmann models in three dimensions. Phil. Trans. R. Soc. Lond. A 360, 437.
Dimitrov, D. I., Milchev, A. & Binder, K. 2007 Capillary rise in nanopores: molecular dynamics evidence for the Lucas–Washburn equation. Phys. Rev. Lett. 99, 054501.
Diotallevi, F., Biferale, L., Chibbaro, S., Pontrelli, G., Toschi, F. & Succi, S. 2009 Lattice Boltzmann simulations of capillary filling: finite vapour density effects. Eur. Phys. J. Special Topics 171 (1), 237–243.
Doughty, C., Freifeld, B. M. & Trautz, R. C. 2008 Site characterization for CO2 geologic storage and vice versa: the Frio brine pilot, Texas, USA as a case study. Environ. Geol. 54 (8), 1635–1656.
Dreyer, M., Delgado, A. & Path, H.-J. 1994 Capillary rise of liquid between parallel plates under microgravity. J. Colloid Interface Sci. 163 (1), 158–168.
Dymond, J. H. & Øye, H. A. 1994 Viscosity of selected liquid n-alkanes. J. Phys. Chem. Ref. Data 23, 41–53.
Ferrari, A. & Lunati, I. 2013 Direct numerical simulations of interface dynamics to link capillary pressure and total surface energy. Adv. Water Resour. 57, 19–31.
Giordano, N. & Cheng, J. T. 2001 Microfluid mechanics: progress and opportunities. J. Phys.: Condens. Matter 13 (15), R271.
Gray, F., Cen, J. & Boek, E. S. 2016 Simulation of dissolution in porous media in three dimensions with lattice Boltzmann, finite-volume, and surface-rescaling methods. Phys. Rev. E 94, 043320.
Hecht, M. & Harting, J. 2010 Implementation of on-site velocity boundary conditions for D3Q19 lattice Boltzmann simulations. J. Stat. Mech. 2010 (01), P01018.
Hesse, M. A., Orr, F. M. & Tchelepi, H. A. 2008 Gravity currents with residual trapping. J. Fluid Mech. 611, 35–60.
Hornbrook, J. W., Castanier, L. M. & Pettit, P. A. 1991 Observation of foam/oil interactions in a new high-resolution micromodel. In SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Ichikawa, N., Hosokawa, K. & Maeda, R. 2004 Interface motion of capillary-driven flow in rectangular microchannel. J. Colloid Interface Sci. 280 (1), 155–164.
Ichikawa, N. & Satoda, Y. 1994 Interface dynamics of capillary flow in a tube under negligible gravity condition. J. Colloid Interface Sci. 162 (2), 350–355.
Karadimitriou, N. K. & Hassanizadeh, S. M. 2012 A review of micromodels and their use in two-phase flow studies. Vadose Zone J. 11 (3).
Kavehpour, H. P., Ovryn, B. & McKinley, G. H. 2003 Microscopic and macroscopic structure of the precursor layer in spreading viscous drops. Phys. Rev. Lett. 91 (19), 196104.
Kendon, V. M., Cates, M. E., Pagonabarraga, I., Desplat, J.-C. & Bladon, P. 2001 Inertial effects in three-dimensional spinodal decomposition of a symmetric binary fluid mixture: a lattice Boltzmann study. J. Fluid Mech. 440, 147–203.
Kuhn, H., Försterling, H.-D. & Waldeck, D. H. 2009 Principles of Physical Chemistry. Wiley.
Kumar Gunda, N. S., Bera, B., Karadimitriou, N. K., Mitra, S. K. & Hassanizadeh, S. M. 2011 Reservoir-on-a-chip (ROC): a new paradigm in reservoir engineering. Lab on a Chip 11 (22), 3785–3792.
Kusumaatmaja, H., Pooley, C. M., Girardo, S., Pisignano, D. & Yeomans, J. M. 2008 Capillary filling in patterned channels. Phys. Rev. E 77, 067301.
Lallemand, P. & Luo, L.-S. 2000 Theory of the lattice Boltzmann method: dispersion, dissipation, isotropy, Galilean invariance, and stability. Phys. Rev. E 61, 6546–6562.
Lenormand, R. 1990 Liquids in porous media. J. Phys.: Condens. Matter 2 (S), SA79.
Lenormand, R., Zarcone, C. & Sarr, A. 1983 Mechanisms of the displacement of one fluid by another in a network of capillary ducts. J. Fluid Mech. 135, 337–353.
Levine, S., Lowndes, J., Watson, E. J. & Neale, G. 1980 A theory of capillary rise of a liquid in a vertical cylindrical tube and in a parallel-plate channel: Washburn equation modified to account for the meniscus with slippage at the contact line. J. Colloid Interface Sci. 73 (1), 136–151.
Lucas, R. 1918 Rate of capillary ascension of liquids. Kolloidn. Z. 23 (15), 15–22.
Lyons, W. 2009 Working Guide to Reservoir Engineering. Gulf Professional Publishing.
Mognetti, B. M. & Yeomans, J. M. 2009 Capillary filling in microchannels patterned by posts. Phys. Rev. E 80, 056309.
Morrow, N. R. & Mason, G. 2001 Recovery of oil by spontaneous imbibition. Curr. Opin. Colloid Interface Sci. 6 (4), 321–337.
Øren, P.-E. & Bakke, S. 2003 Reconstruction of Berea sandstone and pore-scale modelling of wettability effects. J. Petrol. Sci. Engng 39 (3), 177–199.
Øren, P.-E., Bakke, S. & Arntzen, O. J. 1998 Extending predictive capabilities to network models. SPE J. 3, 324–336.
Petrash, D. A., Nelson, T. M. & Otto, E. W. 1963 Effect of Surface Energy on the Liquid–Vapor Interface Configuration During Weightlessness. National Aeronautics and Space Administration.
Pooley, C. M., Kusumaatmaja, H. & Yeomans, J. M. 2008 Contact line dynamics in binary lattice Boltzmann simulations. Phys. Rev. E 78, 056709.
Pooley, C. M., Kusumaatmaja, H. & Yeomans, J. M. 2009 Modelling capillary filling dynamics using lattice Boltzmann simulations. Eur. Phys. J. Special Topics 171 (1), 63–71.
Quéré, D. 1997 Inertial capillarity. Europhys. Lett. 39 (5), 533.
Raiskinmäki, P., Shakib-Manesh, A., Jäsberg, A., Koponen, A., Merikoski, J. & Timonen, J. 2002 Lattice-Boltzmann simulation of capillary rise dynamics. J. Stat. Phys. 107 (1–2), 143–158.
Rangel-German, E. R. & Kovscek, A. R. 2006 A micromodel investigation of two-phase matrix-fracture transfer mechanisms. Water Resour. Res. 42, W03401.
Rowlinson, J. S. & Widom, B. 1982 Molecular Theory of Capillarity. Clarendon.
Rücker, M., Berg, S., Armstrong, R. T., Georgiadis, A., Ott, H., Schwing, A., Neiteler, R., Brussee, N., Makurat, A., Leu, L. et al. 2015 From connected pathway flow to ganglion dynamics. Geophys. Res. Lett. 42 (10), 3888–3894.
Saadatpoor, E., Bryant, S. L. & Sepehrnoori, K. 2011 Effect of upscaling heterogeneous domain on CO2 trapping mechanisms. Energy Procedia 4, 5066–5073.
Semprebon, C., Krüger, T. & Kusumaatmaja, H. 2016 Ternary free-energy lattice Boltzmann model with tunable surface tensions and contact angles. Phys. Rev. E 93, 033305.
Sheng, P. & Zhou, M. 1992 Immiscible-fluid displacement: contact-line dynamics and the velocity-dependent capillary pressure. Phys. Rev. A 45 (8), 5694.
Siegel, R. 1961 Transient capillary rise in reduced and zero-gravity fields. Trans. ASME J. Appl. Mech. 28 (2), 165–170.
Sorbie, K. S. & Skauge, A. 2012 Can network modeling predict two-phase flow functions? Petrophysics 53 (06), 401–409.
Stange, M., Dreyer, M. E. & Rath, H. J. 2003 Capillary driven flow in circular cylindrical tubes. Phys. Fluids 15 (9), 2587–2601.
Stukan, M. R., Ligneul, P., Crawshaw, J. P. & Boek, E. S. 2010 Spontaneous imbibition in nanopores of different roughness and wettability. Langmuir 26 (16), 13342–13352.
Taku, I. S., Jessen, K. & Orr, F. M. Jr. 2007 Storage of CO2 in saline aquifers: effects of gravity, viscous, and capillary forces on amount and timing of trapping. Intl J. Greenh. Gas Control 1 (4), 481–491.
Tokunaga, T. K., Wan, J., Jung, J.-W., Kim, T. W., Kim, Y. & Dong, W. 2013 Capillary pressure and saturation relations for supercritical CO2 and brine in sand: high-pressure Pc(Sw) controller/meter measurements and capillary scaling predictions. Water Resour. Res. 49 (8), 4566–4579.
Valvatne, P. H. & Blunt, M. J. 2003 Predictive pore-scale network modeling. In SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Valvatne, P. H. & Blunt, M. J. 2004 Predictive pore-scale modeling of two-phase flow in mixed wet media. Water Resour. Res. 40 (7), W07406.
Vizika, O., Avraam, D. G. & Payatakes, A. C. 1994 On the role of the viscosity ratio during low-capillary-number forced imbibition in porous media. J. Colloid Interface Sci. 165 (2), 386–401.
Washburn, E. W. 1921 The dynamics of capillary flow. Phys. Rev. 17 (3), 273.
Yeomans, J. M. 2006 Mesoscale simulations: lattice Boltzmann and particle algorithms. Physica A 369 (1), 159–184.
Yu, L. & Wardlaw, N. C. 1986 Mechanisms of nonwetting phase trapping during imbibition at slow rates. J. Colloid Interface Sci. 109 (2), 473–486.
Zacharoudiou, I. & Boek, E. S. 2016 Capillary filling and Haines jump dynamics using free energy lattice Boltzmann simulations. Adv. Water Resour. 92, 43–56.
Zhmud, B. V., Tiberg, F. & Hallstensson, K. 2000 Dynamics of capillary rise. J. Colloid Interface Sci. 228 (2), 263–269.
View in content
Figure 1. Schematic drainage/imbibition capillary pressure curve. During primary drainage the capillary pressure $P_{c}$ systematically increases with increasing non-wetting phase saturation to connate water saturation ( $S_{WC}$), whereas the capillary pressure decreases with increasing wetting phase saturation during imbibition.
Figure 3. Quasi-static primary drainage images in a micro-model (Geometry 2) with etch depth $d=45~\unicode[STIX]{x03BC}\text{m}$ and throat widths $w_{t}$: 33, 51, 65, $107~\unicode[STIX]{x03BC}\text{m}$ (WP-white, NWP-grey). Contact angle $26^{\circ }$.
Table 1. Critical pressures for drainage in a micro-model (Geometry 2) with etch depth $d=45~\unicode[STIX]{x03BC}\text{m}$ and unequal throat widths $w_{t}$: 33, 51, 65 and $107~\unicode[STIX]{x03BC}\text{m}$, calculated via (1.1) and experimental data. Contact angle: $26^{\circ }$, interfacial tension: $0.024~\text{N}~\text{m}^{-1}$. Experimental error of $\pm 7$ Pa is attained by the 1 mm accuracy the height of the reservoir can be set to.
Figure 4. Primary dynamic drainage images: (a) experimental images of forced injection of NWP at $0.5~\unicode[STIX]{x03BC}\text{l}~\text{min}^{-1}$. (b) Corresponding LB simulations. The dashed arrow denotes the direction of the flow.
Figure 5. Quasi-static $I_{1}$, $I_{2}$, $I_{2A}$ and $I_{3}$ imbibition experimental results (WP – white, NWP – grey). The critical radii in the plane of the micro-model, $r_{2}=r$, used to calculate the displacement pressures for each case are also illustrated together with $r_{1}=d/2\cos \unicode[STIX]{x1D703}^{eq}$.
Table 2. Critical pressures for each pore-filling event in a micro-model (Geometry 1) with equal throat widths. Contact angle: $28^{\circ }$, etch depth $d=56~\unicode[STIX]{x03BC}\text{m}$, throat width $w_{t}=73~\unicode[STIX]{x03BC}\text{m}$, pore width: $238~\unicode[STIX]{x03BC}\text{m}$, interfacial tension: $0.024~\text{N}~\text{m}^{-1}$. The critical radii in the plane of the micro-model $r_{2}=r$ used in the calculation of the critical pressures is shown in figure 5. The principal radii of curvature, equation (1.1), are $r_{1}=d/2\cos \unicode[STIX]{x1D703}^{eq},r_{2}=r$. Experimental error of $\pm 7$ Pa is attained by the 1 mm accuracy the height of the reservoir can be set to.
Figure 6. (a) Predicted displacement sequence for quasi-static $I_{3}$ imbibition (Geometry 3) using the Young–Laplace equation for a contact angle of $16^{\circ }$, with the narrowest throat filling first via snap off ( $P_{s}$), followed by the body filling through piston-like displacement ( $P_{p}$). (b) Schematic illustration of the snap-off mechanism. Swelling of the corner WP films leads to snap off when the menisci meet on the channel wall (dashed line) and become unstable. Top panel: configuration in the plane of the micro-model at the channel wall. Bottom panel: cross-sectional view in the orthogonal plane.
Figure 8. Spontaneous $I_{3}$ imbibition experimental results ((a), Geometry 3, throat widths: ( $T_{1}$) $w_{t}=27~\unicode[STIX]{x03BC}\text{m}$, ( $T_{2}$) $w_{t}=47~\unicode[STIX]{x03BC}\text{m}$, ( $T_{3}$) $w_{t}=60~\unicode[STIX]{x03BC}\text{m}$, ( $T_{4}$) $w_{t}=100~\unicode[STIX]{x03BC}\text{m}$, etch depth $d=55~\unicode[STIX]{x03BC}\text{m}$), with the corresponding LB simulations (b); here the WP travels down the adjacent throat first. This behaviour is not predicted by the Young–Laplace law (1.1). Equilibrium contact angle $\unicode[STIX]{x1D703}^{eq}=16^{\circ }$, viscosity ratio $r_{\unicode[STIX]{x1D702}}=\unicode[STIX]{x1D702}_{w}/\unicode[STIX]{x1D702}_{nw}=50$.
Figure 9. (a) Length of the penetrating fluid column in throat $T_{3}$ for simulations with varying viscosity ratio $r_{\unicode[STIX]{x1D702}}=\unicode[STIX]{x1D702}_{w}/\unicode[STIX]{x1D702}_{nw}$ as a function of time in lattice units (l.u.). What appears as a deviation from the $l\sim t^{2}$ regime at early times (first point for $r_{\unicode[STIX]{x1D702}}=5$) is due to the interface retracting initially in the connected reservoir which can result in interfacial oscillations (Stange et al.2003; Zacharoudiou & Boek 2016). (b) Length versus time in scaled units according to (3.2). The theoretical prediction of Ichikawa et al. (2004), equation (3.1), is given with the dashed line.
Figure 10. (a) Time variation of the dynamic contact angle in the $xz$ plane (etch depth direction) $\unicode[STIX]{x1D703}_{xz}^{\unicode[STIX]{x1D6FC}}$ for the imbibition process in throat $T_{3}$. The behaviour of $\unicode[STIX]{x1D703}_{xy}^{\unicode[STIX]{x1D6FC}}$ is the same. The increase observed in $\unicode[STIX]{x1D703}_{xz}^{\unicode[STIX]{x1D6FC}}$ at the end of each simulation set is due to the interface reaching the end of throat $T_{3}$ and entering the junction region. The equilibrium value of the contact angle $\unicode[STIX]{x1D703}^{eq}=30^{\circ }$ (dashed line). (b) The cosine of $\unicode[STIX]{x1D703}_{xz}^{\unicode[STIX]{x1D6FC}}$ plotted against the capillary number $Ca$. Results affected by the interface approaching the junction region were excluded. The solid lines correspond to linear fits of each data set to $Ca$ to enable comparison with the theoretical prediction of Cox (1986), Sheng & Zhou (1992), equation (3.3). (c) Illustration of the WP column configuration, as it imbibes throat $T_{3}$, and the definition for the dynamic contact angle.
Figure 11. (a) The capillary number, $Ca=\eta _{w}u/\gamma $, for the interface front motion in throat $T_{3}$ as a function of time (in l.u.) for varying $r_{\eta }$ and $\theta ^{eq}=30^{\circ}$. At the end of throat $T_{3}$ the velocity decreases significantly, as the interface enters the junction region. (b) Interfacial velocity and time in scaled units. The theoretical prediction of Ichikawa et al. (2004), equation (3.5), is given with the dashed line.
Figure 12. (a) Interface configuration in the $xy$-plane at $z=d/2$ as the interface enters the junction region. When the contact line makes contact with the side walls the interface configuration can change from concave to convex leading to interfacial oscillations. (b) Velocity of the interface (in l.u.) measured along the dashed line shown in figure 13 for simulations with varying $r_{\eta }$ and Ohnesorge number ($Oh$). Interfacial oscillations are evident, especially as $Oh$ decreases. Results are plotted as a function of dimensionless time $t^{\ast }=t/t_{cont}$, where time is normalised by the time $t_{cont}$ when the interface imbibes throat $T_{2}$.
Figure 13. Interface configuration in the $xy$-plane ($z=d/2$) in the junction for (a) $r_{\eta }=5$, $Oh=6\times 10^{-3}$ and (b) $r_{\eta }=50$, $Oh=6\times 10^{-2}$.
Figure 14. Junction region: (a) length versus time (in l.u.) for varying viscosity ratios $r_{\eta }$, measured along the line shown in figure 13(a). The wetting fluid starts imbibing the next downstream throat ($T_{2}$) when $l\sim 85$ and $t=t_{cont}$. (b) Simulation results with $r_{\eta }=50$ and different $Oh$, by varying the viscosities of both phases. Increasing fluids' viscosities increases $Oh$ and $t_{cont}$. The average $Ca=\eta _{w}\bar{u}/\gamma $ for the interface motion in the junction is $8.1\times 10^{-4}$ ($Oh=6\times 10^{-2}$), $8.4\times 10^{-4}$ ($Oh=2\times 10^{-1}$) and $7.9\times 10^{-4}$ ($Oh=1\times 10^{0}$). The labels (0)–(4) correspond to the snapshots in figure 13(b). Inset: the moment the wetting phase starts imbibing the throat $T_{2}$ ($t=t_{cont}$) the interface retracts in the middle of the junction (reduction in $l$).
Figure 15. (a) Viscous dissipation rate as a function of time (in l.u.) for the simulations reported in figure 14(b). This covers the fluid–fluid interface motion in the throat $T_{3}$ ($t\leqslant t_{1}$) and in the junction region ($t\geqslant t_{1}$). The corresponding values of viscosities are $\eta _{w}=8.33\times 10^{-2}$, $\eta _{nw}=1.67\times 10^{-3}$ ($Oh=2\times 10^{-1}$) and $\eta _{w}=6.67\times 10^{-1}$, $\eta _{nw}=1.33\times 10^{-2}$ ($Oh=1\times 10^{0}$). (b) Viscous dissipation rate versus time in scaled units.
Figure 16. (a) The total surface energy $F-F_{0}$ (in l.u.) as a function of dimensionless time $t^{\ast }=t/t_{cont}$ in the junction region for varying viscosity ratios $r_{\eta }$. The energy released from wetting the solid surfaces ($\text{d}F<0$) drives the fluid inside the micro-model geometry. (b) The area of the fluid–fluid interface (in l.u.). This is measured using the marching cubes algorithm.
Figure 17. Wetting film growth in the side throats, along the dashed lines, for the simulations reported in figures 14(b) and 15 with $r_{\eta }=50$ and (a) $Oh=2\times 10^{-1}$, (b) $Oh=1\times 10^{0}$. A situation with a higher viscous dissipation rate (b) decreases the amount of energy converted to kinetic energy and hence the mean velocity along the $x$-direction. This gives more time for the development of thin wetting films progressing along the corners of the micro-channel. The snapshots correspond to the same length travelled in the $x$-direction and approximately the same normalised time $t^{\ast }=t/t_{cont}$. | CommonCrawl
Function that converts values to 0 or 1
I'm looking for a function that takes a number between 0 and 1 and converts it into 0 if number is between 0 and 0.5, and into 1 if number is between 0.5 and 1.
So far I've got the first part, f(x) = max(x - 0.5, 0.0), but can't figure out how to continue the formula for the second part.
EDIT: The idea is to write it as one expression to avoid if branching.
algebra-precalculus functions
B. Mehta
$\begingroup$ Can you tell me why $f(x) = 0$ if $0 \le x < .5$ and $f(x) = 1$ if $.5 \le x \le 1$ is not acceptable? That is a function and it's very well defined and calculable. $\endgroup$ – fleablood May 26 '17 at 23:44
$\begingroup$ If the language you are using has a round function, then round(x) will do the trick. BTW, while it may make your code look cleaner (which may be the more important consideration), it'll almost surely be less CPU efficient to invoke a function rather than perform a branch. The only exception I can think of is if round(x) is an inline function. $\endgroup$ – Χpẘ May 27 '17 at 0:49
$\begingroup$ Or probably (int) (x >= 0.5). $\endgroup$ – johnchen902 May 27 '17 at 2:19
$\begingroup$ @Χpẘ You're right regarding CPU, thing is in my case it's GPU and I use GLSL :) $\endgroup$ – user1617735 May 27 '17 at 5:48
You could just use a piecewise function, but if you're looking for a different option, this should work: $$f(x) = \lfloor2x\rfloor$$ If $0<x<0.5$, then $0<2x<1$, so $\lfloor2x\rfloor=0$ and if $0.5<x<1$, then $1<2x<2$, so $\lfloor2x\rfloor=1$.
B. Mehta
$\begingroup$ Damn I can't upvote now $\endgroup$ – Oussama Boussif May 26 '17 at 23:35
$\begingroup$ Don't want to be a spoilsport, but how is the floor function not a piecewise function, and why is the floor function any more or less acceptable than simply the function $f(x) = \begin{cases} 0 & \text{if } 0 \le x < 0.5\\1 & \text{if } .5 \le x \le 1\end{cases}$ $\endgroup$ – fleablood May 26 '17 at 23:50
$\begingroup$ I know. But it is a big bugaboo pet-peeve of mine that beginning students often assume incorrectly that to be a valid function it has to have some expressible formula, and if they can't find it they can't express it as a function. I REALLY want to abuse students of the notion. If you can say "I want a function that returns 0 if x is less than .5 and returns 1 otherwise" then that IS the function. The student HAS found a function and described it perfectly adequately. $\endgroup$ – fleablood May 26 '17 at 23:57
$\begingroup$ @fleablood +1 for the comment. Note: you want "disabuse", not "abuse". $\endgroup$ – Ethan Bolker May 27 '17 at 0:12
$\begingroup$ The usual programmer's trick for this kind of rounding is $f(x) = \left\lfloor x+\frac12\right\rfloor.$ This doesn't require any additional mechanism for the case $x=1.$ It also performs rounding within the intervals between other integers than just $0$ and $1,$ although for this application that's apparently not necessary. $\endgroup$ – David K May 27 '17 at 9:35
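A quick numerical check of the formulas discussed above (Python is assumed here purely for illustration; in GLSL the same idea is available as floor or step(0.5, x)):

import math

def via_floor_2x(x):
    # floor(2x) from the answer above; min() guards the endpoint x = 1.0,
    # where floor(2x) would give 2 (an extra safeguard added here).
    return min(math.floor(2 * x), 1)

def via_round_half(x):
    # the rounding trick floor(x + 1/2) mentioned in the comments
    return math.floor(x + 0.5)

def via_comparison(x):
    # the branch-free comparison (int)(x >= 0.5)
    return int(x >= 0.5)

for x in [0.0, 0.25, 0.49, 0.5, 0.75, 1.0]:
    print(x, via_floor_2x(x), via_round_half(x), via_comparison(x))

All three agree on $[0,1]$ and avoid an explicit if branch.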
"I'm looking for a function that takes a number between 0 and 1 and converts it into 0 if number is between 0 and 0.5, and into 1 if number is between 0.5 and 1."
And you've already found it.
A function is any relation (set of ordered pairs) where the first term is mapped to a unique second term. That is all. A function doesn't have to have a rule or formula to make it calculable. Anything that can be unambiguously described is an acceptable function.
So the function you want is:
$f(x) =\begin{cases} 0 & \text {if } 0 \le x < .5 \\ 1 & \text {if } .5 \le x \le 1\end{cases}$.
That's it! That is all you need to say.
Unless you are asking for a mathematical formula.
But in that case you should ask for a formula, not a function. You already have the function.
It is "a function that takes a number between 0 and 1 and converts it into 0 if number is between 0 and 0.5, and into 1 if number is between 0.5 and 1".
fleablood
$\begingroup$ Either that's a partial function, or the domain is not the reals... $\endgroup$ – Kevin May 27 '17 at 3:53
$\begingroup$ The domain is [0,1]. The function isn't defined anywhere else. The function's domain is implicit in its definition/description. $\endgroup$ – fleablood May 27 '17 at 4:22
I don't know if this answers your question, but could it be the Heaviside theta function $\theta(x-0.5)$? You can write it as the derivative of the maximum function: $$ \theta(x-0.5) = \frac{d}{dx}\max\{x-0.5,0\}= \begin{cases} 0 & \text{ if } x < 0.5\\ 1 & \text{ if } x > 0.5 \end{cases} $$
Marco Molari
$\def\sign{\operatorname{sign}}$
Using another piecewise function, which is often included in a standard set of functions:
\begin{align} \sign x &= \begin{cases} -1 & \text{if } x < 0 \\ \phantom{-}0 & \text{if } x = 0 \\ \phantom{-}1 & \text{if } x > 0 \end{cases} \quad, \\ f(x)&=\tfrac12(\sign(2x-1)+1)\cdot \sign(2x-1) \end{align}
g.kov
Using the Iverson bracket, one could describe the function simply as $[x>0.5],\; x\in [0,1]$.
Frank Vel
| CommonCrawl
Motivation for the rigour of real analysis
I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.
One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary.
Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well.
calculus real-analysis soft-question big-list motivation
Stella Biderman
$\begingroup$ Yours is a very weird question… What would be the motivation for not knowing for sure if a result we think is true is true? It is not that "rigour" is optional. if someone comes and tells you «I know that X holds but I can only prove it non-rigorously» he is just paraphrasing the sentence «I have a hunch». $\endgroup$ – Mariano Suárez-Álvarez Mar 29 '17 at 17:15
$\begingroup$ @MarianoSuárez-Álvarez I find this response a weird comment. Mathematicians were fine doing things with minimal rigor for over 2000 years. From the modern viewpoint that seems weird, but to them it makes perfect sense. In fact, there was an influential camp that was against the work on set theory of Cantor, Whitehead, Russell, et al. at the turn of the 20th century because it was unnecessary abstract nonsense $\endgroup$ – Stella Biderman Mar 29 '17 at 17:29
$\begingroup$ Yes, one can imagine that Cauchy had great fun when Abel observed that his theorem that a series of continuous functions has continuous sum had «exceptions». There are many such examples… The fact that the change towards arithmetization of calculus had opposition is explained not so much in that it moved towards arithmetization as in it was a change. Change is usually resisted. Great mathematicians objected to Hilbert's proof of his basis theorem, too, and to many other things. $\endgroup$ – Mariano Suárez-Álvarez Mar 29 '17 at 17:35
$\begingroup$ @MarianoSuárez-Álvarez What does that even mean? Do you think almost no one did mathematics before 1900? Fermat didn't prove his results, does that make him not a mathematician? What about Ramanujan? Or Euler's work on infinite series? From a modern POV most of Euler's work on infinite series is non-rigorous. So therefore it's not mathematics to you? $\endgroup$ – Stella Biderman Mar 29 '17 at 17:48
$\begingroup$ It's worth mentioning explicitly that analysis had to adapt primarily due to the broadening of what constituted a function, that occurred gradually from approximately Euler (it's a thing we have a single expression for) to approximately Weierstrass (it's a thing that has a value for each real number). If all your functions are implicitly continuous, differentiable, ..., as essentially all of Euler's functions are, you can comfortably manipulate them without worrying if the operations are actually valid, and indeed, you won't notice they could not be valid. $\endgroup$ – Chappers Mar 29 '17 at 20:26
In general, the push for rigor is usually in response to a failure to be able to demonstrate the kinds of results one wishes to. It's usually relatively easy to demonstrate that there exist objects with certain properties, but you need precise definitions to prove that no such object exists. The classic example of this is non-computable problems and Turing Machines. Until you sit down and say "this precisely and nothing else is what it means to be solved by computation" it's impossible to prove that something isn't a computation, so when people start asking "is there an algorithm that does $\ldots$?" for questions where the answer "should be" no, you suddenly need a precise definition. Similar things happened with real analysis.
In real analysis, as mentioned in an excellent comment, there was a shift in what people's conception of the notion of a function was. This broadened conception of a function suddenly allows for a number of famous "counter example" functions to be constructed. These often require a reasonably rigorous understanding of the topic to construct or to analyze. The most famous is the everywhere continuous nowhere differentiable Weierstrass function. If you don't have a very precise definition of continuity and differentiability, demonstrating that that function is one and not the other is extremely hard. The quest for weird functions with unexpected properties and combinations of properties was one of the driving forces in developing precise conceptions of those properties.
Another topic that people were very interested in was infinite series. There are lots of weird results that can crop up if you're not careful with infinite series, as shown by the now famously cautionary theorem:
Theorem (Summation Rearrangement Theorem): Let $a_n$ be a sequence such that $\sum a_n$ converges conditionally. Then for every $x$ there is some $b_n$ that is a reordering of $a_n$ such that $\sum b_n=x$.
This theorem means you have to be very careful dealing with infinite sums, and for a long time people weren't and so started deriving results that made no sense. Suddenly the usual free-wheeling algebraic manipulation approach to solving infinite sums was no longer okay, because sometimes doing so changed the value of the sum. Instead, a more rigorous theory of summation manipulation, as well as concepts such as uniform and absolute convergence had to be developed.
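As a quick illustration (a Python sketch, using the conditionally convergent alternating harmonic series $1-\frac{1}{2}+\frac{1}{3}-\cdots$), a greedy rearrangement can be steered toward any target value:

def rearranged_partial_sum(target, n_terms=100000):
    # Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... so the partial sums approach `target`:
    # take unused positive terms while below the target, negative terms while above it.
    pos = 1            # next unused positive term is 1/pos (odd denominators)
    neg = 2            # next unused negative term is -1/neg (even denominators)
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print(rearranged_partial_sum(1.5))    # close to 1.5
print(rearranged_partial_sum(-2.0))   # close to -2.0, same terms in a different order

The same terms, taken in different orders, head toward completely different values.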
Here's an example of a problem surrounding an infinite product created by Euler:
Consider the following formula: $$x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$ Does this expression even make sense? Assuming it does, does this equal $\sin(x)$ or $\sin(x)e^x$? How can you tell (notice that both functions have the same zeros as this sum, and the same relationship to their derivative)? If it doesn't equal $\sin(x)e^x$ (which it doesn't, it really does equal $\sin(x)$) how can we modify it so that it does?
Questions like this were very popular in the 1800s, as mathematicians were notably obsessed with infinite products and summations. However, most questions of this form require a very sophisticated understanding of analysis to handle (and weren't handled particularly well by the tools of the previous century).
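A numerical sketch (Python, truncating the product at a large but finite number of factors) shows the expression tracking $\sin(x)$ rather than $\sin(x)e^{x}$:

import math

def euler_product(x, n_factors=100000):
    # truncation of x * prod_{n >= 1} (1 - x^2 / (n^2 pi^2))
    prod = x
    for n in range(1, n_factors + 1):
        prod *= 1.0 - x * x / (n * n * math.pi * math.pi)
    return prod

for x in [0.5, 1.0, 2.0]:
    print(x, euler_product(x), math.sin(x), math.sin(x) * math.exp(x))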
Stella Biderman
$\begingroup$ There are many, many problems with the Collingwood article as it tends to regurgitate unthinkingly the Boyer-Grabiner line on Cauchy, for example. For a discussion of these issues see the publications here. $\endgroup$ – Mikhail Katz Jun 26 '17 at 13:46
One good motivating example I have is the Weierstrass Function, which is continuous everywhere but differentiable nowhere. Throughout the 18th and 19th centuries (until this counter example was discovered) it was thought that every continuous function was also (almost everywhere) differentiable and a large number of "proofs" of this assertion were attempted. Without a rigorous definition of concepts like "continuity" and "differentiability", there is no way to analyze this sort of pathological case.
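One standard form of such a function (with Weierstrass's original conditions on the parameters) is $$W(x)=\sum_{n=0}^{\infty} a^{n}\cos\left(b^{n}\pi x\right),\qquad 0<a<1,\ b \text{ a positive odd integer},\ ab>1+\tfrac{3\pi}{2},$$ where the series converges uniformly, so $W$ is continuous, yet the ever finer oscillations destroy differentiability at every point.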
In integration, a number of functions which are not Riemann integrable (see also here) were discovered, paving the way for the Stieltjes and more importantly the Lebesgue theories of integration. Today, the majority of integrals considered in pure mathematics are Lebesgue integrals.
A large number of these cases, especially pertaining to differentiation, integration, and continuity were all motivating factors in establishing analysis on a rigorous footing.
Lastly, the development of rigorous mathematics in the late 19th and early 20th centuries changed the focus of mathematical research. Before this revolution, mathematics--especially analysis--was extremely concrete. One did research into a specific function or class of functions--e.g. Bessel functions, Elliptic functions, etc.--but once rigorous methods exposed the underlying structure of many different classes and types of functions, research began to focus on the abstract nature of these structures. As a result, virtually all research in pure mathematics these days is abstract, and the major tool of abstract research is rigor.
JMJ
$\begingroup$ People long knew that continuous functions could fail to be differentiable at finitely many points: just take any piecewise linear function. The issue is failure of differentiability everywhere, or even just at most points. $\endgroup$ – Ian Mar 29 '17 at 17:39
$\begingroup$ Indeed. But that point is rather trivial and is included in every high school calc course. It is clear that "differentiable" in this context means "differentiable, except possibly at a finite number of points". $\endgroup$ – JMJ Mar 29 '17 at 17:45
$\begingroup$ For a rare book in English (virtually all are in German or French) from the mid 1800s that discusses concerns with rigor from the perspective of that time, see this translation (published in 1843) of Ohm's The Spirit of Mathematical Analysis, and its Relation to a Logical System (original German version published in 1842), especially the translator's Introduction on pp. 1-17. $\endgroup$ – Dave L. Renfro Mar 29 '17 at 18:55
$\begingroup$ I think you're being overly defensive here. Just go edit in an "almost everywhere" and no one has anything to complain about. (I do find as a lecturer that any time I try to gloss over fine print, at some future points it comes back to bite me. Maybe that's apropos for this question.) $\endgroup$ – Daniel R. Collins Mar 30 '17 at 14:53
$\begingroup$ the indirection of petty rigor can sometimes cause more harm than good. (3) $\endgroup$ – JMJ Mar 30 '17 at 14:56
Some other answers have already provided excellent insights. But let's look at the problem this way: Where does the need for rigor originate? I think the answer lies behind one word: counter-intuition.
When someone is developing or creating mathematics, they mostly need to have an intuition about what they are talking about. I don't know much about the history, but for example, I bet the notion of derivative was first introduced because they needed something to express the "speed" or "acceleration" in motion. I mean, first there was a natural phenomenon for which a mathematical concept was developed. This math could perfectly describe the thing they were dealing with, and the results matched with expectation/intuition. But as time passed, some new problems popped out that led to unexpected/counter-intuitive results. So they felt the need to provide some more rigorous (and consequently, more abstract) concepts. This is why the more we develop in math, the harder its intuition becomes.
A classic example, as mentioned in other answers, is the Weierstrass function. Before knowing calculus, we may have some sense about the notion of continuity as well as the slope, and this helps us understand calculus more thoroughly. But Weierstrass function is something unexpected and hard-to-imagine, which leads us to the fact that "sometimes mathematics may not make sense, but it's true!"
Another (somewhat related) example is the Bertrand paradox in probability. In the same manner, we may have some intuition about probability even before studying it. This intuition is helpful in understanding the initial concepts of probability, until we are faced with the Bertrand paradox and be like, Oh... what can we do about that?
There are some good questions on this site and mathoverflow about some counter-intuitive results in various fields of mathematics, some of which were the initial incentive to develop more rigorous math. I recommend taking a look at them as well.
polfosol
You may enjoy these books. The first one is a classic.
The History of the Calculus and Its Conceptual Development, by Carl B. Boyer
A History of Analysis, edited by Hans Niels Jahnke
Analysis by Its History, by Ernst Hairer and Gerhard Wanner
A Radical Approach to Real Analysis, by David Bressoud
lhf
I list here a few excellent texts on real analysis; have a look at them.
1)Understanding Analysis by Stephen Abbott
2)Real Mathematical Analysis by Pugh
3)Counterexamples in analysis by Gelbaum
For a historically inclined yet mathematical treatment, you may try The Calculus Gallery by William Dunham
Coming to your question of why there was a need for epsilon-delta proofs, have a look at this: https://en.m.wikipedia.org/wiki/Non-standard_analysis
danny
You can try to read this https://en.wikipedia.org/wiki/Fluxion to understand the motivations for introducing the definition of limit.
The important example from the indicated web page is:
If the fluent $y$ is defined as $y=t^{2}$ (where $t$ is time) the fluxion (derivative) at $t=2$ is:
$$\dot{y}=\frac{\Delta y}{\Delta t}=\frac{(2+o)^{2}-2^{2}}{(2+o)-2}=\frac{4+4o+o^{2}-4}{2+o-2}=4+o$$
Here $o$ is an infinitely small amount of time and according to Newton, we can now ignore it because of its infinite smallness. He justified the use of $o$ as a non-zero quantity by stating that fluxions were a consequence of movement by an object.
Bishop George Berkeley, a prominent philosopher of the time, slammed Newton's fluxions in his essay The Analyst, published in 1734. Berkeley refused to believe that they were accurate because of the use of the infinitesimal $o$. He did not believe it could be ignored and pointed out that if it was zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus.
Towards the end of his life Newton revised his interpretation of $o$ as infinitely small, preferring to define it as approaching zero, using a definition similar to the concept of limit. He believed this put fluxions back on safe ground. By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents, and they remain in use today.
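In the modern formulation the same computation reads $$\dot{y}\Big|_{t=2}=\lim_{o\to 0}\frac{(2+o)^{2}-2^{2}}{o}=\lim_{o\to 0}(4+o)=4,$$ so the troublesome step of discarding a non-zero $o$ is replaced by taking a limit.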
You can also find information in: "The American Mathematical Monthly, March 1983, Volume 90, Number 3, pp. 185–194."
Daniel R. Collins
asv
$\begingroup$ That doesn't mean just quoting the source verbatim; it's debatable whether you even attributed or made clear that it's a direct quote. $\endgroup$ – pjs36 Mar 29 '17 at 21:14
$\begingroup$ What do $[6]$-$[9]$ refer to? $\endgroup$ – Jason DeVito Mar 29 '17 at 21:24
$\begingroup$ @JasonDeVito The text of the answer was ripped word for word from the link at the top. Those numbers are footnotes in the wikipedia article that got copied. $\endgroup$ – Stella Biderman Mar 30 '17 at 1:54
Rigor is essential in mathematics; there is just no other way to do math than to proceed on the basis of rigorously proven theorems. This does not mean that calculus necessarily needs to be set up in the same way as it currently is. You may rail against the rigorous definition of limits, but you need to come up with an alternative if you don't like the way things are done. There are plenty of examples of imperfections in mathematics as it was practiced in previous centuries; see Stella Biderman's and ABL's answers for details.
A more compelling objection against the way real analysis is done is, i.m.o., that we haven't gone far enough in neutralizing infinitely large or infinitely small objects. So, as is pointed out in asv's answer, the limit procedure does away with the ill-defined fluxions. To make such quantities well defined requires setting up the formalism of non-standard analysis, which is extremely complicated to do. But we still have not excised all infinite objects; take e.g. the set of real numbers, which, as pointed out here, is an extremely complicated matter that's easily glossed over.
Therefore, it's worthwhile to explore the opposite idea where limits are used also at a higher level to get rid all infinite objects. This has not yet been done, but there have been mathematicians who have railed against the idea of infinite sets, which has led to formalisms such as finitism and ultrafinitism. A proper finitist foundation can allow one to always work on a discrete set and then approach the continuum only in a proper scaling limit where both the set is made larger and larger but also the functions that are defined on that set are coarse grained so that they become smooth functions in that continuum limit (we then don't get exotic objects like non-measurable functions). This more elaborate limiting procedure would i.m.o. at least, lead to a much simpler and much more natural set up of real analysis.
I'm in no doubt that the mathematicians who have fallen in love with exotic objects would strongly disagree with me, but it's difficult to explain to an engineering student why he/she has to navigate around all these exotic mathematical artifacts in order to study a topic such as fluid dynamics.
Count Iblis
$\begingroup$ "Rigor is essential in mathematics, there is just no other way to do math than to proceed on the basis of rigorously proven theorems." The OP does not question this and seems to have no problem with it; the question is "for what reason, historically speaking, did old mathematicians decide to be more rigorous". $\endgroup$ – AnoE Mar 30 '17 at 9:21
$\begingroup$ As an alternative to finitism, one could use so-called "nonstandard analysis" to put the "infinitesimal" approach to calculus on a rigorous foundation, and then just get on with doing physics or engineering, since most functions used in applied science are "sufficiently well behaved" not to cause any serious problems. (String theorists may have to look after their mathematical hygiene more carefully than engineers, of course). $\endgroup$ – alephzero Mar 30 '17 at 14:12
$\begingroup$ @alephzero " since most functions used in applied science are "sufficiently well behaved"" Yes, but there is a reason why that's the case and that reason can be formalized which isn't done in analysis. This is because in the 19th century the classical physics notion of a continuum was taken as a model that inspired mathematicians at the time to define the mathematical notion of a continuum. But in modern physics the continuum only exists "effectively" as a result of coarse graining and scaling. $\endgroup$ – Count Iblis Mar 30 '17 at 19:19
$\begingroup$ @AnoE I did refer to the other answers given here that go into the details of problems with the old approaches making rigor necessary. My answer basically boils down to saying that you're ill so you're taking medicines to deal with that illness, the main problems have been cured, but there is still some pain left because you are only taking half the dose. So, you're not getting the full benefit of the rigor. $\endgroup$ – Count Iblis Mar 30 '17 at 19:23
@TheGreatDuck This is a fascinating thread. Let me comment on your aeronautical contribution from the viewpoint of an aeronautical engineer.
There are times when rigor is important and times when it isn't. For the first situation, consider the design of software to undertake air traffic control. Much attention is being given at the moment to "verifiable" algorithms, where it can be "rigorously" established that no possible situation has been overlooked. I am not sure that the standard of rigor would convince a modern analyst, but there is a recognition that intuition can be misleading and that formal analysis has considerable value.
An example where the search for rigor would be misplaced would be the calculation of the airflow by solving the Navier-Stokes equations. Insistence on rigor would require waiting around until the Navier-Stokes equations are shown to be well-posed, which is probably not going to happen soon. Until that day comes, designers will rely on wind-tunnel experiments, flight tests, and decades of accumulated experience. For now, this is MUCH safer than attempting to prove theorems. In fact, if I knew that the designers were trying to rely on theorems I would think very seriously before buying an airline ticket.
The value of rigor depends entirely on what you are trying to do, and how quickly you need to do it. This is true within mathematics as much as in its applications. Without Euler's gleeful nonchalance the pace of mathematical advance would have been greatly delayed.
Philip Roe
$\begingroup$ Exactly! Oddly enough, pragmatic considerations in mathematics are very unstylish in many quarters. $\endgroup$ – paul garrett Apr 1 '17 at 21:55
The purpose of rigor is not so much to make sure something is true. It is to make sure we know what we are actually assuming. If one forces specificity of what is assumed, then new ways to define things may also become clearer.
The parallel axiom of Euclidean geometry is a good example. By forcing ourselves to try and prove it (which we now know was not possible) we gradually realize that other paths to build theory are possible. If we had not bothered to try to prove it and had just taken it for granted, maybe other possibilities would not have occurred to us.
For each added piece of specificity there is always a "in what other ways could this be done?" which has a chance to pop up leading to new theories.
mathreadler
The purpose of "rigor" is to prove that when you claim something in mathematics it actually is legitimately true. If you wish to ask "why" then it is a fairly simple answer:
When we use calculus in machinery, programming, and to solve problems in science at a much larger scale than just a handful of expert scientists* can we risk not being able to perfectly know whether or not mathematics (the fundamental tool we use to measure theoretical scientific concepts) is actually correct? Imagine if the mean value theorem were not always true but we assumed it to be true. What if we built an airplane with an auto-piloting system relying on that theorem being true (maybe it turns upward at full throttle at a certain dropping velocity which we know it must pass through to be considered 'crashing'). We know that position is continuous (obviously we do not teleport) but without proof that the derivative is continuous due to position being a "smooth" curve we don't have a basis to claim velocity is not a step function.
And well, without rigor there would be a risk that the plane would crash.
tl;dr Science relies much more heavily on calculus to do riskier jobs with safety concerns, and so our scrutiny must rise to meet the occasion.
*Of course it wasn't just experts that did calculus in the late 1800's to early 1900's but one has to admit that a college education is more widespread than many decades and/or centuries ago and so more people have that knowledge. Therefore, the number of people using it rises. With that, the need for quality control rises. You wouldn't buy a broken device at the store. Mathematics isn't a product that can be bought, but it's the same way. If it's broken, people won't accept it. Therefore, we scrutinize everything in a much deeper manner than before so that we can be justified in saying "yes, this statement is true!" and people will agree with us. We don't want to be blamed for something failing because we simply ignored cases where an equation wasn't true.
| CommonCrawl
Hayden says that \(\frac{3}{2}\) is a whole number because it does not repeat as a decimal. Is he correct? Why or why not?
Incorrect. Having a terminating (non-repeating) decimal expansion does not make a number whole: \(\frac{3}{2}=1.5\), which is not a whole number.
The manufacturing cost of an air-conditioning unit is $544, and the full-replacement extended warranty costs $113. If the manufacturer sells 506,970 units with extended warranties and must replace 20% of them as a result, how much will the replacement cost be? (Note: Assume manufacturing costs and replacement costs are the same.)
a. $55,158,336
b. $11,457,522
c. $2,129,274
d. $66,717,252
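A worked check of the arithmetic (replacement cost = number of replaced units × manufacturing cost): 20% of 506,970 units is 0.20 × 506,970 = 101,394 units, and 101,394 × $544 = $55,158,336, which matches choice (a).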
At a shelter, 15% of the dogs are puppies. There are 60 dogs at the shelter. How many are puppies? | CommonCrawl |
Fixed Point Theory and Applications
Research | Open | Published: 07 January 2016
Iterative algorithm for a family of split equilibrium problems and fixed point problems in Hilbert spaces with applications
Shenghua Wang1,
Xiaoying Gong2,
Afrah AN Abdou3 &
Yeol Je Cho4,5
Fixed Point Theory and Applications, volume 2016, Article number: 4 (2016)
In this paper, we propose an iterative algorithm and, by using the proposed algorithm, prove some strong convergence theorems for finding a common element of the set of solutions of a finite family of split equilibrium problems and the set of common fixed points of a countable family of nonexpansive mappings in Hilbert spaces. An example is given to illustrate the main result of this paper. As an application, we construct an algorithm to solve an optimization problem.
Throughout this paper, let $\mathbb{R}$ denote the set of all real numbers, $\mathbb{N}$ denote the set of all positive integers, H be a real Hilbert space and C be a nonempty closed convex subset of H. A mapping $S: C\to C$ is said to be nonexpansive if
$$\Vert Sx-Sy\Vert \leq \Vert x-y\Vert $$
for all $x,y\in C$. The set of fixed points of S is denoted by $\operatorname {Fix}(S)$. It is known that the set $\operatorname {Fix}(S)$ is closed and convex.
Let $F: C\times C\to\mathbb{R}$ be a bifunction. The equilibrium problem for F is to find $z\in C$ such that
$$ F(z,y)\geq0 $$
for all $y\in C$. The set of all solutions of the problem (1.1) is denoted by $\operatorname {EP}(F)$, i.e.,
$$\operatorname {EP}(F)=\bigl\{ z\in C: F(z,y)\geq0, \forall y\in C\bigr\} . $$
From the problem (1.1), we can consider some related problems, that is, variational inequality problems, complementarity problems, fixed point problems, game theory and other problems. Also, many problems in physics, optimization, and economics can be reduced to finding a solution of the problem (1.1) (see [1–4]).
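For instance, taking $F(x,y)=\langle Bx, y-x\rangle$ for a mapping $B: C\to H$ turns the problem (1.1) into the classical variational inequality of finding $z\in C$ such that
$$\langle Bz, y-z\rangle\geq0 $$
for all $y\in C$, so in this special case $\operatorname {EP}(F)$ coincides with the solution set $\operatorname {VI}(C,B)$ recalled below.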
In 2005, Combettes and Hirstoaga [5] introduced an iterative scheme of finding a solution of the problem (1.1) under the assumption that $\operatorname {EP}(F)$ is nonempty. Later on, many iterative algorithms were considered to find a common element of the set $\operatorname {Fix}(S)\cap \operatorname {EP}(F)$ (see [6–11]).
Recently, a new class of problems, called split variational inequality problems, has been considered by several authors. In particular, Censor et al. [12] first studied this class of problems.
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. Given the operators $f: H_{1}\to H_{1}$ and $g: H_{2}\to H_{2}$, bounded linear operator $A:H_{1}\to H_{2}$, and nonempty closed convex subsets $C\subset H_{1}$ and $Q\subset H_{2}$, the split variational inequality problem is formulated as follows:
Find a point $x^{*}\in C$ such that
$$\bigl\langle f\bigl(x^{*}\bigr),x-x^{*} \bigr\rangle \geq0 $$
for all $x\in C$ and such that
$$y^{*}=Ax^{*}\in Q \quad \mbox{solves}\quad \bigl\langle g\bigl(y^{*}\bigr),y-y^{*}\bigr\rangle \geq0 $$
for all $y\in Q$.
After investigating the algorithm of Censor et al., Moudafi [13] introduced a new iterative scheme to solve the following split monotone variational inclusion:
Find $x^{*}\in H_{1}$ such that
$$0\in f\bigl(x^{*}\bigr)+B_{1}\bigl(x^{*}\bigr) $$
and such that
$$y^{*}=Ax^{*}\in H_{2} \quad \mbox{solves}\quad 0\in g\bigl(y^{*}\bigr)+B_{2} \bigl(y^{*}\bigr), $$
where $B_{i}: H_{i}\to2^{H_{i}}$ is a set-valued mapping for $i=1,2$.
In 2013, Kazmi and Rizvi [14] considered a new class of split equilibrium problems. Let $F_{1}: C\times C\to\mathbb{R}$ and $F_{2}: Q\times Q\to\mathbb{R}$ be two bifunctions and $A: H_{1}\to H_{2}$ be a bounded linear operator. The split equilibrium problem is as follows:
Find $x^{*}\in C$ such that
$$ F_{1}\bigl(x^{*},x\bigr)\geq0 $$
$$ y^{*}=Ax^{*}\in Q\quad \mbox{solves}\quad F_{2}\bigl(y^{*},y\bigr)\geq0 $$
for all $y\in Q$. The set of all solutions of the problems (1.2) and (1.3) is denoted by Ω, i.e.,
$$\Omega=\bigl\{ z\in C: z\in \operatorname {EP}(F_{1}) \mbox{ such that } Az\in \operatorname {EP}(F_{2})\bigr\} . $$
For more details as regards the split equilibrium problems, refer to [15, 16], in which the author gave an iterative algorithm to find a common element of the sets of solutions of the split equilibrium problem and hierarchical fixed point problem.
In this paper, inspired by the results in [14] and [16], we propose an iterative algorithm to find a common element of the set of solutions for a family of split equilibrium problems and the set of common fixed points of a countable family of nonexpansive mappings. In particular, we use some new methods to prove the main result of this paper. As an application, we propose an iterative algorithm to solve a split variational inequality problem.
Let H be a Hilbert space and C be a nonempty closed convex subset of H. For each point $x\in H$, there exists a unique nearest point of C, denoted by $P_{C}x$, such that
$$\Vert x-P_{C}x\Vert \leq \Vert x-y\Vert $$
for all $y\in C$. Such a $P_{C}$ is called the metric projection from H onto C. It is well known that $P_{C}$ is a firmly nonexpansive mapping from H onto C, i.e.,
$$\Vert P_{C}x-P_{C}y\Vert ^{2}\leq\langle P_{C}x-P_{C}y,x-y\rangle $$
for all $x,y\in H$. Further, for any $x\in H$ and $z\in C$, $z=P_{C}x$ if and only if
$$\langle x-z,z-y \rangle\geq0 $$
for all $y\in C$.
A mapping $B:C\to H$ is called α-inverse strongly monotone if there exists $\alpha>0$ such that
$$\langle x-y,Bx-By\rangle\geq\alpha \Vert Bx-By\Vert ^{2} $$
for all $x,y\in C$. For each $\lambda\in(0, 2\alpha]$, $I-\lambda B$ is a nonexpansive mapping of C into H (see [17]).
Consider the following variational inequality for an inverse strongly monotone mapping B:
Find $u\in C$ such that
$$\langle v-u,Bu \rangle\geq0 $$
for all $v\in C$. The set of solutions of the variational inequality is denoted $\operatorname {VI}(C,B)$. It is well known that
$$u\in \operatorname {VI}(C,B)\quad \Longleftrightarrow \quad u=P_{C}(u-\lambda Bu) $$
for any $\lambda>0$. By this property, we can use a simple method to show that $u\in \operatorname {VI}(C,B)$. In fact, let $\{x_{n}\}$ be a sequence in C with $x_{n}\rightharpoonup u$. If $x_{n}-P_{C}(I-\lambda B)x_{n}\to0$, then, by the demiclosedness principle, it follows that $u=P_{C}(I-\lambda B)$, i.e., $u\in \operatorname {VI}(C,B)$. In Section 3, we use this method to show the conclusions of our main results in this paper.
Let $S: C\to C $ be a mapping. It is well known that S is nonexpansive if and only if the complement $I-S$ is $\frac {1}{2}$-inverse strongly monotone (see [18]). Assume that $\operatorname {Fix}(S)\neq\emptyset$. Then we have
$$ \Vert Sx-x\Vert ^{2}\leq2 \langle x-Sx,x-\hat{x}\rangle $$
for all $x\in C$ and $\hat{x}\in \operatorname {Fix}(S)$, which is obtained directly from
$$\begin{aligned} \Vert x-\hat{x}\Vert ^{2}&\geq \Vert Sx-S\hat{x}\Vert ^{2}=\Vert Sx-\hat{x}\Vert ^{2}=\bigl\Vert Sx-x+(x-\hat {x})\bigr\Vert ^{2} \\ &=\Vert Sx-x\Vert ^{2}+\Vert x-\hat{x}\Vert ^{2}+2 \langle Sx-x, x-\hat{x}\rangle. \end{aligned} $$
Let F be a bifunction of $C\times C$ into $\mathbb{R}$ satisfying the following conditions:
(A1) $F(x, x)=0$ for all $x\in C$;
(A2) F is monotone, i.e., $F(x,y)+F(y,x)\leq0$ for all $x,y\in C$;
(A3) for each $x,y,z\in C$, $\lim_{t\downarrow0}F(tz+(1-t)x,y)\leq F(x,y)$;
(A4) for each $x\in C$, $y\mapsto F(x,y)$ is convex and lower semi-continuous.
Lemma 2.1
Let C be a nonempty closed convex subset of a Hilbert space H and $F: C\times C\to\mathbb{R}$ be a bifunction which satisfies the conditions (A1)-(A4). For any $x\in H$ and $r >0$, define a mapping $T_{r}^{F}:H\to C$ by
$$ T_{r}^{F}(x)=\biggl\{ z\in C:F(z,y)+\frac{1}{r}\langle y-z,z-x\rangle\geq0, \forall y\in C\biggr\} . $$
Then $T_{r}^{F}$ is well defined and the following hold:
$T_{r}^{F}$ is single-valued;
$T_{r}^{F}$ is firmly nonexpansive, i.e., for any $x,y\in H$,
$$\bigl\Vert T_{r}^{F}x-T_{r}^{F}y \bigr\Vert ^{2}\leq\bigl\langle T_{r}^{F}x-T_{r}^{F}y,x-y \bigr\rangle ; $$
$\operatorname {Fix}(T_{r}^{F})=\operatorname {EP}(F)$;
$\operatorname {EP}(F)$ is closed and convex.
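For a concrete sense of the resolvent $T_{r}^{F}$, note that if $F\equiv0$, then the defining inequality above reduces to $\langle y-z,z-x\rangle\geq0$ for all $y\in C$, which is exactly the characterization of the metric projection; hence $T_{r}^{F}=P_{C}$ for every $r>0$ in this special case.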
Lemma 2.2
Let $F: C\times C\to\mathbb{R}$ be a bifunction satisfying the conditions (A1)-(A4). Let $T^{F}_{r}$ and $T^{F}_{s}$ be defined as in Lemma 2.1 with $r,s>0$. Then, for any $x,y\in H$, one has
$\bigl\Vert T^{F}_{r}x-T^{F}_{s}y \bigr\Vert \leq \Vert x-y\Vert + \biggl\vert 1-\frac{s}{r} \biggr\vert \bigl\Vert T^{F}_{r}x-x\bigr\Vert .$
Remark 2.1
In [20], some other conditions are required besides the conditions (A1)-(A4). In fact, the conditions (A1)-(A4) are enough for Lemma 2.2. For the proof, refer to [9, 20].
Lemma 2.3
Let $F: C\times C\to\mathbb{R}$ be a bifunction satisfying the conditions (A1)-(A4) and $T^{F}_{s}$, $T^{F}_{t}$ be defined as in Lemma 2.1 with $s,t>0$. Then the following holds:
$$\bigl\Vert T_{s}^{F}x-T_{t}^{F}x \bigr\Vert ^{2}\leq\frac{s-t}{s}\bigl\langle T_{s}^{F}x-T_{t}^{F}x, T_{s}^{F}x-x\bigr\rangle $$
for all $x\in H$.
Lemma 2.4
Let $\{a_{n}\}$ be a sequence in $[0,1]$ such that $\sum_{n=1}^{\infty}a_{n}=1$. Then we have the following:
$$\Biggl\Vert \sum_{n=1}^{\infty}a_{n}x_{n} \Biggr\Vert ^{2}\leq\sum _{n=1}^{\infty}a_{n}\Vert x_{n} \Vert ^{2} $$
for any bounded sequence $\{x_{n}\}$ in a Hilbert space H.
Lemma 2.5 (Demiclosedness principle)
Let T be a nonexpansive mapping on a closed convex subset C of a real Hilbert space H. Then $I-T$ is demiclosed at any point $y\in H$, that is, if $x_{n}\rightharpoonup x$ and $x_{n}-Tx_{n}\to y\in H$, then $x-Tx=y$.
Lemma 2.6
Assume that $\{a_{n}\}$ is a sequence of nonnegative numbers such that
$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\delta_{n} $$
for each $n\geq0$, where $\{\gamma_{n}\}$ is a sequence in $(0,1)$ and $\{\delta_{n}\}$ is a sequence in $\mathbb{R}$ such that
$\sum_{n=1}^{\infty}\gamma_{n}=\infty$;
$\limsup_{n\to\infty}\delta_{n}/\gamma_{n}\leq0$ or $\sum_{n=1}^{\infty} \vert \delta_{n}\vert <\infty$.
Then $\lim_{n\to\infty}a_{n}=0$.
Lemma 2.7 [23, 24]
Let U and V be nonexpansive mappings. For $\sigma\in(0,1)$, define $S=\sigma U+(1-\sigma)V$. Suppose that $\operatorname {Fix}(U)\cap \operatorname {Fix}(V)\neq \emptyset$. Then $\operatorname {Fix}(U)\cap \operatorname {Fix}(V)=\operatorname {Fix}(S)$.
From [24] we can see that Lemma 2.7 holds whenever U and V are self or non-self mappings.
Lemma 2.8
Let C be a nonempty closed convex subset of a Hilbert space H and $T: C\to H$ be a nonexpansive mapping with $\operatorname {Fix}(T)\neq\emptyset$. Let $P_{C}$ be the metric projection from H onto C. Then $\operatorname {Fix}(P_{C}T)=\operatorname {Fix}(T)=\operatorname {Fix}(TP_{C})$.
Remark 2.2
Let $S_{1},S_{2}: C\to H$ be two nonexpansive mappings with $\operatorname {Fix}(S_{1})\cap \operatorname {Fix}(S_{2})\neq\emptyset$. Let $\sigma\in(0,1)$ and define the mapping $S: C\to H$ by $S=\sigma S_{1}+(1-\sigma)S_{2}$. By Lemmas 2.7 and 2.8, it is easy to see that $\operatorname {Fix}(P_{C}S)=\operatorname {Fix}(P_{C}S_{1})\cap \operatorname {Fix}(P_{C}S_{2})$.
From Remark 2.2, we get the following result.
Lemma 2.9
Let $\{B_{i}\}_{i=1}^{N}$ be a finite family of inverse strongly monotone mappings from C to H with the constants $\{\beta_{i}\}_{i=1}^{N}$ and assume that $\bigcap_{i=1}^{N}\operatorname {VI}(C,B_{i})\neq\emptyset$. Let $B=\sum_{i=1}^{N} \alpha_{i}B_{i}$ with $\{\alpha_{i}\}_{i=1}^{N}\subset(0,1)$ and $\sum_{i=1}^{N}\alpha_{i}=1$. Then $B: C\to H$ is a β-inverse strongly monotone mapping with $\beta=\min\{\beta_{1},\ldots, \beta_{N}\}$ and $\operatorname {VI}(C,B)=\bigcap_{i=1}^{N} \operatorname {VI}(C,B_{i})$.
It is easy to show that B is a β-inverse strongly monotone mapping. In fact, for all $x,y\in C$, by Lemma 2.4, we have
$$\begin{aligned} \beta \Vert Bx-By\Vert ^{2}&=\beta\Biggl\Vert \sum _{i=1}^{N}\alpha_{i} (B_{i}x- B_{i}y)\Biggr\Vert ^{2} \\ &\leq\beta\sum_{i=1}^{N} \alpha_{i}\Vert B_{i}x-B_{i}y\Vert ^{2} \\ &\leq \sum_{i=1}^{N}\alpha_{i} \beta_{i}\Vert B_{i}x-B_{i}y\Vert ^{2} \\ &\leq\sum_{i=1}^{N}\alpha_{i} \langle x-y, B_{i}x-B_{i}y\rangle \\ &=\langle x-y, Bx-By\rangle, \end{aligned} $$
which implies that B is a β-inverse strongly monotone mapping.
Next, we prove that $\operatorname {VI}(C,B)=\bigcap_{i=1}^{N} \operatorname {VI}(C,B_{i})$. Obviously, we have
$$\bigcap_{i=1}^{N} \operatorname {VI}(C,B_{i})\subset \operatorname {VI}(C,B). $$
Now, for any $w\in \operatorname {VI}(C,B)$, we show that $w\in\bigcap_{i=1}^{N} \operatorname {VI}(C,B_{i})$. Take a constant $\lambda\in(0,2\beta]$. Then $I-\lambda B$ is nonexpansive. Note that $I-\lambda B=\sum_{i=1}^{N}\alpha_{i}(I-\lambda B_{i})$ and each $I-\lambda B_{i}$ is nonexpansive. From Remark 2.2, it follows that
$$\operatorname {Fix}\bigl(P_{C}(I-\lambda B)\bigr)=\bigcap_{i=1}^{N} \operatorname {Fix}\bigl(P_{C}(I-\lambda B_{i})\bigr). $$
Thus we have
$$w\in \operatorname {VI}(C,B)\quad \Longleftrightarrow\quad w=P_{C}(I-\lambda B)w=P_{C}(I-\lambda B_{i})w \quad \Longleftrightarrow\quad w\in \operatorname {VI}(C,B_{i}) $$
for each $i=1,\ldots, N$. Therefore, $w\in\bigcap_{i=1}^{N} \operatorname {VI}(C,B_{i})$. This completes the proof. □
Main result
Now, we give the main results of this paper.
Theorem 3.1
Let $H_{1}$, $H_{2}$ be two real Hilbert spaces and $C\subset H_{1}$, $Q\subset H_{2}$ be nonempty closed convex subsets. Let $A_{i}: H_{1}\to H_{2}$ be a bounded linear operator for each $i=1,\ldots ,N_{1}$ with $N_{1}\in\mathbb{N}$ and $B_{i}: C\to H_{1}$ be a $\beta_{i} $-inverse strongly monotone operator for each $i=1,\ldots,N_{2}$ with $N_{2}\in\mathbb{N}$. Assume that $F:C\times C\to\mathbb{R}$ satisfies (A1)-(A4), $F_{i}: Q\times Q\to\mathbb{R}$ ($i=1,\ldots, N_{1}$) satisfies (A1)-(A4). Let $\{S_{n}\}$ be a countable family of nonexpansive mappings from C into C. Assume that $\Theta=\Gamma \cap\Omega\cap \operatorname {VI}\neq\emptyset$, where $\Gamma=\bigcap_{n=1}^{\infty} \operatorname {Fix}(S_{n})$, $\Omega=\{z\in C: z\in \operatorname {EP}(F)\ \textit{and}\ A_{i}z\in \operatorname {EP}(F_{i}), i=1,\ldots ,N_{1} \}$ and $\operatorname {VI}=\bigcap_{i=1}^{N_{2}}\operatorname {VI}(C,B_{i})$. Let $\{\gamma_{1},\ldots, \gamma_{N_{2}}\}\subset (0,1)$ with $\sum_{i=1}^{N_{2}}\gamma_{i}=1$. Take $v,x_{1}\in C$ arbitrarily and define an iterative scheme in the following manner:
$$ \textstyle\begin{cases} u_{i,n}=T_{r_{n}}^{F}(I-\gamma A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i}) x_{n}, \quad i=1,\ldots , N_{1},\\ y_{n}=P_{C} (I-\lambda_{n} (\sum_{i=1}^{N_{2}}\gamma_{i} B_{i} ) ) (\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} ),\\ x_{n+1}=\alpha_{n} v+ \sum_{i=1}^{n}( \alpha_{i-1} - \alpha_{i} ) S_{i}y_{n}, \end{cases} $$
for each $i=1,\ldots,N_{1}$ and $n\in\mathbb{N}$, where $\{r_{n}\}\subset (r,\infty)$ with $r>0$, $\{\lambda_{n}\}\subset(0,2\beta)$ with $\beta =\min\{\beta_{1},\ldots,\beta_{N_{2}}\}$ and $\gamma\in(0,1/L^{2}]$, $L=\max\{L_{1},\ldots,L_{N_{1}}\}$, where $L_{i}$ is the spectral radius of the operator $A_{i}^{*}A_{i}$ and $A_{i}^{*}$ is the adjoint of $A_{i}$ for each $i\in\{ 1,\ldots, N_{1}\}$, and $\{\alpha_{n}\}\subset(0,1)$ is a strictly decreasing sequence. Let $\alpha_{0}=1$ and assume that the control sequences $\{\alpha_{n}\}$, $\{\lambda_{n}\}$, $\{r_{n}\}$ satisfy the following conditions:
$\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha _{n}=\infty$;
$\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty$ and $\sum_{n=1}^{\infty} \vert \lambda_{n+1}-\lambda_{n}\vert <\infty$;
$\lim_{n\to\infty} \lambda_{n}=\lambda>0$.
Then the sequence $\{x_{n}\}$ defined by (3.1) converges strongly to a point $z=P_{\Theta}v$.
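For intuition only, the following is a minimal numerical sketch of the scheme (3.1) in a toy setting; everything below is an illustrative assumption rather than the general hypotheses of Theorem 3.1: $H_{1}=H_{2}=\mathbb{R}^{2}$, $N_{1}=N_{2}=1$, $F\equiv0$ and $F_{1}\equiv0$ (so the resolvents reduce to the projections $P_{C}$ and $P_{Q}$), $B_{1}x=x-a$ (which is 1-inverse strongly monotone), and every $S_{n}$ equal to the identity, so the last line of (3.1) collapses to $x_{n+1}=\alpha_{n}v+(1-\alpha_{n})y_{n}$.

import numpy as np

# Toy data (assumed for illustration): C = [0,1]^2, Q = [0,2]^2, A a 2x2 matrix,
# B(x) = x - a with a = (2, 0.5), anchor point v = (0, 0), starting point x_1 = (0.5, 0.5).
def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)               # metric projection onto the box [lo, hi]^2

A = np.array([[1.0, 1.0], [0.0, 1.0]])
a = np.array([2.0, 0.5])
v = np.array([0.0, 0.0])
x = np.array([0.5, 0.5])

L = np.max(np.linalg.eigvalsh(A.T @ A))     # spectral radius of A^T A
gamma = 1.0 / L**2                          # gamma in (0, 1/L^2]
lam = 0.5                                   # lambda_n = 1/2 lies in (0, 2*beta) with beta = 1

for n in range(1, 5001):
    alpha = 1.0 / (n + 1)                   # strictly decreasing, alpha_n -> 0, sum alpha_n = infinity
    Ax = A @ x
    # u_n = T^F (I - gamma A*(I - T^{F_1}) A) x_n, which reduces to projections here
    u = proj_box(x - gamma * A.T @ (Ax - proj_box(Ax, 0.0, 2.0)), 0.0, 1.0)
    # y_n = P_C((I - lambda_n B) u_n) with B(u) = u - a
    y = proj_box(u - lam * (u - a), 0.0, 1.0)
    # x_{n+1} = alpha_n v + (1 - alpha_n) y_n since every S_i is the identity
    x = alpha * v + (1.0 - alpha) * y

print(x)                                    # approaches (1.0, 0.5) for this toy data

In this toy instance $Az\in Q$ for every $z\in C$, so $\Theta$ reduces to the single point $P_{C}(a)=(1,0.5)$, which is what the iterates approach.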
We first show that, for each $i=1,\ldots, N_{1}$ and $n\in \mathbb{N}$, $A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i}$ is a $\frac {1}{2L_{i}^{2}}$-inverse strongly monotone mapping. In fact, since $T_{r_{n}}^{F_{i}}$ is (firmly) nonexpansive and $I-T_{r_{n}}^{F_{i}}$ is $\frac {1}{2}$-inverse strongly monotone, we have
$$\begin{aligned} &\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}x-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}y\bigr\Vert ^{2} \\ &\quad =\bigl\langle A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}}\bigr) (A_{i}x-A_{i}y), A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr) (A_{i}x-A_{i}y)\bigr\rangle \\ &\quad =\bigl\langle (I-T_{r_{n}}) (A_{i}x-A_{i}y),A_{i}A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr) (A_{i}x-A_{i}y) \bigr\rangle \\ &\quad \leq L_{i}^{2}\bigl\langle \bigl(I-T_{r_{n}}^{F_{i}} \bigr) (A_{i}x-A_{i}y), \bigl(I-T_{r_{n}}^{F_{i}} \bigr) (A_{i}x-A_{i}y)\bigr\rangle \\ &\quad = L_{i}^{2}\bigl\Vert \bigl(I-T_{r_{n}}^{F_{i}} \bigr) (A_{i}x-A_{i}y)\bigr\Vert ^{2} \\ &\quad \leq2L_{i}^{2}\bigl\langle A_{i}x-A_{i}y, \bigl(I-T_{r_{n}}^{F_{i}}\bigr) (A_{i}x-A_{i}y) \bigr\rangle \\ &\quad =2L_{i}^{2} \bigl\langle x-y, A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr) A_{i}x-A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr) A_{i} y\bigr\rangle \end{aligned}$$
for all $x,y\in H_{1}$, which implies that $A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i}$ is a $\frac {1}{2L_{i}^{2}}$-inverse strongly monotone mapping. Note that $\gamma\in (0,\frac{1}{L_{i}^{2}}]$. Thus $I-\gamma A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i}$ is nonexpansive for each $i=1,\ldots, N_{1}$ and $n\in\mathbb{N}$.
Now, we complete the proof by the next steps.
Step 1. $\{x_{n}\}$ is bounded.
Let $p\in\Theta$. Then $p=T_{r_{n}}^{F}p$ and $(I-\gamma A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i})p=p$. Thus we have
$$\begin{aligned} \Vert u_{i,n}-p\Vert &=\bigl\Vert T_{r_{n}}^{F} \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)x_{n}-T_{r_{n}}^{F}\bigl(I- \gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)p\bigr\Vert \\ &\leq\bigl\Vert \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)x_{n}-\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr)p\bigr\Vert \\ &\leq \Vert x_{n}-p\Vert . \end{aligned}$$
Let $B=\sum_{i=1}^{N_{2}}\gamma_{i} B_{i}$. Then B is a β-inverse strongly monotone mapping. Since $\{\lambda_{n}\}\subset(0,2\beta)$, $I-\lambda_{n} B $ is nonexpansive. Thus from (3.2), we have
$$\begin{aligned} \Vert y_{n}-p\Vert &= \Biggl\Vert P_{C}(I- \lambda_{n} B) \frac{1}{N_{1}} \sum_{i=1}^{N_{1}}u_{i,n}-P_{C}(I- \lambda_{n} B)p \Biggr\Vert \\ &\leq \Biggl\Vert (I-\lambda_{n} B)\frac{1}{N_{1}} \sum _{i=1}^{N_{1}}u_{i,n} -(I-\lambda_{n} B)p \Biggr\Vert \\ &\leq \Biggl\Vert \frac{1}{N_{1}} \sum_{i=1}^{N_{1}}u_{i,n}-p \Biggr\Vert \\ &\leq\frac{1}{N_{1}} \sum_{i=1}^{N_{1}} \Vert u_{i,n}-p\Vert \\ &\leq \Vert x_{n}-p\Vert . \end{aligned}$$
Thus from (3.3), it follows that
$$\begin{aligned} \Vert x_{n+1}-p\Vert &=\Biggl\Vert \alpha_{n}(v-p)+ \sum_{i=1}^{n}(\alpha_{i-1}-\alpha _{i}) (S_{i}y_{n}-S_{i}p) \Biggr\Vert \\ &\leq\alpha_{n}\Vert v-p\Vert + \sum_{i=1}^{n}( \alpha_{i-1}-\alpha_{i})\Vert y_{n}- p \Vert \\ &\leq\alpha_{n}\Vert v-p\Vert + \sum_{i=1}^{n}( \alpha_{i-1}-\alpha_{i})\Vert x_{n}- p \Vert \\ &=\alpha_{n}\Vert v-p\Vert + (1-\alpha_{n})\Vert x_{n}- p \Vert \\ &\leq \max\bigl\{ \Vert v-p\Vert ,\Vert x_{n}-p\Vert \bigr\} \end{aligned} $$
for all $n\in\mathbb{N}$, which implies that $\{x_{n}\}$ is bounded and so are $\{u_{i,n}\}$ ($i=1,\ldots,N_{1}$) and $\{y_{n}\}$.
Step 2. $\lim_{n\to\infty} \Vert x_{n+1}-x_{n}\Vert =0$ and $\lim_{n\to\infty }\Vert u_{i,n+1}-u_{i,n}\Vert =0$ for each $i=1,\ldots, N_{1}$.
Since the mappings $I-\gamma A^{*}(I-T_{r_{n}}^{F_{i}})A$ are nonexpansive, by Lemmas 2.2 and 2.3, we have
$$\begin{aligned} & \Vert u_{i,n+1}-u_{i,n}\Vert \\ &\quad=\bigl\Vert T_{r_{n+1}}^{F}\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n+1}}^{F_{i}}\bigr)A_{i}\bigr)x_{n+1}-T_{r_{n}}^{F} \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)x_{n}\bigr\Vert \\ &\quad\leq\bigl\Vert \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n+1}}^{F_{i}} \bigr)A_{i}\bigr)x_{n+1}-\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr)x_{n} \bigr\Vert \\ &\qquad{}+\frac{\vert r_{n+1}-r_{n}\vert }{r_{n+1}}\bigl\Vert T_{r_{n+1}}^{F}\bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n+1}}^{F_{i}}\bigr)A_{i} \bigr)x_{n+1}-\bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n+1}}^{F_{i}} \bigr)A_{i}\bigr)x_{n+1}\bigr\Vert \\ &\quad\leq \Vert x_{n+1}-x_{n}\Vert +\bigl\Vert \bigl(I- \gamma A_{i}^{*}\bigl(I-T_{r_{n+1}}^{F_{i}} \bigr)A_{i}\bigr)x_{n}-\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr)x_{n} \bigr\Vert \\ &\qquad{}+\frac{\vert r_{n+1}-r_{n}\vert }{r_{n+1}}\delta_{n+1} \\ &\quad=\Vert x_{n+1}-x_{n}\Vert +\bigl\Vert \gamma A_{i}^{*}\bigl( T_{r_{n+1}}^{F_{i}} A_{i} x_{n}- T_{r_{n}}^{F_{i}} A_{i} x_{n} \bigr)\bigr\Vert +\frac{\vert r_{n+1}-r_{n}\vert }{r_{n+1}}\delta_{n+1} \\ &\quad\leq \Vert x_{n+1}-x_{n}\Vert + \gamma\bigl\Vert A_{i}^{*}\bigr\Vert \biggl[\frac {\vert r_{n+1}-r_{n}\vert }{r_{n+1}} \bigl\vert \bigl\langle T_{r_{n+1}}^{F_{i}} A_{i} x_{n}- T_{r_{n}}^{F_{i}} A_{i} x_{n}, T_{r_{n+1}}^{F_{i}} A_{i} x_{n}- A_{i} x_{n}\bigr\rangle \bigr\vert \biggr]^{\frac{1}{2}} \\ &\qquad{} +\frac{\vert r_{n+1}-r_{n}\vert }{r }\delta_{n+1} \\ &\quad\leq \Vert x_{n+1}-x_{n}\Vert + \gamma\bigl\Vert A_{i}^{*}\bigr\Vert \biggl[\frac{\vert r_{n+1}-r_{n}\vert }{r }\sigma_{n+1} \biggr]^{\frac{1}{2}} +\frac{\vert r_{n+1}-r_{n}\vert }{r }\delta _{n+1} \\ &\quad\leq \Vert x_{n+1}-x_{n}\Vert +\eta_{i,n+1}, \end{aligned}$$
$$\begin{aligned}& \sigma_{n+1}=\sup_{n\in\mathbb{N}} \bigl\vert \bigl\langle T_{r_{n+1}}^{F_{i}} A_{i} x_{n}- T_{r_{n}}^{F_{i}} A_{i} x_{n}, T_{r_{n+1}}^{F_{i}} A_{i} x_{n}- A_{i} x_{n}\bigr\rangle \bigr|, \\& \delta_{n+1}=\sup_{n\in\mathbb{N}}\bigl\Vert T_{r_{n+1}}^{F}\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n+1}}^{F_{i}}\bigr)A_{i}\bigr)x_{n+1}- \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n+1}}^{F_{i}} \bigr)A_{i}\bigr)x_{n+1}\bigr\Vert , \end{aligned}$$
$$\eta_{i,n+1}=\gamma\bigl\Vert A_{i}^{*}\bigr\Vert \biggl[ \frac{\vert r_{n+1}-r_{n}\vert }{r }\sigma _{n+1} \biggr]^{\frac{1}{2}}+\frac{\vert r_{n+1}-r_{n}\vert }{r } \delta_{n+1}. $$
$$ \begin{aligned}[b] & \Biggl\Vert (I-\lambda_{n+1}B)\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}u_{i,n+1} -(I- \lambda_{n}B)\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} \Biggr\Vert \\ &\quad= \Biggl\Vert (I-\lambda_{n+1}B)\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}u_{i,n+1} -(I-\lambda_{n+1}B) \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} +(\lambda _{n}-\lambda_{n+1})Bw_{n} \Biggr\Vert \\ &\quad\leq \Biggl\Vert (I-\lambda_{n+1}B)\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}u_{i,n+1} -(I-\lambda_{n+1}B) \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} \Biggr\Vert \\ &\qquad{} +\vert \lambda_{n}-\lambda_{n+1}\vert \Vert Bw_{n}\Vert \\ &\quad\leq\frac{1}{N_{1}}\sum_{i=1}^{N_{1}} \Vert u_{i,n+1}-u_{i,n}\Vert +\vert \lambda _{n}- \lambda_{n+1}\vert \Vert Bw_{n}\Vert , \end{aligned} $$
where $w_{n}=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n}$. Let $M_{1}= \sup_{n\in\mathbb{N}}\Vert Bw_{n}\Vert $. By (3.1), (3.4), and (3.5), we have
$$ \begin{aligned}[b] \Vert y_{n+1}-y_{n}\Vert &= \Biggl\Vert P_{C}(I-\lambda_{n+1}B)\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}u_{i,n+1}-P_{C}(I- \lambda_{n}B)\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} \Biggr\Vert \\ &\leq \Biggl\Vert (I-\lambda_{n+1}B)\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}u_{i,n+1}-(I-\lambda_{n}B) \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n} \Biggr\Vert \\ &\leq\frac{1}{N_{1}}\sum_{i=1}^{N_{1}} \Vert u_{i,n+1}-u_{i,n}\Vert +\vert \lambda _{n}- \lambda_{n+1}\vert \Vert Bw_{n}\Vert \\ &\leq \Vert x_{n+1}-x_{n}\Vert +\frac{1}{N_{1}}\sum _{i=1}^{N_{1}}\eta _{i,n+1}+\vert \lambda_{n}-\lambda_{n+1}\vert M_{1}. \end{aligned} $$
Since $\{\alpha_{n}\}$ is strictly decreasing, by using (3.6), we have
$$\begin{aligned} &\Vert x_{n+1}-x_{n}\Vert \\ &\quad=\Biggl\Vert (\alpha_{n}-\alpha_{n-1})v+\sum _{i=1}^{n-1}(\alpha_{i-1}-\alpha _{i}) (S_{i}y_{n}-S_{i}y_{n-1}) +(\alpha_{n-1}-\alpha_{n})S_{n}y_{n}\Biggr\Vert \\ &\quad\leq( \alpha_{n-1}-\alpha_{n} )\Vert v\Vert +\sum _{i=1}^{n-1}(\alpha _{i-1}- \alpha_{i})\Vert S_{i}y_{n}-S_{i}y_{n-1} \Vert +(\alpha_{n-1}-\alpha_{n})\Vert S_{n}y_{n} \Vert \\ &\quad\leq( \alpha_{n-1}-\alpha_{n} )\Vert v\Vert +\sum _{i=1}^{n-1}(\alpha _{i-1}- \alpha_{i})\Vert y_{n}- y_{n-1}\Vert +( \alpha_{n-1}-\alpha_{n})\Vert S_{n}y_{n} \Vert \\ &\quad= (\alpha_{n-1}-\alpha_{n})\Vert v\Vert + (1- \alpha_{n-1})\Vert y_{n}- y_{n-1}\Vert +( \alpha_{n-1}-\alpha_{n})\Vert S_{n}y_{n} \Vert \\ &\quad\leq(1-\alpha_{n-1})\Vert x_{n}-x_{n-1}\Vert + \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\eta _{i,n}+\vert \lambda_{n-1}-\lambda_{n}\vert M_{1}+(\alpha_{n-1}-\alpha_{n})M_{2}, \end{aligned} $$
where $M_{2}=\sup\{\Vert S_{n}y_{n}\Vert +\Vert v\Vert :n\in\mathbb{N}\}$. By (i) and (ii) and Lemma 2.6, we conclude that
$$ \lim_{n\to\infty} \Vert x_{n+1}-x_{n}\Vert =0. $$
Further, by (3.4) and (3.6), we have
$$ \lim_{n\to\infty} \Vert y_{n+1}-y_{n}\Vert =0,\qquad \lim_{n\to\infty} \Vert u_{i,n+1}-u_{i,n}\Vert =0, \quad i \in\{1,\ldots,N_{1}\}. $$
Step 3. $\lim_{n\to\infty} \Vert S_{i}x_{n}-x_{n}\Vert =0$ for each $i\in \mathbb{N}$.
First, we show that $\lim_{n\to\infty} \Vert u_{i,n}-x_{n}\Vert =0$ for each $i\in \{1,\ldots,N_{1}\}$. Since each $A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i}$ is $\frac{1}{ 2L_{i}^{2}}$-inverse strongly monotone, by (3.1), we have
$$\begin{aligned} \Vert u_{i,n}-p\Vert ^{2} =&\bigl\Vert T_{r_{n}}^{F}\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr)x_{n}-T_{r_{n}}^{F} \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)p\bigr\Vert ^{2} \\ \leq&\bigl\Vert \bigl(I-\gamma A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)x_{n}-\bigl(I-\gamma A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr)p\bigr\Vert ^{2} \\ =&\bigl\Vert (x_{n}-p)-\gamma\bigl(A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n}-A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} p\bigr)\bigr\Vert ^{2} \\ =&\Vert x_{n}-p\Vert ^{2}-2\gamma\bigl\langle x_{n}-p, A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n}-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A _{i}p\bigr\rangle \\ &{}+\gamma^{2}\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n}-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} p\bigr\Vert ^{2} \\ \leq&\Vert x_{n}-p\Vert ^{2}- \frac{ \gamma}{L_{i}^{2}}\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n}-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} p\bigr\Vert ^{2} \\ &{}+\gamma^{2}\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n}-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} p\bigr\Vert ^{2} \\ =&\Vert x_{n}-p\Vert ^{2}+\gamma\biggl(\gamma- \frac{ 1 }{L_{i}^{2}}\biggr)\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n}-A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} p\bigr\Vert ^{2} \\ =&\Vert x_{n}-p\Vert ^{2}+\gamma\biggl(\gamma- \frac{1 }{L_{i}^{2}}\biggr)\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n} \bigr\Vert ^{2}. \end{aligned}$$
From Lemma 2.4 and (3.9), it follows that
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} =&\Biggl\Vert \alpha_{n}(v-p)+\sum_{i=1}^{n}( \alpha_{i-1}-\alpha _{i}) (S_{i}y_{n}-p) \Biggr\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+\sum _{i=1}^{n}(\alpha_{i-1}-\alpha_{i}) \Vert S_{i}y_{n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+\sum _{i=1}^{n}(\alpha_{i-1}-\alpha_{i}) \Vert y_{n}-p\Vert ^{2} \\ =&\alpha_{n}\Vert v-p\Vert ^{2}+ (1- \alpha_{n})\Vert y_{n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+(1- \alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\Vert u_{i,n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}\\ &{}+ (1- \alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\biggl[\Vert x_{n}-p\Vert ^{2} +\gamma\biggl(\gamma-\frac{1 }{L_{i}^{2}}\biggr)\bigl\Vert A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n} \bigr\Vert ^{2}\biggr] \\ =&\alpha_{n}\Vert v-p\Vert ^{2}+(1-\alpha_{n}) \Vert x_{n}-p\Vert ^{2} \\ &{}+ (1-\alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\gamma\biggl(\gamma-\frac{1 }{L_{i}^{2}}\biggr)\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n} \bigr\Vert ^{2} \\ \leq& \alpha_{n}\Vert v-p\Vert ^{2}+ \Vert x_{n}-p\Vert ^{2} \\ &{}+ (1-\alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\gamma\biggl(\gamma-\frac{1 }{L_{i}^{2}}\biggr)\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n} \bigr\Vert ^{2}. \end{aligned}$$
Since $\gamma<\frac{1}{L^{2}}=\min\{\frac{1}{L_{1}^{2}},\ldots, \frac{1}{L_{N_{1}}^{2}}\}$, we have
$$\begin{aligned} &(1-\alpha_{n})\frac{1}{N_{1}}\gamma \biggl( \frac{1 }{L_{i}^{2}}-\gamma \biggr)\bigl\Vert A_{i}^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n} \bigr\Vert ^{2} \\ &\quad \leq(1-\alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\gamma \biggl( \frac{1 }{L_{i}^{2}}-\gamma \biggr)\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i} x_{n} \bigr\Vert ^{2} \\ &\quad \leq\alpha_{n}\Vert v-p\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2} \\ &\quad \leq\alpha_{n}\Vert v-p\Vert ^{2}+\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-p\Vert + \Vert x_{n+1}-p\Vert \bigr). \end{aligned} $$
Since $\alpha_{n}\to0$, by (3.7), we have
$$ \lim_{n\to\infty}\bigl\Vert A_{i}^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A _{i}x_{n} \bigr\Vert =0 $$
for each $i\in\{1,\ldots, N_{1}\}$, which implies that
$$ \lim_{n\to\infty}\bigl\Vert \bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i} x_{n} \bigr\Vert =0 $$
for each $i\in\{1,\ldots, N_{1}\}$. Since $T_{r_{n}}^{F }$ is firmly nonexpansive and $I-\gamma A_{i} ^{*}(I-T_{r_{n}}^{F_{i}})A_{i} $ is nonexpansive, by (3.1), we have
$$\begin{aligned} \Vert u_{i,n}-p \Vert ^{2} =& \bigl\Vert T_{r_{n}}^{F } \bigl(x_{n}+\gamma A_{i} ^{*}\bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n} \bigr)-T_{r_{n}}^{F }(p) \bigr\Vert ^{2} \\ \leq& \bigl\langle u_{i,n}-p,x_{n}+\gamma A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n}-p \bigr\rangle \\ =&\frac{1}{2} \bigl\{ \Vert u_{i,n}-p \Vert ^{2}+ \bigl\Vert x_{n}+\gamma A_{i} ^{*} \bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i}x_{n}-p \bigr\Vert ^{2} \\ &{}- \bigl\Vert u_{i,n}-p-\bigl[x_{n}+\gamma A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n}-p\bigr] \bigr\Vert ^{2} \bigr\} \\ =&\frac{1}{2} \bigl\{ \Vert u_{i,n}-p \Vert ^{2}+ \bigl\Vert \bigl(I-\gamma A_{i} ^{*} \bigl(I-T_{r_{n}}^{F_{i}}\bigr)A_{i}\bigr) x_{n}-\bigl(I-\gamma A_{i} ^{*}\bigl(I-T_{r_{n}}^{F_{i}} \bigr)A_{i}\bigr)p \bigr\Vert ^{2} \\ &{}- \bigl\Vert u_{i,n}-x_{n}-\gamma A_{i}^{*} \bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i}x_{n} \bigr\Vert ^{2} \bigr\} \\ \leq&\frac{1}{2} \bigl\{ \Vert u_{i,n}-p \Vert ^{2}+ \Vert x_{n}-p \Vert ^{2} - \bigl\Vert u_{i,n}-x_{n}-\gamma A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I \bigr)A_{i} x_{n} \bigr\Vert ^{2} \bigr\} \\ =&\frac{1}{2} \bigl\{ \Vert u_{i,n}-p \Vert ^{2}+ \Vert x_{n}-p \Vert ^{2}- \bigl[ \Vert u_{i,n}-x_{n} \Vert ^{2}+\gamma^{2} \bigl\Vert A_{i} ^{*}\bigl(T_{r_{n}}^{F_{i}}-I \bigr)A_{i} x_{n} \bigr\Vert ^{2} \\ &{} -2\gamma\bigl\langle u_{i,n}-x_{n},A_{i} ^{*} \bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n}\bigr\rangle \bigr] \bigr\} , \end{aligned}$$
which implies that
$$ \Vert u_{i,n}-p \Vert ^{2}\leq \Vert x_{n}-p \Vert ^{2}- \Vert u_{i,n}-x_{n} \Vert ^{2}+2\gamma \Vert u_{i,n}-x_{n} \Vert \bigl\Vert A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I \bigr)A_{i} x_{n}\bigr\Vert . $$
Now, from (3.1) and (3.12), it follows that
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} \leq& \alpha_{n}\Vert v-p\Vert ^{2}+ (1-\alpha_{n}) \Vert y_{n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+(1- \alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}}\Vert u_{i,n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+ (1- \alpha_{n})\sum_{i=1}^{N_{1}} \frac{1}{N_{1}} \bigl(\Vert x_{n}-p \Vert ^{2}- \Vert u_{i,n}-x_{n} \Vert ^{2} \\ &{}+2\gamma \Vert u_{i,n}-x_{n} \Vert \bigl\Vert A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n}\bigr\Vert \bigr) \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+ \Vert x_{n}-p \Vert ^{2}- (1-\alpha_{n})\sum _{i=1}^{N_{1}}\frac {1}{N_{1}}\Vert u_{i,n}-x_{n} \Vert ^{2} \\ &{}+2\gamma\sum_{i=1}^{N_{1}}\frac{1}{N_{1}} \Vert u_{i,n}-x_{n} \Vert \bigl\Vert A_{i}^{*} \bigl(T_{r_{n}}^{F_{i}}-I\bigr)A_{i} x_{n}\bigr\Vert , \end{aligned}$$
which implies that
$$\begin{aligned} (1-\alpha_{n})\frac{1}{N_{1}} \Vert u_{i,n}-x_{n} \Vert ^{2} \leq&(1-\alpha_{n})\sum _{i=1}^{N_{1}}\frac{1}{N_{1}} \Vert u_{i,n}-x_{n}\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+ \Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-p\Vert + \Vert x_{n+1}-p\Vert \bigr) \\ &{}+ 2\gamma\sum_{i=1}^{N_{1}}\frac{1}{N_{1}} \bigl(\Vert u_{i,n}\Vert +\Vert x_{n} \Vert \bigr)\bigl\Vert A_{i}^{*}\bigl(T_{r_{n}}^{F_{i}}-I \bigr)A_{i} x_{n}\bigr\Vert . \end{aligned} $$
Since $\alpha_{n}\to0$ and both $\{u_{i,n}\}$ and $\{x_{n}\}$ are bounded, by (3.7) and (3.10), we have
$$ \lim_{n\to\infty} \Vert u_{i,n}-x_{n}\Vert =0 $$
for each $i\in\{1,\ldots,N_{1}\}$.
Next, we show that $\lim_{n\to\infty} \Vert y_{n}-u_{n}\Vert =0$, where $u_{n}=\frac {1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n}$. Note that $p={P_{C} (I-\lambda_{n}B)p}$. By (3.1), we have
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} \leq& \alpha_{n}\Vert v-p\Vert ^{2}+ (1-\alpha_{n}) \Vert y_{n}-p\Vert ^{2} \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}+(1- \alpha_{n}) \bigl\Vert u_{n}-p-\lambda_{n}(Bu_{n}-Bp) \bigr\Vert ^{2} \\ =&\alpha_{n}\Vert v-p\Vert ^{2}\\ &{}+(1-\alpha_{n}) \bigl(\Vert u_{n}-p\Vert ^{2} -2\lambda_{n}\langle u_{n}-p, Bu_{n}-Bp \rangle+ \lambda_{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2}\bigr) \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}\\ &{}+(1- \alpha_{n}) \bigl(\Vert u_{n}-p\Vert ^{2} -2\lambda_{n}\beta \Vert Bu_{n}-Bp \Vert ^{2}+ \lambda_{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2}\bigr) \\ \leq&\alpha_{n}\Vert v-p\Vert ^{2}\\ &{}+(1- \alpha_{n}) \bigl(\Vert x_{n}-p\Vert ^{2} -2\lambda_{n}\beta \Vert Bu_{n}-Bp \Vert ^{2}+ \lambda_{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2}\bigr) \\ =&\alpha_{n}\Vert v-p\Vert ^{2}+(1-\alpha_{n}) \Vert x_{n}-p\Vert ^{2} \\ &{} +(1-\alpha_{n}) \lambda_{n}( \lambda_{n}-2\beta) \Vert Bu_{n}-Bp \Vert ^{2} \end{aligned}$$
and hence
$$\begin{aligned} &(1-\alpha_{n}) \lambda_{n}( 2\beta- \lambda_{n})\Vert Bu_{n}-Bp \Vert ^{2} \\ &\quad \leq\alpha_{n}\Vert v-p\Vert ^{2}+ \Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-p\Vert + \Vert x_{n+1}-p\Vert \bigr). \end{aligned} $$
Since $\alpha_{n}\to0$ and $0<\lim_{n\to\infty} \lambda_{n}=\lambda<2\beta $, by (3.7), we have
$$ \lim_{n\to\infty} \Vert Bu_{n}-Bp \Vert =0. $$
Since $P_{C}$ is firmly nonexpansive and $(I-\lambda_{n} B)$ is nonexpansive, by (3.1), we have
$$\begin{aligned} \Vert y_{n}-p\Vert ^{2} ={}&\bigl\Vert P_{C}(u_{n}-\lambda_{n}Bu_{n})-P_{C}(p- \lambda_{n}Bp)\bigr\Vert ^{2} \\ \leq{}&\bigl\langle y_{n}-p,u_{n}-\lambda_{n}Bu_{n}-(p- \lambda_{n}Bp)\bigr\rangle \\ ={}& \frac{1}{2} \bigl(\Vert y_{n}-p\Vert ^{2}+ \bigl\Vert (I-\lambda_{n} B)u_{n}-(I-\lambda_{n} B)p\bigr\Vert ^{2} -\bigl\Vert y_{n}-u_{n}+ \lambda_{n}(Bu_{n}-Bp)\bigr\Vert ^{2} \bigr) \\ \leq{}& \frac{1}{2} \bigl(\Vert y_{n}-p\Vert ^{2}+ \Vert u_{n}- p\Vert ^{2} -\bigl\Vert y_{n}-u_{n}+\lambda _{n}(Bu_{n}-Bp)\bigr\Vert ^{2} \bigr) \\ ={}&\frac{1}{2} \bigl(\Vert y_{n}-p\Vert ^{2}+ \Vert u_{n}- p\Vert ^{2} -\Vert y_{n}-u_{n} \Vert ^{2}-\lambda_{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2} \\ & {}-2\lambda_{n}\langle y_{n}-u_{n},Bu_{n}-Bp \rangle \bigr) \\ \leq{}&\frac{1}{2} \bigl(\Vert y_{n}-p\Vert ^{2}+ \Vert u_{n}- p\Vert ^{2} -\Vert y_{n}-u_{n} \Vert ^{2}-\lambda _{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2} \\ & {}+2\lambda_{n}\Vert y_{n}-u_{n}\Vert \Vert Bu_{n}-Bp\Vert \bigr) \end{aligned}$$
which implies that
$$ \begin{aligned}[b] \Vert y_{n}-p\Vert ^{2}\leq{}&\Vert u_{n}- p\Vert ^{2} -\Vert y_{n}-u_{n} \Vert ^{2}-\lambda_{n}^{2}\Vert Bu_{n}-Bp\Vert ^{2} \\ &{}+2\lambda_{n}\Vert y_{n}-u_{n}\Vert \Vert Bu_{n}-Bp\Vert \\ \leq{}&\Vert x_{n}- p\Vert ^{2} -\Vert y_{n}-u_{n}\Vert ^{2} +2\lambda_{n} \Vert y_{n}-u_{n}\Vert \Vert Bu_{n}-Bp\Vert . \end{aligned} $$
From (3.1) and (3.15), we have
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2}\leq{}& \alpha_{n}\Vert v-p\Vert ^{2}+ (1-\alpha_{n}) \Vert y_{n}-p\Vert ^{2} \\ \leq{}&\alpha_{n}\Vert v-p\Vert ^{2}\\ &{}+ (1- \alpha_{n}) \bigl( \Vert x_{n}- p\Vert ^{2} - \Vert y_{n}-u_{n}\Vert ^{2} +2\lambda_{n}\Vert y_{n}-u_{n}\Vert \Vert Bu_{n}-Bp\Vert \bigr) \\ \leq{}&\alpha_{n}\Vert v-p\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- (1-\alpha_{n})\Vert y_{n}-u_{n}\Vert ^{2} \\ &{}+2 (1-\alpha_{n})\lambda_{n}\Vert y_{n}-u_{n} \Vert \Vert Bu_{n}-Bp\Vert . \end{aligned} $$
Therefore, we have
$$\begin{aligned} (1-\alpha_{n})\Vert y_{n}-u_{n}\Vert ^{2}\leq {}&\alpha_{n}\Vert v-p\Vert ^{2}+\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n+1}-p\Vert + \Vert x_{n}-p\Vert \bigr) \\ &{}+2 (1-\alpha_{n})\lambda_{n}\bigl(\Vert y_{n} \Vert +\Vert u_{n}\Vert \bigr)\Vert Bu_{n}-Bp\Vert . \end{aligned} $$
Since $\lim_{n\to\infty}\alpha_{n}=0$ and both $\{y_{n}\}$ and $\{u_{n}\}$ are bounded, by (3.7) and (3.14), we have
$$ \lim_{n\to\infty} \Vert y_{n}-u_{n}\Vert =0. $$
Further, from (3.7), (3.13), (3.16), and
$$\begin{aligned} \Vert x_{n+1}-y_{n}\Vert &\leq \Vert x_{n+1}-x_{n}\Vert +\Vert x_{n}-u_{n} \Vert +\Vert u_{n}-y_{n}\Vert \\ &\leq \Vert x_{n+1}-x_{n}\Vert +\sum _{i=1}^{N_{1}}\frac{1}{N_{1}}\Vert x_{n}-u_{i,n}\Vert +\Vert u_{n}-y_{n} \Vert , \end{aligned} $$
it follows that
$$ \lim_{n\to\infty} \Vert x_{n+1}-y_{n}\Vert =0. $$
Now, from (3.1), it follows that
$$ \sum_{i=1}^{n}(\alpha_{i-1}- \alpha_{i}) (S_{i}y_{n}-y_{n})=x_{n+1}-y_{n}- \alpha _{n}(v-y_{n}). $$
Since $\{\alpha_{n}\}$ is strictly decreasing, for each $i\in\mathbb{N}$, by (2.1) and (3.18), we have
$$\begin{aligned} (\alpha_{i-1}-\alpha_{i})\Vert S_{i}y_{n}-y_{n}\Vert ^{2}&\leq\sum _{i=1}^{n}(\alpha _{i-1}- \alpha_{i})\Vert S_{i}y_{n}-y_{n} \Vert ^{2} \\ &\leq2\sum_{i=1}^{n}(\alpha_{i-1}- \alpha_{i})\langle S_{i}y_{n}-y_{n}, p-y_{n}\rangle \\ &=2\langle x_{n+1}-y_{n},y_{n}-p\rangle-2 \alpha_{n}\langle v-y_{n},p-y_{n} \rangle \\ &\leq2\Vert x_{n+1}-y_{n}\Vert \Vert y_{n}-p \Vert +2\alpha_{n}\Vert v-y_{n}\Vert \Vert y_{n}-p \Vert . \end{aligned}$$
Since $\lim_{n\to\infty}\alpha_{n}=0$ and $\{y_{n}\}$ is bounded, by (3.17), one has
$$ \lim_{n\to\infty} \Vert S_{i}y_{n}-y_{n} \Vert =0 $$
for all $i\in\mathbb{N}$. Further, since
$$\begin{aligned} \Vert S_{i}x_{n}-x_{n}\Vert &\leq \Vert S_{i}x_{n}-S_{i}y_{n}\Vert + \Vert S_{i}y_{n}-y_{n}\Vert +\Vert y_{n}-x_{n}\Vert \\ &\leq2\Vert y_{n}-x_{n}\Vert +\Vert S_{i}y_{n}-y_{n}\Vert \\ &\leq2\Vert y_{n}-x_{n+1}\Vert +2\Vert x_{n+1}-x_{n}\Vert +\Vert S_{i}y_{n}-y_{n} \Vert , \end{aligned} $$
by (3.7), (3.17), and (3.19), we obtain
$$ \lim_{n\to\infty} \Vert S_{i}x_{n}-x_{n} \Vert =0 $$
for all $i\in\mathbb{N}$.
Step 4. $\limsup_{n\to\infty}\langle v-z,x_{n}-z\rangle\leq0$.
Let $z=P_{\Theta}v$. Since $\{x_{n}\}$ is bounded, we can choose a subsequence $\{x_{n_{j}}\}$ of $\{x_{n}\}$ such that
$$\limsup_{n\to\infty}\langle v-z,x_{n}-z\rangle=\lim _{j\to\infty}\langle v-z,x_{n_{j}}-z \rangle. $$
Since $\{x_{n_{j}}\}$ is bounded, there exists a subsequence $\{ x_{n_{j_{i}}}\}$ of $\{x_{n_{j}}\}$ converging weakly to a point $w\in C$. Without loss of generality, we can assume that $x_{n_{j}}\rightharpoonup w$.
Now, we show that $w\in\Theta$. First of all, we prove that $w\in \Gamma=\bigcap_{i=1}^{\infty} \operatorname {Fix}(S_{i})$. In fact, since $x_{n}-S_{i}x_{n}\to0$ for each $i\in\mathbb{N}$ and $x_{n_{j}}\rightharpoonup w$, by Lemma 2.5, we obtain $w\in\bigcap_{i=1}^{\infty} \operatorname {Fix}(S_{i})=\Gamma$.
Next, we show that $w\in\Omega$, i.e., $w\in \operatorname {EP}(F )$ and $A_{i}w\in \operatorname {EP}(F_{i})$ for each $i=1,\ldots, N_{1}$.
Let $w_{i,n}=(I-\gamma A_{i}^{*}(I-T_{r_{n}}^{F_{i}})A_{i})x_{n}$ for each $i=1,\ldots, N_{1}$. By (3.10) and (3.13) we see that $w_{i,n}-x_{n}\to0$ and $T_{r_{n}}^{F}w_{i,n}-w_{i,n}\to0$ as $n\to\infty$. By Lemma 2.2 we see that $\Vert T_{r_{n}}^{F}w_{i,n}-T_{r}^{F}w_{i,n}\Vert \leq \vert 1-\frac{r}{r_{n}}\vert \Vert T_{r_{n}}^{F}w_{i,n}-w_{i,n}\Vert \to0$ as $n\to\infty$. Hence $T_{r}^{F}w_{i,n}-w_{i,n}\to0$ as $n\to\infty$ for each $i=1,\ldots,N_{1}$. Since $T_{r}^{F}$ is nonexpansive and $\{w_{i,n}\}$ converges weakly to $w$, by Lemma 2.5 we get $w=T_{r}^{F} w$, i.e., $w\in \operatorname {EP}(F)$. On the other hand, since $(I-\gamma A^{*}_{i}(I-T_{r_{n}}^{F_{i}})A_{i} )x_{n}-x_{n}\to0 $ (by (3.13)) and $I-\gamma A^{*}_{i}(I-T_{r_{n}}^{F_{i}})A_{i} $ is nonexpansive, from Lemmas 2.2 and 2.5 it follows that $w=(I-\gamma A^{*}_{i}(I-T_{r}^{F_{i}})A_{i} )w$, which gives $A_{i}w=T_{r}^{F_{i}} A_{i}w$, i.e., $A_{i}w\in \operatorname {EP}(F_{i})$ for each $i=1,\ldots,N_{1}$. Therefore, $w\in\Omega$.
Finally, we prove that $w\in \operatorname {VI}=\bigcap_{i=1}^{N_{2}} \operatorname {VI}(C,B_{i})$ by the demiclosedness principle. Obviously, we only need to show that $w=P_{C}(w-\lambda Bw)$, where $\lambda=\lim_{n\to\infty}\lambda_{n}$. By (3.1) and (3.16), one has $\Vert u_{n}-P_{C}(I-\lambda_{n} B)u_{n}\Vert \to0$, where $u_{n}=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}u_{i,n}$. Then we have
$$\begin{aligned} \bigl\Vert u_{n}-P_{C}(I-\lambda B)u_{n}\bigr\Vert &\leq\bigl\Vert u_{n}-P_{C}(I- \lambda_{n} B)u_{n}\bigr\Vert +\bigl\Vert P_{C}(I-\lambda_{n} B)u_{n}-P_{C}(I- \lambda B)u_{n}\bigr\Vert \\ &\leq\bigl\Vert u_{n}-P_{C}(I-\lambda_{n} B)u_{n}\bigr\Vert +\bigl\Vert (I-\lambda_{n} B)u_{n}- (I-\lambda B)u_{n}\bigr\Vert \\ &\leq\bigl\Vert u_{n}-P_{C}(I-\lambda_{n} B)u_{n}\bigr\Vert + \vert \lambda-\lambda_{n}\vert \Vert Bu_{n}\Vert . \end{aligned} $$
Since $\lambda_{n}\to\lambda>0$, $\{Bu_{n}\}$ is bounded and $\Vert u_{n}-P_{C}(I-\lambda_{n} B)u_{n}\Vert \to0$, we have
$$\lim_{n\to\infty}\bigl\Vert u_{n}-P_{C}(I- \lambda B)u_{n}\bigr\Vert =0. $$
On the other hand, since $\{\lambda_{n}\}\subset(0,2\beta)$, one has $\lambda\in(0,2\beta]$. Thus $I-\lambda B$ is nonexpansive and, further, $P_{C}(I-\lambda B)$ is nonexpansive. Noting that $u_{n_{j}}\rightharpoonup w$ as $j\to\infty$, by Lemma 2.5, we obtain $w=P_{C}(I-\lambda B)w$. By Lemma 2.8, we get $w\in \operatorname {VI}=\bigcap_{i=1}^{N_{2}}\operatorname {VI}(C,B_{i})$. Therefore, $w\in\Theta$. By the property of $P_{C}$, we have
$$ \limsup_{n\to\infty}\langle v-z,x_{n}-z\rangle=\lim _{j\to\infty}\langle v-z,x_{n_{j}}-z\rangle=\langle v-z,w-z \rangle\leq0. $$
Step 5. $x_{n}\to z=P_{\Theta}v $ as $n\to\infty$.
By (3.1), we have
$$\begin{aligned} \Vert x_{n+1}-z\Vert ^{2}&=\Biggl\Vert \alpha_{n} v+\sum_{i=1}^{n}( \alpha_{i-1}-\alpha _{i})S_{i}y_{n}-z \Biggr\Vert ^{2} \\ &=\alpha_{n}\langle v-z,x_{n+1}-z\rangle+ \sum _{i=1}^{n}(\alpha_{i-1}-\alpha _{i})\langle S_{i}y_{n}-z,x_{n+1}-z \rangle \\ &\leq\alpha_{n}\langle v-z,x_{n+1}-z\rangle+ \frac{\sum_{i=1}^{n}(\alpha _{i-1}-\alpha_{i})}{2}\bigl(\Vert S_{i}y_{n}-z\Vert ^{2}+\Vert x_{n+1}-z\Vert ^{2}\bigr) \\ &\leq \alpha_{n}\langle v-z,x_{n+1}-z\rangle+ \frac{\sum_{i=1}^{n}(\alpha _{i-1}-\alpha_{i})}{2}\bigl(\Vert x_{n}-z\Vert ^{2}+\Vert x_{n+1}-z\Vert ^{2}\bigr) \\ &= \alpha_{n}\langle v-z,x_{n+1}-z\rangle+\frac{1-\alpha_{n} }{2} \bigl( \Vert x_{n}-z\Vert ^{2}+\Vert x_{n+1}-z \Vert ^{2}\bigr) \\ &\leq\alpha_{n}\langle v-z,x_{n+1}-z\rangle+ \frac{1-\alpha_{n} }{2} \Vert x_{n}-z\Vert ^{2}+ \frac{1 }{2}\Vert x_{n+1}-z\Vert ^{2}, \end{aligned} $$
which implies that
$$\Vert x_{n+1}-z\Vert ^{2}\leq(1-\alpha_{n}) \Vert x_{n}-z\Vert ^{2}+2\alpha_{n}\langle v-z,x_{n+1}-z\rangle. $$
By Lemma 2.6 and (3.21), we can conclude that $\lim_{n\to\infty} \Vert x_{n}-z\Vert =0$. This completes the proof. □
The following results follow directly from Theorem 3.1.
Corollary 3.2
Let $H_{1}$, $H_{2}$ be two real Hilbert spaces and $C\subset H_{1}$, $Q\subset H_{2}$ be nonempty closed convex subsets. Let $A: H_{1}\to H_{2}$ be a bounded linear operator and $B: C\to H_{1}$ be a β-inverse strongly monotone operator. Assume that $F :C\times C\to\mathbb{R}$, $F_{1}: Q\times Q\to\mathbb{R} $ are bifunctions satisfying the conditions (A1)-(A4). Let $\{S_{n}\}$ be a countable family of nonexpansive mappings from C into C. Assume that $\Theta =\Gamma\cap\Omega\cap \operatorname {VI}(C,B)\neq\emptyset$, where $\Gamma=\bigcap_{n=1}^{\infty} \operatorname {Fix}(S_{n})$ and $\Omega=\{z\in C: z\in \operatorname {EP}(F)\ \textit{and}\ Az\in \operatorname {EP}(F_{1}) \}$. Take $v\in C$ arbitrarily and define an iterative scheme in the following manner:
$$ \textstyle\begin{cases} u_{n}=T_{r_{n}}^{F}(I-\gamma A^{*}(I-T_{r_{n}}^{F_{1}})A )x_{n},\\ y_{n}=P_{C}(u_{n}-\lambda_{n} Bu_{n}),\\ x_{n+1}=\alpha_{n} v+ \sum_{i=1}^{n}(\alpha_{i-1}-\alpha_{i}) S_{i} y_{n}, \end{cases} $$
for all $n\in\mathbb{N}$, where $\{r_{n}\}\subset(r,\infty)$ with $r>0$, $\{\lambda_{n}\}\subset(0,2\beta)$, and $\gamma\in(0,1/L^{2}]$, where $L$ is the spectral radius of the operator $A^{*}A$ and $A^{*}$ is the adjoint of $A$, $\alpha_{0}=1$, and $\{\alpha_{n}\}\subset(0,1)$ is a strictly decreasing sequence. Assume that the control sequences $\{\alpha_{n}\}$, $\{\lambda_{n}\}$, and $\{r_{n}\}$ satisfy the following conditions:
$\lim_{n\to\infty} \lambda_{n}=\lambda\in(0,2\beta)$.
Then the sequence $\{x_{n}\}$ defined by (3.22) converges strongly to a point $z=P_{\Theta}v$.
Corollary 3.3
Let $H_{1}$, $H_{2}$ be two real Hilbert spaces and $C\subset H_{1}$, $Q\subset H_{2}$ be nonempty closed convex subsets. Let $A: H_{1}\to H_{2}$ be a bounded linear operator and $B: C\to H_{1}$ be a β-inverse strongly monotone operator. Assume that $F :C\times C\to\mathbb{R}$, $F_{1}: Q\times Q\to\mathbb{R} $ are bifunctions satisfying the conditions (A1)-(A4). Let $S: C\to C$ be a nonexpansive mapping. Assume that $\Theta=\operatorname {Fix}(S) \cap\Omega\cap \operatorname {VI}(C,B)\neq\emptyset$, where $\Omega=\{z\in C: z\in \operatorname {EP}(F)\ \textit{and}\ Az\in \operatorname {EP}(F_{1}) \}$. Take $v\in C$ arbitrarily and define an iterative scheme in the following manner:
$$ \textstyle\begin{cases} u_{n}=T_{r_{n}}^{F}(I-\gamma A^{*}(I-T_{r_{n}}^{F_{1}})A )x_{n},\\ y_{n}=P_{C}(u_{n}-\lambda_{n} Bu_{n}),\\ x_{n+1}=\alpha_{n} v+ (1-\alpha_{n}) S y_{n} \end{cases} $$
for all $n\in\mathbb{N}$, where $\{r_{n}\}\subset(r,\infty)$ with $r>0$, $\{\lambda_{n}\}\subset(0,2\beta)$, and $\gamma\in(0,1/L^{2}]$, where $L$ is the spectral radius of the operator $A^{*}A$ and $A^{*}$ is the adjoint of $A$, and $\{\alpha_{n}\}\subset(0,1)$ is a sequence. Assume that the control sequences $\{\alpha_{n}\}$, $\{\lambda_{n}\}$, and $\{r_{n}\}$ satisfy the following conditions:
Remark 3.4
Theorem 3.1 and Corollary 3.3 extend the corresponding results of Kazmi and Rizvi [14] from a single nonexpansive mapping to a countable family of nonexpansive mappings and from a split equilibrium problem to a finite family of split equilibrium problems. Moreover, it is relatively simple to prove that $w\in \operatorname {VI}$ by the demiclosedness principle in Theorem 3.1.
We give an example to illustrate Theorem 3.1 as follows.
Example 3.5
Let $H_{1}=\mathbb{R}$ and $H_{2}=\mathbb{R}^{2}$, $C=[0,1]$, and $Q=[0,1]\times[0,1]$. Let $A_{1}: H_{1}\to H_{2}$ and $A_{2}: H_{1}\to H_{2}$ be defined by $A_{1}x=(x,x)^{T}$ and $A_{2}x=(\frac{x}{2},\frac{x}{2})^{T}$ for each $x\in H_{1}$. Then $A_{1}^{*}y=y_{1}+y_{2}$ and $A_{2}^{*}y=\frac{y_{1}+y_{2}}{2}$ for each $y=(y_{1},y_{2})^{T}\in H_{2}$. Then $L_{1}=2$ and $L_{2}=\frac{1}{2}$, where $L_{1}$ and $L_{2}$ are the spectral radii of $A_{1}^{*}A_{1}$ and $A_{2}^{*}A_{2}$, respectively.
Let $B_{1}x= 2(x-1)$ and $B_{2}x= -4$ for all $x\in C$. Then it is easy to see that $B_{1}$ and $B_{2}$ are $\frac{1}{2}$- and 1-inverse strongly monotone operators from C into $H_{1}$, and that $\operatorname {VI}=\operatorname {VI}(C,B_{1})\cap \operatorname {VI}(C,B_{2})=\{1\}$. For each $n\in\mathbb{N}$, let $S_{n}:C\to C$ be defined by $S_{n}(x)=x+\frac{1}{3n}$ for each $x\in[0,\frac{1}{2}]$ and $S_{n}(x)=x$ for each $x\in(\frac{1}{2},1]$. Then $\{S_{n}\}$ is a countable family of nonexpansive mappings from C into C and it is easy to see that $\Gamma=\bigcap_{n=1}^{\infty} \operatorname {Fix}(S_{n})=(\frac{1}{2},1]$. Define the bifunction $F: C\times C\to\mathbb{R}$ by $F(x,y)=x-y$ for all $x,y\in C$. For each $u=(u_{1},u_{2})^{T}$ and $v=(v_{1},v_{2})^{T}\in Q$, define $F_{1}: Q\times Q\to\mathbb{R}$ and $F_{2}:Q\times Q\to\mathbb{R}$ by
$$F_{1}(u,v)=u_{1}+u_{2}-v_{1}-v_{2} $$
$$F_{2}(u,v)= \textstyle\begin{cases} 0, & \mbox{if } u=v,\\ 2 , & \mbox{if } u=(1,1) \mbox{ or } (\frac{1}{2},\frac{1}{2} ) \mbox{ and } v\neq(1,1) \mbox{ or } (\frac{1}{2},\frac{1}{2} ),\\ -2 ,& \mbox{if } v=(1,1) \mbox{ or } (\frac{1}{2},\frac{1}{2} ) \mbox{ and } u\neq(1,1) \mbox{ or } (\frac{1}{2},\frac{1}{2} ),\\ u_{1}^{2}+u_{2}^{2}-v_{1}-v_{2}, & \mbox{otherwise}. \end{cases} $$
It is easy to check that the bifunctions $F$, $F_{1}$, and $F_{2}$ satisfy the conditions (A1)-(A4). Moreover, $\Omega=\{1\}$, where $\Omega=\{z\in C: z\in \operatorname {EP}(F), A_{1}z\in \operatorname {EP}(F_{1}) \mbox{ and } A_{2}z\in \operatorname {EP}(F_{2}) \}$. Therefore, $\Theta=\Gamma\cap \operatorname {VI}\cap \Omega=\{1\}$.
Let $\alpha_{0}=1$, $\gamma_{1}=\gamma_{2}=\frac{1}{2}$, and $\gamma=\frac {1}{4}$. For each $n\in\mathbb{N}$, let $r_{n}= 2$, $\lambda_{n}=\frac {1}{4}$, $\alpha_{n}=\frac{1}{n}$. Then the sequences $\{\alpha_{n}\}$, $\{ \lambda_{n}\}$, $\{r_{n}\}$ satisfy the conditions (1)-(3) in Theorem 3.1.
For each $x\in C$ and each $n\in\mathbb{N}$, we compute $T_{r_{n}}^{F_{1}}A_{1}x$, i.e., $T_{r_{n}}^{F_{1}}(x,x)$. We find $z=(1,1)$ such that
$$\begin{aligned} F_{1}(z,y)+\frac{1}{r_{n}}\langle y-z,z-A_{1}x \rangle&=2-(y_{1}+y_{2})+ \frac {1}{2} \bigl[(y_{1}-1) (1-x)+(y_{2}-1) (1-x)\bigr] \\ &=2-(y_{1}+y_{2})+ \frac{1}{2}(1-x) ( y_{1} + y_{2}-2) \\ &=\bigl[2-(y_{1}+y_{2})\bigr] \biggl[1- \frac{1}{2}(1-x) \biggr] \\ &\geq0 \end{aligned} $$
for all $y=(y_{1},y_{2})\in Q$. Thus, from Lemma 2.1(1), it follows that $T_{r_{n}}^{F_{1}}A_{1}x=(1,1)$ for each $x\in C$. Similarly, for each $x\in [0,1]$, we can find $z=(1,1)$ such that, for $y=(\frac{1}{2}, \frac{1}{2})$,
$$\begin{aligned} F_{2}(z,y)+\frac{1}{r_{n}}\langle y-z,z-A_{2}x \rangle&=1-\frac{1}{2}\biggl(1-\frac{x}{2}\biggr)=\frac{1}{2}+\frac{x}{4}\geq0; \end{aligned} $$
for $y=(1, 1)$,
$$F_{2}(z,y)+\frac{1}{r_{n}}\langle y-z,z-A_{2}x\rangle=0; $$
for $y\in Q\setminus\{(1,1),(\frac{1}{2},\frac{1}{2})\}$,
$$F_{2}(z,y)+\frac{1}{r_{n}}\langle y-z,z-A_{2}x\rangle=2+ \frac{1}{2} \biggl[(y_{1}-1) \biggl(1-\frac{x}{2} \biggr)+(y_{2}-1) \biggl(1-\frac{x}{2} \biggr) \biggr] \geq0. $$
Thus $z=(1,1)=T_{r_{n}}^{F_{2}}A_{2}x$ for all $x\in C$ by Lemma 2.1(1).
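The computations above can also be checked numerically. The following short Python sketch is not part of the original example and all helper names are ad hoc; it evaluates $F_{1}(z,y)+\frac{1}{r_{n}}\langle y-z,z-A_{1}x\rangle$ for $z=(1,1)$ on a grid of points $y\in Q$ and $x\in C$ with $r_{n}=2$ and confirms that the expression is nonnegative, in line with Lemma 2.1(1).

import numpy as np

r_n = 2.0
z = np.array([1.0, 1.0])
ys = np.linspace(0.0, 1.0, 41)          # grid for the coordinates of y in Q = [0,1] x [0,1]
worst = np.inf
for x in np.linspace(0.0, 1.0, 11):     # x ranges over C = [0,1]
    A1x = np.array([x, x])              # A_1 x = (x, x)^T
    for y1 in ys:
        for y2 in ys:
            y = np.array([y1, y2])
            # F_1(z, y) + (1/r_n) <y - z, z - A_1 x> with F_1(z, y) = 2 - y1 - y2
            val = (2.0 - y1 - y2) + (1.0 / r_n) * np.dot(y - z, z - A1x)
            worst = min(worst, val)
print(worst)                            # prints a nonnegative value, so T_{r_n}^{F_1}(A_1 x) = (1,1)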
Now, take $v=\frac{1}{2}$ and $x_{1}=\frac{1}{4}$ and define the sequence $\{x_{n}\}$ defined by (3.1). Since each $x_{n}\in C$, from the statement above we get $T_{r_{n}}^{F_{i}}A_{i}x_{n}=(1,1)$ for each $i=1,2$. Furthermore, we can get
$$\begin{aligned} \bigl(I-\gamma A_{1}^{*}\bigl(I-T_{r_{n}}^{F_{1}} \bigr)A_{1}\bigr) x_{n}&=\bigl(x_{n}-\gamma A_{1}^{*}\bigl(A_{1}x_{n}-T_{r_{n}}^{F_{1}}A_{1}x_{n} \bigr)\bigr) \\ &=\bigl(x_{n}-\gamma A_{1}^{*}\bigl((x_{n},x_{n})-(1,1) \bigr)\bigr) \\ &=x_{n}-2\gamma( x_{n}-1) \\ &=\frac{1+x_{n}}{2}. \end{aligned} $$
Next, to determine $u_{1,n}=T_{r_{n}}^{F} (\frac{1+x_{n}}{2} )$, take $z=1$ and note that
$$\begin{aligned} F(1,y)+\frac{1}{r_{n}} \biggl\langle y-z,z-\frac{1+x_{n}}{2} \biggr\rangle &=1-y+\frac{1}{2}(y-1) \biggl(1-\frac{1+x_{n}}{2} \biggr) \\ &=(1-y) \biggl(1-\frac{1}{2} \biggl(1-\frac{1+x_{n}}{2} \biggr) \biggr) \\ &=(1-y) \biggl( \frac{1}{2}+\frac{1+x_{n}}{4} \biggr) \\ &\geq0 \end{aligned} $$
for all $y\in C$. Thus $u_{1,n}=1$ by Lemma 2.1(1) for each $n\in \mathbb{N}$. Similarly, we can conclude that $u_{2,n}=1$ for each $n\in \mathbb{N}$.
Next, we compute the sequence $\{y_{n}\}$. By the definition of $\{y_{n}\} $, we see that
$$\begin{aligned} y_{n}&=P_{C} \biggl[ \biggl(I- \lambda_{n}\frac{B_{1}+B_{2}}{2} \biggr)\frac {u_{1,n}+u_{2,n}}{2} \biggr] \\ &= P_{C} \biggl(1+\frac{2}{4} \biggr) =1 \end{aligned} $$
for all $n\in\mathbb{N}$.
Finally, we compute the sequence $\{x_{n}\}$ by the following iteration:
$$\begin{aligned} x_{n+1}&=\alpha_{n}v+\sum _{i=1}^{n}(\alpha_{i-1}-\alpha_{i})S_{i}y_{n} \\ &=\alpha_{n}v+1-\alpha_{n} \\ &= 1-\frac{1}{2n} \\ &\to1=P_{\Theta}v=P_{\{1\}}\frac{1}{2} \end{aligned} $$
as $n\to\infty$, in accordance with Theorem 3.1.
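For illustration, the iteration of Example 3.5 can be reproduced with a few lines of Python. The sketch below is not part of the example itself; it simply hard-codes the closed-form values derived above ($u_{1,n}=u_{2,n}=1$, hence $y_{n}=1$ and $S_{i}y_{n}=1$) and tracks the resulting sequence $x_{n+1}=\alpha_{n}v+(1-\alpha_{n})y_{n}=1-\frac{1}{2n}$.

v = 0.5                        # v = 1/2
x = 0.25                       # x_1 = 1/4
for n in range(1, 21):
    alpha_n = 1.0 / n          # alpha_n = 1/n with alpha_0 = 1
    y_n = 1.0                  # y_n = 1 for every n, as computed above
    # sum_{i=1}^{n} (alpha_{i-1} - alpha_i) S_i(y_n) = (alpha_0 - alpha_n) * 1 = 1 - alpha_n
    x = alpha_n * v + (1.0 - alpha_n) * y_n
    print(n, x)                # x_{n+1} = 1 - 1/(2n), which tends to 1 = P_Theta(v)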
In this section, let $H_{1}$, $H_{2}$ be two real Hilbert spaces and C, Q be two nonempty closed convex subsets of $H_{1}$ and $H_{2}$, respectively. Let $f: C\to \mathbb{R}$, $g: Q\to \mathbb{R}$ be two operators and $A: H_{1}\to H_{2}$ be a bounded linear operator.
We consider the following optimization problem:
$$ \begin{aligned} \mbox{find } x^{*}\in C& \mbox{ such that } f\bigl(x^{*}\bigr)\leq f(x),\quad \forall x\in C, \\ &\mbox{ and } y^{*}=Ax^{*} \mbox{ such that } g\bigl(y^{*}\bigr)\leq g(y), \quad \forall y\in Q. \end{aligned} $$
We denote the set of solutions of (4.1) by Θ and assume that $\Theta\neq\emptyset$. Let $F(x,y)=f(y)-f(x)$ for all $x,y\in C$ and $F_{1}(x,y)=g(y)-g(x)$ for all $x,y\in Q$. Then $F(x,y)$ and $F_{1}(x,y)$ satisfy the conditions (A1)-(A4) in Section 2 provided that f is convex and lower semicontinuous on C and g is convex and lower semicontinuous on Q. Let $\Omega=\{z\in C: z\in \operatorname {EP}(F)\mbox{ and }Az\in \operatorname {EP}(F_{1}) \}$. Obviously, $\Theta=\Omega$.
By Corollary 3.3 with $B=I$ and $S=I$, we have the following iterative algorithm, which strongly converges to a point $z=P_{\Theta}v$, which solves the optimization problem (4.1):
$$ \textstyle\begin{cases} u_{n}=T_{r_{n}}^{F}(I-\gamma A^{*}(I-T_{r_{n}}^{F_{1}})A )x_{n},\\ y_{n}=P_{C}(u_{n}-\lambda_{n}u_{n}),\\ x_{n+1}=\alpha_{n} v+ (1-\alpha_{n})y_{n}, \end{cases} $$
where $\{r_{n}\}\subset(r,\infty)$ with $r>0$, $\{\lambda_{n}\}\subset (0,2)$, and $\gamma\in(0,1/L^{2}]$, where $L$ is the spectral radius of the operator $A^{*}A$ and $A^{*}$ is the adjoint of $A$, and $\{\alpha_{n}\} \subset(0,1)$ is a sequence. Assume that the control sequences $\{\alpha _{n}\}$, $\{\lambda_{n}\}$, and $\{r_{n}\}$ satisfy the following conditions:
$\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty$, $\sum_{n=1}^{\infty} \vert \alpha_{n+1}-\alpha_{n}\vert <\infty$, and $\sum_{n=1}^{\infty} \vert \lambda _{n+1}-\lambda_{n}\vert <\infty$;
$\lim_{n\to\infty} \lambda_{n}=\lambda\in(0,2)$.
For the special case with $H_{1}=H_{2}$ and $C=Q$, we consider the following multi-objective optimization problem:
$$ \textstyle\begin{cases} \min\{f(x), g(x)\},\\ x\in C. \end{cases} $$
We denote the set of solutions of (4.3) by Γ and assume that $\Gamma\neq\emptyset$. In (4.2), setting $A=I$, we get the following algorithm, which converges strongly to a solution of the multi-objective optimization problem (4.3):
$$\textstyle\begin{cases} u_{n}=T_{r_{n}}^{F}(I-\gamma(I-T_{r_{n}}^{F_{1}}))x_{n},\\ y_{n}=P_{C}(u_{n}-\lambda_{n}u_{n}),\\ x_{n+1}=\alpha_{n} v+ (1-\alpha_{n})y_{n}, \end{cases} $$
where $\gamma\in(0,1/L^{2}]$, $L$ is the spectral radius of the operator $I^{*}I$ and $I^{*}$ is the adjoint of $I$ (so that $L=1$), and the other parameters $\{\alpha_{n}\}$, $\{\lambda_{n}\}$, and $\{r_{n}\}$ satisfy the same conditions (1)-(3).
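As a rough numerical illustration of the algorithm displayed above (this sketch is not part of the paper; the concrete functions, parameter values, and helper names are assumptions made only for the demonstration), recall that for $F(x,y)=f(y)-f(x)$ with convex lower semicontinuous $f$ the resolvent $T_{r}^{F}x$ is the proximal point $\operatorname{argmin}_{y\in C}\{f(y)+\frac{1}{2r}\vert y-x\vert ^{2}\}$. The Python code below approximates the resolvents by a grid search on $C=[0,1]$ for $f(x)=x^{2}$ and $g(x)=x$, whose common minimizer over $C$ is 0, and runs the iteration.

import numpy as np

grid = np.linspace(0.0, 1.0, 20001)          # dense grid on C = [0, 1]
f = lambda y: y ** 2                         # assumed objective f
g = lambda y: y                              # assumed objective g

def resolvent(phi, r, x):
    # T_r^{F} x for F(x, y) = phi(y) - phi(x): argmin_{y in C} phi(y) + (y - x)^2 / (2 r)
    return grid[np.argmin(phi(grid) + (grid - x) ** 2 / (2.0 * r))]

x, v, gamma, r = 0.9, 0.2, 0.5, 2.0          # x_1, anchor point v, gamma in (0, 1/L^2] = (0, 1], r_n = r
for n in range(1, 201):
    alpha_n, lam_n = 1.0 / n, 0.5            # alpha_n -> 0, lambda_n -> lambda in (0, 2)
    u = resolvent(f, r, x - gamma * (x - resolvent(g, r, x)))   # u_n
    y = min(max((1.0 - lam_n) * u, 0.0), 1.0)                   # y_n = P_C(u_n - lambda_n u_n)
    x = alpha_n * v + (1.0 - alpha_n) * y                       # x_{n+1}
print(x)                                     # slowly approaches 0, the common minimizer of f and g on C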
In this paper, we construct an iterative algorithm to find a common element of the set of solutions of a finite family of split equilibrium problems and the set of common fixed points of a countable family of nonexpansive mappings in Hilbert spaces. In the proof, we use the inverse strong monotonicity of each $A^{*}(I-T_{r_{n}})A$, which keeps the argument simple and makes it different from the ones given in [14–16]. Also, in the results of this paper, we do not assume that each $F_{i}$, $i=1,\ldots,N_{1}$, is upper semi-continuous in the first argument, which is required in the results of [14–16]. As an application, we solve an optimization problem by the results of this paper.
Chang, SS, Lee, HWJ, Chan, CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 70, 3307-3319 (2009)
Katchang, P, Kumam, P: A new iterative algorithm of solution for equilibrium problems, variational inequalities and fixed point problems in a Hilbert space. J. Appl. Math. Comput. 32, 19-38 (2010)
Plubtieng, S, Punpaeng, R: A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 336, 455-469 (2007)
Qin, X, Shang, M, Su, Y: A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces. Nonlinear Anal. 69, 3897-3909 (2008)
Combettes, PL, Hirstoaga, SA: Equilibrium programming using proximal like algorithms. Math. Program. 78, 29-41 (1997)
Agarwal, RP, Chen, JW, Cho, YJ: Strong convergence theorems for equilibrium problems and weak Bregman relatively nonexpansive mappings in Banach spaces. J. Inequal. Appl. 2013, 119 (2013)
Tada, A, Takahashi, W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 133, 359-370 (2007)
Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)
Takahashi, S, Takahashi, W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 69, 1025-1033 (2008)
Bnouhachem, A, Chen, Y: An iterative method for a common solution of generalized mixed equilibrium problems, variational inequalities, and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 155 (2014)
Bnouhachem, A: An interactive method for system of generalized equilibrium problem and fixed point problem. Fixed Point Theory Appl. 2014, 235 (2014)
Censor, Y, Gibali, A, Reich, S: Algorithms for the split variational inequality problem. Numer. Algorithms 69, 301-323 (2012)
Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
Kazmi, KR, Rizvi, SH: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21, 44-51 (2013)
Bnouhachem, A: Algorithms of common solutions for a variational inequality, a split equilibrium problem and a hierarchical fixed point problem. Fixed Point Theory Appl. 2013, 278 (2013)
Bnouhachem, A: Strong convergence algorithm for split equilibrium problems and hierarchical fixed point problems. Sci. World J. 2014, 390956 (2014)
Iiduka, H, Takahashi, W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 61, 341-350 (2005)
Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 383740 (2010)
Yang, L, Zhao, F, Kim, JK: Hybrid projection method for generalized mixed equilibrium problem and fixed point problem of infinite family of asymptotically quasi-ψ-nonexpansive mappings in Banach spaces. Appl. Math. Comput. 218, 6072-6082 (2012)
Xu, HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659-678 (2003)
Bruck, RE Jr.: Properties of fixed point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 179, 251-289 (1973)
Zhou, HY, Wang, PY, Zhou, Y: Minimum-norm fixed point of nonexpansive mappings with applications. Optimization 64, 799-814 (2015)
This work is supported by the Natural Science Funds of Hebei (Grant Number: A2015502021), the Fundamental Research Funds for the Central Universities (Grant Number: 2014ZD44) and the Project-sponsored by SRF for ROCS, SEM. The fourth author thanks the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and future Planning (Grant Number: 2014R1A2A2A01002100).
Department of Mathematics and Physics, North China Electric Power University, Baoding, 071003, China
Shenghua Wang
Department of Mathematics and Sciences, Shijiazhuang University of Economics, Shijiazhuang, 050031, China
Xiaoying Gong
Department of Mathematics, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
Afrah AN Abdou
Department of Mathematics Education and RINS, Gyeongsang National University, Jinju, 660701, Korea
Yeol Je Cho
Center for General Education, China Medical University, Taichung, 40402, Taiwan
Correspondence to Yeol Je Cho.
All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
65J15
split equilibrium problem
nonexpansive mapping
split feasible solution problem
Hilbert space
New Challenges and Trends in Fixed Point Theory and Its Applications
Cliquet option pricing with Meixner processes
Volume 5, Issue 1 (2018), pp. 81–97
Markus Hess
Pub. online: 12 February 2018 Type: Research Article Open Access
We investigate the pricing of cliquet options in a geometric Meixner model. The considered option is of monthly sum cap style while the underlying stock price model is driven by a pure-jump Meixner–Lévy process yielding Meixner distributed log-returns. In this setting, we infer semi-analytic expressions for the cliquet option price by using the probability distribution function of the driving Meixner–Lévy process and by an application of Fourier transform techniques. In an introductory section, we compile various facts on the Meixner distribution and the related class of Meixner–Lévy processes. We also propose a customized measure change preserving the Meixner distribution of any Meixner process.
1 Introduction
Cliquet option based contracts constitute a customized subclass of equity indexed annuities. The underlying options commonly are of monthly sum cap style paying a credited yield based on the sum of monthly-capped rates associated with some reference stock index. In this regard, cliquet type investments belong to the class of path-dependent exotic options. In [15] cliquet options are regarded as "the height of fashion in the world of equity derivatives". In the literature, there are different pricing approaches for cliquet options involving e.g. partial differential equations (see [15]), Monte Carlo techniques (see [2]), numerical recursive algorithms related to inverse Laplace transforms (see [9]) and analytical computation methods (see [3, 7, 8]). The present article belongs to the last category.
The aim of the present paper is to provide analytical pricing formulas for globally-floored locally-capped cliquet options with multiple resetting times where the underlying reference stock index is driven by a pure-jump time-homogeneous Meixner–Lévy process. In this setup, we derive cliquet option price formulas under two different approaches: once by using the distribution function of the driving Meixner–Lévy process and once by applying Fourier transform techniques (as proposed in [8]). All in all, the present article can be seen as an accompanying (but to a large degree self-contained) paper to [8], as it presents a specific application of the results derived in [8] to the class of Meixner–Lévy processes.
The paper is organized as follows: In Section 2 we compile facts on the Meixner distribution and the related class of stochastic Meixner–Lévy processes. In Section 3 we introduce a geometric pure-jump stock price model driven by a Meixner–Lévy process. In Section 3.1 we establish a customized structure preserving measure change from the risk-neutral to the physical probability measure. Section 4 is dedicated to the pricing of cliquet options. We obtain semi-analytic expressions for the cliquet option price by using the probability distribution function of the driving Meixner–Lévy process in Section 4.1 and by an application of Fourier transform techniques in Section 4.2. In Section 5 we draw the conclusions.
2 A review of Meixner processes
Let $(\Omega ,\mathbb{F},(\mathcal{F}_{t})_{t\in [0,T]},\mathbb{Q})$ be a filtered probability space satisfying the usual hypotheses, i.e. $\mathcal{F}_{t}=\mathcal{F}_{t+}:=\cap _{s>t}\mathcal{F}_{s}$ constitutes a right-continuous filtration and $\mathbb{F}$ denotes the sigma-algebra augmented by all $\mathbb{Q}$-null sets (cf. p. 3 in [10]). Here, $\mathbb{Q}$ is a risk-neutral probability measure and $0<T<\infty $ denotes a finite time horizon. In the following, we compile various facts on the Meixner distribution and Meixner–Lévy processes from [1, 6, 12, 13] and [14].
A real-valued, càdlàg, pure-jump, time-homogeneous Lévy process $M=(M_{t})_{t\in [0,T]}$ (with independent and stationary increments) satisfying $M_{0}=0$ is called Meixner (-Lévy) process with scaling parameter $\alpha >0$, shape/skewness parameter $\beta \in (-\pi ,\pi )$, peakedness parameter $\delta >0$ and location parameter $\mu \in \mathbb{R}$, if $M_{t}$ possesses the Lévy–Itô decomposition
\[ M_{t}=\theta t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{Q}}(s,z)\]
where $\mathbb{R}_{0}:=\mathbb{R}\setminus \{0\}$, the drift parameter
\[ \theta :=\mu +\delta \alpha \tan (\beta /2)\]
is a real-valued constant and the $\mathbb{Q}$-compensated Poisson random measure (PRM) is given by
\[ d{\tilde{N}}^{\mathbb{Q}}(s,z):=dN(s,z)-d{\nu }^{\mathbb{Q}}(z)ds\]
with positive and finite Meixner-type Lévy measure
\[ d{\nu }^{\mathbb{Q}}(z):=\delta \frac{{e}^{\beta z/\alpha }}{z\sinh (\pi z/\alpha )}dz\]
(cf. [12, 13], Eq. (3) in [6]) satisfying ${\nu }^{\mathbb{Q}}(\{0\})=0$ and
\[ \int _{\mathbb{R}_{0}}\big(1\wedge {z}^{2}\big)d{\nu }^{\mathbb{Q}}(z)<\infty .\]
We denote the Lévy triplet of $M_{t}$ by $(\theta ,0,{\nu }^{\mathbb{Q}})$. (Note that this notation is not entirely consistent with [6, 8].) We recall that $M_{t}$ possesses moments of all orders (cf. Section 5.3.10 in [13]). Evidently, $M_{t}$ has no Brownian motion part. Since
\[ \int _{\mathbb{R}_{0}}|z|d{\nu }^{\mathbb{Q}}(z)=\infty \]
the process $M_{t}$ possesses infinite variation (cf. Section 5.3.10 in [13]). We write for any fixed $t\in [0,T]$
\[ M_{t}\sim \mathcal{M}(\alpha ,\beta ,\delta t,\mu t)\]
(cf. Section 3.6 in [1]) and say that M is Meixner distributed under $\mathbb{Q}$ with parameters α, β, δ and μ. From (2.1) and (2.2) we instantly receive the mean value
\[ \mathbb{E}_{\mathbb{Q}}[M_{t}]=\theta t=\mu t+\delta t\alpha \tan (\beta /2)\]
standing in accordance with Eq. (11) in [6]. The variance, skewness and kurtosis of $M_{t}$ are respectively given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{Var}_{\mathbb{Q}}[M_{t}]& \displaystyle =\frac{\delta t}{2}\frac{{\alpha }^{2}}{{\cos }^{2}(\beta /2)},\hspace{2em}\mathbb{S}_{\mathbb{Q}}[M_{t}]=\sqrt{2/(\delta t)}\sin (\beta /2),\\{} \displaystyle \mathbb{K}_{\mathbb{Q}}[M_{t}]& \displaystyle =3+\frac{2-\cos (\beta )}{\delta t}\end{array}\]
(cf. Table 6 in [1]). Furthermore, for all $x\in \mathbb{R}$ and $t\in [0,T]$ the real-valued probability density function (pdf) of $M_{t}$ under $\mathbb{Q}$ reads as
\[ f_{M_{t}}(x):=\frac{{(2\cos (\beta /2))}^{2\delta t}}{2\pi \alpha \varGamma (2\delta t)}{e}^{\beta (x-\mu t)/\alpha }{\bigg|\varGamma \bigg(\delta t+i\frac{x-\mu t}{\alpha }\bigg)\bigg|}^{2}\]
(cf. [1, 12], Eq. (4) in [6]) wherein
\[ \varGamma (\zeta ):={\int _{0}^{\infty }}{u}^{\zeta -1}{e}^{-u}du\]
denotes the gamma function which is defined for all $\zeta \in \mathbb{C}$ with $Re(\zeta )>0$. Taking the definition of the gamma function and Euler's formula into account, we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varGamma \bigg(\delta t+i\frac{x-\mu t}{\alpha }\bigg)& \displaystyle ={\int _{{0}^{+}}^{\infty }}{u}^{\delta t-1}{e}^{-u}\cos \bigg(\frac{x-\mu t}{\alpha }\ln u\bigg)du\\{} & \displaystyle \hspace{1em}+i{\int _{{0}^{+}}^{\infty }}{u}^{\delta t-1}{e}^{-u}\sin \bigg(\frac{x-\mu t}{\alpha }\ln u\bigg)du\end{array}\]
which implies
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\bigg|\varGamma \bigg(\delta t+i\frac{x-\mu t}{\alpha }\bigg)\bigg|}^{2}& \displaystyle ={\Bigg({\int _{{0}^{+}}^{\infty }}{u}^{\delta t-1}{e}^{-u}\cos \bigg(\frac{x-\mu t}{\alpha }\ln u\bigg)du\Bigg)}^{2}\\{} & \displaystyle \hspace{1em}+{\Bigg({\int _{{0}^{+}}^{\infty }}{u}^{\delta t-1}{e}^{-u}\sin \bigg(\frac{x-\mu t}{\alpha }\ln u\bigg)du\Bigg)}^{2}.\end{array}\]
Note that the latter object appears in (2.6). The cumulative distribution function (cdf) of $M_{t}$ does not possess a closed form representation but it can be computed numerically. Further on, the characteristic function of $M_{t}$ can be computed by the Lévy–Khinchin formula (see e.g. [4, 5, 11, 13]) due to
\[ \phi _{M_{t}}(u):=\mathbb{E}_{\mathbb{Q}}\big[{e}^{iuM_{t}}\big]={e}^{\psi (u)t}\]
with ${i}^{2}=-1$, $u\in \mathbb{R}$, $t\in [0,T]$ and a characteristic exponent
\[ \psi (u):=iu\bigg[\mu +\delta \alpha \tan \bigg(\frac{\beta }{2}\bigg)\bigg]+\delta \int _{\mathbb{R}_{0}}\frac{{e}^{iuz}-1-iuz}{z}\frac{{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz.\]
Moreover, let us define the Fourier transform, respectively inverse Fourier transform, of a deterministic function $q\in {\mathcal{L}}^{1}(\mathbb{R})$ via
\[ \hat{q}(y):=\int _{\mathbb{R}}q(x){e}^{iyx}dx,q(x)=\frac{1}{2\pi }\int _{\mathbb{R}}\hat{q}(y){e}^{-iyx}dy.\]
Then for all $u\in \mathbb{R}$ and $t\in [0,T]$ we receive the well-known relationship
\[ \phi _{M_{t}}(u)=\hat{f}_{M_{t}}(u)\]
where $\hat{f}_{M_{t}}$ denotes the Fourier transform of the density function $f_{M_{t}}$ defined in (2.6). An application of the inverse Fourier transform yields
\[ f_{M_{t}}(x)=\frac{1}{2\pi }\int _{\mathbb{R}}{e}^{\psi (u)t-iux}du\]
thanks to (2.7). On the other hand, from Eq. (1) in [6] we know that
\[ \phi _{M_{t}}(u)={e}^{iu\mu t}{\bigg(\frac{\cos (\beta /2)}{\cosh ((\alpha u-i\beta )/2)}\bigg)}^{2\delta t}\]
where $u\in \mathbb{R}$ and $t\in [0,T]$. Taking the logarithm in (2.7) and (2.9), we finally deduce
\[ \psi (u)=iu\mu +2\delta \bigg[\ln \cos \bigg(\frac{\beta }{2}\bigg)-\ln \cosh \bigg(\frac{\alpha u-i\beta }{2}\bigg)\bigg].\]
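As a quick sanity check of the formulas above, the following Python sketch (with purely illustrative parameter values that are not taken from the paper) evaluates the Meixner density (2.6) through the complex gamma function and compares it with the values obtained from the inverse Fourier transform of the closed-form characteristic function (2.9).

import numpy as np
from scipy.special import gamma as cgamma
from scipy.integrate import quad

alpha, beta, delta, mu, t = 0.3, -0.5, 1.2, 0.0, 1.0        # assumed sample parameters

def pdf(x):
    # Meixner density (2.6)
    c = (2.0 * np.cos(beta / 2.0)) ** (2.0 * delta * t) / (2.0 * np.pi * alpha * cgamma(2.0 * delta * t))
    return c * np.exp(beta * (x - mu * t) / alpha) * np.abs(cgamma(delta * t + 1j * (x - mu * t) / alpha)) ** 2

def phi(u):
    # closed-form characteristic function (2.9)
    return np.exp(1j * u * mu * t) * (np.cos(beta / 2.0) / np.cosh((alpha * u - 1j * beta) / 2.0)) ** (2.0 * delta * t)

def pdf_via_inversion(x):
    # inverse Fourier transform of the characteristic function
    integrand = lambda u: (phi(u) * np.exp(-1j * u * x)).real
    return quad(integrand, -200.0, 200.0, limit=200)[0] / (2.0 * np.pi)

for x in (-0.5, 0.0, 0.5):
    print(x, pdf(x), pdf_via_inversion(x))                  # the last two columns should agree closely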
Further on, for the Meixner distribution the following properties are well-known (cf. [14], Section 5.3.10 in [13], Section 3.6 in [1], Corollary 1 in [6]).
Lemma 2.1.
(a) If $X\sim \mathcal{M}(\alpha ,\beta ,\delta ,\mu )$, then $cX+m\sim \mathcal{M}(c\alpha ,\beta ,\delta ,c\mu +m)$ with constants $c>0$ and $m\in \mathbb{R}$.
(b) If $X_{1}\sim \mathcal{M}(\alpha ,\beta ,\delta _{1},\mu _{1})$ and $X_{2}\sim \mathcal{M}(\alpha ,\beta ,\delta _{2},\mu _{2})$ are independent random variables, then $X_{1}+X_{2}\sim \mathcal{M}(\alpha ,\beta ,\delta _{1}+\delta _{2},\mu _{1}+\mu _{2})$.
(c) The characteristic function $\phi _{X}(u;\alpha ,\beta ,\delta ,\mu )$ of a Meixner distributed random variable $X\sim \mathcal{M}(\alpha ,\beta ,\delta ,\mu )$ satisfies
\[ \phi _{X}(u;\alpha ,\beta ,\delta ,\mu )=\phi _{X}{(u;\alpha ,\beta ,\delta /n,\mu /n)}^{n}\]
for arbitrary $n\in \mathbb{N}$ such that the Meixner distribution is infinitely divisible.
3 A stock price model driven by a Meixner process
Let $t\in [0,T]$ and define the stochastic stock price process $S_{t}$ via
\[ S_{t}:=S_{0}{e}^{M_{t}+bt}\]
with deterministic initial value $S_{0}$, a constant $b\in \mathbb{R}$ and a real-valued Meixner process
such as introduced in (2.1)–(2.4). Here, the constant b provides some additional degree of freedom which is introduced in order to ensure the arbitrage-freeness of the stock price model. More details on this topic will be given below. Verify that (3.1) belongs to the same model class (geometric Lévy models) as (2.2)–(2.3) in [8]. We next introduce the historical filtration
\[ \mathcal{F}_{t}:=\sigma \{S_{u}:0\le u\le t\}=\sigma \{M_{u}:0\le u\le t\}.\]
Using Itô's formula, we obtain the stochastic differential equation (SDE)
\[ \frac{dS_{t}}{S_{t-}}=\bigg(\theta +b+\int _{\mathbb{R}_{0}}\big[{e}^{z}-1-z\big]d{\nu }^{\mathbb{Q}}(z)\bigg)dt+\int _{\mathbb{R}_{0}}\big[{e}^{z}-1\big]d{\tilde{N}}^{\mathbb{Q}}(t,z)\]
under $\mathbb{Q}$. Let us further define the discounted stock price via
\[ \hat{S}_{t}:=\frac{S_{t}}{B_{t}}\]
where $S_{t}$ is such as defined in (3.1) and $B_{t}:={e}^{rt}$ is the value of a bank account with normalized initial capital $B_{0}=1$ and risk-less interest rate $r>0$. Due to (3.1) we find
\[ \hat{S}_{t}=S_{0}{e}^{M_{t}+(b-r)t}\]
while Itô's formula yields the following SDE under $\mathbb{Q}$
\[ \frac{d\hat{S}_{t}}{\hat{S}_{t-}}=\bigg(\theta +b-r+\int _{\mathbb{R}_{0}}\big[{e}^{z}-1-z\big]d{\nu }^{\mathbb{Q}}(z)\bigg)dt+\int _{\mathbb{R}_{0}}\big[{e}^{z}-1\big]d{\tilde{N}}^{\mathbb{Q}}(t,z).\]
In accordance to no-arbitrage theory, the discounted stock price $\hat{S}$ must form a martingale under the risk-neutral probability measure $\mathbb{Q}$. For this reason, we require the drift restriction
\[ b=r-\theta -\int _{\mathbb{R}_{0}}\big[{e}^{z}-1-z\big]d{\nu }^{\mathbb{Q}}(z).\]
With this particular choice of the coefficient b, we deduce
\[ \frac{dS_{t}}{S_{t-}}=rdt+\int _{\mathbb{R}_{0}}\big[{e}^{z}-1\big]d{\tilde{N}}^{\mathbb{Q}}(t,z)\]
under $\mathbb{Q}$. Combining (3.1) and (3.2), we receive
\[ S_{t}=S_{0}{e}^{rt}\exp \Bigg\{{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{Q}}(s,z)-{\int _{0}^{t}}\int _{\mathbb{R}_{0}}\big[{e}^{z}-1-z\big]d{\nu }^{\mathbb{Q}}(z)ds\Bigg\}\]
where the last factor on the right hand side constitutes a Doléans-Dade exponential which again shows the $\mathbb{Q}$-martingale property of the discounted stock price process $\hat{S}_{t}=S_{t}{e}^{-rt}$. Moreover, taking (2.2), (2.4), (2.8) and (2.10) into account, Eq. (3.2) can be expressed as
\[ b=r-\psi (-i)=r-\mu -2\delta \ln \bigg(\frac{\cos (\beta /2)}{\cos ((\alpha +\beta )/2)}\bigg).\]
Unless otherwise stated, from now on we assume that the constant $b\in \mathbb{R}$ appearing in (3.1) is such as given in (3.3). Though constituting an admissible choice, taking $b=0$ in (3.1)–(3.3) might be too restrictive in practical applications. In the following, we investigate the log-returns related to our model (3.1). For an arbitrary time step $\varDelta >0$ and $t\le T-\varDelta $ we obtain
\[ \ln \bigg(\frac{S_{t+\varDelta }}{S_{t}}\bigg)\cong M_{\varDelta }+b\varDelta \sim \mathcal{M}\big(\alpha ,\beta ,\delta \varDelta ,(\mu +b)\varDelta \big)\]
by Lemma 2.1 (a). Here, the symbol ≅ denotes equality in distribution. Hence, in our stock price model (3.1) the log-returns are Meixner distributed. We stress that in [12] it was shown that the Meixner distribution fits empirical financial log-returns very well. Furthermore, for $n\in \mathbb{N}$ we introduce the time partition $\mathcal{P}:=\{0<t_{0}<t_{1}<\cdots <t_{n}\le T\}$ and define the return/revenue process associated with the period $[t_{k-1},t_{k}]$ via
\[ R_{k}:=\frac{S_{t_{k}}-S_{t_{k-1}}}{S_{t_{k-1}}}\]
where $k\in \{1,\dots ,n\}$. A substitution of (3.1) into the latter equation yields
\[ R_{k}={e}^{Y_{t_{k}}-Y_{t_{k-1}}}-1\]
where
\[ Y_{t}:=M_{t}+bt\sim \mathcal{M}\big(\alpha ,\beta ,\delta t,(\mu +b)t\big)\]
is a Meixner–Lévy process. Taking (2.1)–(2.4) and (3.3) into account, we get
\[ Y_{t}=\gamma t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zdN(s,z)\]
where
\[ \gamma :=r+\delta \bigg[\alpha \tan \bigg(\frac{\beta }{2}\bigg)-2\ln \bigg(\frac{\cos (\beta /2)}{\cos ((\alpha +\beta )/2)}\bigg)-\int _{\mathbb{R}_{0}}\frac{{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz\bigg]\]
is a real-valued constant. Recall that the Meixner–Lévy process Y given in (3.5) above just is a special case of the more general Lévy process X defined in Eq. (2.3) in [8]. For this reason, the cliquet option pricing results derived in [8] simultaneously apply in our current Meixner modeling case. More details on this topic are given in Section 4 below. Also note that $R_{1},\dots ,R_{n}$ are $\mathbb{Q}$-independent random variables and that $R_{k}>-1$ $\mathbb{Q}$-almost surely for all k. Since Y is a Lévy process under $\mathbb{Q}$, we observe $Y_{t_{k}}-Y_{t_{k-1}}\cong Y_{\tau }$ (stationary increments) where $\tau :=t_{k}-t_{k-1}$ (equidistant partition). Here, the symbol ≅ denotes equality in distribution. For the sake of notational simplicity, we always work under the assumption of equidistant time points in the following, unless otherwise stated. Taking (3.4) into account, we obtain the subsequent relationship between the cumulative distribution functions of $R_{k}$ and $Y_{\tau }$
\[ \mathbb{Q}(R_{k}\le \xi )=\mathbb{Q}\big(Y_{\tau }\le \ln (1+\xi )\big)\]
where $\xi >-1$ is an arbitrary real-valued constant.
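For a concrete feel for the relationship displayed above, the following Python sketch (again with purely illustrative parameter values, not taken from the paper) computes the risk-neutral probability $\mathbb{Q}(R_{k}\le \xi )$ of a monthly return level by integrating the Meixner density of $Y_{\tau }\sim \mathcal{M}(\alpha ,\beta ,\delta \tau ,(\mu +b)\tau )$ up to $\ln (1+\xi )$, with the constant $b$ taken from (3.3).

import numpy as np
from scipy.special import gamma as cgamma
from scipy.integrate import quad

alpha, beta, delta, mu, r = 0.3, -0.5, 1.2, 0.0, 0.02       # assumed model parameters
tau = 1.0 / 12.0                                            # one month between resetting times
b = r - mu - 2.0 * delta * np.log(np.cos(beta / 2.0) / np.cos((alpha + beta) / 2.0))   # drift b from (3.3)

def pdf_Y(x):
    # Meixner density of Y_tau ~ M(alpha, beta, delta*tau, (mu+b)*tau), cf. (2.6)
    d, m = delta * tau, (mu + b) * tau
    c = (2.0 * np.cos(beta / 2.0)) ** (2.0 * d) / (2.0 * np.pi * alpha * cgamma(2.0 * d))
    return c * np.exp(beta * (x - m) / alpha) * np.abs(cgamma(d + 1j * (x - m) / alpha)) ** 2

xi = 0.01                                                   # a 1% monthly return level
prob = quad(pdf_Y, -2.0, np.log(1.0 + xi))[0]               # Q(R_k <= xi) = Q(Y_tau <= ln(1+xi)); the lower bound lies far in the left tail
print(prob)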
3.1 A structure preserving measure change to the physical probability measure
Recall that we worked under the risk-neutral probability measure $\mathbb{Q}$ in the previous sections. Since log-returns of financial assets are commonly observed under the physical measure $\mathbb{P}$ (instead of under $\mathbb{Q}$), we establish a measure change from $\mathbb{Q}$ to $\mathbb{P}$ in the sequel. In this context, we have to pay special attention to the so-called structure preserving property of the measure change, as the log-returns under $\mathbb{P}$ shall again follow a Meixner distribution. In other words, the Meixner process $M_{t}$ introduced in (2.1) under $\mathbb{Q}$ shall also be a Meixner process under $\mathbb{P}$. First of all, for $t\in [0,T]$ we define the Radon–Nikodym density process
\[ \varLambda _{t}:=\frac{d\mathbb{P}}{d\mathbb{Q}}\bigg|_{\mathcal{F}_{t}}:=\exp \Bigg\{{\int _{0}^{t}}\int _{\mathbb{R}_{0}}h(z)d{\tilde{N}}^{\mathbb{Q}}(s,z)-{\int _{0}^{t}}\int _{\mathbb{R}_{0}}\big[{e}^{h(z)}-1-h(z)\big]d{\nu }^{\mathbb{Q}}(z)ds\Bigg\}\]
where the $\mathbb{Q}$-compensated PRM ${\tilde{N}}^{\mathbb{Q}}$ and the corresponding Lévy measure ${\nu }^{\mathbb{Q}}$ are such as defined in (2.3), respectively (2.4), while $h(z)$ is a time-independent deterministic function on $\mathbb{R}_{0}$. Recall that we may write
\[ \varLambda _{t}=\frac{{e}^{L_{t}}}{\mathbb{E}_{\mathbb{Q}}[{e}^{L_{t}}]}\]
with a local $\mathbb{Q}$-martingale process
\[ L_{t}:={\int _{0}^{t}}\int _{\mathbb{R}_{0}}h(z)d{\tilde{N}}^{\mathbb{Q}}(s,z)\]
such that the density process Λ is detected to be of Esscher transform type. Note that Λ is a discontinuous Doléans-Dade exponential which constitutes a local martingale under $\mathbb{Q}$ satisfying the SDE
\[ d\varLambda _{t}=\varLambda _{t-}\int _{\mathbb{R}_{0}}\big[{e}^{h(z)}-1\big]d{\tilde{N}}^{\mathbb{Q}}(t,z).\]
In accordance to Theorem 12.21 in [5], we further impose the Novikov condition
\[ \mathbb{E}_{\mathbb{Q}}\Bigg[\exp \Bigg\{{\int _{0}^{t}}\int _{\mathbb{R}_{0}}\big[1-{e}^{h(z)}+h(z){e}^{h(z)}\big]d{\nu }^{\mathbb{Q}}(z)ds\Bigg\}\Bigg]<\infty \]
for all $t\in [0,T]$. Then it holds $\mathbb{E}_{\mathbb{Q}}[\varLambda _{t}]\equiv 1$ for all $t\in [0,T]$ such that Λ constitutes a true $\mathbb{Q}$-martingale. Hence, we may apply Girsanov's theorem stating that
\[ d{\tilde{N}}^{\mathbb{P}}(s,z):=dN(s,z)-d{\nu }^{\mathbb{P}}(z)ds\]
constitutes the $\mathbb{P}$-compensated Poisson random measure with Lévy measure
\[ d{\nu }^{\mathbb{P}}(z):={e}^{h(z)}d{\nu }^{\mathbb{Q}}(z).\]
Note that the Novikov condition is equivalent to requiring that
\[ \int _{\mathbb{R}_{0}}\big[1-{e}^{h(z)}+h(z){e}^{h(z)}\big]d{\nu }^{\mathbb{Q}}(z)<\infty \]
since h and ${\nu }^{\mathbb{Q}}$ both are deterministic. A combination of (2.1), (2.3), (3.7) and (3.8) yields the following Lévy–Itô decomposition
\[ M_{t}=\bigg(\theta +\int _{\mathbb{R}_{0}}z\big[{e}^{h(z)}-1\big]d{\nu }^{\mathbb{Q}}(z)\bigg)t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{P}}(s,z)\]
under $\mathbb{P}$ where θ, ${\nu }^{\mathbb{Q}}$ and ${\tilde{N}}^{\mathbb{P}}$ are such as defined in (2.2), (2.4) and (3.7), respectively. The remaining challenge now consists in finding an appropriate function $h(z)$ which, firstly, fulfills the Novikov condition, secondly, guarantees that ${\nu }^{\mathbb{P}}$ in (3.8) constitutes a Lévy measure of Meixner-type and, thirdly, ensures that $M_{t}$ in (3.9) is a Meixner–Lévy process. In this regard, we propose to work with the specification
\[ h(z):=\frac{{\beta }^{\ast }-\beta }{\alpha }z\]
from now on. Herein, the constant skewness parameter ${\beta }^{\ast }\in (-\pi ,\pi )$ satisfies ${\beta }^{\ast }\ne \beta $ while $\alpha >0$ is the scaling parameter introduced above. Note that taking ${\beta }^{\ast }=\beta $ would imply $h(z)\equiv 0$ and hence, $\mathbb{P}=\mathbb{Q}$. Combining (3.8) with (2.4) and (3.10), we deduce
\[ d{\nu }^{\mathbb{P}}(z)=\delta \frac{{e}^{{\beta }^{\ast }z/\alpha }}{z\sinh (\pi z/\alpha )}dz\]
which constitutes a Meixner-type Lévy measure with parameters α, ${\beta }^{\ast }$ and δ [recall Eq. (2.4)]. Moreover, with respect to (3.8) and (3.10), we obtain
\[ \int _{\mathbb{R}_{0}}\big[1-{e}^{h(z)}+h(z){e}^{h(z)}\big]d{\nu }^{\mathbb{Q}}(z)={\nu }^{\mathbb{Q}}(\mathbb{R}_{0})-{\nu }^{\mathbb{P}}(\mathbb{R}_{0})+\frac{{\beta }^{\ast }-\beta }{\alpha }\int _{\mathbb{R}_{0}}zd{\nu }^{\mathbb{P}}(z)\]
which is finite, because the Lévy measures ${\nu }^{\mathbb{Q}}$ and ${\nu }^{\mathbb{P}}$ are finite. Thus, the function $h(z)$ defined in (3.10) indeed fulfills the Novikov condition. Further on, we take (2.2), (2.4), (3.9) and (3.10) into account and receive
\[ \mathbb{E}_{\mathbb{P}}[M_{t}]={\theta }^{\ast }t\]
with drift parameter
\[ {\theta }^{\ast }:={\mu }^{\ast }+\delta \alpha \tan \bigg(\frac{{\beta }^{\ast }}{2}\bigg)\]
and a constant and real-valued location parameter
\[ {\mu }^{\ast }:=\mu +\delta \bigg[\alpha \tan \bigg(\frac{\beta }{2}\bigg)-\alpha \tan \bigg(\frac{{\beta }^{\ast }}{2}\bigg)+\int _{\mathbb{R}_{0}}\frac{{e}^{{\beta }^{\ast }z/\alpha }-{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz\bigg].\]
Note in passing that (3.12) possesses the same structure as (2.5). Also verify that
\[ {\theta }^{\ast }-\theta =\delta \int _{\mathbb{R}_{0}}\frac{{e}^{{\beta }^{\ast }z/\alpha }-{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz=\int _{\mathbb{R}_{0}}z\big[d{\nu }^{\mathbb{P}}(z)-d{\nu }^{\mathbb{Q}}(z)\big]\]
due to (2.2), (2.4), (3.11), (3.12) and (3.13). All in all, combining (3.9) with (2.4), (3.10) and (3.14), we conclude that
\[ M_{t}={\theta }^{\ast }t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{P}}(s,z)\]
which constitutes a Meixner–Lévy process under $\mathbb{P}$ with distribution
\[ M_{t}\sim \mathcal{M}\big(\alpha ,{\beta }^{\ast },\delta t,{\mu }^{\ast }t\big).\]
For this reason, we call the measure change introduced above structure preserving. Recall that in Section 2 we observed $M_{t}\sim \mathcal{M}(\alpha ,\beta ,\delta t,\mu t)$ under $\mathbb{Q}$. Thus, the proposed measure change affects neither the scaling parameter α nor the peakedness parameter δ, whereas both the skewness parameter β and the location parameter μ are changed. Moreover, the Lévy triplet of the process M claimed in (3.15) is given by $({\theta }^{\ast },0,{\nu }^{\mathbb{P}})$. In analogy to the result provided in the sequel of (3.3), we remark that under $\mathbb{P}$ it holds
\[ \ln \bigg(\frac{S_{t+\varDelta }}{S_{t}}\bigg)\cong M_{\varDelta }+b\varDelta \sim \mathcal{M}\big(\alpha ,{\beta }^{\ast },\delta \varDelta ,\big({\mu }^{\ast }+b\big)\varDelta \big)\]
due to Lemma 2.1 (a). Here, $\varDelta >0$ is a constant and b is such as given in (3.3). Hence, if we specify the Radon–Nikodym function $h(z)$ like in (3.10), then the log-returns again are Meixner distributed under the real-world probability measure $\mathbb{P}$. Further note that
\[ \theta t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{Q}}(s,z)=M_{t}={\theta }^{\ast }t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{P}}(s,z)\]
holds $\mathbb{P}$- respectively $\mathbb{Q}$-almost surely for all $t\in [0,T]$ thanks to (2.1) and (3.15). We obtain the following expressions for the variance, skewness and kurtosis of $M_{t}$ under $\mathbb{P}$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{Var}_{\mathbb{P}}[M_{t}]& \displaystyle =\frac{\delta t}{2}\frac{{\alpha }^{2}}{{\cos }^{2}({\beta }^{\ast }/2)},\hspace{2em}\mathbb{S}_{\mathbb{P}}[M_{t}]=\sqrt{2/(\delta t)}\sin \big({\beta }^{\ast }/2\big),\\{} \displaystyle \mathbb{K}_{\mathbb{P}}[M_{t}]& \displaystyle =3+\frac{2-\cos ({\beta }^{\ast })}{\delta t}\end{array}\]
while under $\mathbb{P}$ the density and characteristic function of $M_{t}$ are such as given in (2.6) and (2.9) but with μ and β therein replaced by ${\mu }^{\ast }$ and ${\beta }^{\ast }$, respectively.
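For readers who wish to experiment numerically, the following minimal Python sketch evaluates the above moment formulas under $\mathbb{P}$; the parameter values are assumptions chosen purely for illustration and are not taken from the text.

```python
import numpy as np

def meixner_moments(alpha, beta_star, delta, t):
    """Variance, skewness and kurtosis of M_t under P, per the formulas above."""
    var = 0.5 * delta * t * alpha**2 / np.cos(beta_star / 2)**2
    skew = np.sqrt(2.0 / (delta * t)) * np.sin(beta_star / 2)
    kurt = 3.0 + (2.0 - np.cos(beta_star)) / (delta * t)
    return var, skew, kurt

# assumed illustrative parameters: alpha = 0.3, beta* = -1.2, delta = 0.5, t = 1
print(meixner_moments(0.3, -1.2, 0.5, 1.0))
```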
3.1.1 A generalized structure preserving measure change
In this section, we present a generalized structure preserving measure change from the risk-neutral to the physical probability measure. Recall that the measure change proposed above only affects the skewness parameter β and the location parameter μ whereas both the scaling parameter α and the peakedness parameter δ remain untouched. From a practical point of view, this fact might be regarded as an advantage, as there is no need to recalibrate the parameters α and δ when changing from the risk-neutral to the physical probability measure. Conversely, the described feature might likewise cause some difficulties when it comes to calibrating under $\mathbb{P}$, since the values of the parameters α and δ have to be the same as under $\mathbb{Q}$ yielding some loss of flexibility. To avoid this disadvantage, we now propose a generalized measure change which affects each of the four parameters of the Meixner distribution. For this purpose, we presently require that the Meixner–Lévy measure under $\mathbb{P}$ is of the form
\[ d{\nu }^{\mathbb{P}}(z)={\delta }^{\ast }\frac{{e}^{{\beta }^{\ast }z/{\alpha }^{\ast }}}{z\sinh (\pi z/{\alpha }^{\ast })}dz\]
[cf. Equation (2.4)] with new parameters ${\alpha }^{\ast }>0$, ${\beta }^{\ast }\in (-\pi ,\pi )$ and ${\delta }^{\ast }>0$ which are different from α, β and δ introduced previously under $\mathbb{Q}$. Following this approach, we are led to the equality
\[ {e}^{h(z)}=\frac{{\delta }^{\ast }\sinh (\pi z/\alpha )}{\delta \sinh (\pi z/{\alpha }^{\ast })}\exp \bigg\{\bigg(\frac{{\beta }^{\ast }}{{\alpha }^{\ast }}-\frac{\beta }{\alpha }\bigg)z\bigg\}\]
due to (3.17), (3.8) and (2.4). Taking the logarithm in the latter equation, we obtain
\[ h(z)=\bigg(\frac{{\beta }^{\ast }}{{\alpha }^{\ast }}-\frac{\beta }{\alpha }\bigg)z+\ln \bigg(\frac{{\delta }^{\ast }\sinh (\pi z/\alpha )}{\delta \sinh (\pi z/{\alpha }^{\ast })}\bigg)\]
which corresponds to (3.10) above. Note that (3.18) is well-defined for all $z\in \mathbb{R}_{0}$ and that we obtain (3.10), if we take ${\alpha }^{\ast }=\alpha $ and ${\delta }^{\ast }=\delta $ in (3.18). Hence, (3.10) is a special case of (3.18). Moreover, if we take ${\alpha }^{\ast }=\alpha $ and ${\beta }^{\ast }=\beta $ in (3.18), then we get $h(z)=\ln {\delta }^{\ast }-\ln \delta $ which is constant and independent of z. We summarize our findings in the following proposition.
Proposition 3.1.
Consider the measure change from the risk-neutral to the physical probability measure with Radon–Nikodym density process Λ such as defined at the beginning of Section 3.1. Then the new Lévy measure ${\nu }^{\mathbb{P}}$ under $\mathbb{P}$ is of Meixner-type again, if and only if the Radon–Nikodym function $h(z)$ is of the form (3.18).
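As a quick numerical illustration of Proposition 3.1, the following Python sketch evaluates the Radon–Nikodym function $h(z)$ of (3.18) and verifies the two special cases noted above: for ${\alpha }^{\ast }=\alpha$ and ${\delta }^{\ast }=\delta$ it reduces to (3.10), and for ${\alpha }^{\ast }=\alpha$, ${\beta }^{\ast }=\beta$ it equals the constant $\ln {\delta }^{\ast }-\ln \delta$. The parameter values are illustrative assumptions.

```python
import numpy as np

def h_general(z, a, b, d, a_s, b_s, d_s):
    """Radon-Nikodym function h(z) of (3.18); (a, b, d) are (alpha, beta, delta) under Q,
    (a_s, b_s, d_s) the starred parameters under P."""
    return (b_s / a_s - b / a) * z + np.log(d_s * np.sinh(np.pi * z / a)
                                            / (d * np.sinh(np.pi * z / a_s)))

z = np.array([-0.5, -0.1, 0.2, 1.0])
a, b, d = 0.3, -1.0, 0.5                     # assumed illustrative Q-parameters
# special case alpha* = alpha, delta* = delta: h(z) = (beta* - beta)/alpha * z, cf. (3.10)
print(np.allclose(h_general(z, a, b, d, a, 0.4, d), (0.4 - b) / a * z))
# special case alpha* = alpha, beta* = beta: h(z) = ln(delta*) - ln(delta), a constant
print(np.allclose(h_general(z, a, b, d, a, b, 0.8), np.log(0.8) - np.log(d)))
```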
In the sequel, we investigate the distributional properties of the corresponding Meixner–Lévy process under $\mathbb{P}$ related to the Radon–Nikodym function $h(z)$ given in (3.18). A substitution of (2.2), (2.4) and (3.18) into (3.9) yields the following Lévy–Itô decomposition under $\mathbb{P}$
\[ M_{t}=\overline{\theta }t+{\int _{0}^{t}}\int _{\mathbb{R}_{0}}zd{\tilde{N}}^{\mathbb{P}}(s,z)\]
with deterministic and real-valued drift parameter
\[ \overline{\theta }:=\mu +\delta \alpha \tan \bigg(\frac{\beta }{2}\bigg)+{\delta }^{\ast }\int _{\mathbb{R}_{0}}\frac{{e}^{{\beta }^{\ast }z/{\alpha }^{\ast }}}{\sinh (\pi z/{\alpha }^{\ast })}dz-\delta \int _{\mathbb{R}_{0}}\frac{{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz.\]
The Meixner–Lévy process $M_{t}$ given in (3.19) possesses the Lévy triplet $(\overline{\theta },0,{\nu }^{\mathbb{P}})$ where ${\nu }^{\mathbb{P}}$ is such as claimed in (3.17). In the next step, we require that $\overline{\theta }$ is of the form (2.2), i.e.
\[ \overline{\theta }=\overline{\mu }+{\delta }^{\ast }{\alpha }^{\ast }\tan \big({\beta }^{\ast }/2\big)\]
with some new location parameter $\overline{\mu }\in \mathbb{R}$. Following this ansatz, we deduce
\[ \overline{\mu }\hspace{0.1667em}=\hspace{0.1667em}\mu +\delta \alpha \tan \bigg(\frac{\beta }{2}\bigg)-{\delta }^{\ast }{\alpha }^{\ast }\tan \bigg(\frac{{\beta }^{\ast }}{2}\bigg)+{\delta }^{\ast }\int _{\mathbb{R}_{0}}\frac{{e}^{{\beta }^{\ast }z/{\alpha }^{\ast }}}{\sinh (\pi z/{\alpha }^{\ast })}dz-\delta \int _{\mathbb{R}_{0}}\frac{{e}^{\beta z/\alpha }}{\sinh (\pi z/\alpha )}dz.\]
Hence, if the measure change from $\mathbb{Q}$ to $\mathbb{P}$ is performed with the Radon–Nikodym function $h(z)$ defined in (3.18), then the corresponding Meixner–Lévy process $M_{t}$ again is Meixner distributed under $\mathbb{P}$ with parameters
\[ M_{t}\sim \mathcal{M}\big({\alpha }^{\ast },{\beta }^{\ast },{\delta }^{\ast }t,\overline{\mu }t\big).\]
If we compare (3.20) with (3.16), we see that in the generalized measure change related to (3.18) each of the four parameters of the Meixner–Lévy process M is affected.
4 Cliquet option pricing in a geometric Meixner model
This section is devoted to the pricing of cliquet options in the Meixner stock price model presented in Section 3. Since the Meixner process Y in (3.5) above is just a special case of the more general Lévy process X defined in Eq. (2.3) in [8], the cliquet option pricing results derived in [8] also apply to our present Meixner–Lévy modeling case. The details are worked out in the remainder of the current section. Parallel to [8] and Eq. (1.1) in [3], we consider a monthly sum cap style cliquet option with payoff
\[ H_{T}=K+K\max \Bigg\{g,{\sum \limits_{k=1}^{n}}\min \{c,R_{k}\}\Bigg\}\]
where T is the maturity time, K denotes the notional (i.e. the initial investment), g is the guaranteed rate at maturity, $c\ge 0$ is the local cap and $R_{k}$ is the return process given in (3.4). Recall that the payoff $H_{T}$ is globally-floored by the constant $K(1+g)$ and locally-capped by c. By a case distinction, we get
\[ H_{T}=K\max \Bigg\{1+g,1+{\sum \limits_{k=1}^{n}}\min \{c,R_{k}\}\Bigg\}=K\Bigg(1+g+\max \Bigg\{0,{\sum \limits_{k=1}^{n}}Z_{k}\Bigg\}\Bigg)\]
where for all $k\in \{1,\dots ,n\}$ the appearing objects
\[ Z_{k}:=\min \{c,R_{k}\}-g/n\]
are independent and identically distributed (i.i.d.) random variables. Note that $R_{k}$ is $\mathcal{F}_{t_{k}}$-measurable such that $H_{T}$ is $\mathcal{F}_{t_{n}}$-measurable. Since $t_{n}\le T$, it holds $\mathcal{F}_{t_{n}}\subseteq \mathcal{F}_{T}$ such that $H_{T}$ constitutes an $\mathcal{F}_{T}$-measurable claim. As before, let us denote the constant interest rate by $r>0$. Then the price at time $t\le T$ of a cliquet option with payoff $H_{T}$ at maturity T is given by the discounted risk-neutral conditional expectation of the payoff, i.e.
\[ C_{t}={e}^{-r(T-t)}\mathbb{E}_{\mathbb{Q}}(H_{T}\mid \mathcal{F}_{t}).\]
Combining the latter equations, we obtain
\[ C_{0}=K{e}^{-rT}\Bigg(1+g+\mathbb{E}_{\mathbb{Q}}\Bigg[\max \Bigg\{0,{\sum \limits_{k=1}^{n}}Z_{k}\Bigg\}\Bigg]\Bigg)\]
which shows that the considered cliquet option with payoff $H_{T}$ essentially is a plain-vanilla call option with strike zero written on the basket-style underlying ${\sum _{k=1}^{n}}Z_{k}$.
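The case distinction above is easily mirrored in code. The following minimal Python sketch (with assumed illustrative contract data) evaluates $H_{T}$ both via (4.1) and via the rewritten form in terms of the $Z_{k}$, confirming that the two representations agree.

```python
import numpy as np

def cliquet_payoff(R, K, g, c):
    """Payoff H_T of the monthly sum cap cliquet, cf. (4.1)."""
    return K + K * max(g, np.sum(np.minimum(c, R)))

def cliquet_payoff_Z(R, K, g, c):
    """Equivalent representation via Z_k = min(c, R_k) - g/n."""
    n = len(R)
    Z = np.minimum(c, R) - g / n
    return K * (1.0 + g + max(0.0, Z.sum()))

R = np.array([0.03, -0.02, 0.05, 0.01])   # assumed illustrative period returns R_k
print(cliquet_payoff(R, K=1000.0, g=0.02, c=0.08),
      cliquet_payoff_Z(R, K=1000.0, g=0.02, c=0.08))
```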
Proposition 4.1 (Cliquet option price).
Let $k\in \{1,\dots ,n\}$ and consider the independent and identically distributed random variables $Z_{k}=\min \{c,R_{k}\}-g/n$ where $c\ge 0$ is the local cap, $R_{k}$ is the return process given in (3.4) and g is the guaranteed rate at maturity. Denote the maturity time by T, the notional by K and the risk-less interest rate by r. Then the price at time zero of a cliquet option with payoff $H_{T}$ can be represented as
\[ C_{0}=K{e}^{-rT}\Bigg(1+g+\frac{n}{2}\mathbb{E}_{\mathbb{Q}}[Z_{1}]+\frac{1}{\pi }{\int _{{0}^{+}}^{\infty }}\frac{1-Re(\phi _{Z}(x))}{{x}^{2}}dx\Bigg)\]
where $Re$ denotes the real part and the characteristic function $\phi _{Z}(x)$ is defined via
\[ \phi _{Z}(x):={\prod \limits_{k=1}^{n}}\phi _{Z_{k}}(x)={\prod \limits_{k=1}^{n}}\mathbb{E}_{\mathbb{Q}}\big[{e}^{ixZ_{k}}\big]={\big(\phi _{Z_{1}}(x)\big)}^{n}={\big(\mathbb{E}_{\mathbb{Q}}\big[{e}^{ixZ_{1}}\big]\big)}^{n}.\]
See the proof of Prop. 3.1 in [3], respectively of Prop. 3.1 in [8]. □
In the subsequent sections, we derive explicit expressions for $\phi _{Z}(x)$ and $\mathbb{E}_{\mathbb{Q}}[Z_{1}]$ appearing in the pricing formula (4.3). As before, we stick to the presumption of equidistant resetting times and set $\tau =t_{k}-t_{k-1}$ for all $k\in \{1,\dots ,n\}$ in the following.
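Before specializing, note that the pricing formula (4.3) can be evaluated by one-dimensional quadrature once numerical routines for $\phi _{Z_{1}}(x)$ and $\mathbb{E}_{\mathbb{Q}}[Z_{1}]$ are available (they are derived in the following subsections). The Python sketch below is a generic wrapper; the arguments phi_Z1 and EZ1 are hypothetical placeholders to be supplied by the user, and the truncation point and quadrature settings are crude choices made only for illustration.

```python
import numpy as np
from scipy.integrate import quad

def cliquet_price(phi_Z1, EZ1, K, g, n, r, T, x_max=200.0):
    """Evaluate the price formula (4.3):
    C_0 = K e^{-rT} (1 + g + n/2 * E[Z_1] + (1/pi) * I),
    where I = int_0^inf (1 - Re(phi_Z1(x)^n)) / x^2 dx, cf. (4.4)."""
    integrand = lambda x: (1.0 - (phi_Z1(x)**n).real) / x**2
    I, _ = quad(integrand, 1e-8, x_max, limit=500)
    return K * np.exp(-r * T) * (1.0 + g + 0.5 * n * EZ1 + I / np.pi)
```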
4.1 Cliquet option pricing with distribution functions
Let us first apply a method involving probability distribution functions (cf. [3] and Section 3.1 in [8]). We initially investigate the treatment of $\phi _{Z}(x)$ defined in (4.4).
Proposition 4.2.
Let $Y_{\tau }\sim \mathcal{M}(\alpha ,\beta ,\delta \tau ,(\mu +b)\tau )$ and suppose that $Z_{k}=\min \{c,R_{k}\}-g/n$ where $k\in \{1,\dots ,n\}$. Then the characteristic function of $Z_{k}$ under $\mathbb{Q}$ can be represented as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \phi _{Z_{k}}(x)& \displaystyle ={e}^{-ix(1+g/n)}\Bigg({e}^{ix(1+c)}{\int _{1+c}^{\infty }}f_{Y_{\tau }}(u)du\\{} & \displaystyle \hspace{1em}+{\int _{-\infty }^{0}}f_{Y_{\tau }}(u)du+{\int _{0}^{1+c}}{e}^{ixu}f_{Y_{\tau }}(u)du\Bigg)\end{array}\]
where
\[ f_{Y_{\tau }}(u)=\frac{{(2\cos (\beta /2))}^{2\delta \tau }}{2\pi \alpha \varGamma (2\delta \tau )}{e}^{\beta (u-(\mu +b)\tau )/\alpha }{\bigg|\varGamma \bigg(\delta \tau +i\frac{u-(\mu +b)\tau }{\alpha }\bigg)\bigg|}^{2}\]
constitutes the probability density function of the Meixner–Lévy process Y given in (3.5) and b is the real-valued constant claimed in (3.3).
By similar arguments as in the proof of Prop. 3.2 in [8], we obtain
\[ \phi _{Z_{k}}(x)={e}^{-ix(1+g/n)}\Bigg({e}^{ix(1+c)}-ix{\int _{0}^{1+c}}{e}^{ixw}\mathbb{Q}(R_{k}\le w-1)dw\Bigg).\]
Using (3.6) and the definition of the distribution function, we get for the last integral in (4.7)
\[ {\int _{0}^{1+c}}{e}^{ixw}\mathbb{Q}(R_{k}\le w-1)dw={\int _{0}^{1+c}}{\int _{-\infty }^{\ln (w)}}{e}^{ixw}f_{Y_{\tau }}(u)dudw\]
where $Y_{\tau }\sim \mathcal{M}(\alpha ,\beta ,\delta \tau ,(\mu +b)\tau )$ is the Meixner–Lévy process given in (3.5) and $f_{Y_{\tau }}(u)$ constitutes the probability density function of $Y_{\tau }$ under $\mathbb{Q}$ claimed in (4.6). Applying Fubini's theorem and hereafter splitting up the resulting outer integral, we deduce
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\int _{0}^{1+c}}{e}^{ixw}\mathbb{Q}(R_{k}\le w-1)dw\\{} & \displaystyle \hspace{1em}={\int _{-\infty }^{0}}{\int _{0}^{1+c}}{e}^{ixw}f_{Y_{\tau }}(u)dwdu+{\int _{0}^{1+c}}{\int _{u}^{1+c}}{e}^{ixw}f_{Y_{\tau }}(u)dwdu.\end{array}\]
We next compute the emerging $dw$-integrals and finally substitute the resulting expression into (4.7) which yields (4.5). □
If we insert (4.5) into (4.4), we obtain a representation for the characteristic function $\phi _{Z}(x)$. Let us proceed with the computation of $\mathbb{E}_{\mathbb{Q}}[Z_{k}]$.
Proposition 4.3.
Suppose that $Z_{k}=\min \{c,R_{k}\}-g/n$ where $k\in \{1,\dots ,n\}$. Then the first moment of $Z_{k}$ under $\mathbb{Q}$ is given by
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=-1-\frac{g}{n}+(1+c){\int _{1+c}^{\infty }}f_{Y_{\tau }}(u)du+{\int _{0}^{1+c}}uf_{Y_{\tau }}(u)du\]
where $f_{Y_{\tau }}(u)$ is the probability density function of $Y_{\tau }$ under $\mathbb{Q}$ given in (4.6).
In accordance with Prop. 2.4 in [4], we have
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=\frac{1}{i}\frac{\partial }{\partial x}\big(\phi _{Z_{k}}(x)\big)\big|_{x=0}.\]
A substitution of (4.5) into (4.9) instantly yields (4.8). □
As mentioned in Section 2, we recall that the cumulative distribution function (cdf) of the Meixner-Lévy process $Y_{t}$, i.e.
\[ F_{Y_{t}}(x):={\int _{-\infty }^{x}}f_{Y_{t}}(u)du\]
does not possess a closed form representation, but it can be computed efficiently with numerical methods. Also note that all integrals appearing in (4.5) and (4.8) are finite, since $f_{Y_{\tau }}(\boldsymbol{\cdot })$ constitutes a probability density function while the Meixner–Lévy process $Y_{\tau }$ possesses moments of all orders (cf. [13]).
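Concretely, the density (4.6) and the cdf $F_{Y_{t}}$ can be evaluated numerically as in the following Python sketch; the parameter values are illustrative assumptions, and the final lines check that the implemented density integrates to approximately one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def meixner_pdf(u, alpha, beta, d, m):
    """Density (4.6) of Y ~ Meixner(alpha, beta, d, m); here d = delta*tau, m = (mu + b)*tau."""
    const = (2.0 * np.cos(beta / 2))**(2 * d) / (2.0 * alpha * np.pi * gamma(2 * d))
    return const * np.exp(beta * (u - m) / alpha) * np.abs(gamma(d + 1j * (u - m) / alpha))**2

def meixner_cdf(x, alpha, beta, d, m):
    """F_Y(x) by numerical integration of (4.6); no closed form is available."""
    val, _ = quad(lambda u: meixner_pdf(u, alpha, beta, d, m), -np.inf, x)
    return val

# assumed illustrative parameters
a_, b_, d_, m_ = 0.3, -1.0, 0.5, 0.01
print(quad(lambda u: meixner_pdf(u, a_, b_, d_, m_), -np.inf, np.inf)[0])  # approximately 1
print(meixner_cdf(0.0, a_, b_, d_, m_))
```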
4.2 Cliquet option pricing with Fourier transform techniques
There is an alternative method to derive expressions for $\mathbb{E}_{\mathbb{Q}}[Z_{k}]$, $\phi _{Z}(x)$ and $C_{0}$ involving Fourier transforms and the Lévy–Khinchin formula. In the following, we present this method which has firstly been proposed in [8] in a cliquet option pricing context.
Proposition 4.4.
Let $Y_{t}\sim \mathcal{M}(\alpha ,\beta ,\delta t,(\mu +b)t)$ be the Meixner–Lévy process considered in (3.5). Suppose that $Z_{k}=\min \{c,R_{k}\}-g/n$ where $k\in \{1,\dots ,n\}$ and let $\vartheta >0$ be a finite real-valued dampening parameter. Then the first moment of $Z_{k}$ under $\mathbb{Q}$ can be represented as
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=c-\frac{g}{n}-\frac{1}{2\pi }\int _{\mathbb{R}}\frac{{(c+1)}^{1+\vartheta +iy}}{(\vartheta +iy)(1+\vartheta +iy)}\phi _{Y_{\tau }}(i\vartheta -y)dy\]
where the characteristic function $\phi _{Y_{\tau }}$ is given by
\[ \phi _{Y_{\tau }}(i\vartheta -y)={e}^{-(\vartheta +iy)(\mu +b)\tau }{\bigg(\frac{\cos (\beta /2)}{\cosh ((i(\alpha \vartheta -\beta )-\alpha y)/2)}\bigg)}^{2\delta \tau }.\]
The proof follows the same lines as the proof of Prop. 3.4 in [8]. From (4.1) and the equality
\[ \min \{c,R_{k}\}=c-{[c-R_{k}]}^{+}\]
we deduce
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=c-g/n-\mathbb{E}_{\mathbb{Q}}\big[{(c-R_{k})}^{+}\big].\]
Taking (3.4) into account, we next obtain
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=c-g/n-\mathbb{E}_{\mathbb{Q}}\big[{\big(c+1-{e}^{Y_{\tau }}\big)}^{+}\big]\]
where $\tau =t_{k}-t_{k-1}$ and Y is the real-valued Meixner–Lévy process given in (3.5). With a finite and real-valued dampening parameter $\vartheta >0$ we define the function
\[ \varphi (u):={e}^{\vartheta u}{\big(c+1-{e}^{u}\big)}^{+}.\]
Since $\varphi \in {\mathcal{L}}^{1}(\mathbb{R})$, its Fourier transform exists and reads as
\[ \hat{\varphi }(y)=\frac{{(c+1)}^{1+\vartheta +iy}}{(\vartheta +iy)(1+\vartheta +iy)}.\]
Using the inverse Fourier transform along with Fubini's theorem, we get
\[ \mathbb{E}_{\mathbb{Q}}\big[{\big(c+1-{e}^{Y_{\tau }}\big)}^{+}\big]=\mathbb{E}_{\mathbb{Q}}\big[{e}^{-\vartheta Y_{\tau }}\varphi (Y_{\tau })\big]=\frac{1}{2\pi }\int _{\mathbb{R}}\hat{\varphi }(y)\mathbb{E}_{\mathbb{Q}}\big[{e}^{-(\vartheta +iy)Y_{\tau }}\big]dy\]
which implies (4.10). The expression for the characteristic function $\phi _{Y_{\tau }}$ given in (4.11) can directly be obtained by virtue of (2.9). □
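A direct numerical implementation of the dampened Fourier representation (4.10)–(4.11) is sketched below in Python. The dampening parameter $\vartheta$, the parameter values and the finite truncation of the integration range (justified informally by the exponential decay of $\phi _{Y_{\tau }}$) are all illustrative assumptions rather than recommended settings.

```python
import numpy as np
from scipy.integrate import quad

def phi_Y(u, alpha, beta, d, m):
    """Characteristic function of Y ~ Meixner(alpha, beta, d, m), cf. (2.9); d = delta*tau, m = (mu + b)*tau."""
    return np.exp(1j * u * m) * (np.cos(beta / 2) / np.cosh((alpha * u - 1j * beta) / 2))**(2 * d)

def EZ_fourier(alpha, beta, d, m, c, g, n, theta=0.75, y_max=200.0):
    """First moment E_Q[Z_k] via the dampened Fourier representation (4.10)-(4.11)."""
    def integrand(y):
        fhat = (c + 1.0)**(1.0 + theta + 1j * y) / ((theta + 1j * y) * (1.0 + theta + 1j * y))
        return (fhat * phi_Y(1j * theta - y, alpha, beta, d, m)).real
    I, _ = quad(integrand, -y_max, y_max, limit=500)
    return c - g / n - I / (2.0 * np.pi)

# assumed illustrative parameters
print(EZ_fourier(alpha=0.3, beta=-1.0, d=0.5, m=0.01, c=0.08, g=0.02, n=12))
```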
Our argumentation in the proof of Proposition 4.4 motivates the following considerations.
Proposition 4.5.
Let $Y_{t}\sim \mathcal{M}(\alpha ,\beta ,\delta t,(\mu +b)t)$ be the Meixner–Lévy process presented in (3.5). Suppose that $Z_{k}=\min \{c,R_{k}\}-g/n$ with $k\in \{1,\dots ,n\}$ and $c\ge 0$. Then the characteristic function of $Z_{k}$ under $\mathbb{Q}$ reads as
\[ \phi _{Z_{k}}(x)={e}^{-ixg/n}\Bigg({e}^{ixc}+{\int _{-\infty }^{\ln (1+c)}}\big[{e}^{ix({e}^{u}-1)}-{e}^{ixc}\big]f_{Y_{\tau }}(u)du\Bigg)\]
where the probability density function $f_{Y_{\tau }}$ of $Y_{\tau }$ under $\mathbb{Q}$ is such as given in (4.6).
Similar computations as in the proof of Prop. 3.5 in [8] yield (4.12). □
There is an alternative method involving (4.9) to derive an expression for $\mathbb{E}_{\mathbb{Q}}[Z_{k}]$ which is presented in the following.
Corollary 4.6.
In the setup of Proposition 4.5, we obtain the representation
\[ \mathbb{E}_{\mathbb{Q}}[Z_{k}]=c-\frac{g}{n}+{\int _{-\infty }^{\ln (1+c)}}\big[{e}^{u}-1-c\big]f_{Y_{\tau }}(u)du.\]
The claimed representation immediately follows from Eq. (3.16) in [8]. □
Inspired by the Fourier transform techniques applied in the proof of Proposition 4.4, we now focus on the derivation of an alternative representation for the cliquet option price $C_{0}$ given in (4.2). The corresponding result reads as follows.
Theorem 4.7 (Fourier transform cliquet option price).
Let $k\in \{1,\dots ,n\}$ and consider the independent and identically distributed random variables $Z_{k}\hspace{0.1667em}=\hspace{0.1667em}\min \{c,R_{k}\}-g/n$ where $c\ge 0$ is the local cap, g is the guaranteed rate at maturity and $R_{k}$ is the return process defined in (3.4). For $n\in \mathbb{N}$ we set $\varrho :=nc-g$ and denote the maturity time by T, the notional by K and the riskless interest rate by r. Let $Y_{t}\sim \mathcal{M}(\alpha ,\beta ,\delta t,(\mu +b)t)$ be the Meixner–Lévy process given in (3.5). Then the price at time zero of a cliquet option paying
\[ H_{T}=K\Bigg(1+g+\max \Bigg\{0,{\sum \limits_{k=1}^{n}}Z_{k}\Bigg\}\Bigg)\]
at maturity can be represented as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle C_{0}& \displaystyle =K{e}^{-rT}\Bigg[1+g+{\int _{{0}^{+}}^{\infty }}\frac{1+iy\varrho -{e}^{iy\varrho }}{2\pi {y}^{2}}\\{} & \displaystyle \hspace{1em}\times {\Bigg(1+{\int _{-\infty }^{\ln (1+c)}}\big[{e}^{iy({e}^{u}-1-c)}-1\big]f_{Y_{\tau }}(u)du\Bigg)}^{n}dy\Bigg]\end{array}\]
where $f_{Y_{\tau }}(u)$ constitutes the probability density function claimed in (4.6).
The proof of Theorem 3.7 in [8] here applies equally, if we replace $f_{X_{\tau }}$ therein by $f_{Y_{\tau }}$. □
5 Conclusion
In this paper, we investigated the pricing of a monthly sum cap style cliquet option with underlying stock price modeled by a geometric pure-jump Meixner–Lévy process. In Section 2, we compiled various facts on the Meixner distribution and the related class of stochastic Meixner–Lévy processes. In Section 3, we introduced a stock price model driven by a Meixner–Lévy process and established a customized structure preserving measure change from the risk-neutral to the physical probability measure. Moreover, we obtained semi-analytic expressions for the cliquet option price by using the probability distribution function of the driving Meixner–Lévy process in Section 4.1 and by an application of Fourier transform techniques in Section 4.2. To read more on cliquet option pricing in a jump-diffusion Lévy model, the reader is referred to the accompanying article [8].
References
Albrecher, H., Ladoucette, S., Schoutens, W.: A generic one-factor Lévy model for pricing synthetic CDOs. In: Advances in Mathematical Finance, pp. 259–277. Birkhäuser, (2007). MR2359372. https://doi.org/10.1007/978-0-8176-4545-8_14
Bernard, C., Boyle, P., Gornall, W.: Locally-capped Contracts and the Retail Investor. Journal of Derivatives 18(4), 72–88 (2011). https://doi.org/10.3905/jod.2011.18.4.072
Bernard, C., Li, W.: Pricing and Hedging of Cliquet Options and Locally-capped Contracts. SIAM Journal on Financial Mathematics 4, 353–371 (2013). MR3038023. https://doi.org/10.1137/100818157
Cont, R., Tankov, P.: Financial Modeling with Jump Processes, 1st edn. Chapman & Hall/CRC, (2004). MR2042661
Di Nunno, G., Øksendal, B., Proske, F.: Malliavin Calculus for Lévy Processes with Applications to Finance, 1st edn. Springer, (2009). MR2460554
Grigelionis, B.: Processes of Meixner Type. Lithuanian Mathematical Journal 39(1), 33–41 (1999). MR1711971
Haifeng, Y., Jianqi, Y., Limin, L.: Pricing Cliquet Options in Jump-diffusion Models. Stochastic Models 21, 875–884 (2005). MR2179304. https://doi.org/10.1080/15326340500294587
Hess, M.: Cliquet Option Pricing in a Jump-Diffusion Lévy Model (2017). SSRN working paper. https://ssrn.com/abstract=2979296
den Iseger, P., Oldenkamp, E.: Cliquet options: Pricing and Greeks in deterministic and stochastic volatility models (2005). SSRN working paper. https://ssrn.com/abstract=1013510
Protter, P.: Stochastic Integration and Differential Equations, 2nd edn. Springer, (2005). MR2273672
Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge studies in advanced mathematics, vol. 68 (1999). MR1739520
Schoutens, W.: The Meixner Process: Theory and Applications in Finance, Eurandom Report 2001-004, Eindhoven, pp. 1–24 (2002)
Schoutens, W.: Lévy Processes in Finance: Pricing Financial Derivatives. John Wiley & Sons, Ltd., (2003)
Schoutens, W., Teugels, J.: Lévy Processes, Polynomials and Martingales. Stochastic Models 14(1–2), 335–349 (1998). MR1617536
Wilmott, P.: Cliquet Options and Volatility Models, Wilmott magazine, December (2002)
© 2018 The Author(s). Published by VTeX
Open access article under the CC BY license.
Keywords: Cliquet option pricing, path-dependent exotic option, equity indexed annuity, log-return of financial asset, Meixner distribution, Meixner–Lévy process, stochastic differential equation, probability measure change, characteristic function, Fourier transform
MSC subject classifications: 60G51 (primary), 60H10 (primary), 60H30 (primary), 91B30 (secondary), 91B70 (secondary)
JEL classification: G22, D52
Design Science
Patent stimuli search and its influence on ideation outcomes
Part of: Network-based modeling and analysis in design
Published online by Cambridge University Press: 07 December 2017
Binyang Song, V. Srinivasan and Jianxi Luo
Binyang Song: Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
V. Srinivasan: Instrument Design and Development Centre, Indian Institute of Technology, Delhi, India
Jianxi Luo: Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore; SUTD-MIT International Design Centre, Singapore University of Technology and Design, Singapore
Corresponding author: [email protected]
Prior studies on design ideation have demonstrated the efficacy of using patents as stimuli for concept generation. However, the following questions remain: (a) From which part of the large patent database can designers identify stimuli? (b) What are their implications on ideation outcomes? This research aims to answer these questions through a design experiment of searching and identifying patent stimuli to generate new concepts of spherical rolling robots. We position the identified patent stimuli in the home, near and far fields defined in the network of patent technology classes, according to the network's community structure and the knowledge proximity of the stimuli to the spherical rolling robot design. Significant findings are: designers are most likely to find patent stimuli in the home field, whereas most patent stimuli are identified in the near field; near-field patents stimulate the most concepts, which exhibit a higher average novelty; combined home- and far-field stimuli are most beneficial for high concept quality. These findings offer insights on designers' preferences in search for patent stimuli and the influence of stimulation distance on ideation outcomes. The findings will also help guide the development of a computational tool for the search of patents for design inspiration.
Keywords: design ideation, concept generation, novelty, patent, network analysis
Design Science, Volume 3, 2017, e25
DOI: https://doi.org/10.1017/dsj.2017.27
Distributed as Open Access under a CC-BY 4.0 license (http://creativecommons.org/licenses/by/4.0/)
Copyright © The Author(s) 2017
Design creativity is the ability of an agent to address a design opportunity by developing outcomes that are both novel and useful (Sarkar & Chakrabarti Reference Sarkar and Chakrabarti2011). Concept generation is an early phase in the design process where solution principles are conceived to address design opportunities (Jensen et al. Reference Jensen, Weaver, Wood, Linsey and Wood2009; Taura & Yukari Reference Taura and Yukari2012). Concept generation is a significant phase of the design process because a successful product is likely to be an outcome of an exploration of a variety of solution principles (Pahl & Beitz Reference Pahl, Beitz and Wallace2013). Owing to the ease of making changes that are less expensive in this phase, the scope for design creativity is greater in this early phase than the downstream phases (French Reference French1985). Several guidelines, methods and tools have been proposed to foster creativity during the concept generation phase.
Providing stimuli to designers in order to identify analogies from them for generating concepts is one of the most potent and useful methods (Chakrabarti et al. Reference Chakrabarti, Sarkar, Leelavathamma and Nataraju2005; Chan et al. Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011). A stimulus is beneficial for concept generation by helping develop creative solutions, enhance novelty, inhibit fixation, etc. (Qian & Gero Reference Qian and Gero1996; Goel Reference Goel1997; Linsey et al. Reference Linsey, Tseng, Fu, Cagan, Wood and Schunn2010; Chan et al. Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011). Simultaneously, certain stimuli can also inhibit concept generation by causing bias and fixation (Jansson & Smith Reference Jansson and Smith1991). Therefore, stimuli need to be carefully chosen before using them. Several kinds of aids to foster the use of stimuli and analogies have been proposed and found to be effective at improving quantity, novelty and creativity of solutions (Chakrabarti et al. Reference Chakrabarti, Sarkar, Leelavathamma and Nataraju2005; Linsey et al. Reference Linsey, Tseng, Fu, Cagan, Wood and Schunn2010). Prior studies have found that it is easier to analogize with stimuli from near than far domains to the target domain, because stimuli from near domains have more structural similarities to the target design problem than stimuli from far domains (Christensen & Schunn Reference Christensen and Schunn2007). However, stimuli from far domains, owing to their surface dissimilarities, are the best sources for novelty and creative breakthroughs (Gentner & Markman Reference Gentner and Markman1997; Ward Reference Ward1998). Several researchers investigated the effects of stimulation from near and far analogical distances on the outcomes of ideation, for instance Wilson et al. (Reference Wilson, Nelson, Rosen and Yen2010), Fu et al. (Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013b ) and Chan & Schunn (Reference Chan and Schunn2015). However, to date, the characterization of near and far stimuli have been inconsistent in existing studies, and so, their findings cannot be generalized across these studies.
Meanwhile, patents have been increasingly explored as sources of stimuli for engineering design (Fantoni et al. Reference Fantoni, Apreda, Orletta and Monge2013; Fu et al. Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013b , Reference Fu, Murphy, Yang, Otto, Jensen and Wood2015; Murphy et al. Reference Murphy, Fu, Otto, Yang, Jensen and Wood2014; Srinivasan et al. Reference Srinivasan, Song, Luo, Subburaj and Rajesh2017a ,Reference Srinivasan, Song, Luo, Subburaj and Rajesh b ). Patents contain technical descriptions of products and processes, which are both novel and functional, from various domains. The patent database is an enormous reservoir of design precedents. The growing patent data, as inventors continually file patent applications over time, present both opportunities and challenges for using them as design stimuli. Therefore, efficient methods and tools are required for designers to retrieve the most relevant from among millions of patents in the vast patent databases. While it is acknowledged that patents are useful for inspiration, questions persist pertaining to: from where in the complex database can designers find useful patent stimuli, and which of these patent stimuli can most effectively inspire designers to generate novel and valuable concepts. Moreover, due to a lack of uniform characterization of near and far stimuli, there is no single method to characterize stimuli as near or far, and consequently, not much work has been done to identify from where in the patent database can near and far patent stimuli be identified.
As a solution, network analysis techniques have been increasingly exploited to uncover the knowledge structure in the patent database to facilitate engineering design. For example, Fu et al. (Reference Fu, Cagan, Kotovsky and Wood2013a ) analyzed the similarities of occurrences of functional verbs between patents to construct the Bayesian networks of patents. Such a network provides information of functional similarity between individual patents, which in turn has potential for a patent recommendation system for design stimulation (Fu et al. Reference Fu, Murphy, Yang, Otto, Jensen and Wood2015). At a higher level, the patent classification and citation information have also been analyzed to measure knowledge proximity or distance between different classes of patents and construct technology network maps to approximate the total technology space (Kay et al. Reference Kay, Newman, Youtie, Porter and Rafols2014; Leydesdorff, Kushnir & Rafols Reference Leydesdorff, Kushnir and Rafols2014; Alstott et al. Reference Alstott, Triulzi, Yan and Luo2017a ; Yan & Luo Reference Yan and Luo2017). In these network maps, nodes are technology classes that represent various technology categories and contain patents related to corresponding technology categories. These nodes are connected according to the knowledge proximity between them. A structural analysis of the networks can allow one to define and identify the technology classes near or far from a given design problem in the technology space. In this study, we will utilize such a technology space network to locate the patents that designers found useful in an ideation exercise, according to the proximity between technology classes in the network. Herein, we consider a patent as useful if it is used as a stimulus for concrete concepts generated by designers.
The broad objectives of this research are: (a) to identify locations within the network of technology classes from where designers identify useful patent stimuli and (b) to study the implications of using such patent stimuli for ideation on the outcomes of ideation. Toward these broad objectives, the research in this paper examines the effects of using patents – sourced from technology classes which are located at the home field, near field and far field to a design problem – as stimuli for ideation on the outcomes, based on the data from an open concept generation exercise. The three fields in the technology space are defined based on community detection within the network of technology classes. The home field entails the technology classes that are directly relevant to the design problem, the near field comprises the technology classes that are in the same cohesive network communities as those in the home field, and the far field includes the technology classes in all the other communities in the technology network.
In the following sections, we review prior literature relevant to the theories and methods grounding our research (Section 2), introduce our data and research method (Section 3), present and discuss our findings (Sections 4 and 5).
This study is theoretically motivated and grounded by the literature on design by analogy. Within the field of Design Science, the area of analogical design has been extensively researched. However, to fit the scope of this research, only those prior studies that use patents for stimulation in ideation or analyze the effect of analogical distance on the performance of ideation are reviewed here.
2.1 Patent stimuli and design by analogy
Many researchers have studied the use of patents as stimuli for design and developed tools for the search and analysis of patents. For example, several tools have been developed to search for patents to facilitate the use of TRIZ principles (Altshuller & Shapiro Reference Altshuller and Shapiro1956) in solving design problems (Cascini & Russo Reference Cascini and Russo2006; Souili et al. Reference Souili, Cavallucci, Rousselot and Zanni2015). Mukherjea, Bamba & Kankar (Reference Mukherjea, Bamba and Kankar2005) developed the Biomedical Patent Semantic Web for retrieving patents based on the semantic associations between biological terms within the abstracts of biomedical patents. Particularly, a recent strand of research has focused on analyzing and using patents to aid in design by analogy.
Fu et al. (Reference Fu, Cagan, Kotovsky and Wood2013a ) developed a computational tool for automatically identifying patent stimuli at different analogical distances. They extracted verb and noun content from the technical descriptions of patents, used semantic analysis to quantify the functional and surface similarities between patents, and created function- and surface-based Bayesian networks of patents, respectively. In the networks, a design problem can be located as the starting point, and the 'analogical distance' between the problem and patents is defined as the length of path between them. Murphy et al. (Reference Murphy, Fu, Otto, Yang, Jensen and Wood2014) proposed a functional vector approach to systematically search and identify functional analogies from the patent database. The following steps constitute the methodology: (a) process patents to identify a vocabulary of functions, (b) define a set of functions in patents comprising primary, secondary and correspondent functions, (c) index patents using the functional set to create a vector representation of the patent database, (d) develop methods for generating query and estimate relevance of patents to a query, and (e) retrieve and display patents relevant to the query. Fu et al. (Reference Fu, Murphy, Yang, Otto, Jensen and Wood2015) empirically tested the functional vector approach of Murphy et al. (Reference Murphy, Fu, Otto, Yang, Jensen and Wood2014), to aid in the search for functional analogies from patent databases to stimulate design concepts, and found the experimental group generated solutions of higher novelty than the control group. Srinivasan et al. (Reference Srinivasan, Song, Luo, Subburaj and Rajesh2017a ) tested the efficacy of using patents as design stimuli through a concept generation experiment, and found that the average quality and novelty of the concepts generated with patent stimuli individually or in combination with other resources is higher than those generated without any stimuli.
2.2 Stimulation distance
Design by analogy leverages existing solutions from source fields to solve design problems in target fields (Gick & Holyoak Reference Gick and Holyoak1980; Weisberg Reference Weisberg2006; Linsey Reference Linsey2007). The distance between the source and target fields is referred to as the stimulation or analogical distance. The Conceptual Leap hypothesis states that stimuli from far sources, owing to their surface dissimilarities, provide the best stimulation for creative breakthroughs (Gentner & Markman Reference Gentner and Markman1997; Ward Reference Ward1998). Some anecdotal evidence exists in support of this hypothesis. However, empirical findings related to the validation of this hypothesis have not been consistent.
Chan et al. (Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011) observed that far-field analogies help develop concepts of higher novelty, higher variability in quality and greater solution transfer but stimulate fewer concepts than near-field analogies. Chan & Schunn (Reference Chan and Schunn2015) reasoned that the most creative solutions are more likely to be developed from near distance than far distance stimuli, owing to better perception and connection to the problem at hand. Srinivasan et al. (Reference Srinivasan, Song, Luo, Subburaj and Rajesh2017b ) observed that as analogical distance of patent stimuli from the design problem increases, novelty of concepts generated using these stimuli increases but quality of concepts decreases. However, Wilson et al. (Reference Wilson, Nelson, Rosen and Yen2010) observed no distinctions between stimuli from far sources and near sources. Fu et al. (Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013b ) found that stimuli from near sources or 'middle ground' help generate solutions of higher 'maximum novelty' than far sources; no significant differences were seen in 'average novelty' between near and far sources. Fu et al. also observed that both the 'mean quality' and the 'maximum quality' of solutions generated using stimuli from near sources are higher than those generated using stimuli from far sources. Consequently, they argued stimuli from 'middle ground' to be more beneficial for developing creative solutions. With these findings, Fu et al. (Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013b ) posited that comparisons of effects of analogical distance across different studies are hard owing to different metrics being used to measure distance in these studies. They also argued about the terms 'near' and 'far' as being relative and not being able to completely characterize these across different studies due to lack of a common metric to measure distance.
2.3 Network of technologies by distance or proximity
These prior studies have implied the potential value for designers to make use of the knowledge of the relative distance or proximity between technologies in the search for design stimuli from either near or far sources. For example, to use patents as design stimuli, the Bayesian network of patents of Fu et al. (Reference Fu, Cagan, Kotovsky and Wood2013a ) quantifies and visualizes the analogical distance between patents and a design problem, and thus designers can potentially use the network to identify patent stimuli from near or far distance from the design problem. However, the network of patents is only applicable for a small set of patents, whereas the total patent database contains millions of patents that may provide varied inspirations from different distances to a design problem.
According to the patent classification systems, such as the International Patent Classification (IPC) system, each patent is classified in one or multiple technology classes, which are categories of patents and represent different technology fields. This presents a structure for locating patents in the enormous database. A few recent studies have proposed methods to measure the knowledge proximity between the patent technology classes and used such proximity information to construct the network map of technology classes (Kay et al. Reference Kay, Newman, Youtie, Porter and Rafols2014; Leydesdorff et al. Reference Leydesdorff, Kushnir and Rafols2014; Yan & Luo Reference Yan and Luo2017). The network of all technology classes in the patent database can be used to approximate the total technology space (Alstott et al. Reference Alstott, Triulzi, Yan and Luo2017a ). Such a network of technology classes, given the proximity information, may serve as a framework to define the near or far field of design stimulation. In turn, such a network map will allow the designers to be better informed of the proximity (or distance) between the source field of potential patent stimuli and the target field where a design problem or opportunity is located, or be better oriented to identify patents specifically from either the near or far field from the design problem.
In particular, the key requirement to create such a network is the measure of knowledge proximity between the patent technology classes, i.e., link weight in the network. In the literature, a variety of measures of knowledge proximity have been reported. One group of measures is computed using the data of patent references. For example, the Jaccard index can be adopted to calculate the number of shared references of a pair of classes normalized by the total number of all unique references of patents in either class (Jaccard Reference Jaccard1901; Small Reference Small1973) as an indicator of knowledge proximity. Alternatively, the cosine similarity index can be calculated between two vectors indicating patent references made from the patents in a pair of classes to all classes respectively (Jaffe Reference Jaffe1986; Kay et al. Reference Kay, Newman, Youtie, Porter and Rafols2014; Leydesdorff et al. Reference Leydesdorff, Kushnir and Rafols2014), i.e., class-to-class reference vectors. For a higher granularity, Yan & Luo (Reference Yan and Luo2017) extended the cosine similarity measure to class-to-patent vectors, concerning references to specific patents instead of aggregated classes. Another group of measures uses the 'co-classification' information, i.e., how often two classes are co-assigned to individual patents, to compute knowledge proximity. For instance, the cosine similarity index can be calculated between two vectors of the occurrences of a pair of classes with all other classes in patents (Breschi, Lissoni & Malerba Reference Breschi, Lissoni and Malerba2003; Ejermo Reference Ejermo2005; Kogler, Rigby & Tucker Reference Kogler, Rigby and Tucker2013). The normalized co-classification index measures the deviation of the actual observed co-occurrences of class pairs in patents from random expectations (Teece et al. Reference Teece, Rumelt, Dosi and Winter1994; Dibiaggio, Nasiriyar & Nesta Reference Dibiaggio, Nasiriyar and Nesta2014). Yan & Luo (Reference Yan and Luo2017) have reviewed and compared various knowledge proximity measures used in patent mapping. Note that this strand of research on measuring the knowledge proximity between different patent technology classes was not previously engaged in the engineering design literature.
In brief, patents have been used as stimuli to foster ideation; however, while there exists evidence that the use of such stimuli is beneficial, the observations on the effect of analogical distance on for example, the attributes of design outcomes have not been consistent. Moreover, several metrics have been used to measure the proximity between stimuli and design problems and distinguish near- and far-field stimuli (Fu et al. Reference Fu, Cagan, Kotovsky and Wood2013a ). However, most of the prior studies are based on the textual analysis of small sets of patents selected from the patent database. No efforts, to our knowledge, have been pursued at identifying near and far fields to a design problem in the total technology space, and at searching for patent stimuli in the total patent database. The network of all technology classes may serve as a macro and consistent framework to define home, near and far fields to a target design problem or more open-ended design interest.
In the present study, we make use of the patent technology class network to classify the patents in the total patent database into home, near and far fields to a design problem. On this basis, we seek to answer the following questions:
(1) Where are the sources of useful patent stimuli in the technology class network: home, near or far fields?
(2) What are the implications of using patent stimuli from these different fields on the outcomes of ideation?
3 Method and data
This study analyzes the data, including the patent stimuli and generated concepts, from an ideation exercise. In this section, we will introduce the exercise and the methods used to analyze the patent stimuli and concepts.
3.1 Ideation exercise and data
Data from an ideation exercise of 30.007 Engineering Design and Project Engineering, a course offered at the Engineering Product Development (EPD) Pillar (https://epd.sutd.edu.sg/) of Singapore University of Technology & Design (SUTD) (https://www.sutd.edu.sg/) is used for this research. This course is mandatory for the second-year undergraduate students in the EPD Pillar and provides a holistic understanding and competency in engineering design. All the students participating in this ideation exercise had undertaken several design courses and structured design projects prior to this course. The ideation exercise was an early part of a design project, which ran throughout the course. The objective in this project was to conceive, design and develop an innovative spherical rolling robot (SRR) concept of self-defined system requirements, and fabricate a functional prototype. This objective was deliberately kept open to provide students the flexibility and room for creativity and innovation.
Before ideation, all the student designers were provided with Sphero $^{\text{TM}}$ , a SRR toy manufactured by the company Sphero Inc. (http://www.sphero.com/sphero), to play, analyze and understand the structure and functioning of a SRR. Sphero is propelled by a self-contained cart and installed with an on-board micro-controller unit. Users may manipulate its motion remotely via a smartphone or tablet. Sphero represents a generic design of SRRs and is also a successful commercial product in the market. The designers were also offered access to 15 prototypes of SRRs developed earlier at SUTD. The purpose of such sharing before ideation is to allow the students to rapidly learn and build up the basic design knowledge of SRRs.
The research team prepared two sets of patents for student designers to read and get inspired. The most cited US patent from each of the 121 3-digit technology classes defined in the IPC system was provided. The number of forward citations received by a patent is highly correlated to its realized value or importance (Trajtenberg Reference Trajtenberg1990; Hall, Jaffe & Trajtenberg Reference Hall, Jaffe and Trajtenberg2000). These 121 patents constituted the first set (Most Cited set). In addition, a randomly identified patent from each of the 121 3-digit IPC technology classes was also provided. These 121 random patents were identified using a random number generator and constituted the second set (Random set). The participants were provided with the title, abstract and images of the patents. If the participants found these contents relevant and inspirational for their problem, they could read the technical descriptions of the patents. Note that it was not mandatory for participants to use the provided patents as stimuli. In addition to the 242 given patents, all the participants were allowed to search and use other patents and resources (such as internet and books) for inspiration. The two sets of patents provide a basic coverage of patents from all the 121 technology classes in the total technology space, and complement the intuitive unguided search of the participants by bringing all the technology classes to the attention of the searchers.
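The construction of the two stimulus sets is straightforward to reproduce given a table of US patents with their 3-digit IPC class and forward-citation counts. The pandas sketch below is purely illustrative; the column names and the toy data are assumptions and do not reflect the study's actual data pipeline.

```python
import pandas as pd

# assumed schema: one row per US patent with its 3-digit IPC class and forward-citation count
patents = pd.DataFrame({
    "patent_id": ["US1", "US2", "US3", "US4"],
    "ipc3":      ["B25", "B25", "G06", "G06"],
    "citations": [120, 45, 300, 12],
})

most_cited_set = patents.loc[patents.groupby("ipc3")["citations"].idxmax()]   # one most cited patent per class
random_set = patents.groupby("ipc3").sample(n=1, random_state=42)             # one random patent per class
```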
The participants were instructed to generate functional and novel concepts, but no limit was fixed on the number of concepts they must generate. The participants were given a week to generate concepts and asked to sketch or render concepts with annotations and briefly explain how they work. At the end of the exercise, they needed to submit a report for each concept generated. Figure 1 shows an example of the submitted reports. Specifically, the participant must report which patents were used as stimuli and their justification, other resources accessed, and how stimuli were transformed into the new SRR concept (see Figure 1a), in addition to a textual description and a sketch of the generated SRR concept (Figure 1b). In the end of the concept generation exercise, 138 SRR concepts were generated using 231 patent stimuli. Among these patent stimuli, 39 patents were from the Most Cited set, 33 from the Random set, and the rest were searched and identified by the student designers on their own.
Figure 1. A concept generation report from a student designer. (a) Reported information in the concept generation report. (b) Concept sketch with annotations in the concept generation report.
In addition, a consent form seeking the approval of participation was also collected from all the participants. A pre- and a post-ideation survey were conducted to collect information relating to age, gender, academic background, nationality, and other demographic data of the participants, to understand their experience of using patents a priori and posteriori to this exercise and the effects of their use.
3.2 Evaluation of ideation outcome
From the concept generation reports from individual participants, the stimuli used to generate each concept were identified, and novelty and quality of generated concepts were assessed based on their sketches, renderings and annotations. In the literature, researchers have proposed various metrics to assess the performance of ideation, in terms of the attributes of ideation outcomes, such as quantity, quality, novelty, variety, fluency, usefulness, feasibility, and similarity (Mcadams & Wood Reference Mcadams and Wood2002; Shah et al. Reference Shah, Li, Gessmann and Schubert2003; Sarkar & Chakrabarti Reference Sarkar and Chakrabarti2011). In this research, novelty and quality were used as metrics to assess performance of ideation.
Novelty of a design outcome is a measure of unusualness or unexpectedness of the outcome in comparison to other outcomes that perform the same overall function. An expert in robotics and SRRs rated the novelty of the concepts on a 4-point scale (0–3), corresponding to no, low, medium and high novelty. This expert has extensive knowledge of prior arts in SRRs, based on which novelty of the generated concepts was evaluated. For example, the concept shown in Figure 1(b) can climb stairs by extending its arms, which had been seldom seen in prior designs. Therefore, this concept obtained a novelty score of 3.
Quality of an outcome is the degree of the fulfillment of requirements for which the outcome is developed. In the assessment of quality, three abstraction levels, namely functional, working principle, and structural levels, were considered. Quality of a concept was assessed using the formula:
(1) $$\begin{eqnarray}\displaystyle Q=0.5\times f+0.3\times w+0.2\times s & & \displaystyle\end{eqnarray}$$
where $Q$ is the overall quality of a concept, $f$ is a measure of the degree of fulfillment of the identified requirements by the functions in the concept, $w$ is the degree of fulfillment of the identified functions by the working principles in the concept, and $s$ is the degree of fulfillment of the working principles by the components and their relations in the concept. A weighting scale of 0.5, 0.3 and 0.2 was used corresponding to the function, working principle, and structural levels, respectively, because higher abstraction levels are the basis for building the lower abstraction levels. $f$ , $w$ and $s$ were rated by one of the authors using a 3-point scale (0–2), corresponding to no, partial and complete fulfillment. Therefore, the overall quality of a concept also varied between 0 and 2. For example, the sketch of the design for climbing stairs (see Figure 1b) describes a full set of functions required to fulfill the stated objectives (rolling on ground and climbing stairs) including rotate two hemispheres for propelling and steering, increase grip, monitor environment with camera, and extend arms for lifting the robot. So, it received 2 points for fulfilling requirements. This concept lacks mechanism details of how to propel the robot and extend the arms, and so, it received 1 point for partially fulfilling the functions identified earlier. Due to the absence of working principles, the design also lacks information of structural features required to fulfill the missing working principles, such as the transmission system for propelling, and so, it received 1 point for the fulfillment of working principles. When these individual weightings were substituted in (1), an overall quality score of 1.5 was obtained for the concept.
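The scoring rule in (1) is trivial to encode; the following minimal Python sketch reproduces the worked example above, where $f=2$, $w=1$ and $s=1$ yield $Q=1.5$.

```python
def quality(f, w, s):
    """Overall concept quality per Eq. (1); f, w, s are rated on a 0-2 scale."""
    return 0.5 * f + 0.3 * w + 0.2 * s

print(quality(f=2, w=1, s=1))  # 1.5, as in the stair-climbing concept above
```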
An inter-rater reliability test was conducted using three raters for 20 concepts. After two iterative rounds of analyzing, settling, reconciling differences and reaching Cohen's Kappa ratio of 0.86, the quality of the remaining concepts was rated based on the learning gained from the earlier iterations.
3.3 Locating patent stimuli in home, near and far fields within the patent technology network
To analyze the influence of stimulation distance of patent stimuli on ideation outcomes, we located the patents used as stimuli in the ideation exercise within the network of all technology classes. In the network, the stimulation distance of a patent to a design problem can be measured according to the knowledge proximity between the technology classes containing the patent and the technology classes that correspond to the designers' knowledge related to the design problem, i.e., the home field. To align with the theoretical lens of near and far analogies in the literature, we further located patents in home, near and far fields, which are groups of technology classes based on the latent community structure of the technology network.
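One hedged way to operationalize this labeling is sketched below in Python using networkx: given a weighted graph of technology classes and a set of home classes, every class is assigned to the home, near or far field based on detected communities. The graph construction, the class identifiers and the choice of community detection algorithm (greedy modularity maximization) are illustrative assumptions, not necessarily the exact procedure used in the study.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def label_fields(G, home_classes):
    """Partition technology classes into home / near / far fields.
    G: weighted graph of 3-digit IPC classes; home_classes: classes directly relevant to the design problem."""
    communities = greedy_modularity_communities(G, weight="weight")
    home = set(home_classes)
    near = set()
    for com in communities:
        if home & set(com):                 # community containing at least one home class
            near |= set(com) - home
    far = set(G.nodes()) - home - near
    return home, near, far
```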
3.3.1 Construct a technology network
First, we used the entire USPTO database from 1976 to 2016 to empirically create a patent technology network that approximates the total space of all known technologies to date (Alstott et al. 2017a; Yan & Luo 2017). In the network, the nodes are operationalized by 121 3-digit IPC classes, such as F02, which represents a class of patents for combustion engines, and G06, which represents computing. Each node representing a technology class can be viewed as a category of patents. The nodes are connected to each other according to the knowledge proximity between them, as shown in Figure 2.
Figure 2. Home, near and far fields of the SRR design in the technology class network. The size of each node is proportional to the number of patents in each technology class, and the thickness of a link is proportional to the knowledge proximity between the corresponding pair of technology classes. Nodes in the home field are highlighted in green, near field in orange, and far field in blue.
If two technology classes have low knowledge proximity, i.e., design processes in the two technology categories require relatively distinct design knowledge, designers specializing in one technology category may find it difficult to understand or design using knowledge and technologies from the other (Luo 2015). On the contrary, if the design processes in two technology categories require similar knowledge pieces, designers in one category can easily understand and leverage design knowledge from the other. Prior patent data analysis has also statistically shown that inventors are more likely to succeed in filing patents in proximate categories in the technology space (Alstott et al. 2017a,b). Therefore, the information on knowledge proximity among technology classes enables one to locate patents with different distances to a design problem in the technology space.
We utilized a reference-based cosine similarity index to calculate the knowledge proximity. Specifically, the distribution of references from the patents in a technology class to unique patents is represented as a vector that characterizes the design knowledge base of the technology class. The references of a patented technology are a proxy for the design knowledge used in the design of the technology. The knowledge proximity between a pair of technology classes is then calculated as the cosine of the angle between their corresponding vectors (Yan & Luo 2017), as follows:
(2) $$\text{Proximity} = \cos(i,j) = \frac{\sum_{k} C_{ik} C_{jk}}{\sqrt{\sum_{k} C_{ik}^{2}}\,\sqrt{\sum_{k} C_{jk}^{2}}}$$
where $C_{ik}$ or $C_{jk}$ denotes the number of citations made from patents in technology class $i$ or $j$ to the specific patent $k$; $k$ ranges over all the patents cited by patents in either technology class $i$ or $j$. The cosine similarity index takes values in the range $[0,1]$ and indicates the proximity of the knowledge pieces required in designing technologies in the two classes. In this study, the references of more than 6 million utility patents in the USPTO database were analyzed to calculate the cosine between each pair of the 121 IPC classes for the best possible empirical approximation of the knowledge proximity between them.
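As an illustration of Eq. (2), the sketch below computes the proximity between two classes from class-to-patent citation-count vectors. The class labels and citation counts are toy values, not taken from the USPTO data.

```python
# Sketch of the reference-based cosine proximity in Eq. (2).
# `citations[c]` maps a cited patent id k to the number of citations made
# from patents in technology class c to patent k. The class labels and the
# toy counts below are illustrative only.
import math
from collections import Counter

citations = {
    "G05": Counter({"US001": 4, "US002": 1, "US003": 2}),
    "B62": Counter({"US001": 3, "US003": 1, "US004": 5}),
}

def proximity(ci: Counter, cj: Counter) -> float:
    """Cosine of the angle between two class-to-patent citation vectors."""
    shared = set(ci) & set(cj)
    dot = sum(ci[k] * cj[k] for k in shared)
    norm_i = math.sqrt(sum(v * v for v in ci.values()))
    norm_j = math.sqrt(sum(v * v for v in cj.values()))
    return dot / (norm_i * norm_j) if norm_i and norm_j else 0.0

print(proximity(citations["G05"], citations["B62"]))
```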
This knowledge proximity measure is theoretically motivated by the design-by-analogy literature, which has primarily focused on 'similarity', e.g., functional, structural and surface similarity, to define and measure analogical stimulation distance (Gentner & Markman 1997; Ward 1998; Christensen & Schunn 2007; Fu et al. 2013a; Murphy et al. 2014; Fu et al. 2015). In contrast to these prior studies addressing the similarity between specific designs or individual patent documents, our measure is formulated for the similarity between technology classes, i.e., categories of patents. At this level, a few studies (Kay et al. 2014; Leydesdorff et al. 2014) have used the cosine similarity of the vectors of patent references made from a pair of classes to other classes (i.e., class-to-class reference vectors). Our measure extends to class-to-patent vectors, concerning references to specific patents instead of aggregated classes, for higher granularity. In addition, according to a recent study that compared 12 alternative knowledge proximity measures, our measure appears as one of the most correlated with and representative of the alternative knowledge proximity measures in the literature (Yan & Luo 2017).
3.3.2 Detect communities in the technology network
In the technology network, some groups of nodes are more cohesively connected internally and have a higher density of links within than between them. Such dense groups of nodes are often called communities or clusters. In the network analysis and graph theory literature, various community detection algorithms have been developed to discover and analyze the latent community structures of networks (Clauset, Newman & Moore 2004; Newman 2006; Blondel et al. 2008; Chen et al. 2010; Browet, Absil & Van Dooren 2013; Wu et al. 2015). In terms of the technology network in Figure 2, communities are cohesive groups of technologies (i.e., patent classes) with high knowledge proximity between them. Technologies in the same community share more common knowledge than technologies in different communities, and thus analogies are more likely to be drawn across technology classes within the same community.
In this paper, we employed the hierarchical agglomeration algorithm proposed by Clauset et al. (2004) to detect the technology network's latent community structure. This algorithm was chosen because it is more efficient and faster than competing algorithms and returns a uniquely determined community partition rather than heuristic results. We assessed the community groupings of technology classes resulting from the algorithm and deemed them reasonable based on our engineering knowledge. Consequently, the 121 technology classes of the technology network were clustered into 5 communities. We also compared the community detection result with that from Louvain's greedy optimization method (Blondel et al. 2008) and found that the results are consistent.
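A minimal sketch of this step, assuming the weighted class network has already been built, is shown below using networkx's implementation of the Clauset–Newman–Moore greedy modularity algorithm; the few toy edges and proximity weights are illustrative, not the actual 121-class network.

```python
# Sketch of the community detection step on the technology-class network,
# using networkx's Clauset-Newman-Moore greedy modularity implementation.
# The toy graph below is illustrative; in the paper the network has
# 121 IPC classes with edges weighted by knowledge proximity.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("G05", "B62", 0.42),   # hypothetical proximity values
    ("G05", "G06", 0.55),
    ("B62", "B60", 0.61),
    ("A63", "G06", 0.08),
])

communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```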
3.3.3 Locate 'home field'
To locate the 'home field' of SRRs in the total technology space, we first need to retrieve a set of US patents that comprehensively represents the participant designers' knowledge base related to SRRs. As introduced in Section 3.1, prior to the ideation exercise, Sphero provided the students with the basic knowledge and understanding of SRRs, so we utilized the patents related to Sphero to define the students' SRR-related knowledge base. On this basis, we searched for the patents of Sphero Inc. and obtained 16 patents as of 31 August 2016.
Then we used the classification information of the retrieved patents to identify the home field in the technology network. To do this, the following two steps were carried out: (1) identify the technology classes that contain the retrieved patents and sort them in descending order of the number of retrieved patents they contain; (2) successively identify the minimum set of classes required to cover all the retrieved patents. Such a procedure is unambiguous and reproducible. Specifically, the 16 patents are classified in 8 technology classes. Among them, 6 technology classes – 'G05 Controlling & Regulating', 'A63 Sports & Amusements', 'B62 Land Vehicles', 'G06 Computing', 'B60 Vehicles in General' and 'B63 Ships' – constitute the smallest set of top classes that covers all the 16 patents.
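The following sketch illustrates this two-step procedure under the assumption that each retrieved patent's class assignments are available as a mapping; the patent IDs and class assignments shown are hypothetical, not the actual Sphero patent classifications.

```python
# Sketch of the two-step home-field procedure: rank classes by how many of
# the retrieved patents they contain, then take classes in that order until
# every retrieved patent is covered. The patent-to-class mapping below is
# hypothetical.
def home_field(patent_classes: dict[str, set[str]]) -> list[str]:
    """patent_classes maps a patent id to the set of classes it belongs to."""
    class_to_patents: dict[str, set[str]] = {}
    for patent, classes in patent_classes.items():
        for c in classes:
            class_to_patents.setdefault(c, set()).add(patent)

    uncovered = set(patent_classes)
    ranked = sorted(class_to_patents, key=lambda c: -len(class_to_patents[c]))
    home = []
    for c in ranked:                      # step 2: successive coverage
        if uncovered & class_to_patents[c]:
            home.append(c)
            uncovered -= class_to_patents[c]
        if not uncovered:
            break
    return home

patents = {"P1": {"G05", "A63"}, "P2": {"G05"}, "P3": {"B62", "G06"}}
print(home_field(patents))  # e.g. ['G05', 'B62'] for this toy input
```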
In addition, the technologies used in the 15 exemplar SRRs presented to the students are also well covered by these 6 technology classes. Therefore, we considered these 6 technology classes to be the 'home field' of the SRR design in the total technology space; they are located at the center and highlighted in green in Figure 2. We also tested the robustness of this choice of technology classes to represent the home field. First, we found that each of the top 3 technology classes contains more than half of the total set of 16 patents and has a much greater coverage than the other technology classes. We tested defining the home field using only these top 3 classes; the statistical results presented in Section 4 vary only slightly and do not affect the general conclusions. In addition, we compared the patent set used in this paper with the patent set that resulted from an exhaustive patent search for SRRs and contained 153 SRR patents (Song & Luo 2017). The results show that only 1 of the top 6 technology classes differs between the two patent sets. These results demonstrate the robustness of the definition of the home field.
3.3.4 Identify 'near field' and 'far field'
Based on the network partition results from the algorithm of Clauset et al. (2004), the home-field classes G05, A63, B62, G06, B60 and B63 belong to 2 network communities. The technology classes other than these 6 in the same 2 communities were designated as the 'near field' of the SRR design; they are located at the inner ring and highlighted in orange in Figure 2. The near field surrounds the home field. The technology classes outside these 2 network communities were considered the 'far field' of the SRR design; they are located at the outer layer and highlighted in light blue in Figure 2. Thus, the technology class network, which represents the total technology space, was divided into three mega fields: home, near and far fields. On this basis, the patent stimuli used were assigned to one or more of the three fields in the technology space, according to their classification information. Note that a patent may belong to multiple fields if it is assigned to multiple technology classes.
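Putting the pieces together, the sketch below labels each technology class as home, near or far given a community partition and the home-field classes; the toy partition of a handful of classes is for illustration only, whereas the paper's network has 121 classes in 5 communities.

```python
# Sketch of labelling technology classes as home, near or far field, given
# the detected community partition and the home-field classes.
def label_fields(communities: list[set[str]], home: set[str]) -> dict[str, str]:
    # communities that contain at least one home-field class
    home_communities = [c for c in communities if c & home]
    labels = {}
    for community in communities:
        for cls in community:
            if cls in home:
                labels[cls] = "home"
            elif community in home_communities:
                labels[cls] = "near"
            else:
                labels[cls] = "far"
    return labels

# Toy partition and home field (illustrative class labels only).
communities = [{"G05", "G06", "B60", "B62", "H02"},
               {"A63", "B63", "A01"},
               {"C07", "C08"}]
home = {"G05", "A63", "B62", "G06", "B60", "B63"}
print(label_fields(communities, home))
# -> home classes labelled 'home', H02 and A01 'near', C07/C08 'far'
```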
In brief, the method of locating patents in home, near or far field involves three main procedures: (1) construct a network map of all technology classes in the patent database to represent the total technology space, (2) detect network communities, and (3) determine the home, near and far fields in the network. In turn, these three procedures respectively require: (1) a measure of the knowledge proximity between patent classes, (2) a community detection algorithm, and (3) a patent set representing the home of a design problem. In this sub-section, we have introduced our choices for each of the three elements. On this basis, we located the patent stimuli used in the ideation exercise in the home, near and far fields of the SRR design in the technology network, for further analysis.
4 Findings
In this section, we report the frequencies and likelihood of the participants finding patent stimuli from home, near and far fields, and the novelty and quality of the concepts generated with patent stimuli from home, near and far fields.
4.1 Where designers find patent stimuli in the technology space
Figure 4(a) shows the number of reported unique patent stimuli from home, near and far fields. The participant designers can use a patent as a stimulus for generating multiple concepts. Figure 4(b) shows the frequency of patents being used with multiple counting, i.e., it counts the use of a patent as stimulus for multiple concepts. Both the figures (Figure 4a,b) show a similar pattern: most patent stimuli used for concept generation are from the near field of the SRR design. We also calculated the likelihood for patent stimuli being identified from the various fields, as the number of patents used in a field to generate concepts divided by the total number of patents granted in the corresponding field from 1976 to 2016. As seen in Figure 4(c), patents in the home field are more likely to be used to generate concepts than those in the near and far fields, for which the likelihoods are almost the same.
Figure 3. The three procedures of our method to determine home, near and far fields.
Figure 4. Patent stimuli in each field of the technology space: (a) number of unique patents used; (b) frequency of patents being used; (c) likelihood of patents being used.
A concept can be stimulated by either a single patent or multiple patents, whose classes may fall into one or more of the home, near and far fields. Figure 5 shows the number of concepts generated using patents from the individual fields and their combinations. In Figure 5(a), a concept stimulated by patents from multiple fields is counted multiple times, once for each field to which a patent stimulus belongs. Patents from the near and far fields help generate more concepts than patents from the home field. In order to present the results more unambiguously, we categorize each concept into only one of the individual fields or combinations of multiple fields according to the sources of the patent stimuli it used. Under this setting, each concept is counted only once in one category. As shown in Figure 5(b), the highest number of concepts is generated using patents from the combination of home, near and far fields (H, N & F), followed by the combination of near and far fields (N & F). The influence of patents from the near field, either individually or in combination with other fields, is prominent. As a single source, patents from the near field help generate the most concepts.
Figure 5. Numbers of concepts generated using patents from individual fields and combinations of fields: (a) with patents from each field (one concept may be counted more than once); (b) with patents from combinations of the three fields (each concept is counted only once).
4.2 Implications of home-, near- and far-field stimuli on ideation outcomes
The average quality of the concepts generated using patents from the individual fields and their combinations is shown in Figures 6(a) and 6(b), respectively. No significant difference in average quality across the individual fields is observed in Figure 6(a), suggesting that the three fields contribute almost identically to the quality of the concepts. When considering the combinations of the fields, we can see in Figure 6(b) that concepts stimulated with patents from the far field individually or from the combination of home and far fields (H & F) have a significantly higher average quality than the rest. Such differences in concept quality are significant at the 5% level in most cases based on pairwise 2-tailed t-tests, as shown in Table 1.
Figure 6. Average quality of generated concepts: (a) with patents from each field; (b) with patents from combinations of the three fields.
Figures 7(a) and 7(b) show the distributions by quality of the concepts generated using patents from the individual fields and from their combinations, respectively. In the figures, the low, medium and high quality categories correspond to the ranges of quality scores $Q\leqslant 1.2$, $1.2<Q<1.8$ and $Q\geqslant 1.8$, respectively, according to the multimodal frequency distribution of the generated concepts by quality. As observed in Figure 7(a), patents from the home or far field stimulate a higher percentage of high quality concepts than those from the near field. In Figure 7(b), a higher percentage of the concepts stimulated by patents from the far field individually and from the combination of the home and far fields (H & F) have high quality. Interestingly, no low-quality concepts are generated using patents from the far field individually, the combination of the home and far fields (H & F) or the combination of the home and near fields (H & N).
Figure 7. Distributions of concepts by quality: (a) with patents from each field; (b) with patents from combinations of the three fields.
Table 1. $t$ statistics with $p$ -values in parentheses for the pairwise comparison of the quality of concept sets as indicated by the row and column labels. Underlines denote significance at 5% level
The average novelty of the concepts generated using patents from the individual fields and their combinations is shown in Figure 8(a) and Figure 8(b), respectively. The differences in average novelty of the concepts generated using patents from the individual fields are not statistically significant (see Figure 8a). Concepts stimulated by patents from the near field individually and all combinations containing the near field (N; H & N; N & F; H, N & F) have higher average novelty than concepts stimulated by patents from other individual fields or combinations (see Figure 8b). Specifically, the differences in average novelty are statistically significant at 5% level between concepts stimulated by patents from the combinations containing the near field (H & N; N & F; H, N & F) and those stimulated by patents from the home or far field individually or their combination (H; F; H & F), as shown in Table 2.
Figure 8. Average novelty of generated concepts: (a) with patents from each field; (b) with patents from combinations of the three fields.
Figure 9. Distributions of concepts by novelty: (a) with patents from each field; (b) with patents from combinations of the three fields.
Table 2. $t$ statistics with $p$ -values in parentheses for the pairwise comparison of the novelty of concept sets as indicated by the row and column labels. Underlines denote significance at 5% level
Figures 9(a) and 9(b) show the distributions by novelty of the concepts generated using patents from the individual fields and from their combinations, respectively. As mentioned earlier, high, medium, low and no novelty correspond to novelty scores of 3, 2, 1 and 0, respectively. As observed in Figure 9(a), patents from the near field contribute a slightly higher percentage of high novelty concepts than patents from the home and far fields. It is clear from Figure 9(b) that the concepts generated with patents from the near field individually and from all combinations containing the near field (N; H & N; N & F; H, N & F) present a higher percentage of high and medium novelty. These results are consistent with the findings in Figure 8(b). The results suggest that patents from the near field play a major role in stimulating concepts of higher novelty compared to patents from the home or far field.
4.3 Summary of findings
The following are observed in the data from the open concept generation exercise and using the technology space network-based definition of home, near and far fields of patent stimuli:
(1) Patents from the home field are most likely to be used as stimuli for concept generation.
(2) The near field contributes the most patent stimuli for concept generation, and the largest number of concepts is generated using patents from the near field.
(3) The concepts generated with patent stimuli from the far field individually and the combination of the home and far fields have a higher average quality than concepts generated with patents from other individual fields or other combinations of the three fields.
(4) The concepts generated with patent stimuli from the near field individually and the combinations of the near field with other fields have higher average novelty than concepts generated with patent stimuli from the home field, far field or their combination.
5.1 Significance of findings
In this paper, we have presented a network-based approach to divide the technology space into three fields – home field, near field and far field – with varied knowledge proximity to a design problem based on the community detection within the network of all 3-digit IPC technology classes in the patent databases. The 'home field' entails technology classes that contain patents that are directly relevant to a design problem, the 'near field' comprises technology classes that are in the same network communities as those in the 'home field,' and the remaining technology classes from other communities in the technology network constitute the 'far field'. Each technology class can be further viewed as a category of patents. The data-driven definition of the home, near and far fields is motivated by the literature on design by analogy using patents as stimuli, which has used such terms as near or far analogies or stimuli, to characterize discrete stimuli or patents. Our approach provides a systemic and macro framework, which one can consistently use to position a patent in either home, near or far fields of design stimulation in the total technology space.
Based on the definitions of the home, near and far fields in the total technology space, our experiment shows that patents from the home field are more likely to be used than those from the near or far field. Since the home field is the area nearest to the defined design problem or opportunity, this result provides some evidence in support of the argument that it is easier to analogize with stimuli from near domains than from far domains. In the studied case, patents in the home field record technologies and processes that share a much larger common knowledge basis with the SRR-related design knowledge of the student designers, so these patents are much easier for them to identify, understand and assimilate.
Results in this paper further suggest that, in the given context, concepts generated using patents from the far field or the combination of the home and far fields have a higher average quality than the concepts stimulated by patents from other individual fields or combinations. This seems to contradict the findings of Fu et al. (2013b) that solutions generated using stimuli from near sources had higher quality than those generated using far sources, but supports the findings of Chan et al. (2011) that stimuli from far sources were more beneficial for developing solutions of higher quality. However, these studies and ours might not be directly comparable, primarily because the characterization of near and far is not the same. Fu et al. and Chan et al. used functional similarity to characterize the distance between individual patents and a specific design problem as near or far, while this study uses knowledge base similarity between patent technology classes. Also, it must be noted that the context of this study is quite different from theirs: the objective of the design problem is open and the participants were free to select patent stimuli from the given sets or to search for their own independently. In this case, it is understandable that where the patent stimuli come from has limited influence on feature creation. Although patents from different fields are used with different likelihoods, once they are identified and chosen as stimuli for concept generation, the designers make an effort to understand and make sense of the information provided in the patents and to transform it into features of their own concepts.
The near field provides the most patent stimuli, which further stimulate the largest portion of the concepts. Concepts generated using patents from the near field have higher novelty on average than those generated without patents from the near field. On one hand, the near field is relatively nearer to the design problem than the far field, and contains design knowledge that is relatively easy to understand and make sense of. On the other hand, it is relatively more distant to the design problem than the home field, and probably provides stimuli with plenty of additional features for conceiving innovative attributes. Potentially, these attributes may contribute to novelty. In brief, it can be argued that patents from the near field are more beneficial for identifying stimuli and creating novel attributes in concepts compared to those from the other fields.
This research uses data that comprise several variables: 138 concepts were generated with 231 patents by student designers, who ideated individually in uncontrolled conditions. Unlike participants in laboratory-based controlled ideation experiments, the student designers in this study required more domain knowledge to accomplish the task. From the set of alternative concepts generated by individuals, one concept was chosen by each team and, if necessary, modified, and then prototyped and demonstrated. The quality of the prototype therefore depended on the concept set generated earlier, so the ideation was an engaging exercise with substantial project stakes attached to it. The grades of the student designers depended on their performance at every phase of the SRR development process, and some of the projects were further pursued toward entrepreneurial and co-curricular activities. The participants thus had adequate vested incentives to pursue this ideation exercise seriously. Therefore, the results of this study must be viewed taking into consideration the wide span of variables and the seriousness with which this exercise was pursued.
In addition to our experimental findings, this paper may contribute to ideation methodology development, specifically regarding the search for patent stimuli. Although prior studies have suggested the efficacy of using patents as stimuli for concept generation, browsing through the huge patent database within a short period to identify relevant stimuli may be cumbersome. To address this problem, an efficient search-and-retrieve interface is required, through which millions of patents can be searched using defined keywords and relevant patents can be retrieved and ranked in order of their appropriateness to the keywords. Fu et al. (2013a) and Murphy et al. (2014) have developed computational design tools to search for and identify functionally relevant analogies from the US patent database. Results in this paper offer fundamental insights into designers' natural preferences for stimuli and the influence of stimulation distance on ideation outcomes. With these findings, the method introduced in Section 3.3, which is based on a technology class network and community detection to define the home field, near field and far field, might be a first step toward a data-driven computational tool for better-guided and more-informed search of patents as design stimuli. Such a tool is expected to allow designers to locate the home field of a design problem, inform them of the fields of their search for patent stimuli, and systematically guide them through the search for patent stimuli in the home field, near field, far field or their combinations.
In fact, the method we used in this experimental research is structured and repeatable. As introduced in Section 3.3 and depicted in Figure 3, the method generally involves three main procedures: (1) construct a network map of technology classes to represent the total technology space; (2) detect network communities; and, (3) determine the home, near and far fields in the network. In practice, once the home, near and far fields have been identified in the network, a designer can search, locate and use patent stimuli within the home, near and far fields, with the guidance of the understanding of the potential effectiveness of finding useful patent stimuli from different fields and the corresponding performance implications, as suggested from our experimental findings.
Meanwhile, the procedures require three key elements in practical implementation: (a) a knowledge proximity measure for constructing the network of technology classes, (b) a computer algorithm for partitioning the technology network into a few communities, and (c) a patent set for identifying the technology classes that determine the home field. For each of these elements, there are alternative implementation choices. In this paper, we have provided one superior choice for each element for our case study, but the method is not limited to them. For the first two elements, in this study, alternative knowledge proximity measures and alternative community detection algorithms result in quite similar community partitions of the technology network. In a different case, one can still pick out superior choices of measures and algorithms by comparing the resulting community partitions with expectations based on one's own knowledge and the specific context. Moreover, the patent set used to identify the home field can also be determined using an approach suited to the context. For example, to find the home field of a designer, one can search the patents granted to the designer; to find the home field of a designer who has no patents, one can search patents using keywords that describe the designer's technical expertise; one can also combine these search approaches. In brief, our method provides a structured but flexible framework, whose elements can be operationalized and calibrated according to the specific context and situation of its application.
The results of this research have the following limitations. First, the measure used to assess the knowledge proximity between 3-digit technology classes may not be directly related to the relevance of a stimulus to a problem. Future research may explore knowledge proximity at alternative granularity levels, such as proximity between 4-digit IPC technology classes or between classes and patents. Second, the student designers received two sets of 121 patents (the most cited patent and a random patent from each of the 121 technology classes) for the ideation exercise. A patent's description text is often lengthy and written in a tedious and non-obvious manner, so it may have been difficult for the student designers to browse through all 242 patents, assimilate the information, identify relevant stimuli and use them to generate concepts, all within a week. That is, coverage of all 121 technology classes in the total technology space may not be guaranteed in practice. Future research may seek approaches for communicating the technical information and design knowledge in patent documents to designers more effectively and efficiently. Also, the ideation was an after-class exercise and thus uncontrolled. For concepts generated using both patents and other sources of inspiration, the influence of the other sources was not accounted for when locating the stimuli in the home, near and far fields of the technology space. This may discount a significant influence of the other sources on the novelty or quality of the generated concepts.
Moreover, it should be noted that our findings result from a technology-driven design process with undergraduate designers as the participants. The findings may not hold true for experienced designers or in other design situations, such as the design of market-driven products. For example, when solving a design problem, experienced designers typically have extensive knowledge of the near field as well as the home field through their own learning and experience, which allows them to build their own 'feeling for near'. In such a situation, experienced designers would not find patent stimuli from the home and near fields as useful as the student designers do. In addition, for the design of market-driven products, the goal is to identify users' value-based needs and convert them into product features successfully. In such a design process, the distance of stimuli from the design problem is largely determined by users' requirements rather than by the intrinsic proximity or distance between technologies. Moreover, the findings in this paper are based on the case of SRRs and may not hold true for other products, whereas the introduced method of locating patent stimuli is applicable to other contexts. This suggests a future research opportunity to apply our network-based methodology to more diverse products and contexts and potentially develop a contingency understanding of the influence of stimulation distance on ideation outcomes for different types of products.
6 Conclusions and future work
This research contributes fundamental insights on designers' preferences for patent stimuli and the influence of stimulation distance on ideation outcomes, as well as a network-based methodology for better-guided search of patents as design stimuli in concept generation practices. The objectives of this research are: (a) to identify where designers find useful patent stimuli within the technology space: the home, near or far fields, and (b) to study the implications of using patent stimuli from these fields on the novelty and quality of the concepts generated. It is observed that: (a) patents from the home field are more likely to be used as stimuli, (b) the near field contributes the most patent stimuli, which further stimulate the largest number of concepts, (c) concepts generated with patents from the far field and the combination of the home and far fields have higher average quality than concepts generated using patents from other individual fields or combinations of the three fields, and (d) the concepts generated with patents from the near field have higher average novelty than concepts generated without patents from the near field. The methodology based on a technology class network and community detection to define the home field, near field and far field might be a first step toward a tool for better-guided search of patents as design stimuli. Further efforts can be made to retrieve the most useful patents for ideation at the field or class level.
This research is supported partly by a grant from the SUTD-MIT International Design Centre (IDG31600105), and by Singapore Ministry of Education Tier 2 Academic Research Grant (T2MOE1403).
1 We carried out sensitivity analyses with altered weights for $f$ , $w$ and $s$ to investigate the robustness of the findings. For each of the tests, we held one of the three weights fixed, increased or decreased the second weight by 10%, and then adjusted the third one accordingly to ensure the weights sum up to 1. The findings regarding concept quality hold true in the tests.
2 We alternatively experimented with a 5-point scale to assess $f$, $w$ and $s$, and found it was difficult for the three raters to achieve a Cohen's Kappa higher than 0.8 in the inter-rater reliability tests. By contrast, a 3-point scale evaluation enabled satisfactory inter-rater reliability.
Alstott, J., Triulzi, G., Yan, B. & Luo, J. 2017a Mapping technology space by normalizing technology networks. Scientometrics 110 (1), 443–479.
Alstott, J., Triulzi, G., Yan, B. & Luo, J. 2017b Inventor's explorations across technology domains. Design Science 3; doi:10.1017/dsj.2017.21.
Altshuller, G. S. & Shapiro, R. B. 1956 О Психологии изобретательсҡого творчества (On the psychology of inventive creation) (in Russian). Вопросы Психологии (The Psychological Issues) 6, 37–49.
Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. 2008 Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment 2008 (10), P10008.
Breschi, S., Lissoni, F. & Malerba, F. 2003 Knowledge-relatedness in firm technological diversification. Research Policy 32, 69–87.
Browet, A., Absil, P. & Van Dooren, P. 2013 Fast community detection using local neighbourhood search. Preprint, pp. 1–10. Available at http://arxiv.org/abs/1308.6276.
Cascini, G. & Russo, D. 2006 Computer-aided analysis of patents and search for TRIZ contradictions. International Journal of Product Development 4 (1–2), 52–67.
Chakrabarti, A., Sarkar, P., Leelavathamma, B. & Nataraju, B. S. 2005 A functional representation for aiding biomimetic and artificial inspiration of new ideas. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 19 (2), 113–132.
Chan, J., Fu, K., Schunn, C., Cagan, J., Wood, K. & Kotovsky, K. 2011 On the benefits and pitfalls of analogies for innovative design: ideation performance based on analogical distance, commonness, and modality of examples. Journal of Mechanical Design 133 (8), 81004.
Chan, J. & Schunn, C. 2015 The impact of analogies on creative concept generation: lessons from an in vivo study in engineering design. Cognitive Science 39 (1), 126–155.
Chen, D., Wang, J., Chen, X. & Xu, X. 2010 A search algorithm for clusters in a network or graph. International Journal of Digital Content Technology and its Applications 4 (6), 115–122.
Christensen, B. T. & Schunn, C. D. 2007 The relationship of analogical distance to analogical function and preinventive structure: the case of engineering design. Memory and Cognition 35 (1), 29–38.
Clauset, A., Newman, M. E. J. & Moore, C. 2004 Finding community structure in very large networks. Physical Review E 70 (6), 066111.
Dibiaggio, L., Nasiriyar, M. & Nesta, L. 2014 Substitutability and complementarity of technological knowledge and the inventive performance of semiconductor companies. Research Policy, pp. 1–12. Elsevier.
Ejermo, O. 2005 Technological diversity and Jacobs' externality hypothesis revisited. Growth and Change 36 (2), 167–195.
Fantoni, G., Apreda, R., Orletta, F. D. & Monge, M. 2013 Automatic extraction of function–behaviour–state information from patents. Advanced Engineering Informatics 27 (3), 317–334.
French, M. J. 1985 Conceptual Design for Engineers. Springer.
Fu, K., Cagan, J., Kotovsky, K. & Wood, K. 2013a Discovering structure in design databases through functional and surface based mapping. Journal of Mechanical Design 135 (March 2013), 31006.
Fu, K., Chan, J., Cagan, J., Kotovsky, K., Schunn, C. & Wood, K. 2013b The meaning of 'near' and 'far': the impact of structuring design databases and the effect of distance of analogy on design output. Journal of Mechanical Design 135 (2), 21007.
Fu, K., Murphy, J., Yang, M., Otto, K., Jensen, D. & Wood, K. 2015 Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement. Research in Engineering Design 26 (1), 77–95.
Gentner, D. & Markman, A. B. 1997 Structure mapping in analogy and similarity. American Psychologist 52 (1), 45–56.
Gick, M. L. & Holyoak, K. J. 1980 Analogical problem solving. Cognitive Psychology 12 (3), 306–355.
Goel, A. K. 1997 Design, analogy, and creativity. IEEE Expert 12 (3), 0–25.
Hall, B. H., Jaffe, A. B. & Trajtenberg, M. 2000 Market Value and Patent Citations: A First Look. NBER Working Paper No. 7741, National Bureau of Economic Research; doi:10.3386/w7741.
Jaccard, P. 1901 Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bulletin de la Société vaudoise des sciences naturelles 37, 241–272.
Jaffe, A. B. 1986 Technological opportunity and spillovers of R&D: evidence from firms' patents, profits and market value. The American Economic Review 76 (5), 984–1001.
Jansson, D. G. & Smith, S. M. 1991 Design fixation. Design Studies 12 (1), 3–11.
Jensen, D., Weaver, J., Wood, K., Linsey, J. & Wood, J. 2009 Techniques to enhance concept generation and develop creativity. In Annual Conference, Austin, TX.
Kay, L., Newman, N., Youtie, J., Porter, A. L. & Rafols, I. 2014 Patent overlay mapping: visualizing technological distance. Journal of the American Society for Information Science and Technology 65 (12), 2432–2443.
Kogler, D. F., Rigby, D. L. & Tucker, I. 2013 Mapping knowledge space and technological relatedness in US cities. European Planning Studies 21 (9), 1374–1391.
Leydesdorff, L., Kushnir, D. & Rafols, I. 2014 Interactive overlay maps for US patent (USPTO) data based on international patent classifications (IPC). Scientometrics 98 (3), 1583–1599.
Linsey, J. S. 2007 Design-by-analogy and representation in innovative engineering concept generation.
Linsey, J. S., Tseng, I., Fu, K., Cagan, J., Wood, K. L. & Schunn, C. 2010 A study of design fixation, its mitigation and perception in engineering design faculty. Journal of Mechanical Design 132 (4), 41003.
Luo, J. 2015 The united innovation process: integrating science, design, and entrepreneurship as sub-processes. Design Science 1; doi:10.1017/dsj.2015.2.
McAdams, D. A. & Wood, K. L. 2002 A quantitative similarity metric for design-by-analogy. Journal of Mechanical Design 124 (2), 173–182.
Mukherjea, S., Bamba, B. & Kankar, P. 2005 Information retrieval and knowledge discovery utilizing a biomedical patent semantic web. IEEE Transactions on Knowledge and Data Engineering 17 (8), 1099–1110.
Murphy, J., Fu, K., Otto, K., Yang, M., Jensen, D. & Wood, K. 2014 Function based design-by-analogy: a functional vector approach to analogical search. Journal of Mechanical Design 136 (10), 101102.
Newman, M. E. 2006 Modularity and community structure in networks. Proceedings of the National Academy of Sciences, USA 103 (23), 8577–8582.
Pahl, G. & Beitz, W. 2013 Engineering Design: A Systematic Approach (ed. Wallace, K.). Springer Science & Business Media.
Qian, L. & Gero, J. S. 1996 Function–behavior–structure paths and their role in analogy-based design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 10 (4), 289–312.
Sarkar, P. & Chakrabarti, A. 2011 Assessing design creativity. Design Studies, pp. 1–36. Elsevier.
Shah, J. M., Li, Y. L., Gessmann, T. & Schubert, E. F. 2003 Experimental analysis and theoretical model for anomalously high ideality factors (n ≫ 2.0) in AlGaN/GaN p–n junction diodes. Journal of Applied Physics 94 (4), 2627–2630.
Small, H. 1973 Co-citation in the scientific literature: a new measure of the relationship between two documents. Journal of the Association for Information Science and Technology 24 (4), 265–269.
Song, B. & Luo, J. 2017 Mining patent precedents for data-driven design: the case of spherical rolling robots. Journal of Mechanical Design 139 (11), 111420.
Souili, A., Cavallucci, D., Rousselot, F. & Zanni, C. 2015 Starting from patents to find inputs to the problem graph model of IDM-TRIZ. Procedia Engineering 131, 150–161.
Srinivasan, V., Song, B., Luo, J., Subburaj, K. & Rajesh, M. 2017a Investigating effects of stimuli on ideation outcomes. In DS 87-8 Proceedings of the 21st International Conference on Engineering Design (ICED 17), Vol. 8: Human Behaviour in Design, Vancouver, Canada, 21–25 August 2017.
Srinivasan, V., Song, B., Luo, J., Subburaj, K. & Rajesh, M. 2017b Understanding effects of analogical distance on performance of. In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers.
Taura, T. & Yukari, N. 2012 Concept Generation for Design Creativity: A Systematized Theory and Methodology. Springer Science & Business Media; doi:10.1007/978-1-4471-4081-8.
Teece, D. J., Rumelt, R., Dosi, G. & Winter, S. 1994 Understanding corporate coherence: theory and evidence. Journal of Economic Behavior and Organization 23 (1), 1–30.
Trajtenberg, M. 1990 A penny for your quotes: patent citations and the value of innovations. The RAND Journal of Economics 21 (1), 172–187.
Ward, T. B. 1998 Analogical distance and purpose in creative thought: mental leaps versus mental hops. In Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, pp. 221–230. New Bulgarian University Press, Bulgaria.
Weisberg, R. W. 2006 Creativity: Understanding Innovation in Problem Solving, Science, Invention, and the Arts. John Wiley & Sons.
Wilson, J. O., Nelson, B. A., Rosen, D. & Yen, J. 2010 The effects of biological examples in idea generation. Design Studies 31 (2), 169–186.
Wu, Y., Jin, R., Li, J. & Zhang, X. 2015 Robust local community detection: on free rider effect and its elimination. Proceedings of the International Conference on Very Large Data Bases (VLDB) 8 (7), 798–809.
Yan, B. & Luo, J. 2017 Measuring technological distance for patent mapping. Journal of the Association for Information Science and Technology 68 (2), 423–437.
Binyang Song, V. Srinivasan and Jianxi Luo
DOI: https://doi.org/10.1017/dsj.2017.27